Browse the comments on virtually any story about a ransomware attack and you will almost surely encounter the view that the victim organization could have avoided paying their extortionists if only they’d had proper data backups. But the ugly truth is there are many non-obvious reasons why victims end up paying even when they have done nearly everything right from a data backup perspective.
This story isn’t about what organizations do in response to cybercriminals holding their data hostage, a practice that has become something of a standard playbook among most of the top ransomware crime groups today. Rather, it’s about why victims still pay for a key needed to decrypt their systems even when they have the means to restore everything from backups on their own.
Experts say the biggest reason ransomware targets and/or their insurance providers still pay when they already have reliable backups is that nobody at the victim organization bothered to test in advance how long this data restoration process might take.
“In a lot of cases, companies do have backups, but they never actually tried to restore their network from backups before, so they have no idea how long it’s going to take,” said Fabian Wosar, chief technology officer at Emsisoft. “Suddenly the victim notices they have a couple of petabytes of data to restore over the Internet, and they realize that even with their fast connections it’s going to take three months to download all these backup files. A lot of IT teams never actually make even a back-of-the-napkin calculation of how long it would take them to restore from a data rate perspective.”
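For anyone who has never done that napkin math, here is a minimal sketch of the calculation (the backup size, link speed, and efficiency figure below are purely illustrative, not from any particular victim):

    # Back-of-the-napkin restore-time estimate: how long to pull a backup set
    # down over a given link. All figures below are illustrative.

    def restore_days(backup_bytes, link_bits_per_sec, efficiency=0.7):
        """Days needed to transfer backup_bytes over a link that only
        sustains `efficiency` of its nominal speed."""
        effective_bytes_per_sec = link_bits_per_sec * efficiency / 8
        return backup_bytes / effective_bytes_per_sec / 86_400

    two_petabytes = 2 * 10**15   # "a couple of petabytes" of backups
    gigabit_link = 10**9         # a fast 1 Gbit/s Internet connection

    print(f"{restore_days(two_petabytes, gigabit_link):.0f} days")  # roughly 265 days

Even this optimistic arithmetic ignores the time spent actually writing the data back onto servers and verifying it, which is exactly why the answer so often comes as a shock.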
Wosar said the next most-common scenario involves victims that have off-site, encrypted backups of their data but discover that the digital key needed to decrypt their backups was stored on the same local file-sharing network that got encrypted by the ransomware.
The third most-common impediment to victim organizations being able to rely on their backups is that the ransomware purveyors manage to corrupt the backups as well.
“That is still somewhat rare,” Wosar said. “It does happen but it’s more the exception than the rule. Unfortunately, it is still quite common to end up having backups in some form and one of these three reasons prevents them from being useful.”
Bill Siegel, CEO and co-founder of Coveware, a company that negotiates ransomware payments for victims, said most companies that pay either don’t have properly configured backups, or they haven’t tested their resiliency or the ability to recover their backups against the ransomware scenario.
“It can be [that they] have 50 petabytes of backups … but it’s in a … facility 30 miles away.… And then they start [restoring over a copper wire from those remote backups] and it’s going really slow … and someone pulls out a calculator and realizes it’s going to take 69 years [to restore what they need],” Siegel told Kim Zetter, a veteran Wired reporter who recently launched a cybersecurity newsletter on Substack.
“Or there’s lots of software applications that you actually use to do a restore, and some of these applications are in your network [that got] encrypted,” Siegel continued. “So you’re like, ‘Oh great. We have backups, the data is there, but the application to actually do the restoration is encrypted.’ So there’s all these little things that can trip you up, that prevent you from doing a restore when you don’t practice.”
Wosar said all organizations need to both test their backups and develop a plan for prioritizing the restoration of critical systems needed to rebuild their network.
“In a lot of cases, companies don’t even know their various network dependencies, and so they don’t know in which order they should restore systems,” he said. “They don’t know in advance, ‘Hey if we get hit and everything goes down, these are the services and systems that are priorities for a basic network that we can build off of.'”
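One way to turn that dependency knowledge into a concrete restore order is a plain topological sort over a map of which services depend on which. A minimal sketch using Python’s standard library; the services and dependencies listed here are hypothetical examples, not a recommended order:

    # Derive a restore order from service dependencies with a topological sort.
    # The services and their dependencies below are hypothetical examples.
    from graphlib import TopologicalSorter   # Python 3.9+

    # service -> the services it depends on (which must be restored first)
    dependencies = {
        "active_directory": set(),
        "dns": set(),
        "dhcp": {"dns"},
        "file_server": {"active_directory", "dns"},
        "erp_database": {"active_directory"},
        "erp_app": {"erp_database", "file_server"},
        "email": {"active_directory", "dns"},
    }

    restore_order = list(TopologicalSorter(dependencies).static_order())
    print(restore_order)
    # e.g. ['active_directory', 'dns', 'dhcp', 'erp_database', 'file_server', 'email', 'erp_app']

The prerequisites come out first, which is the “basic network that we can build off of” Wosar describes.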
Wosar said it’s essential that organizations drill their breach response plans in periodic tabletop exercises, and that it is in these exercises that companies can start to refine their plans. For example, he said, if the organization has physical access to their remote backup data center, it might make more sense to develop processes for physically shipping the backups to the restoration location.
“Many victims see themselves confronted with having to rebuild their network in a way they didn’t anticipate. And that’s usually not the best time to have to come up with these sorts of plans. That’s why tabletop exercises are incredibly important. We recommend creating an entire playbook so you know what you need to do to recover from a ransomware attack.”
Also would have accepted the title “Don’t want your boss to yell at you after a server dies or someone fat-fingers a delete? Test your backups.”
As a Senior Administrator once told me long before ransomware was a thing:
“Until you have tested your backups by regularly verifying the restored data – it’s not a backup, it’s wishful thinking and a fast path to the unemployment office”
Yep. You gotta test! When I was first put in charge of a LAN for a state agency back in the late ’80s, on the first Monday of every month I restore-tested the tape from the previous night, then took it off to a warehouse a hundred yards away and locked it in a reasonably fire-proof safe.
It wasn’t absolutely disaster-proof, but it was the best we could do at the time.
Now, for my personal stuff, I have one set of backups in a fire-proof box at home, and another set in my desk at work, in addition to the live backup connected to my home computer. Tested. Similar setup for my laptops. It has served me well.
I use 7-Zip to encrypt and back up my folders and files. I know for a fact that it allows you to test your backup’s integrity.
I really hope you are joking.
That kinda depends on the circumstances, yes? Archiver encryption varies wildly, that’s true, but for an individual it may make sense to make a copy on external HDD, then unmount it and put it somewhere safe. In my case, most valuable files end up encrypted and kept locally plus sent to a cloud server, but I still need a local image to restore enough of my system to do a big restore.
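For what it’s worth, the archive integrity test mentioned above is easy to script so it runs after every backup to the external drive. A rough sketch, assuming the 7z command-line tool is installed and on the PATH (the paths are made-up examples):

    # Run 7-Zip's built-in integrity test ("7z t") against every archive on the
    # external backup drive after a backup finishes. Paths are made-up examples.
    import subprocess
    from pathlib import Path

    BACKUP_DIR = Path("/mnt/external_hdd/backups")   # hypothetical mount point

    for archive in sorted(BACKUP_DIR.glob("*.7z")):
        # Encrypted archives also need the password switch, e.g. "-pSECRET".
        result = subprocess.run(["7z", "t", str(archive)], capture_output=True, text=True)
        print(f"{archive.name}: {'OK' if result.returncode == 0 else 'FAILED'}")

Of course, that only proves the archive itself is readable; it doesn’t replace actually restoring the files and checking the contents.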
I learned 40 years ago that a backup copy can only be as good as the last restore test. It’s still true today.
Very interesting topic. Thank you, Brian.
I’m currently engaged in a project of my own to create a local cloud backup using a Synology NAS, MinIO and Retrospect backup software in a walled-off and hardened environment. One of the goals is to keep ransomware from touching the actual backup device. The other is to be able to restore everything in a local cloud environment, which should be as fast as possible, but realistically will take several days. It has been a tough project as far as the learning curve and testing on a live system go, but I plan to have it all working by the end of this month. I’m also working on specific plans about how to recover an entire network, which is a huge mental exercise. This is the first story I’ve read that explores the cloud backup/restore problem, which is simply ignored by most people. Kudos.
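For the “keep ransomware from touching the backups” goal, one approach I’m experimenting with is MinIO’s object locking, so a backup object can’t be deleted or overwritten during its retention window even if the client credentials are stolen. A rough sketch using the MinIO Python SDK (the endpoint, credentials, bucket, and file names are placeholders, and the default retention period still has to be set on the MinIO side, e.g. via the console or the mc client):

    # Sketch: write backups into a MinIO bucket created with object locking, so
    # objects under retention can't be deleted or overwritten by a compromised
    # client. Endpoint, credentials, bucket, and file names are placeholders.
    from minio import Minio

    client = Minio(
        "nas.local:9000",            # hypothetical MinIO endpoint on the NAS
        access_key="backup-writer",  # credential that can write but not administer
        secret_key="changeme",
        secure=True,
    )

    bucket = "pc-backups"
    if not client.bucket_exists(bucket):
        # Object locking can only be enabled when the bucket is created.
        client.make_bucket(bucket, object_lock=True)

    client.fput_object(bucket, "desktop-2021-07-19.backup", "/volume1/staging/desktop.backup")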
Schrodinger’s Backups.
The state of any backup is unknown until a restore is attempted.
“The Tao of Backup” is my favorite site covering the often-forgotten details of how to do backups properly.
There are a few things about on-line backups that make me nervous.
One is that some on-line backup company a few years ago shut down and gave everyone a month to retrieve their data. From what I’ve been told, many didn’t get it retrieved.
Another is the issue that the ones I’ve looked at are just trying to mirror the data on each computer. If you lose some files and don’t catch them in time, the on-line backup will often assume that after some period of time, they must have been intentionally removed and will delete their copy as well. For example, if you have some accounting files that are only used for the end of the year processing, they could be gone an entire year before you realize that they aren’t on your computer any more.
The speed issue you mentioned is definitely a good thing to keep in mind.
I think that I’ll just stick to using borg backup.
All valid concerns. One of the only cloud-based backup options that (to me) is worth its weight in gold is CODE 42/CrashPlan. As long as you don’t change your settings, it will continue to keep ALL backed-up content forever (or until they crash & burn).
Most services delete your files from backup, say, 30-90 days after you delete or move them from your computer. NOT CODE 42 – it’s truly an archive service (although they deny it). I have approx. 400 GB on my hard drive at any given time; my CODE 42 content is around 3 TB. I mean, what good is a backup service if it deletes as you do??!!
No good in my book – I have no relationship with CODE 42 other than being a long-term, very satisfied customer.
Backblaze is worth a look. Their personal backup solution allows you to retain old versions well beyond 90 days (although they do have a requirement that external drives need to be connected periodically or the data will be dropped).
They also offer the option to mail you your data on an external storage device if you need to quickly restore a full system. I don’t believe CrashPlan offers that.
I was a CrashPlan user for many years, still have one system backing up to CrashPlan. But I’ve otherwise migrated to Backblaze.
Perhaps “backups” should be renamed “restorables” to drive their purpose into consciousness.
Since the internet has not been around 69 years, the 69 year figure to restore isn’t possible. There is no reason why it would take longer to restore than to create. Clearly hyperbole, but the implication remains: backups should be local.
and rotated regularly to an external vault to protect them against fire, floods, earthquakes, asteroids, supernova gamma-ray bursts etc.
My preference is in a safe deposit box (if it will hold the backups) in a bank at least 20 miles away. You don’t want it too close or any disaster that wipes you out may wipe out the bank as well.
An underground salt mine might be better as long as nobody drills through the lake above it.
We went through that planning exercise back in the day, and decided the 20 (or whatever) mile requirement was not based in reality. If it’s a tornado or meteor strike, you’re going to have to rebuild the entire data facility. Since I worked for a government organization, the procurement alone for replacing so much hardware and infrastructure would take six months to a year, to say nothing of assembling and testing it all — assuming the knowledgeable staff survived the impact (2/3 chance, we hoped). Made more sense to simply duplicate all the mission-critical servers, as well as the storage, somewhere that far off-site.
I think you’re joking but I’m not positive. In case you’re seriously unsure as to how a backup could take longer to restore than to create: think about limited upload/download bandwidth. If you’re restoring terabytes or petabytes of data from a remote location over a slow link and you’re downloading the data at megabits per second, it could take forever just to download. It’s not hyperbole. Also, why would the internet need to be around for 69 years for it to take 69 years to restore a backup? That… is gibberish.
“69 years” was probably hyperbole from Bill Siegel, for comic effect. But as a 1-man company, it would take me a day or two to download all my important files from a cloud server. That’s after installing Windows and my image backup/recovery software, and restoring a recent image. Local copies can be crucial!
Can I afford to wait 2 days? Yes, but some businesses can’t.
I restore the full C: drive (OS) on the server from local backup (not cloud!). Takes about an hour. I use a backup from a few nights prior, to give time for my protection provider to find any trojan that also got restored. Then I can get workstations logged back on to the domain (or do a full restore from the server to them if they were also encrypted, adding 1-2 hours). Then I get the backup of the data drives from local backup (a simple high-quality USB drive). I do recommend having a cloud backup provider for the data, but that is for backup of the local backup. If there is a lot of data, usually there are a few folders that are needed right away, so I do those first. Only use backup software that creates proprietary files, and better yet, makes the backup drive invisible (like the old Microsoft Backup included in Windows Server). Nothing fancy, but it works.
I worked at a government agency where the head of IT services required a periodic disaster recovery test over a holiday weekend. The failures were post-mortemed and incorporated into the recovery plan. Senior management always questioned the use of time and resources, but he insisted. It’s the kind of thing that’s too often overlooked or viewed as unnecessary.
A friend of mine works for a major insurance company in Omaha. Twice a year they do a disaster recovery drill and post-mortem the results. From what I understand, they’ve never had a recovery drill go 100% to plan.
Don’t forget, ransomware is not just about inaccessible data anymore. It has taken on a secondary pain: release of the company’s secrets to anybody from competitors to the internet at large, or any person or organization in between, and possibly more than one of the above. The first ransom demand is to regain access to your data; the second is to keep it from being released.
RE: “the 69 year figure to restore isn’t possible.”
It absolutely is possible for the untested restore. Unforeseen bottlenecks can easily cause a restore that slow. I have seen many unexpectedly slow transfers and procedures due to poor planning. I have also seen last-ditch remedies of driving many hours round trip to physically retrieve the backup media. Unfortunately, cloud backups typically do not segregate media by customer, which eliminates that as an option. Again, it comes back to testing your backups by performing all manner of restores.
The reason people pay ransoms has much less to do with whether a backup strategy is viable or timely, and much, much more to do with the fact that the extortionist has complete leverage and control over timelines, demands, leaked data, the home street addresses of execs, etc. Backups are the least of a victim’s worries in those midnight hours of decision making. The actors own more than the data; they own the keys and secrets too.
I recall testing my employer’s (a bank) disaster recovery plan at the hot site we had contracted with. These were once-a-year tests and naturally involved us from the IT dept. One year we even brought some users to the site to see how good things were. Mind you, this was all done with 9-track tape, so we had to call the offsite tape storage company to send the tapes from last night to the hot site in NJ (we were in Manhattan) where we would then start the process of recreating the production system.
We get the system up and running and ask folks to start doing their jobs. The first thing the woman in charge of the Accounting department asked us was “Hey, where are our reports?” So we learned we had to ensure the reports created during the end-of-day processing were added to the final backup job. And of course, we’d have to ensure we could print them at the site (this is back in the day of the green bar paper printing on 132-column printers).
No such thing as business continuity back then.
Reading articles like this, I am both intrigued and in awe of the complexity and scale… and also really happy that I am just a home user whose backups are a small bunch of spreadsheets, docs, PDF files, and digitized family photos and videos. Even so, I have one complete data set on my laptop drive, a second on a microSD card (always inserted into my laptop), a third set on an external drive connected only when backing up, and a fourth set synced online (though with versioning).
As a standard user, I only have write access to my laptop drive – so hopefully that would be the same for any malware that manages to sneak in? Backing up or doing anything else with the other drives requires the administrator – which, of course, is also me, but requires a quick fingerprint approval.
How is it legal to run a company whose stated purpose is to help ransomware gangs get paid?
It is an old story! I have been in IT for over 40 years. In the old days backup was done to tape. Regularly a client would discover tapes were no good, e.g. after a disk crash. Until you know you can restore it, a backup is just wishful thinking.
Today’s systems are very complex, with personal devices, PCs, local servers, cloud-based servers, virtual machines, many interfaces, and massive amounts of data. Often these systems have grown organically, with different generations of architecture and software. Often few people understand the sequence and tools needed to restore things properly.
Some core banking systems are still in assembler. ISO8583 (bitmap) interfaces are not uncommon. Old hardware goes to the scrapyard, but old software goes into production.
If you do incremental data backup to the cloud you will be in for a nasty surprise if you need to restore the whole system.
Bart – I would not disparage cloud backup so quickly.
1 – You can back up to the cloud and still keep a local copy. That would help you restore quickly even while a golden, and perhaps air-gapped, copy of the data lives in the cloud.
2- You could “restore in the cloud” using Cloud DR such that you failover from backup copies in the cloud. A number of vendors offer orchestrated failover (runbooks that describe the order of recovery, pre-post scripts, and even regular failover testing). Some vendors also support failback from the cloud to an on-premises system should you decide to re-build that way.
3- Some providers support exporting backups to a cloud appliance like AWS Snowball.
I recall being asked by someone to look at the server for their business, which had crashed with a RAID failure. They’d dutifully been inserting tapes into the machine every night… BUT hadn’t tested the tapes, and it turned out that the tapes had stopped recording a while ago and they’d never noticed.
In the end they had to resort to digging through emails, and quietly getting clients to contact them to restore the records as the data was for the most part, just gone.
V 1, 2, 3, 3.1, 3.11, 95(a, b, c.d), 98(1, 2) Me, XP, Vista, 7, 8, 8.1 were all dumped before they were fixed. As we are getting close to 40 years of lack of perseverance, did anyone else notice a clear pattern of bait and switch?
It’s nearly impossible to expect any mid-size to large company to effectively recover from a disabling attack if the IT Admin people they’ve hired to oversee their operations have little or no rigorous training/certification in infrastructure continuity and disaster recovery. With today’s technology and hardware it shouldn’t be as painful a process as suggested.
Certainly the most important part of a backup. This is why it’s critical to always use the “verify backup” feature found in most backup tools, to make sure you’re not getting corrupt data.
This, alongside some regular testing and multiple backups in other places, is a good measure to have; heck, you can even use different backup tools so that if one fails you have another to fall back on.
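Verification doesn’t have to stay inside the backup tool either; a short script that compares checksums of the live files against a test restore catches both a corrupt backup and a silent restore failure. A rough sketch (the two directory paths are made-up examples):

    # Compare SHA-256 hashes of original files against a test-restored copy.
    # The two directory paths below are made-up examples.
    import hashlib
    from pathlib import Path

    SOURCE = Path("/data/projects")               # live data
    RESTORE = Path("/tmp/restore_test/projects")  # where the test restore landed

    def sha256(path: Path) -> str:
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    mismatches = 0
    for src in SOURCE.rglob("*"):
        if not src.is_file():
            continue
        restored = RESTORE / src.relative_to(SOURCE)
        if not restored.is_file() or sha256(src) != sha256(restored):
            mismatches += 1
            print(f"MISMATCH: {src}")

    print("restore verified" if mismatches == 0 else f"{mismatches} files differ")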
Interesting.
The bit about one or two days is accurate for one person. Imagine a large company: unless you have an IT person to help each user, you are down a workweek at least. Remember, they have to restore their own systems first. You are the gravy on their meat.
Secondly, how fast you can restore is based on how fast your data can be sent back to you. Remember, upload is about ten times slower than download speed. It took you overnight to copy the data to your backup, so how long will it take to get it back? And then decrypt it, and do this for every machine? And that is if there are no problems. Now double that time: you have to clean and verify those machines, restore the programs, verify the incoming data, and reset, all with angry people looking over your shoulder making suggestions, doubling the time to complete the project; easily a week per floor of an average mom-and-pop business.
Twice a year we do a full switch between data centers, just as if the building were destroyed. It has varied from a 4-hour to a 12-hour job, with failings on some things – but those were good lessons. After 6 years it is very solid now. But some of the discoveries during the exercises can only be found by doing them. Some vendors don’t provide 24-hour support (to change IP restrictions/tunnels), a load balancer didn’t like to play under extenuating circumstances; it can be a huge challenge. If there is even a tiny config issue, ouch. I feel really good that, no matter what (short of no internet for anyone), we would be able to provide full service to customers in 24 hours at the worst, 4 hours at best. As an FI we are required to do lots of preparedness, but in the end it’s up to us whether it fully supports the business and not just our legal requirements. Make life easier on yourself and be ready.
Cyka blyat.
’Mericans and Ruskies have something in common – ego and pride.
***Don’t be Lazy***
create an image and verify/test the image
air gap the image
back up data and verify/test data recovery
air gap the backup data
restore when needed
make cheese with babushka
play CoD and
give the middle finger to everyone trying to hack you, including crap middle management and incompetent higher-ups
so endeth the lesson
Ego and pride – it ain’t just the Ruskies and us.
Hmm, depending on a network connection to transfer any large amount of data, especially for a full restore, is plain silly.
I ran IT for a 30-person company where I designed a fully open-source backup system in which:
All servers had a Backup Drive which received hourly live backups of the host (with transactional DBs).
Each night a transfer was done from each Backup Drive over the network to a backup server.
The backup server had the capacity to keep weeks of backups, and these were copied daily to a Removable Drive.
Each Removable Drive was moved off-premise each day and kept for six days before being returned, in a weekly rotation.
The backup separated programs from data and config files. Any file that was written to was included.
We had spare servers that were used for testing. I could restore anything/everything within hours, or as fast as we could rebuild. The data was easily recovered at bus speed since the restore was on drives and not over the network.
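The nightly transfer and the removable-drive rotation were nothing exotic; in today’s terms it boils down to something like the sketch below (hostnames and paths are stand-ins, and it assumes rsync and SSH access between the machines):

    # Sketch of the nightly step: pull each server's Backup Drive onto the
    # backup server, then mirror the accumulated sets onto whichever removable
    # drive is mounted for the day's off-site rotation. Hostnames and paths
    # are stand-ins; rsync and SSH access between the machines are assumed.
    import datetime
    import subprocess
    from pathlib import Path

    SERVERS = ["web1", "db1", "files1"]   # hypothetical hosts
    REMOVABLE = "/mnt/removable"          # today's off-site drive

    today = datetime.date.today().isoformat()

    for host in SERVERS:
        dest = Path(f"/srv/backups/{host}/{today}")
        dest.mkdir(parents=True, exist_ok=True)
        subprocess.run(["rsync", "-a", f"{host}:/backup_drive/", f"{dest}/"], check=True)

    # Mirror everything onto the removable drive that goes off-premise in the morning.
    subprocess.run(["rsync", "-a", "--delete", "/srv/backups/", f"{REMOVABLE}/"], check=True)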
All it would take to make the whole thing useless would be an attack with a delayed activation of sufficient duration to infect every backup. SmartOS has it right with the OS being R/O and all it takes to restore it is a reboot.
I buy identical high-end HP workstation laptops off-lease and keep 3 synced systems. There’s a ready supply on eBay. When I got a blue screen (happened last week, first time in 3 yrs), I popped it out of the dock, stuck in a backup machine that I keep 200 yds away at my workshop/test lab, and was back in business in 5 minutes while I restored a system image to the machine that died.
…a number of years ago I did some work for a major airline – they had recently tested failing over to the backup system, as they had been doing every 6 months prior to that, except this time, when they pulled the cables to the EMC array and flipped the breakers as they had always done, the system did not fail over correctly. So they reversed the process, flipped the breakers back on, restored the cables, and voila – oops, the system did not come back up on the normal operational hardware. Then they tried to go to the warm site and that also failed. Uh oh: they could not launch planes worldwide for 6 hours. Later I came in to try and identify what other gremlins were waiting to haunt them…
…lesson – do the recovery process over and over until it becomes second nature, and then expect it to fail and learn how to recover, again…
Why do companies put their critical data in a place that is accessible through the Internet in the first place? Doesn’t common sense tell anyone that that is a bad idea?
That isn’t what happens. Hacks usually involve some pivoting: the attackers compromise something that is exposed, then move from there to internal systems that were never meant to be reachable from the internet.
The article and comments were really interesting to read. The most I have to worry about is my personal documents and pictures. (I have work stuff, but while it would be fun to get involved in that part of it, it’s not part of my job, though my favorite projects are backed up. So, as the saying goes, not my circus, not my monkeys.) My documents are split between a cloud storage provider and a NAS. The cloud storage is generally documents I’m actively working with. The NAS holds pictures, scanned files, music, etc. Once a week the NAS downloads my cloud files. Every night the photos from my phone back up to my NAS. The NAS is backed up once or twice a month, once to a drive that sits on the shelf below it (though it is always unplugged unless I’m backing up) and then to a hard drive that I keep in our fireproof box (each uses a different method/software). Finally, as a last resort, I have a different cloud provider that isn’t connected to any computer, to which I’ve uploaded critical documents like mortgage information, marriage/birth certificates, pictures of our house, etc.