Browse the comments on virtually any story about a ransomware attack and you will almost surely encounter the view that the victim organization could have avoided paying their extortionists if only they’d had proper data backups. But the ugly truth is there are many non-obvious reasons why victims end up paying even when they have done nearly everything right from a data backup perspective.
This story isn’t about what organizations do in response to cybercriminals holding their data hostage, which has become something of a best practice among most of the top ransomware crime groups today. Rather, it’s about why victims still pay for a key needed to decrypt their systems even when they have the means to restore everything from backups on their own.
Experts say the biggest reason ransomware targets and/or their insurance providers still pay when they already have reliable backups is that nobody at the victim organization bothered to test in advance how long this data restoration process might take.
“In a lot of cases, companies do have backups, but they never actually tried to restore their network from backups before, so they have no idea how long it’s going to take,” said Fabian Wosar, chief technology officer at Emsisoft. “Suddenly the victim notices they have a couple of petabytes of data to restore over the Internet, and they realize that even with their fast connections it’s going to take three months to download all these backup files. A lot of IT teams never actually make even a back-of-the-napkin calculation of how long it would take them to restore from a data rate perspective.”
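That back-of-the-napkin calculation is simple enough to write down. The sketch below is purely illustrative: the data volume, link speed, and the 70% efficiency discount for protocol overhead and contention are assumed numbers, not figures from the article.

```python
# Rough restore-time estimate from data volume and link speed.
# All numbers here are illustrative assumptions.

def restore_time_days(data_terabytes: float, link_mbps: float,
                      efficiency: float = 0.7) -> float:
    """Days needed to pull `data_terabytes` over a `link_mbps` connection.

    `efficiency` is an assumed discount for protocol overhead and contention.
    """
    bits_to_move = data_terabytes * 1e12 * 8        # terabytes -> bits
    effective_bps = link_mbps * 1e6 * efficiency    # usable bits per second
    return bits_to_move / effective_bps / 86_400    # seconds -> days

# Two petabytes (2,000 TB) over a 1 Gbps line works out to roughly 265 days.
print(f"{restore_time_days(2_000, 1_000):.0f} days")
```

Even rough numbers like these make it clear when physically shipping drives from the backup site will beat pulling the data over the wire.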
Wosar said the next most-common scenario involves victims that have off-site, encrypted backups of their data but discover that the digital key needed to decrypt their backups was stored on the same local file-sharing network that got encrypted by the ransomware.
The third most-common impediment to victim organizations being able to rely on their backups is that the ransomware purveyors manage to corrupt the backups as well.
“That is still somewhat rare,” Wosar said. “It does happen but it’s more the exception than the rule. Unfortunately, it is still quite common to end up having backups in some form and one of these three reasons prevents them from being useful.”
Bill Siegel, CEO and co-founder of Coveware, a company that negotiates ransomware payments for victims, said most companies that pay either don’t have properly configured backups, or they haven’t tested their resiliency or the ability to recover their backups against the ransomware scenario.
“It can be [that they] have 50 petabytes of backups … but it’s in a … facility 30 miles away.… And then they start [restoring over a copper wire from those remote backups] and it’s going really slow … and someone pulls out a calculator and realizes it’s going to take 69 years [to restore what they need],” Siegel told Kim Zetter, a veteran Wired reporter who recently launched a cybersecurity newsletter on Substack.
“Or there’s lots of software applications that you actually use to do a restore, and some of these applications are in your network [that got] encrypted,” Siegel continued. “So you’re like, ‘Oh great. We have backups, the data is there, but the application to actually do the restoration is encrypted.’ So there’s all these little things that can trip you up, that prevent you from doing a restore when you don’t practice.”
Wosar said all organizations need to both test their backups and develop a plan for prioritizing the restoration of critical systems needed to rebuild their network.
“In a lot of cases, companies don’t even know their various network dependencies, and so they don’t know in which order they should restore systems,” he said. “They don’t know in advance, ‘Hey if we get hit and everything goes down, these are the services and systems that are priorities for a basic network that we can build off of.'”
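One way to capture those dependencies in advance is to write them down as a graph and derive the restore order from it. The sketch below is a minimal illustration; the service names and the dependency map are hypothetical, not drawn from any real network.

```python
# Minimal sketch: derive a restore order from known service dependencies.
from graphlib import TopologicalSorter

# service -> services it depends on (which must be restored first)
dependencies = {
    "active_directory": set(),
    "dns":              set(),
    "dhcp":             {"dns"},
    "file_server":      {"active_directory", "dns"},
    "erp_database":     {"active_directory"},
    "erp_app":          {"erp_database", "file_server"},
}

# A topological order is a defensible restore sequence: every service comes
# after everything it depends on.
print(list(TopologicalSorter(dependencies).static_order()))
# e.g. ['dns', 'active_directory', 'dhcp', 'file_server', 'erp_database', 'erp_app']
```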
Wosar said it’s essential that organizations drill their breach response plans in periodic tabletop exercises, and that it is in these exercises that companies can start to refine their plans. For example, he said, if the organization has physical access to their remote backup data center, it might make more sense to develop processes for physically shipping the backups to the restoration location.
“Many victims see themselves confronted with having to rebuild their network in a way they didn’t anticipate. And that’s usually not the best time to have to come up with these sorts of plans. That’s why tabletop exercises are incredibly important. We recommend creating an entire playbook so you know what you need to do to recover from a ransomware attack.”
For the person at home, backing up should be much easier. You can clone a drive and store it offline, or keep your documents under one directory and back up just that directory and its subdirectories. Most applications don’t need to be backed up unless you’re using something old, and in that case it’s probably insecure anyway.
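A minimal sketch of that one-directory approach, assuming a placeholder mount point for the external drive (this copies files rather than cloning a whole drive):

```python
# Copy ~/Documents to a dated folder on an external drive and sanity-check
# the file count. The destination mount point is a placeholder.
import shutil
from datetime import date
from pathlib import Path

SOURCE = Path.home() / "Documents"                   # the one directory being backed up
DEST = Path("/Volumes/BackupDrive") / f"docs-{date.today():%Y-%m-%d}"

shutil.copytree(SOURCE, DEST)                        # recursive copy; fails if DEST already exists

copied = sum(1 for p in DEST.rglob("*") if p.is_file())
original = sum(1 for p in SOURCE.rglob("*") if p.is_file())
print(f"copied {copied} of {original} files")
```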
Companies run into tons of problems because they often rely on old apps (commercial products as well as in-house builds); if they lose those apps they are hard to recreate, and the data isn’t easily imported into other tools. Of course, that is also why there are so many security issues: companies insist on supporting old products because they don’t want to spend money to update them.
Most at home data doesn’t require much disk space until you get to photos and videos. People insist on keeping tens of thousands of photos that no one really cares about.
On Macs I liked SuperDuper although that is getting dated now.
I never really thought about the time involved in restoring backups; that is kind of amusing. I definitely would have tried to test restoration of data, but in a production environment people often resist that even though it is critical. I’ve never had to run a large company or facility, so I can understand that things get complicated very quickly, but that isn’t an excuse to pretend problems don’t exist.
I had to restore my home computer from a backup and it was no big deal. My case wasn’t a ransom virus; I’ve never had one of those. Mine was a hard drive crash. My machine wouldn’t run, so I bought a new hard drive and paid a tech to restore my system. Then I restored my files from my Google backup, which took minutes. Finally I had to re-load some programs that I use regularly, which took a little longer. But I can see how for a business with much more data this could be a bigger problem. I was very glad I paid for my Google backup, which costs me about $2 per month for 100GB, I think?
Done this multiple times myself. I consider my local storage temporary.
Thank you for the article. I run IT at a defense contractor and have my private keys stored on an encrypted share on a network store – yikes! I just made a few copies of our private keys and tossed them onto a few USB drives.
The best way to ensure your computer won’t be held for ransom: run anything but Windows and stop using email for anything important.
I’m just an ordinary Joe and always take time to read Krebs. If anything, there is ALWAYS something to learn & improve upon as we try and keep our “stuff” secure. Thanks again for these “nuggets.”
@Christian, while your advice is accurate, I don’t think either suggestion is realistic. But if I were to expand on your theme: staying off the internet altogether is the most effective way to make sure your computer won’t be held for ransom. That sounds unreasonable even to me, since right this moment I am on the internet. For business, none of these three recommendations are good. The better way to protect yourself (and your organization) is with a well-thought-out, vetted and operationalized security program that conforms to a well-thought-out risk acceptance posture. Testing backup recovery is one component of a security program that is frequently neglected due to deficiencies (excuses) such as time, money and resources. As this article points out, everyone should develop an end-to-end restoration plan, test it frequently, and modify the plan as needed; lather, rinse, repeat as business as usual. “Are the accountants filing taxes now? Then we should be testing and maturing our incident response plan now as well.”
I agree 100%, Christian. That’s why I switched to Linux over 20 yrs ago! And I continue to use multiple backups as well. -David
This is such an important contribution.
I shared this on a chemical enterprise blog.
Keep up this important work.
I guess it all boils down to profits instead of principles. If it is cheaper to pay for a key than to restore, they swallow the less bitter pill.
I run a Photo – Ciné Film – Audio – VHS archive. I’m also a photographer and have lots of my own data. I decided that instead of going to LTO tape and constantly backing up the same data, I would go with optical disc to form a library. When a project is done I archive the data on M-Disc and don’t have to keep fooling with it, possibly losing something in translation after copying it for the umpteenth time onto tape.
None of the projects are large enough to justify their own tape, so the data would have to be constantly recopied and merged with new data if I used tape. As a project is being developed I put it on AZO DVD or BD-R for temporary backup.
I had been using the various sizes of M-Disc, but the 4.7GB M-Disc has been discontinued. The rest of the M-Disc line has skyrocketed in price over the last few months (the 100GB M-Disc has gone from $241 to $298). I used the 4.7GB size mainly for the VHS collection – each digitized tape transcribed with a DVD recorder got its own M-Disc. To make matters worse, new DVD recorders are pretty much extinct now as well.
I’m lucky in that I can take my main computer offline, so I don’t worry much about attacks on it. I started doing that after I heard one of Microsoft’s forced updates had deleted data. The second computer is not used for email or web surfing, except for internet use with known entities I’ve dealt with before. The third computer is used for everything, and data is constantly backed up from it and eventually transferred to the other machines.
But the ultimate goal for me is to put the bulk of the data on M-Disc and file it in the library. To back up the M-Disc I also use BD-R, so there are at least two copies of a disc. As projects get worked on there are usually more copies of the data in various stages of completion; sometimes there are a dozen or more backups. BD-R is very archival and the discs cost about $0.40 each, so not a big deal.
I’m sure my system is no good for giant companies, but that is how I do it. For the everyday fast backups it is HDD, SSD and some thumb drives. But these things are not archival; they are just short term.
I’m hoping they come out with laser-engraved quartz backups. All of the AZO DVDs I’ve tested deteriorate and become useless over time. If you put an AZO or gold MAM-A DVD in the sun, it is ruined after 3 weeks. If you put an M-Disc in the sun, it is fine after a year. Data has to be engraved in a hard substrate to last for archival use. Then all you have to worry about is finding a reader for it.
The First Rule of Backup Systems: “Any backup system that is not tested regularly is not a backup system.” This is true whether we are talking about computers, electrical generators, sump pumps or anything else. Any IT organization that does not understand and practice this concept should have their competence questioned.
Here’s where people get into trouble: Not having software license codes backed-up!
Put these in a text document so any program can open it, then print it out and store it in a safe place.
Companies that use big data, like banks, call this “disaster recovery”. The ones that have their heads screwed on straight will have regular DR exercises. This involves simulating an actual disaster, and then seeing how fast and accurately they can get the system up and running again. This also usually involves bringing up the system from scratch using cold install versions of operating systems and software, and backups of data. They can’t afford to take production systems down for this testing, so they use redundant systems, which is a fair test since that’s what they might actually have to use after a real disaster. The point is that you learn an awful lot from these exercises, including whether your DR plan will actually succeed in producing a working revived system, whether your backup process is preserving everything you need for a full system restore, and how long the whole mess is going to take. If you’re just backing up your data and crossing your fingers then you’re betting against Murphy that you’ll be able to recover in a timely manner. Murphy usually wins.
Exactly right. My company did this, even back before the term “big data” was invented.
Occasionally Murphy is sent for a data center fire suppression system recertification, sets it off, and you get to test how fast your business continuity failover plan kicks in.
This is a tremendous contribution.
I also shared this in my Personal blog
Keep doing what you’re doing.
When I ran a network for a small government agency, I regularly tested my backups but always worried about the backups getting corrupted or infected; normally I would have several saved backups from several days. But backing up data is an absolute necessity, and when a user corrupts or deletes an important file, you look like a hero when you magically bring it back.
I think this article misses the actual primary reason enterprises and corporations pay for decryption keys. Often they DO have backups, the backups are good and they CAN recover. But the ransomware gangs are also extortionists and threaten to release your private data publicly if you don’t pay the ransom. In this case, even if you can recover from backup, you pay the ransom to (hopefully) prevent a public data breach.
Tim,
You might have missed this part at the top of the story?
“This story isn’t about what organizations do in response to cybercriminals holding their data hostage, which has become something of a best practice among most of the top ransomware crime groups today. Rather, it’s about why victims still pay for a key needed to decrypt their systems even when they have the means to restore everything from backups on their own.”
Tim’s point is that one pays the ransom to avoid public disclosure of their private data, which is completely missed here. The challenge of recovery from backup is a well-known issue for most, given that 99% of orgs are filled with incompetent CISOs and cybersecurity staff who just have some useless certifications and lack a security engineering background.
The problem isn’t how to back up, or when, but where. Say you made a backup of your photos 10 years ago and it was made on tape. Would the tape hardware used for the backup still work today? Would the same model be available in case your old one no longer works?
In an enterprise, admins can move the data from tape storage to hard drives as a sort of “backup” in case the tape hardware stops working when it’s needed. But at home? One doesn’t have that many resources.
A great read with lots of good information and food for thought – thank you!
Buy 7 external hard drives with enough space to absorb your live drive/data.
Label them by day of the week and do a full drive image daily! Don’t just do the data files unless you want to segregate the data and the OS, which will require both OS and data backups.
Routinely test that the clone drive matches the live image (see the checksum sketch after this comment).
Repeat with the next drive in the sequence, daily.
MAKE SURE YOU REMOVE THE BACKUP AND LEAVE IT ALONE and UNPLUGGED!! Preferably in a fireproof safe; use more than one safe.
Again, I strongly recommend you do testing! Randomly put each external in a duplicate of the live machine for integrity confirmation.
Lastly, I have run Linux Mint for years and a few months ago my computer got corrupted!
We are no longer safe.
It may sound complicated but it really isn’t. For extra comfort take one or more drives home.
Test routinely.
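As a rough sketch of that “test that the clone matches the live image” step: the comparison below hashes every file under two placeholder paths and reports the differences. A true full-image clone would be verified block-for-block, but the idea is the same.

```python
# Hash every file under the live tree and the backup tree, then report
# what is missing or different. Both paths are placeholders.
import hashlib
from pathlib import Path

def tree_hashes(root: Path) -> dict[str, str]:
    """Map each file's path relative to `root` to its SHA-256 digest."""
    digests = {}
    for path in root.rglob("*"):
        if path.is_file():
            # Fine for a sketch; stream reads for very large files in practice.
            digests[str(path.relative_to(root))] = hashlib.sha256(path.read_bytes()).hexdigest()
    return digests

live = tree_hashes(Path("/data/live"))
clone = tree_hashes(Path("/mnt/backup_monday"))

missing = live.keys() - clone.keys()
mismatched = [name for name in live.keys() & clone.keys() if live[name] != clone[name]]
print(f"{len(missing)} files missing from the backup, {len(mismatched)} with different contents")
```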
This NIST report contains great backup and restoration advice that applies to anyone, not just MSPs (much of it mirrors the article): https://www[dot]nccoe[dot]nist[dot]gov/sites/default/files/library/supplemental-files/msp-protecting-data-extended%5Bdot%5Dpdf The NCCoE has also published more extensive publications on the subject here: https://www[dot]nccoe[dot]nist[dot]gov/projects/building-blocks/data-integrity/detect-respond
According to The Lazarus Heist, even if you pay they don’t give you the key, so you are completely stuffed, so to speak. I have worked in the industry for 30 years and the things I have come across would make you shake your head in wonder.

At one place, when the UNIX admin went on leave I was asked to retrieve a large file from tape backup. I found that only 3 backups out of the previous month were not corrupted; that is, 27 backups were useless. The admin never bothered checking.

At another place the software source files were deliberately deleted to “save space”. The decompiler was as useless as the IT manager. The development mainframe had no backups at all; this was the decision of the facilities manager. The loss of a drive suite meant all work for 45 developers was lost and had to be re-coded from scratch.

The backups were on slow sequential-read tapes, because you never need them in a hurry… This was where we bought bill files for millions and our margin was 25%. The overnight money market paid 15%, so every day you didn’t process those files was the equivalent of shovelling money onto a fire. Recovering the files from the tapes took eons. Note that an awk or sed on one of these files took 45 minutes per command to finish. Just finding one record on disk took 35 minutes. Recovering from a system backup… well, it would be easier and quicker to get a job with another company.

Then there was the director in a high-profile government department who said, “You will build the system without any security and put it live on the internet, do you understand that?” The government chief information officer did not agree.