November 9, 2015

One of the more common and destructive computer crimes to emerge over the past few years involves ransomware — malicious code that quietly scrambles all of the infected user’s documents and files with very strong encryption.  A ransom, to be paid in Bitcoin, is demanded in exchange for a key to unlock the files. Well, now it appears fraudsters are developing ransomware that does the same but for Web sites — essentially holding the site’s files, pages and images for ransom.

Image: Kaspersky Lab

This latest criminal innovation, innocuously dubbed “Linux.Encoder.1” by Russian antivirus and security firm Dr.Web, targets sites powered by the Linux operating system. The file currently has almost zero detection when scrutinized by antivirus products at Virustotal.com, a free tool for scanning suspicious files against dozens of popular antivirus products.

Typically, the malware is injected into Web sites via known vulnerabilities in site plugins or third-party software — such as shopping cart programs. Once on a host machine, the malware will encrypt all of the files in the “home” directories on the system, as well as backup directories and most of the system folders typically associated with Web site files, images, pages, code libraries and scripts.

The ransomware problem is costly, hugely disruptive, and growing. In June, the FBI said it received 992 CryptoWall-related complaints in the preceding year, with losses totaling more than $18 million. And that’s just from those victims who reported the crimes to the U.S. government; a huge percentage of cybercrimes never get reported at all.

ONE RECENT VICTIM

On Nov. 4, the Linux Web site ransomware infected a server used by professional Web site designer Daniel Macadar. The ransom message was inside a plain text file called “instructions to decrypt” that was included in every file directory containing encrypted files:

“To obtain the private key and php script for this computer, which will automatically decrypt files, you need to pay 1 bitcoin(s) (~420 USD),” the warning read. “Without this key, you will never be able to get your original files back.”

Macadar said the malware struck a development Web server of his that also hosted Web sites for a couple of longtime friends. Macadar was behind on backing up the site and the server, and the attack had rendered those sites unusable. He said he had little choice but to pay the ransom. But it took him some time before he was able to figure out how to open and fund a Bitcoin account.

“I didn’t have any Bitcoins at that point, and I was never planning to do anything with Bitcoin in my life,” he said.

According to Macadar, the instructions worked as described, and about three hours later his server was fully decrypted. However, not everything worked the way it should have.

“There’s a decryption script that puts the data back, but somehow it ate some characters in a few files, adding like a comma or an extra space or something to the files,” he said.

Macadar said he hired Thomas Raef — owner of Web site security service WeWatchYourWebsite.com — to help secure his server after the attack, and to figure out how the attackers got in. Raef told me his customer’s site was infected via an unpatched vulnerability in Magento, a shopping cart software that many Web sites use to handle ecommerce payments.

Check Point detailed this vulnerability back in April 2015, and Magento issued a fix, yet many smaller ecommerce sites fall behind on critical updates for third-party applications like shopping cart software. Also, there are likely other recently published exploits that can expose a Linux host and any associated Web services to attackers and to site-based ransomware.

INNOVATIONS FROM THE UNDERGROUND

This new Linux Encoder malware is just one of several recent innovations in ransomware. As described by Romanian security firm Bitdefender, the latest version of the CryptoWall crimeware package (yes, it is actually named CryptoWall 4.0) displays a redesigned ransom message that also now encrypts the names of files along with each file’s data! Each encrypted file has a name made up of random numbers and letters.

Update: 6:09 p.m. ET: Bitdefender has published a blog post stating that the ransomware that is the subject of this post contains a flaw that allowed the company to decrypt files encrypted by this malware. See their post here.

Original story:

And if you’re lucky, the ransomware that hits your computer or organization won’t be full of bugs: According to the BBC, a coding mistake in a new ransom threat called Power Worm means that victims won’t get their files back even if they pay up. Lawrence Abrams over at BleepingComputer.com (one of the first blogs added to this site’s blogroll) was the first to write about this innovation, and his writeup is worth a read.

Traditional ransomware attacks also are getting more expensive, at least for new threats that currently are focusing on European (not American) banks. According to security education firm KnowBe4, a new ransomware attack targeting Windows computers starts as a “normal” ransomware infection, encrypting both local and network files and throwing up a ransom note for 2.5 Bitcoin (currently almost USD $1,000). Here’s the kicker: In the ransom note that pops up on the victim’s screen, the attackers claim that if they are not paid, they will publish the files on the Internet.

Crim-innovations.

Well, that’s one way of getting your files back. This is the reality that dawns on countless people for the first time each day: Fail to securely back up your files — whether on your computer or Web site — and the bad guys may eventually back them up for you!

Oh, the backup won’t be secure, and you probably won’t be able to remove the information from the Internet if they follow through with such threats.

The tools for securely backing up computers, Web sites, data, and even entire hard drives have never been more affordable and ubiquitous. So there is zero excuse for not developing and sticking with a good backup strategy, whether you’re a home user or a Web site administrator.

PC World last year published a decent guide for Windows users who wish to take advantage of the OS’s built-in backup capabilities. I’ve personally used Acronis and Macrium products, and find both do a good job making it easy to back up your rig. The main thing is to get into a habit of doing regular backups.

There are good guides all over the Internet showing users how to securely back up Linux systems (here’s one). Other tutorials are more OS-specific. For example, here’s a sensible backup approach for Debian servers. I’d like to hear from readers about their backup strategies — what works — particularly from those who maintain Linux-based Web servers running Apache or Nginx.
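For readers who want a concrete starting point, here is a minimal sketch of the “pull” approach many admins favor: the backup host logs into the Web server and fetches the files, so the Web server itself never holds credentials for the backup box. The hostnames and paths below are placeholders, not a recommendation of any particular product.

    # Run on the BACKUP host: pull a dated snapshot from the web server.
    # --link-dest hard-links files unchanged since the last run, so each
    # daily snapshot costs little extra disk space.
    rsync -az --link-dest=/srv/backups/web/latest \
        backupuser@webserver:/var/www/ \
        /srv/backups/web/$(date +%F)/
    ln -sfn /srv/backups/web/$(date +%F) /srv/backups/web/latest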

It’s worth noting that the malware requires the compromised user account on the Linux system to be an administrator; operating Web servers and Web services as administrator is generally considered poor security form, and threats like this one just reinforce why.
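For those who want to check, here is one quick way to see which account a Web server is running as on a typical Linux host (the commands are illustrative; process and file names vary by distribution):

    # List the accounts your web server processes actually run as; the
    # workers should be an unprivileged user such as www-data, never root.
    ps -eo user,comm | grep -E 'apache2|httpd|nginx'
    # On Debian-style Apache installs the account is set via APACHE_RUN_USER
    # in /etc/apache2/envvars; in nginx.conf it is the "user www-data;" line.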

Also, most ransomware will search for other network or file shares that are attached or networked to the infected machine. If it can access those files, it will attempt to encrypt them as well. It’s a good idea to leave your backup medium disconnected from the system unless and until you are backing up or restoring files.

For his part, Macadar said he is rebuilding the compromised server and now backing up his server in two places at once — using local, on-site backup drives as well as remote (web-based) backup services.

Update, 9:48 a.m., ET: Added reference to BBC article.


66 thoughts on “Ransomware Now Gunning for Your Web Sites”

  1. The Weather Network

    Excellent write up Brian.

    I personally use the FOG Project for full server image backups, and for forensics.

  2. Chris Nielsen

    I had thought that Bitcoin was just a pure-play cyber Ponzi scheme, but I see now how its real purpose is to make untraceable extortion payments possible.

    I can’t judge anyone for paying the extortion demanded, but I wish they wouldn’t even if the cost is much higher. I think it will result in us all paying more later on.

    1. Jerry

      I agree; hackers will be emboldened until end users realize the vital importance of successfully backing up their files.

      1. Sunil Srivastava

        And Jerry, what if the backed-up file includes a worm that encrypts the entire backup at a later date? In that case, simply backing up won’t kill the issue for long.

    2. Philippe Verdy

      I also agree that Bitcoin should never have been authorized as a legal currency, because there’s simply no working regulator, no control of fraud using it, and no way for justice to investigate.
      Bitcoin should be killed by finding flaws in its algorithms so that its value drops to almost zero.

      We really need currencies that are trustworthy and controlled, even if this has some costs in terms of privacy: that’s a place where governments are better placed, since they cannot do what they want without public inspection of what has been done or without electoral sanctions. Official currencies are certainly much more democratic than cryptocurrencies, which should be banned and made illegal so that their trading will stop. Criminals like those above will have more difficulty covering their tracks, and there will be some room for judicial action, including international cooperation.

      Campaign for the abolition of cryptocurrencies, make their trading illegal on all currency markets, and ban transactions with banks that process them, or sanction those banks with huge fines several factors above the total of transactions, plus additional taxes per transaction.

      If these banks don’t want to pay or don’t want to stop their transactions, block them, or seize them by nationalization and transfer the amounts to the IMF (in order to help finance poor countries, or help recover from financial and commercial disasters caused by those criminals) or to UN-led humanitarian programs.

      If it becomes impossible to buy Bitcoin legally, the criminals won’t be able to demand Bitcoin for their ransomware, because it will be impossible for victims to pay anything.

      But more urgently, it is the worldwide banking system that must be better protected, notably against credit card fraud. Many banks fail to work on this correctly; they pay the insurance, then pass that cost back through the transaction fees or account-holding fees that everyone must pay (with constantly rising costs).

      Maybe we will need to return to paper and gold money, and electronic payments will collapse because there are too many frauds. The amount of money stolen from the economy and banks has never been so high since money became massively electronic. In addition, electronic money has been abused on “legal” trade markets with too many abusive practices (including many fake investments that never help the economy and profit only a few, such as derivative products on secondary markets, massively used also for money laundering and for financing crime and terrorism).

      1. Lee

        This is flawed in the sense that before bitcoins the attackers were demanding other forms of “anonymous” payment, such as money orders, cash cards, etc. Bitcoins aren’t the problem here, they’re just demonized because they’re the current easy-to-use method of accepting payment.

    3. timeless

      In principle, Bitcoin transactions (including ransoms) are traceable. This is sometimes called the ledger, or blockchain [1].

      They’re only “untraceable” if someone is willing to launder the transaction.

      But, for the purposes of basic “blame”, we could say that the original payment is permanently tainted (see Receiving stolen property [2]), chase it to its current owners, and work backwards until we identify the launderer, or someone who fails to prove they aren’t the launderer. Yes, there may be a statute of limitations, but it’s probably 10+ years, and we’re talking about a crime that happened this year.

      When we identify the launderer, we can offer a “reduced penalty” for identifying the washed currency transaction. — See Money Laundering [3]

      Given that the time stamps and owners are identifiable through the ledger, this would be possible.

      From there, you can chase the currency to follow the “laundered” currency. Again, you chase it to its current owners and then demand that they identify the entity that provided them w/ the currency — i.e. prove they aren’t that entity. You follow this chain to the criminals.

      You can’t do this with paper currency because no one keeps a record of individual serial numbers, and there are too many cash transactions. But the bitcoin ledger keeps those records, and there are fewer than 150 transactions per minute, so it’s not really a huge search-space.

      [1] https://blockchain.info/wallet/bitcoin-faq
      [2] http://www.encyclopedia.com/doc/1G2-3437703677.html
      [3] https://www.irs.gov/irm/part9/irm_09-005-005.html

  3. Mike

    I don’t need to know anything about computers. That’s someone else’s job. If my website gets infected, it’s someone else’s job to fix it. It’s going to be someone else’s fault it got infected anyway.

    It’s all in the cloud and controlled by a dozen other companies that will never have your best interest in mind.

    Lock your footlocker!

    1. brodie7838

      > I don’t need to know anything about computers. That’s someone else’s job.

      > It’s all in the cloud and controlled by a dozen other companies that will never have your best interest in mind.

      Not sure if you’re being ironic, or if the hypocrisy was just lost. Logic would dictate that if you knew “about computers”, you wouldn’t be hopelessly obligated to use cloud services which you further know nothing about, apparently.

      Generally speaking, a system doesn’t just become infected on its own (at least, not for 99.99% of home and office users who employ a NAT router on their network). They become infected because the operator clicked on or otherwise activated malicious code on a system. This is especially true of CryptoWall variant infections, which use the classic and *ancient* attack vector of email attachments and download links – both of which would be 100% ineffective if end users would *just.stop.using.them* and think before they clicked.

      So until CryptoWall finds a way to move your mouse, download that attachment, and execute the binary inside, you’ve got no one to blame but yourself for not understanding the basic operation of the primary tool used by modern workers in nearly every industry in the world. What you’re effectively saying by denying that you need to know “about computers” is: “I don’t know how to use this hammer, and when I smash my thumb it’s anyone’s fault but my own.”

      Good luck with that, I suppose I shouldn’t complain too much as this just ensures job security for me.

      1. Allen Baranov

        I’m also not sure whether the poster is naive or (subtly) acting that way.

        The fact is that when the sh.. hits the fan and your website is lost, the best you’ll get from any company is “Er, sorry, hey.”

        It is your stuff and if you are not backing it up yourself then tough luck.

      2. Mike

        That section of the comment was meant to mock those who usually have that kind of mentality with regard to a computer/server they are responsible for.

        I maintain multiple servers and even more client computers as a part of what I do. Making backups and having backups to those backups is a must (part onsite and part offsite).

        Many infections do happen as a result of the user doing things they shouldn’t, but sometimes it’s about being able to upload from a remote location. There are many ways in. This is the reason for having a responsible person dealing with things.

        I am not a fan of the cloud. I can see why it becomes a somewhat natural progression though. But I do completely agree with the idea of NOT working with a web server as root. I also agree that it is best to have your source files on another machine and update the server from that.

      3. Thr3atHunt3r

        Ransomware can also be delivered via malvertising, which does not require a user clicking on an infected attachment or malicious link. All they need to do is visit a website (and it can be a legitimate one) that is serving up malicious ads that exploit a vulnerable browser plugin (e.g. Adobe Flash) or some other vulnerability on said machine, and it’s Game Over!

        Just thought I would add that to the conversation.

  4. Captain Obvious

    Two words: version control. If your website itself is the primary data source, you’re doing it wrong. Of course, nearly everyone is doing it wrong, so there’s that. Basically, if your website is hacked, you should just re-build it from the last thing you checked in to your version control system. You don’t back up: you build from a source that isn’t the website.
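    For example, a rough sketch of that rebuild, assuming the site lives in a git repository on a separate host (the names here are placeholders):

        # Throw away the compromised document root and rebuild it from the
        # last known-good commit on a host the attacker never touched.
        git clone --depth 1 git@repohost:site.git /var/www/site-clean
        # ...then repoint the web server's document root at the clean copy.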

    1. Vito

      Actually, I couldn’t understand why this would even be a problem for a website. My initial thought was, “Wait…you mean there are actually people who don’t do configuration management (CM) for their websites?”

      Apparently so.

      I never upload content that isn’t already stored, versioned, and documented in my CM system. It never occurred to me to do it any other way. I mean, just the fact that I’m trusting someone else’s server to host my sites is enough reason not to upload anything that isn’t already safely stored on secured systems under my own control.

      If everyone did configuration management then website ransoming wouldn’t be possible.

      1. Ryan

        I’d love to see how you use CM/VC to store user-generated database content because generally speaking, most customers don’t want to issue a pull request to make a purchase on your website, and pretty sure PCI guidelines would frown on storing payment info in VC.

      2. B Tasker

        Huh, so it isn’t just me being paranoid then. That’s a good sign 🙂

        All my systems push backups to at least two locations – one local, the other pushed to cloud storage (but encrypted locally first) – in this case Amazon S3.

        I have incremental as well as snapshot backups with sane retention times.

        Most of the systems also run a daily sweep of binaries etc. and alert if they’ve changed in any way (hashes are stored off-server). A sweep failing to run will generate an alert from the hashstore.
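        In case it helps anyone, the encrypt-then-push step can be as simple as this sketch (assuming the AWS CLI and a GnuPG key; the bucket and recipient are placeholders):

            # Encrypt locally BEFORE the data leaves the box, then push to S3.
            tar -cz /var/www | gpg --encrypt -r backups@example.com \
                > /tmp/www-$(date +%F).tar.gz.gpg
            aws s3 cp /tmp/www-$(date +%F).tar.gz.gpg s3://my-backup-bucket/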

      3. Cari

        Though, what about things like databases that are dynamically updated with regular use? They’re integral for the CMSes that are all the rage right now, so those for sure need to get backed up regularly.

        There’s also an inherent issue with CMS use anyway, in that most users work on a live site because CMSes are modular. There’s usually a plugin/theme for whatever they need, so they generally don’t bother using dev environments or versioning. Problem is, there’s a bazillion mommy-bloggers who are content with shared hosting and WordPress, and they don’t care about any of these concerns so long as it’s up.

        It’s also worth saying that it can be a bit of a pain to backup/restore databases, especially if they were allowed to get larger than you’d expect (see: wordpress transient data). If something’s too big a hassle, people are less likely to do it unless they get burned real bad. (In my experience, we had guys getting burned on a nearly-daily basis, but they just didn’t care because they weren’t doing any of this legwork.)

        Databases can be troublesome to restore from unless you’re importing a dump, too. You also have to consider that making a dump from the live environment in the first place incurs resource overhead on that live environment, so it usually is only done at off-peak times of the day/week, limiting the freshness of those backups. Because of that some admins like to take the whole SQL directory (the raw content). From that point they can either have the backup server run an SQL service from that raw content so it can generate a dump, or to restore the entire SQL directory to the production server to effectively turn back the clock.
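        For what it’s worth, the off-peak dump can be a single cron entry, something like this sketch (the database name and paths are placeholders; credentials would live in ~/.my.cnf):

            # 03:30 daily: consistent InnoDB dump without locking the live site.
            # Note that % must be escaped as \% inside a crontab.
            30 3 * * * mysqldump --single-transaction shopdb | gzip > /backup/shopdb-$(date +\%F).sql.gz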

        Anyway, neat stuff. It’s a nifty topic.

    2. timeless

      Version Control done wrong will not help you.

      If your only copies are local, or if your system has a limited number of copies, or if it’s possible for your system to garbage collect old versions, or if it’s possible for you to rewrite your current root as not relating/anchoring past history (Hi “git”), then you’re not protected.

      Also, if your system includes credentials to your version control system to enable backups, and someone captures control of the system (and thus access to the credentials and the location of the version control system), then if it’s misconfigured, they can reach the host of the version control system and attack it directly.

      Note: I’m not arguing against version control (I contribute to a DVCS), nor am I arguing that not using Administrative credentials is a bad idea (it isn’t). I’m just saying that these properties alone aren’t sufficient to protect you.

      The property that we really want is “append only”. We ask for this in (audit) logs [1], because without it, history can (and will) be rewritten by attackers when they compromise a host.

      FWIW, @Brian uses virtual machines with snapshots — he tends to use a rollback feature, but in principle, assuming your vmhost isn’t compromised, using an append only property (i.e. snapshotting) of a vmhost on your vm would protect you from the attacks here — as long as there’s a way for you to ask for a read-only copy of the vm from before the attack.
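      To make the append-only idea concrete, here is a sketch using ZFS snapshots taken by the storage host itself (the pool name is a placeholder). Snapshots are read-only by design, so a compromised client cannot rewrite them:

        # On the backup/storage host, after each backup run:
        zfs snapshot tank/backups@$(date +%F)
        # Recover by rolling back to a pre-attack snapshot
        # (-r discards any newer snapshots):
        zfs rollback -r tank/backups@2015-11-03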

      [1] http://www4.ncsu.edu/~aayavuz/YavuzNingReiterLogFAS11.pdf

  5. HybridAU

    “they will publish the files on the Internet.”

    I was looking at this the other day when someone suggested that ransomware would start to collect documents from local disks instead of encrypting them, and I thought it was unlikely for a few reasons.

    Firstly, the amount of data on a machine (e.g. consider a home box with 100GB of stuff): it’s not all going to be private enough that people will pay to stop it from being made public, you can’t upload it all to a central server on a home ADSL connection, and trying to figure out what to upload is hard even for a human, but especially for an automated program.

    Secondly, even if you could correctly identify some private documents and send them off somewhere (let’s say naked photos), then you need to find people who care; if you published stuff from some random person on a hacked server somewhere, it’s not likely to make a huge splash. Maybe if you had a keylogger and their Facebook creds you could put stuff there.

    Lastly I think it will be harder to convince people to pay a ransom for files not to be published. With encrypted files you pay and do get your files back, you can verify that you have them, and you can back them up so you can’t lose them again. But if something has leaked and you pay someone not to publish it, you can’t ever verify that they have deleted it or won’t publish in future.

    As far as backup goes, I use CrashPlan. It can run on a headless Linux server and you can set up custom encryption keys so the data is encrypted before it’s uploaded. I’ve never found backup software I love, but CrashPlan is less bad than the others I’ve tried.

  6. JohnP

    I’ve been a Unix and Linux programmer/admin since 1993.
    Versioned backups are the MOST IMPORTANT things I do on every system.
    Even at home, all my systems get automatic, daily, versioned backups. The backup tool I’ve been using for about the last 7 years is rdiff-backup. High-risk systems have more versions retained – 120 days. Lower-risk systems only have 30 days of backups. I back up only what is required to recreate the system within 30-45 minutes, not the complete OS. I only have 20 systems, but not backing up the OS saves about 4G for each system. **dpkg --get-selections** is the main trick.
    The best rdiff-backup how-to I know is: https://www.kirya.net/articles/backups-using-rdiff-backup/ Use the “pull” method, not the push, for higher security. For 20 systems, less than 800G of backup storage is needed. Of course, that doesn’t include huge media files, which are backed up using plain rsync. Directory and file owner, group, and permissions must be part of the backup solution, so a simple copy of the files to a FAT32 or NTFS partition is almost never sufficient. The permissions for every file need to be versioned with every backup set too.
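    For anyone new to rdiff-backup, the pull setup boils down to something like this sketch (hostnames, paths and retention are placeholders):

        # Run FROM the backup server: pull /var/www, keeping reverse increments.
        rdiff-backup webhost::/var/www /srv/backups/webhost/www
        # Trim old increments per your retention policy (e.g. 120 days):
        rdiff-backup --remove-older-than 120D /srv/backups/webhost/www
        # Record the package list so the OS itself needn't be backed up:
        dpkg --get-selections > /srv/backups/webhost/pkg-selections.txt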

    Also, if your hosting provider still allows telnet and plain FTP, it is 10 years past when you should have fired them. ssh, sftp and scp are the tools to manage Unix servers.

    And Captain Obvious is correct. If your website cannot be deployed using a single script, then you’re not doing it following best practices. How else do we ensure that all images are web optimized, all javascript and css is compressed, and all new/touched files actually get deployed? We have scripts that handle these things in a repeatable way, of course.

    Lastly, backups are only 10% of this issue. 90% is the validated restore process. Backups that cannot be used to restore are a complete waste of time. Test the restore process.

    1. JohnP

      Also, nobody runs their web server as root. That hasn’t been the default for at least … 15 years? If you have a webapp that supports WSGI or PSGI, then the backends would never run as a privileged account.

      We all don’t know what we don’t know. I’ve met many designers who make pretty websites, but they know next to nothing about administration or security.

      If you are a web developer, working code is not the only requirement. For your language of choice, review the security guidelines from http://www.OWASP.org. If php is your poison – it appears very difficult indeed.
      https://www.owasp.org/index.php/PHP_Security_Cheat_Sheet

    2. Chriz

      Couldn’t have said it better! I learned the hard way about testing backups regularly at my job (15 years ago). Ouch!

      As for me, all my sensitive files and documents are stored in an encrypted image on my computer (think Boxcryptor on Windows or encrypted sparsedisk images on OS X). I mount the image when I need it and unmount it as soon as I’m done.

      That limits the risks of having my files leaked in case of ransom-gone-bad situation.

      And backups, backups, backups…

  7. HarryDyke

    But…but…Linux is bullet-proof! It can’t be infected with malware! Only M$ is vulnerable to that sort of thing!!

    1. bob

      Linux and MS are vague names for operating systems. The infected elements here are plugins for 3rd party software that run on those OSs. They’re often offered cheaply or for free. Which is great as long as you’re willing to make the effort to understand the tools and the motives of their creators. If that’s not something you’re interested in, you need to find a reputable company to do it for you – and pay for the privilege.

    2. Tim

      This is almost in the same league as me writing a script that destroys your website if you install it and run it as root. It doesn’t have any way to propagate itself. You literally have to download some source someone’s snuck it into and run it.

  8. LV Tyler

    I am surprised this has not yet been mentioned. Is it not exceedingly difficult to break AES-CBC-128 encryption?

    Based on further reading of translated websites, the new ransomware is written in C and leverages the PolarSSL library. This is the same technology that underpins HTTPS security.

    These anti-virus companies and security researchers that claim to be working on a method to decrypt the files are thus bolstering false hopes. It is difficult to state with confidence that anyone will be able to successfully decrypt the files without the appropriate key.

    From other reports on this phenomenon, I have gleaned that this ransomware is affecting tens of thousands of servers to date. However, there is definitely a deficit of accurate information about the demographic details of web security threats.

    What do you think, Krebs? If security researchers are able to decrypt AES-CBC-128, it means that the technology underpinning web security is insufficient. Anyone else care to chime in on the futility of attempting to decrypt AES-CBC-128?

    1. Mike

      It is a mistake to think that HTTPS will solve all the Web’s problems. It is a false sense of security.

    2. bob

      It’s unlikely that the security companies working to decrypt data created by ransomware are directly attacking the cryptographic algorithms used. They’re more likely to be attacking the implementation of that algorithm, or the communications used to facilitate the attack and/or ransom payment and subsequent decryption.

    3. Jos

      They don’t break the cipher, they either reverse engineer the malware code to get the decryption key (in poorly designed malware) or they compromise the C&C / decryption servers to get them (RSA private keys).

      The latter is more common nowadays but requires some kind of “offensive” security or collaboration with law enforcement in takedown stings.

  9. Moike

    Be sure at least one backup set is OFFLINE! There is a history of the attackers encrypting / deleting everything that is accessible from the running computer. This means all external and network drives. There’s no reason that this would not include the cloud accounts. And very few if any cloud accounts offer a rollback option.

    1. JohnP

      For backups on Unix systems, storage isn’t usually mounted to the machine being backed up. An encrypted, authenticated, network protocol is used to stream the changed data to another server, often using a non-public network link.

      Sure, mom and pop setups might mount backup storage, but that is NOT the way most professional Unix admins do it. The only people I know who do it this way are programmers, usually because they don’t know any better.

      1. Moike

        > An encrypted, authenticated, network protocol is used to stream the changed data to another server, often using a non-public network link.

        But can the attacker get access to the backup program with the backup user’s credentials and issue commands to remove / overwrite backup sets?

        1. timeless

          Frequently: “yes”.

          If you’re using a cloud system, they can often “terminate the account” which quickly results in the data being fairly inaccessible.

          To be more insulting and annoying, they could change all the account details on the account first, making it hard for the service provider to authenticate someone asking to recover data from the “closed” account.

        2. JohnP

          Only if they “push” backups.

          If they “pull” backups using a server from a protected, non-public, network, then the credentials aren’t at much risk.

    2. Mark

      Ideally you want to restore from a HARDWARE write-protected device. Some particularly evil malware is very persistent and will survive a re-format and re-install of the OS, and when you connect your backup/restore device, the malware will gleefully encrypt it.

  10. Crawdad

    I wonder if he (in the original story) was using open source Magento. Magento open source users experienced a large compromise 2 or 3 weeks ago affecting 7,000 websites. The perils of open source and a lackadaisical approach to backups mean these types of compromise/infection are to be expected. Website code needs to be carefully vetted before being deployed in your environment, and you must continuously monitor a site to detect compromise, especially the 3rd-party vendors…as you don’t know when they are compromised until it’s too late.

  11. Chris P Bacon

    I am a strong believer in doing weekly and daily backups.

  12. Dan Bright

    I’m a web developer and run both Apache and Nginx on Linux OS (Centos, Ubuntu, Debian). One of my backup solutions is Tarsnap, which can be found at https://www.tarsnap.com.

    Tarsnap is a highly cost-effective, encrypted remote backup solution with deduplication.

    There are no fixed monthly fees – you simply pay for the number of bytes stored and the transfer bandwidth. It’s CLI-based (no GUI), so you can schedule automatic remote backups as often as you like using simple cron jobs, for example. Encryption is client-side, so the security of your backups is excellent – just be sure not to lose your keys, as only you have them, and only they can be used to access your data!
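    As an illustration, a nightly Tarsnap run can be as small as one cron line (the archive name and path are placeholders, and % must be escaped inside a crontab):

        # 02:00 daily: create a deduplicated, client-side-encrypted archive.
        0 2 * * * /usr/local/bin/tarsnap -c -f www-$(date +\%F) /var/www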

    1. Colin

      I used tarsnap for a couple of years, and there were a number of things about it that I liked. But I found it so glacially slow at restoring files that it was unusable. When I was using it, tarsnap would have taken weeks to restore a server, at least.

      1. Dan Bright

        I restored a server from my Tarsnap backup a couple of months ago and although data retrieval may not be blazing, it was fine. Think I remember reading something about recent improvements on the retrieval speed side of things. When it comes to backups, I’d trade retrieval speed for data security and reliability, and to date I’ve found Tarsnap to be the best option to satisfy my requirements in that regard.

  13. Sasparilla

    A related subject here – a new ransomware program was coded assuming a global decryption key was used/retained (probably per version or something); however, the encryption section of the program produces a unique decryption key for every “installation” and then throws it away (because the author assumed a global key he already had would work). So there’s no way to recover the files even if you pay the ransom. Fortunately, you can check whether you have this version. Article below:

    http://news.softpedia.com/news/epic-fail-power-worm-ransomware-accidentally-destroys-victim-s-data-during-encryption-495833.shtml

    This is a good nudge to remind me to back up my home PC’s.

  14. Eaglewerks

    Interesting story Brian. I do have a question or two that I did not find answered in your news article.

    1) Once a server has become infected, does the infection sit in stasis for a time while the infected server is perhaps backed-up one or more times?

    2) If Q #1 above is affirmative, is there any way of determining when the server was infected?

    Thanks

    1. BrianKrebs Post author

      YES! It is very possible that an automatic backup system could end up backing up some if not all of the encrypted files. Hopefully, it’s not configured to write over the previous backup!

      Your second question is a tough one because you’re asking how you would know if malware rooted your server. Unless you’re running file-integrity monitoring tools (e.g., a Tripwire solution or something like it), the first hint you get of infection might be the ransom note.

    2. Dan Bright

      If you use backup software with deduplication (for example, Tarsnap, as referenced above), you can store multiple backups at a very reasonable cost, owing to reduced data storage. I run with at least a month’s worth of daily backups, then keep monthlies for however long is required.

      As regards file integrity monitoring, you may want to check out ConfigServer, at http://configserver.com/. It is free to download and use, even commercially.

      The package includes CSF firewall (which is essentially a CLI frontend for IPtables) and LFD monitoring daemon. LFD can monitor files for changes using md5sum comparison tests, and also monitors your processes to alert you to potential issues.

  15. IBMJunkman

    Should Windows-based Web sites run on GoDaddy servers be worried? I have my whole website code and data on my home PC, where I do development and maintenance on it.

    1. Phelbone Noovely

      For the purpose of ensuring that your content can’t be held for ransom under penalty of your losing it for not paying the ransom, it doesn’t matter what system the server software uses. Linux happens to have the vulnerability in this case, but it could just as easily be a Windows vulnerability.

      So, the system running on the host server is beside the point. If you build and maintain your website on a system that is completely separate from the host server, then you have nothing to worry about as far as losing your content is concerned.

      If the hosting server (in your case, GoDaddy) is compromised, then it’s GoDaddy’s problem to fix it and keep it secure. If they don’t, then take your business somewhere else.

      The point is that you’re building, maintaining, and storing your content SEPARATELY from the hosting server. As long as you don’t make your site changes directly on the host server—that is, you never upload anything that you haven’t already stored securely on your own system BEFORE you upload it to the host server—your content can’t be held for ransom under penalty of losing it.

      Now, I suppose it’s still possible that someone might steal some proprietary content and threaten to publish it unless you cough up a ransom, but if that’s the case your site wasn’t designed properly in the first place. Proprietary content that you don’t want to be promiscuously (universally) disclosed should be secured so the bad guys can’t get to it.

      But if your web site content is already “free” (that is, everyone can already see it), then a ransom demanding “pay up or we’ll publish this” is an empty threat.

  16. Mike

    This type of attack has been around for a while, just not commonly seen in-the-wild. I wrote about “ransomweb” back in February 2015; I will try and find my old write-up.

  17. J

    On Linux, Rsnapshot is a very good tool to carry out differential backups (just the things that have changed). http://rsnapshot.org

    It uses a command called rsync to carry out the file copy and runs over SSH, so transfers are encrypted and can run over the Internet. So the servers don’t need to be on the same LAN, and the backup location won’t be mounted on the infected machine, so there’s no chance of the malware encrypting your backup share. It is command-line, though; don’t let that put you off.

    It’s very easy to set up hourly, daily, weekly or monthly backups, so if need be you can go back to many different points in time. There’s a very simple write-up here: http://www.tecmint.com/rsnapshot-a-file-system-backup-utility-for-linux/

    We use it to back up multiple servers and hundreds of websites, so it scales well.
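    For reference, the heart of an rsnapshot setup is a few lines of /etc/rsnapshot.conf plus cron entries on the backup server, roughly like this sketch (fields must be separated by tabs; the host and paths are placeholders):

        # /etc/rsnapshot.conf (tab-separated):
        #   retain  daily   7
        #   retain  weekly  4
        #   backup  root@webserver:/var/www/    webserver/
        # cron on the backup server:
        #   30 3 * * *  /usr/bin/rsnapshot daily
        #   0  4 * * 1  /usr/bin/rsnapshot weekly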

    J

    1. Moike

      > So the servers don’t need to be on the same LAN and the backup location won’t be mounted on the infected machine so no chance of the malware encrypting your backup share.

      Under the assumption that the attacker can get root or the same user credentials the backup software uses, they can access the remote storage via RSync with those same privileges.

      I’d only trust such a system having write-only, no-read, no-modify access to the remote backup.

      1. J

        With Rsnapshot running rsync over SSH, it is the backup server logging into the web server with a key and pulling the files onto itself.

        The backup server has credentials to log into the web server, but the potentially compromised web server cannot log into the backup server.

        The backup server would just be running rsnapshot and SSH (properly hardened), so it is a tougher nut to crack than the squishy web server.

    2. timeless

      Rsnapshot is a good example of a bad suggestion.

      A good attacker would force sufficient additional snapshots to cause Rsnapshot to garbage collect your important backups, resulting in you only having the locked data.

      See Rsnapshot explanation [1]:
      > … because rsnapshot only keeps a fixed (but configurable) number of snapshots, the amount of disk space used will not continuously grow.

      From reading the source code, it looks like it also risks either arbitrary code execution (using your ssh keys), or probably total destruction of your backup repository.

      This really isn’t the right solution.

      If you’re going to do something like this, you really want to look at zfssnap [2] or something similar. Note that the “limiting snapshot count” problem applies everywhere (including zfssnap). You really need an approach where you get alarm bells that force you to do something when you get halfway to the limit (which should probably be based on free space and not snapshot count, since you don’t really care if nothing happened day to day) instead of an approach where a count number of changes results in you losing all your good backups, or a lot of garbage backups results in you losing all your data.

      FWIW, I *like* rsync. It’s just that here, in this instance, the credentials involved, and the approach of using ssh + ssh keys + arbitrary rsync commands = bad.

      [1] http://rsnapshot.org/
      [2] https://bitbucket.org/mmichele/zfssnap

  18. Karen Bannan

    The security paradigm keeps changing (as evidenced in this video: http://bit.ly/1WVcqqt) and IT has to keep up. This isn’t surprising, although it might be upsetting and annoying for those who get caught.

    –KB

    Karen Bannan, commenting on behalf of IDG and Dell

  19. Allen Baranov

    Just one thing to note (and aimed at the “never run a webserver as root” comments) is that whatever user your webserver runs as will need to be able to read (and probably write) all your web content.

    So even if you don’t run your webserver as root – whatever user you are running it as will probably be able to manipulate *all* your web content. There is no (easy) way around this.

    If you don’t run as root then the rest of your system will (should) be fine, and even if you back up your web content to the same box, it should be easily restored.
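    One partial mitigation, at least for content the application never writes (a sketch; the account names are placeholders): have a separate deploy account own the files and give the web server’s account read-only group access, so a compromised worker process can read but not rewrite them. Upload and cache directories the application must write to remain exposed, of course.

        # Content owned by "deploy"; the web server runs as "www-data".
        chown -R deploy:www-data /var/www/site
        # Group gets read (and traverse on directories) but no write:
        chmod -R g+rX,g-w,o-rwx /var/www/site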

    1. timeless

      +1

      @Brian:

      “It’s worth noting that the malware requires the compromised user account on the Linux system to be an administrator; operating Web servers and Web services as administrator is generally considered poor security form, and threats like this one just reinforce why.”

      While it may be the case that this poorly implemented ransomware requires administrative privileges to harm data, @Allen Baranov is correct, that in general, it’s sufficient to just be the user who “owns” the data that matters. (I have no interest in checking to see how good/bad this specific ransomware is.)

      If I run a business, it doesn’t matter if my OS / Programs are ransomed, all that matters is if my data is ransomed. And in order for that data to be useful to me, I need to be able to read+write it, which means whichever program runs on my behalf will almost certainly be able to read+write it. Thus, when the program running in the web server to serve my data is compromised, it will have the ability to destroy my data.

  20. JimBob

    The bitcoin debacle should be shut down. What started as a cool experiment has devolved into fraud and outright crime. Its main purpose these days is to empower criminals to steal money and cover their tracks.

  21. KBVE

    I actually operate everything in its own VPS / OpenVZ container.
    This way, I just run a command called vzdump, which backs up the whole operating system.

    The downside is that backups become way too big, lol

  22. Chris G. Sellers

    Instead of backups, use deployment methods. Embrace DevOps and make your entire site deployable from nothing to a working site. This way, if someone holds your site ransom (either by compromising a premises- or cloud-based host), you blow away the server and re-deploy (hopefully with the injection hole closed) and you are back in business.

    Benefit – you can recover from any disaster or occasional oops. Supports testing and QA, and scaling.
    Drawback – requires work up front to automate build, stand-up, and deployment of system.
