February 17, 2010

The City of Norfolk, Virginia is reeling from a massive computer meltdown in which an unidentified family of malicious code destroyed data on nearly 800 computers citywide. The incident is still under investigation, but city officials say the attack may have been the result of a computer time bomb planted in advance by an insider or employee and designed to trigger at a specific date.

Hap Cluff, director of the information technology department for the City of Norfolk, said the incident began on Feb. 9, and that the city has been working ever since to rebuild 784 PCs and laptops that were hit (the city manages roughly 4,500 systems total).

“We don’t believe it came in from the Internet. We don’t know how it got into our system,” Cluff said. “We speculate it could have been a ‘time bomb’ waiting until a date or time to trigger. Whatever it was, it essentially destroyed these machines.”

Cluff said the malicious software appears to have been designed to trash vital operating files in the Windows\System32 folder on the infected machines. Cluff said a healthy, functioning System32 directory weighs in at around 1.5GB, but the computers infected with this as-yet-unidentified malware had their System32 folders chopped down to around a third of that size, rendering them unbootable. Cluff added that city employees are urged to store their data on file servers, which were largely untouched by the attack, but he said employees who ignored that advice and stored important documents on affected desktop computers may have lost those files.

IT specialists for the city found that the system serving as the distribution point for the malware within the city’s network was a print server that handles printing jobs for Norfolk City Hall. However, an exact copy of the malware on that server may never be recovered, as city computer technicians quickly isolated and rebuilt the offending print server.

“Obviously, our first reaction was to shut it down and restore services, and at least initially we weren’t concerned about capturing [the malware] or setting it aside,” Cluff said.

Cluff said the city is treating the incident as a crime, and that it has notified the FBI. “We will be quarantining several PCs from various locations and tracking their chain of custody to assist in any forensics analysis,” he said.

Only those PCs that happened to have been “shut down” between 4:30 p.m. and 5:30 p.m. on Tuesday, Feb. 9 were impacted by the attack, Cluff added. That’s in part because of the data destruction, but also because the malware modified the “boot.ini” file, an essential file that tells the computer the location of the Windows operating system.

“This was the amount of time it took our network and security engineers from discovery to containment,” he said. “So all those employees who were being ‘green’… we now know who they are.”


123 thoughts on “‘Time Bomb’ May Have Destroyed 800 Norfolk City PCs”

  1. Steve

    I see variously that the PCs were destroyed, and that the data were destroyed. I assume it was just the data (including the operating system) that was destroyed, and that the operating system, at least, can be restored.

    As far as user data: “Cluff added that city employees are urged to store their data on file servers, which were largely untouched by the attack, but he said employees who ignored that advice and stored important documents on affected desktop computers may have lost those files.”

    As a policy, “urging” sounds a little wishy-washy to me. For those employees who do not comply, despite the urging, what is the policy for backing up their data?

    Perhaps they should try cajoling.

    1. JS

      The big news is that the vector of attack was an “ancillary system” that probably no one really cared about unless it wasn’t working. It would be interesting to see what level of print server was violated: a PC with sharing enabled, or a dedicated piece of vendor equipment like a print server/NAS. It would be highly interesting if it was a compromise of the embedded engine, such as in one of those big printers from Xerox, Konica Minolta or Ricoh that only businesses can afford.

      Now the kicker:

      Given the recent TDSS rootkits I suspect that, once rooted, many diligently made full backups on Wintel boxes are not functional yet “appear” to be so until it’s too late.

      Let the disaster recovery risk assessments begin!

      As for the loss of data:

      PC users are typically unable to really discern whether their data is being backed up, and poor application design often doesn’t help. Apple’s Time Machine is the best GUI I’ve ever seen for aiding users by giving a sense of what is being backed up.

      Unless the accounts were set up to use the Domain properly, there are many ways to lose data. Given the write-up, I suspect the Home Directory of the accounts wasn’t in the Domain.

      Critical business files often get stashed on the desktop, or in local folders like My Downloads or My Photos. Ever thought about backing up that sticky-notes app? Bookmarks and cookies from browsers are also a pain to re-create. Users are lazy and forget all their web logins, let alone the sites to log in to; obscure fonts get added over time along with all the little productivity apps. User settings, forms and templates for business productivity apps also have to be considered, since they end up littered all over the system.

      Unless the PC is configured properly, these files are not going to get backed up or sync’d with 100% coverage 100% of the time.

      1. Paris Hilton

        Let the disaster recovery risk assessments begin! We don’t have time! This isn’t a game!

      2. Kludge

        What are the chances that this actually IS a piece of malware here, and that it’s not just a bad update that Norfolk pushed out to their own machines?

        With the “print server” cleared, there’s no evidence that it was ever actually implicated, and I have seen more than one case of people blaming machines that were unrelated to a problem just because they saw some funny network traffic from it.

        At this point the cleanup has been so horribly bungled that there isn’t even any evidence of the original problem; if I were a paranoid sort I would suspect this to be an update gone wrong and a bunch of IT people attempting to cover their rear ends by blaming it on malware.

      3. oe

        Time to dump Active Directory and go to Network File System (NFS) or Andrew File System (AFS) for true data redundancy. Of course that means dumping that other OS for some flavor of ’NIX: Linux, NetBSD, etc. I know at my worksite I trust my data on the CentOS machine at my desk (NFS-mounted filesystem) over the WinXP machine any day (an AD royal mess…), and both are supported by competent IT folks.

        1. bob

          “Time to dump Active Directory and go to Network File System (NFS) or Andrew File System (AFS) for true data redundancy.”

          Active Directory is an LDAP implementation, not a network file system… SMB… DFS…..

    2. JamesB

      I keyed in on the data comments as well. First they say the systems won’t boot because the System32 directory is trashed, but any IT person worth a damn knows you can boot off a CD/USB boot disk and recover local data. Secondly, I agree: what was the policy? Saying you “urge” users to do something, when those systems contain sensitive and private data, is not an IT policy.

      Whole story sounds fishy and either they have their facts wrong or they need some new IT management.

      1. WR

        You are absolutely correct!!! I automatically boot from a CD/USB key in these types of incidents to try to save any worthwhile data while at the same time looking for anything suspicious…

        I mainly work on people’s home PCs but I used to work for City government; this smells of a lot of CYA’ing…

        If they had the slightest inkling that a SERVER (of all things) may have been infected, a case should have been started, image created, etc…

        Just saying

        1. L.T.

          It is absolutely true that you can boot to a recovery CD (my favorites are Ubuntu or Knoppix) to recover non-encrypted data. Sometimes this is a little difficult to do if the Big Wig or Highly Regarded Politico has “accidentally” gained access to the PC repair lab, and is rocking back and forth from one foot to the other, all the while moaning about “when am I getting MY laptop back?”

          In that case it is more expedient to load a fresh image, and let them get back to work …
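
          For anyone curious what that salvage pass looks like in practice, here is a minimal sketch in Python, assuming the dead Windows disk has already been mounted read-only from the live CD at /mnt/windisk and an external drive is at /media/usb (both paths are made up for illustration, and this is one tech’s approach, not the city’s procedure):

          import shutil
          from pathlib import Path

          SOURCE = Path("/mnt/windisk/Documents and Settings")  # XP-era profile root
          DEST = Path("/media/usb/salvage")

          for profile in SOURCE.iterdir():
              docs = profile / "My Documents"
              if docs.is_dir():
                  # copytree creates DEST/<username> and preserves the folder layout
                  shutil.copytree(docs, DEST / profile.name)
                  print("copied", docs)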

  2. chuck

    Destroying data does not destroy machines as effectively as headline hyperbole destroys credibility. No wonder it is so hard to get people interested.

    1. qka

      A computer that has had data files deleted and its operating system rendered unusable is no longer available. For all intents and purposes, that is destroyed.

      Remember the A in CIA. (And NO, I am not talking about an “intelligence” agency.)

      1. janos

        yeah, because i can’t just mount that drive up under another machine and pull anything off of it that i want! no way nuh uh!

        1. Jim

          Or, stick a rescue disk into the machine and copy the data to an external drive.

    1. Big Geek Daddy

      Maybe because whoever placed the Malware scheduled it for that day knowing a large number of computers would be shut down and restarted due to Microsoft Updates?

    2. No one

      “Interesting that it came on Patch Tuesday.” Agree. There was news about a Windows update that came out that same Tuesday that affected users running XP. Microsoft quickly recalled it. This sounds like the same issue. Why are they saying this is a virus, though?

  3. jbmoore

    The data is not gone. There are free tools such as sleuthkit, scalpel, and foremost which can recover deleted files from disks or disk images. We aren’t told whether users’ data was intentionally deleted, or just system files. If the latter, then users’ data is easily recoverable by copying over an identical System32 folder from an intact identical system. Even their deleted data would be recoverable, provided the drives were imaged before they were restored from backup or rebuilt. But their document folders should have been on network shares anyway. It seems that their system admins were not paranoid enough or diligent enough to preserve data or evidence so that they could minimize the damage and find out how the attack occurred. Chances are that whoever did this will escape and that this attack or a variation of it will happen again.

    http://wiki.sleuthkit.org/index.php?title=FS_Analysis#Manual_Deleted_File_Recovery
    http://www.forensicswiki.org/wiki/Scalpel
    http://www.forensicswiki.org/wiki/Foremost
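
    To make the idea concrete, here is a toy sketch of what signature-based carving (the technique behind scalpel and foremost, though not their actual code) does: scan a raw disk image for JPEG start/end markers and cut out whatever lies between them. Real tools handle many file types, fragmentation and huge images; this reads the whole image into memory and is for illustration only, with a made-up image name:

    from pathlib import Path

    IMAGE = Path("disk.img")          # assumed: a raw image made with dd or similar
    OUT = Path("carved")
    OUT.mkdir(exist_ok=True)

    data = IMAGE.read_bytes()         # toy only: real images are far too big for this
    pos = count = 0
    while True:
        soi = data.find(b"\xff\xd8\xff", pos)   # JPEG start-of-image marker
        if soi == -1:
            break
        eoi = data.find(b"\xff\xd9", soi)       # JPEG end-of-image marker
        if eoi == -1:
            break
        (OUT / f"carved_{count:04d}.jpg").write_bytes(data[soi:eoi + 2])
        count += 1
        pos = eoi + 2
    print(f"carved {count} candidate JPEGs")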

  4. No expert

    I’m no computer expert, but if the only thing damaged on the machines was the Windows\System32 folder, why not just start them up with a Linux live CD and copy the users’ data files to an external drive?

    1. BrianKrebs Post author

      Because it’s 20x faster just to restore the same, known good image to a bunch of PCs. The issue is, when a computer system gets compromised, particularly with such obvious, destructive intent, it’s always a good idea to flatten the system and reinstall.

      1. No expert

        Totally agree with you about the flattening. What I’m saying is, for those users who can’t access their data because the OS is hosed, here’s an easy way to grab it before flattening/reinstallation.

        1. BrianKrebs Post author

          Ah, I get it. Well, I don’t think the City of Norfolk has much patience with that method at the moment. They’re basically trying to get back up to full operational state as quickly as possible. What you’re describing would probably take more time per machine than they are willing to expend at this point.

          1. WolfWings

            Easy solution: Grab hard drives from machines not in use/spare hard drives, image those, put them in the workstations.

            If stuff is compromised, first reaction should always be containment and wipe with fresh storage media, and archive/hold the old media until it can be safely reviewed. Don’t just nuke-in-place on the compromised system, nuke from a different ‘known good’ platform.

          2. Jason Ward

            The only problem with retrieving the data off of the infected computers is that they still have not identified the code that trashed the computers to begin with. Any data that is removed from the trashed computers would have to be treated as infected, and I would not recommend letting the users access that data until the malicious code is identified. Restoring to a newly installed image will ensure that the malicious code does not return to those machines, will provide peace of mind, and will serve as a training aid for those who need explanations as to why they should utilize server resources.

          3. decora

            How long did it take the city employees to create the files in their My Documents folders? Weeks? Months? Years? How long does it take to copy My Documents? About 5 minutes per machine? Boot a Linux liveCD, plug in a USB hard disk, copy the files.

            I’m just wondering if they had a backup system in place to back up people’s documents? Or was it all voluntary? Oh well.

            live and learn I guess.

          4. MPS

            Depends on how much the data is worth and how much time it takes to recreate the data.

            On the other hand it is a good opportunity to tell lazy users it serves them right and let that be a good lesson for them.

          5. Matt

            Well, that’s what PXE is for. You could set up DSL (D*mn Small Linux) on an NFS or SMB server (it’s only 40MB) and set it up to back up what was left on the drive into “quarantine” space on a server, reformat the drive, reboot into PXE, and chain-boot into a network install of Windows to re-install / re-ghost the OS and apps. All the user has to do is press the necessary key at boot time and select network boot, then go grab lunch or whatever. In the meantime the quarantined files are virus checked, executables and drivers et al. removed, and then placed on a network share for the user to cherry-pick back onto their new clean system. This is the sort of thing sysadmins should be paid for. This is also the type of setup I have used before (for Linux desktops rather than Windows desktops, but it should work just fine).
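
            A rough sketch of the “quarantine” step described above, with assumed paths (the mount point and server share are placeholders): copy whatever is left on the old drive to a holding area on a server, but skip executables, drivers and other runnable files so nothing infected follows the user back onto the clean machine.

            import shutil
            from pathlib import Path

            SRC = Path("/mnt/olddisk")                  # assumed mount point of the old drive
            QUARANTINE = Path("/srv/quarantine/ws042")  # assumed per-workstation holding area
            SKIP = {".exe", ".dll", ".sys", ".com", ".scr", ".drv", ".ocx"}

            for f in SRC.rglob("*"):
                if f.is_file() and f.suffix.lower() not in SKIP:
                    dest = QUARANTINE / f.relative_to(SRC)
                    dest.parent.mkdir(parents=True, exist_ok=True)
                    shutil.copy2(f, dest)               # copy2 keeps timestamps for later triage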

      2. JS

        epidemic – I think this is the proper analog to the Norfolk case.

        It sounds like despite vigilant prevention a pandemic plan of action was missing.

        When all internal boxes are suspected as tainted, how do you compartmentalize and quarantine newly restored devices from being re-rooted or infected? Especially when the vector of attack has left no residual evidence to analyze and synthesize a defense from, since the flattening and rebuilding was destructive.

        I’ve had scenarios where, until you are absolutely sure all the active infections, worms, etc. are taken care of, the first machines rebuilt are still at risk of being taken out once again by an infected laptop that just rolled into the site because it was off-site during the epidemic.

        Would internal compartmentalization and internal hierarchical trust relationships have prevented this widespread epidemic?

        Should Norfolk’s citizens, suppliers, vendors, contractors, etc. now have to fail closed, a la self-quarantine, to ensure they are not at risk?

        Anyhow, doesn’t Lenovo/IBM/HP/Sony offer a hidden, checksummed, partition-based rescue and restore even on PCs? I’m curious whether most support teams wipe this vendor solution out, reclaim the space, and re-deploy their own kit-bashed solution.

        1. Evan

          Most wipe it out on PCs in the enterprise, because instead of restoring to a vanilla MS system, it restores to a “factory” MS system which is not set up for the environment and often has unneeded/unwanted extras. And over the last 3 years (about half a corporate lifecycle) MS has released 3 OSes with a 4th on the way before the end of 2012, so often that partition is also for the wrong OS.

          1. JS

            It looks like this is dissipated energy on the OEMs’ part if corporate shops just nuke the recovery partition.

            On paper it seems like a great idea, but as you cited, the execution of that idea is already stale out of the box. It would be better to torrent an image regularly and occasionally blast a fresh image into the recovery partition when upgrading.

            This shoots disk requirements through the roof. 1 chunk for active, 1 chunk for recovery, 1 chunk for virtualized install/upgrade. Reorder the boot sequence to get to a working/upgraded OS. However this would mean built in pre-OS support to redirect everything. I’ve done this with AIX and Linux but now recovery becomes an extended form of revision control. Perhaps disk will be cheap enough one day or data will be so expensive one day to warrant the investment for consumers.

            It’s amazing how many shops have to be so inventive to cover a certain product line’s deficiencies.

    2. Paris Hilton

      Windows\System32 is not an “only thing”. You might as well destroy the registry. Oh wait… But please, lest someone misunderstand: I love my Windows 7 computer!

  5. Henry S. Winokur

    What I can’t understand is an IT division that would allow their users a choice of where to store their documents and files. In a corporate infrastructure, it seems to me that the expense of replacing data, however it’s done, far outweighs the choices given to a user. It ought to be corporate policy that the data is always stored in a central location, or barring that, backed up every day, prior to shutdown.

    1. n3ujj

      @Henry S. Winokur

      I am an administrator for a medium-sized network (about 160 computers, 200-plus users).
      I have redirected “My Documents” to the users’ “home share” on the network, and have Office templates that default to saving Word & Excel files there. So every user has to “GO OUT OF THEIR WAY” to save files where they won’t get backed up.
      This works for 80% of the users, but you would not believe how many files we have lost because they were saved to the desktop or the local drive.
      No matter how hard the IT department tries, if a user wants to save files locally, they will.
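
      For what it’s worth, here is a small sketch of the audit side of that, assuming Windows and Python’s standard winreg module: read where “My Documents” currently points for the logged-on user, so anyone whose folder was never redirected to the home share gets flagged. (The redirection itself is normally done per-user via Group Policy or a login script; this only checks the result.)

      import winreg

      KEY = r"Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders"

      with winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY) as key:
          personal, _ = winreg.QueryValueEx(key, "Personal")   # the "My Documents" path

      expanded = winreg.ExpandEnvironmentStrings(personal)
      if expanded.startswith(r"\\"):
          print("redirected to a network share:", expanded)
      else:
          print("WARNING: local path, will not be backed up:", expanded)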

      1. Wesley Miaw

        @n3ujj

        If you made it so the only areas of the system that are writable by the user are on network mounts, then they would be forced to write their data to the file server/share. This is easily accomplished in environments where the user has no administrative privileges and entire home directories are network mounted (and not just a My Documents folder).

        If you worry about removable media, don’t allow non-administrators to mount. Although I doubt any users would make an effort to permanently store files on a USB stick instead of their home directory.

        1. Ryan

          *face palm*

          I don’t know where you get your ideas… Imagine the strain on your servers if you had 200 people pulling profiles off the network and storing all information to a mapped drive.

          Even with a smaller group of users, the chances of profile corruption are far too high.

          Now let us factor in older (and some newer) software that requires admin rights to the local machine.

          The point of this is that no matter how you look at it, there is no foolproof system to prevent a situation like this.

          1. AlwaysLearning

            @Ryan:
            Roaming Profiles were made for just this reason. The load is only applied during login (copying from server to workstation) and logout (copying from workstation to server).
            If the file system has the appropriate ACLs in place, Wesley Miaw’s technique of My Documents/Desktop folder redirection is perfectly valid and correct – except for notebook users who work off-site.

          2. JP

            @ryan – I’ve administered networks with 30-40k people and had single servers managing home directories for 10-15k users at any given time without breaking a sweat – it’s not hard, but it’s not Windows either.

            Notice what was compromised, people. A benign print server. Security of every system can never be overlooked. The damage is done. The time is now to review the security policy and put one in place in an attempt to prevent this in the future.

            With M$ solutions in play (read CLOSED, proprietary) you never know what you will get so you have to keep the kimono as closed as possible.

          3. N3UJJ

            @Ryan
            You hit the nail on the head: “there is no foolproof system to prevent a situation like this.”

        2. N3UJJ

          @Wesley Miaw
          In our organization, we run a lot of “legacy” applications, which forces us to allow writing to both the Windows folder as well as other folders on the local drive. Until we can get the “legacy” applications to catch up, we are at the mercy of the applications.

      2. Nate

        There are several valid reasons for users to store files locally rather than on a network server.

        1. Speed. With some larger documents on a slower fileserver, it can be frustrating in the extreme to save frequently. Of course, you SHOULD copy the file up when you are done with it, but people get lazy.

        2. Server space. Most companies I’ve worked for have instituted some limit to the amount of data that can be stored in a /home or \My Documents directory on a server. Sometimes the only place you have room is on your local drive.

        3. Personal files. I’ve had requests to restore \My Documents folders at previous companies during computer changes, only to find out that the only things in them were pictures and music.

        But my cynical side says… Slacktime. Saying that you had an absolutely critical TPS report on your hard drive will either make your boss give you the time to reconstruct it, or will make them delay giving you back your computer while they retrieve the oh-so-critical file. Meanwhile, you get paid. And you don’t have to work.

        1. me

          I can understand the speed issue but not the storage issue. Storage is cheap, and I mean very cheap, today. Explain that you need more space for file X plus room for growth. Explain respectfully that you are being forced to work with storage on your client. If you get refused, ask what happens when the hard drive fails. If you still get refused, then e-mail your boss and advise that this big file with lots of work hours put into it is vulnerable to loss if your client fails.

      3. gojiboy

        Why not just NFS-mount the drives and only use the hard drive for swap space? Then the users cannot store anything locally unless it is a USB drive, which can be disabled from mounting as an ordinary user and requires the root password or sudo access.

      4. anon

        Why not have the Desktop in a user share too?
        And enforce policy when users lose their files for not complying with your requests?
        I’d say that free extra work hours until they replaced *all* the lost work would be enough to make them take care.

        1. N3UJJ

          @anon
          I actually tried that for a while, but shortcuts to applications were not consistent; it created more problems than it cured.
          Also tried roaming profiles; that didn’t work well either.
          It’s always a trade-off.

        1. Dae

          Simple: Remove the damn ‘shortcut’ files that enable that annoying ‘New’ menu. Too many applications like to clutter the damn menu anyway.

        2. ken

          Good point… unless they use roaming profiles, or force the profile to the server. In this case, the desktop folder is also on the server, along with the rest of their profile. The IT department is paying the price for failing to standardize their infrastructure.

          The real world always hurts…

      5. JackSombra

        It’s pretty simple to lock down systems so that only the most determined users can save stuff anywhere besides the central servers:
        1) Take away admin rights except from those who truly need them (outside of IT that might be 2 to 3 users in an org of thousands)
        2) Redirect the Desktop, My Documents and so forth to home shares on the network (some places just redirect the whole user folder to a network share, but unless you have a good network I don’t recommend it)
        3) Lock down file/folder creation on the hard drives everywhere except Program Files and the Windows dir (locking down those two totally can and will cause many apps to break); a quick write-probe, sketched below, will show what a standard user can still touch
        4) Optional: if you are properly distributing apps you can even lock down most of the rights for file creation in Program Files/Windows
        5) Block mounting of drives (USB and so forth)
        Then the only way for users to create files on the hard drive is to go a few levels deep; the vast majority cannot be bothered to go through that amount of effort, and the few that will… well, virtually nothing would stop them anyway.

        Plus, with a standardized build and centralised application distribution, it makes not only support a lot easier but also makes your next upgrade (either app and/or OS) a doddle, because you no longer have to worry about backing up their personal files, which is generally the most time-consuming and labour-intensive task when performing a roll out (because users can never seem to do their own backups).
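
        A quick-and-dirty write-probe for point 3, purely as a sketch (the paths listed are typical examples, not a complete set): try to create and immediately delete a scratch file in a few local locations to see where a standard user can still write. A missing directory will also show up as locked here, which is fine for a rough audit.

        import os, uuid

        paths = ["C:\\", "C:\\Windows", "C:\\Windows\\System32",
                 "C:\\Program Files", "C:\\Temp"]

        for p in paths:
            probe = os.path.join(p, "writetest_" + uuid.uuid4().hex + ".tmp")
            try:
                with open(probe, "w") as f:
                    f.write("x")
                os.remove(probe)
                print("WRITABLE: ", p)
            except OSError:
                print("locked:   ", p)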

        1. Dutch Uncle

          Why not just admit that you’d rather go back to dumb terminals attached to a mainframe . . .

        2. duuuuude!

          ok, many of these are based on corporate general office systems that can (and should) be highly locked down.

          The problems with the ‘solutions’ in so many of these comments is that they are presented as a panacea. Ok… not EVERYONE is promoting that but…

          In any event, there are corporate systems that just do not fit into this mold. Consider a division that is responsible for CGI or digital image/video; the data for any given user can run into the terabytes. Also, what about “corporate” systems that are used to operate specialized equipment (such as lab devices) or to develop code?

          Bottom line: to echo a previous sentiment, there is no Cure-All. No one system that meets the operational needs of ALL of the users across every org. Segmentation of environments to allow for containment seems the best overall, with each environment ‘locked-down’ to the maximum extent practicable (which may be very little in some cases)

      6. decora

        why don’t you just remotely copy their files using the C$ admin share? that’s what I used to do.
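
        A bare-bones sketch of what that looks like (the hostname, username and destination share below are invented; it assumes you have admin rights on the target, the C$ administrative share is enabled, and the machine is still reachable over the network):

        import shutil
        from pathlib import Path

        host, user = "NORF-PC-0421", "jsmith"   # placeholders for illustration
        src = Path(rf"\\{host}\C$\Documents and Settings\{user}\My Documents")
        dst = Path(rf"\\fileserver\salvage\{host}-{user}")

        shutil.copytree(src, dst)   # pulls the whole folder across the admin share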

    2. Charlie Alpha

      Thinking a policy will lead users to do the right thing is a bit optimistic in the average enterprise.

      Short of using AD to prevent users from storing files anywhere other than server shares, “urging” is about all you can do.

      Basically you make the policy and then urge people to follow it. The average organization is not going to formally reprimand someone for storing and losing files on their Win Desktop.

      I’m sure at some point they will have to get together some ballpark number (very ballpark) on what this incident has cost them. The mostly tangible numbers will come from re-imaging hours, not from lost work and lost opportunity expenses.

      One of the best lessons for people to glean from this is around preserving forensic evidence. In a case like this, I would have cold-imaged any servers involved and a handful of likely desktops. I just made up the term “cold-imaged”.

      Basically, making a bit-level copy of the drives without booting into the OS. The best way to do this is to pull the plug on the machine, not go through a formal shut-down. I would seal up the original HD, and then use the copy to attempt to find the evidence (obviously on a non-networked machine).

      Assuming you can identify the malware and link its placement back to a user, you can hand the actual drive over for prosecution.
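
      A minimal sketch of that bit-level copy, assuming the suspect drive has been pulled and attached (read-only, ideally behind a write blocker) to a Linux analysis box as /dev/sdb (the device and file names are assumptions): it streams the raw device to an image file and records a SHA-256 hash so the copy can later be shown to match the original.

      import hashlib

      DEVICE = "/dev/sdb"            # assumed: the suspect drive, not the system disk
      IMAGE = "evidence_ws042.dd"
      CHUNK = 4 * 1024 * 1024        # 4 MiB reads

      sha = hashlib.sha256()
      with open(DEVICE, "rb") as dev, open(IMAGE, "wb") as img:
          while True:
              block = dev.read(CHUNK)
              if not block:
                  break
              img.write(block)
              sha.update(block)

      print("image written to", IMAGE, "sha256:", sha.hexdigest())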

      Cheers.

    3. Brian B

      You haven’t worked in a municipal government, have you? Once the City Manager’s porn browsing gets blocked, and his nephew the Facilities Sub-Director’s naked photos of his wife get saved to a shared folder, and his secretary’s 500MB Google Earth cache causes her to need 20 minutes to log in after lunch, all of your nice secure policies get tossed out the window and you’re back to “recommending” secure computing policies.

      Security is nothing but an inconvenience to political appointees, since if there’s a problem it’s not their heads that are going to roll.

    4. MPS

      As a systems administrator, you can tell people where to put or save data, but what they actually do is another matter, and you have no control over it.

      That is one major reason to put stuff on a server. You can force stuff like automatic backups, making archives read only or provide write access only to maintainers etc.

  6. Angus Scott-Fleming

    Brian, you wrote in a comment-to-a-comment: “Ah, I get it. Well, I don’t think the City of Norfolk has much patience with that method at the moment. They’re basically trying to get back up to full operational state as quickly as possible. What you’re describing would probably take more time per machine than they are willing to expend at this point.”

    I think they’re missing part of the point. If a user has unique data files stored on his local hard drive, with no copy on the server, the cost of recreating those can be MUCH higher than the cost of salvaging the files.

    If this hit one of my clients, and getting operational were time-critical on some machines, I would buy new hard drives for each computer (probably under $50 each in the quantities needed in this event) and recover the unique data from the removed drives as time permitted. The removed drives could then become replacement drives for other machines as their hard drives fail in the future.

    1. BK

      “…recover the unique data from the removed drives as time permitted.”

      In the end, you would end up recovering data from maybe 10 users’ drives in that way. Time really wouldn’t be available for a looong time. It will already take a long time to rebuild the machines. Then they have to catch up on projects set aside to do repairs.

      Just pulling and reinstalling the physical drives would take over 120 hours (10 minutes per machine). When you explained how many billable hours were going to be involved to your client, I suspect that they would have identified the 10 users they really cared about post haste.

  7. jbmoore61

    It can take 20-30 minutes to reinstall the operating system by itself. Using a drive-imaging utility like GHOST will save more time because you reinstall the OS and the applications in one pass. The image can be reinstalled using a CD/DVD or over a network. Obviously, restoration via network can be slow if you are reinstalling 800 images all at once. If this is the case, then why not spend 20-30 minutes trying to recover the user’s files before you reimage the drive and destroy their data permanently? I am not even dwelling on the point that the system administrators themselves are likely negligent in this instance. First of all, they destroyed the print server that was the hub of malicious activity. That’s destruction of evidence. Second, they should have global policies in place through Active Directory so that MS Office always saves to a specified network share and that said network shares are always backed up.

    About data recovery, programs such as sleuthkit, scalpel, and foremost can recover deleted files. It is best to make an image of the hard drive, but replacing the hard drive and recovering the data from the old drive will do in a pinch as an above comment suggests, especially if the user data is important. These programs are free and are usually used for digital forensics, but they can be used for data recovery as well. The problem most users and admins face is that they are ignorant of their options. Data recovery is not taught or emphasized as a skill.

    1. BK

      I think you underestimate the scale. With one or two failed systems, you can take the time to do a careful data recovery. By the time you scale the problem to 100+ systems, it becomes a question of making the best use of available manpower.

      Their problem is with 700+ machines. At this scale, unless they have a dozen techs hanging on the payroll due to ‘stimulus’ (or whatever), they can really only do a comprehensive data extraction for maybe a dozen or so machines.

      To recover then reinstall data in a comprehensive way takes _at least_ 3 extra hours per machine – more if you test and verify. There is pain either way, but down time and recreation both have costs.

    2. Nate

      At best, they probably have ten techs on staff, working 8-hour days. My company has a few thousand machines scattered among a number of buildings and we have fewer than ten techs.

      Spending an extra 15 minutes on each computer attempting to recover data adds at least a day, if not more, to the recovery time. Even swapping out the hard drives (assuming you can find a supply of 800 new hard drives that can be delivered to your facility in a matter of a couple of hours) can add a couple of days.

      More importantly, during the recovery time, no one whose machine has yet to be recovered is getting any work done. And their bosses are calling you. Repeatedly.

      My current company keeps our number of techs low by enforcing a very simple policy.

      1. All company data is to be stored on a network server. All supported applications automatically save to the network. Users experienced enough to override this should know what they are doing and why, and back up important stuff to their personal network folder.

      2. If your computer goes bad, the company will initiate a reimage remotely. If that doesn’t work, you’ll get one from the spares bin, delivered same day (usually within a couple of hours). Actual swapouts take about 10 minutes plus travel time.

      If you violate policy 1, sorry. The company isn’t paying for enough techs for one to spend hours with you digging through your old hard drive finding and copying your data. They still have jobs to do setting up new machines for people, swapping out dead machines, etc.

      Data recovery is still taught. When things get really bad, it comes in handy. But good planning means you have backups. Resorting to recovery techniques means your backup planning was inadequate, or something really catastrophic happened that took out all of your copies, and you’re in deep doo-doo.

      You should never ever have fewer than three copies of anything valuable, each in separate locations. And it’s IT’s job to make sure policies are in place to ensure those copies exist.

  8. lrn2itnubs

    All it takes is a disgruntled employee setting up a simple VBscript running under a generic domain admin account to accomplish this. This is why you change admin passwords after someone quits or gets let go.

    1. Kenneth

      Could have been even simpler than that, like a simple batch file. Sounds like someone planted it as a shutdown script in the domain’s group policy.

      If anyone should be in trouble, it should be the irresponsible IT staff that wiped and rebuilt the “print server” (probably some old crappy computer with a parallel port). Isolate and contain, not isolate and obliterate all evidence!!

  9. Ohng

    Perhaps the system administrators are just feigning the attack so that they can get the latest whoop-de-splash virus/malware/Mozart’sGhost software.

    In this case the destruction of the print server makes perfect sense and so does the claim that the users’ computers are totally destroyed and have to be rebuilt. After all — we can’t have any loss in productivity, but the loss of these documents, whatever they were, is critical. Let’s spend money.

  10. Hander

    What an unfortunate situation. If people used Linux, this could be avoided…

  11. Seeker

    While you’re all discussing how to keep users from storing files on their desktop you’re totally missing something much more important. (BTW, what about the users’ favorites, and archive.pst files that also are stored by default to the local hard-drive?).

    “IT specialists for the city found that the system serving as the distribution point for the malware within the city’s network was a print server that handles printing jobs for Norfolk City Hall. However, an exact copy of the malware on that server may never be recovered, as city computer technicians quickly isolated and rebuilt the offending print server.”

    What a MAJOR screwup on the part of the IT staff this was. Let’s see, should we get the print server back up quickly so users that don’t have running PC’s can print (duh) or should we cool our jets for a few minutes more to image the print server so an analysis can be done to find out how system security was breached so we can keep it from happening again?

    1. Nate

      My company’s .pst files for Outlook are stored on our network folder, not on our local drives. It’s pretty easy to set it up that way.

    2. N3UJJ

      We use Exchange (so there are no PSTs).
      Favorites are stored on the users’ “Home” drive.
      I use group policy & scripts to handle all redirection.

  12. DW

    Users other than admins should not have the permissions to run something that could cause this type of damage. If the OS files were trashed, the data is still on the drive, it’s just not bootable. Still, recovering data from 800 systems is not a fun job for anyone. Sounds like an access control list on the print server could have saved a lot of time and money!

  13. Doug

    Yet another group may learn how difficult it is to secure Windows and just how expensive it is. I’ve heard people complaining about how much more expensive a Linux or UNIX admin is compared to Windows admins. Maybe the people running Norfolk should rethink what they are doing on the desktop. After all, you might expect user data corruption, but OS corruption, and OS corruption across a network of computers?
    And any admin worth his salt will know enough Linux to use it to fix screwups like this. Most of these workers could probably get back to work if these admins built a Linux live CD with the tools most of the workers might need: Samba for remote data access, WINE to run some apps like MS Office off the servers, and maybe even KVM or VMware so they could run images off the network for those critical positions.
    Real admins know more than one OS, but you get what you pay for when you hire most Windows admins. Experts at clicking icons and not much more.

    1. decora

      if every system was using linux, you could still write some program to hose it. if you have physical access to a machine, you have root. if you have root, you have the machine. slapping ‘linux’ on something doesn’t make it safe.

      1. Paris Hilton

        You’re right. Stick with Windows. It’s safer by a mile.

    2. Paris Hilton

      Oh some of us will of course agree. But most will not because you hit them where it hurts – clicking icons (and discussing security here) is about all they can do. So they’ll just keep modding you down. It’s symbolic for sticking their heads deeper in the sand. Now watch this post go too. What a bunch of deniers.

  14. Archangel Michael

    A couple (or perhaps the same) people have mentioned data recovery on compromised hard drives. The solution is simple and should have been in place before.

    I’m a network analyst at a school district, and we have pretty much eliminated these types of problems. Here’s how:

    1) Set up WDS (RIS) to image computers
    2) Use AD policies to direct post-WDS (RIS) setup of workstations. This includes redirecting “My Documents” to a network share.
    3) Scripted installs: an MSI install of all NOMINAL software used by most (all) computers (sketched below)
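
    A minimal sketch of step 3 (package names and the deployment share are placeholders, not the actual build): push the standard MSI set silently with msiexec.

    import subprocess

    PACKAGES = [
        r"\\deploy\packages\office.msi",
        r"\\deploy\packages\antivirus.msi",
        r"\\deploy\packages\flash.msi",
    ]

    for msi in PACKAGES:
        # /i install, /qn no UI, /norestart defers the reboot until the run is done
        subprocess.run(["msiexec", "/i", msi, "/qn", "/norestart"], check=True)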

    A complete reset of a workstation takes approximately 10 minutes of tech time, including installing all the software. Total workstation downtime is approximately 2 hours. I would suspect it would be a tad higher with that many machines being imaged at once.

    However, when the machine is done, it is FULLY patched, antivirus is up to date, as are all the evil but necessary bits (Flash, Shockwave, Java, etc). I’ve even set up custom desktops for various departments, because it takes less time to copy a Default User Profile than it does to remember how stuff was set up the first time.

    The point being, if data is important, it will be on the server. And data is expensive, more expensive than most people estimate. If it isn’t worth saving the HD to get the data off, and buying a $50 hard drive plus installation time (as one poster suggested) isn’t worth it, then the data wasn’t worth backing up in the first place.

    Just my $.02

  15. Greg C

    Any botnet can be issued commands to “kill” its members that would have this effect. Hopefully Norfolk is looking out for evidence of fraudulent online banking, wire transfers, etc…

  16. Scott

    I’m an admin for a service company consisting of 3000+ workstations and 600+ servers. Here it goes something like this:

    1. I tell my immediate supervisor “Our users HAVE to store their data on the file servers”.

    2. My immediate supervisor tells a business liaison “Our users really need to store their data on the file servers”.

    3. The business liaison tells an IT liaison in the business unit “You know, your users really should store their data on the file servers”.

    4. The IT liaison tells a manager in the business unit “IT says we might want to think about storing our files on the file servers”.

    5. The business manager says “Ok, I’ll think about it”.

  17. Rhavin

    Local Gov’t IT is chronically underfunded and understaffed. It’s easy to play armchair quarterback and sling around a bunch of “why didn’t they?” questions. I’d be surprised if their IT staffing ratios per desktop/server supported and total budget even came close to the average private sector firm. You simply can’t implement some best practices when you have neither the manpower nor the budget to do so.

    As to the attack itself, if it did come down to the TDSS rootkit being the issue, I’d be having a long talk with my AV security vendors.

    1. Paris Hilton

      Agreed. AV should stop rootkits. It’s no longer reasonable to expect the OS to stop them. That dream went out with the noughties.

  18. TCPDump

    Can you say digital forensics…

    Apparently neither could they. A malware outbreak, they trace down the origin… and rebuild the system rather than performing digital forensics 101?

    Only later do they consider it a crime and call in the FBI, who will now ask why they destroyed the source… which will have more digital fingerprints on it than an infected machine, including (possibly) the name and account that was used to put the malware on the system.

    1. Paris Hilton

      Well there’s a pattern to their behavior. We all see that.

  19. Fagundes

    Situations like this one make me wonder why people still use this crappy OS called Windowze…

    Hey you guys, there are many other secure OSes out there!

    1. Scott

      Yeah, but this was an inside job. Even if you’re using the most secure OS in the world, is it really going to protect you from a malicious admin?

      1. Seeker

        How can you be sure it was an inside job? Had the IT staff not trampled all over the evidence, a forensic analysis could have shown for sure. Maybe it was just designed to look like an inside job.

  20. M

    I’ll bet an admin ran a poorly written script.

    I’ve seen it several times: a company writes a script that, as its last step, does a del *.*.

    In testing the script works fine.

    Then they deploy through some automation method. Well, if the script doesn’t explicitly set its current working directory, the default is system32.

    You can imagine what happens when the script does a del *.* with system32 as the working directory.
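
    A defensive sketch of the same cleanup step done safely (the staging path is a made-up example): delete by absolute path and sanity-check the target instead of relying on whatever the current working directory happens to be when the automation framework launches the script.

    from pathlib import Path

    STAGING = Path(r"C:\Temp\deploy_staging")     # the only directory this ever cleans

    def cleanup() -> None:
        # Guard against the "del *.* in System32" class of accident described above
        if not STAGING.is_dir() or STAGING.name != "deploy_staging":
            raise RuntimeError(f"unexpected cleanup target: {STAGING}")
        for f in STAGING.iterdir():               # absolute paths, never *.* in the CWD
            if f.is_file():
                f.unlink()

    cleanup()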

    1. Paris Hilton

      I’m behind you all the way. Most scripts end with *.*. We’ve all seen that. To blame insiders or worse, to blame Windows, is simply uncalled for.

  21. jbmoore61

    Data recovery is a case-by-case decision. A high-level supervisor, bureaucrat, or executive is not going to like having their workstation wiped and reimaged if they haven’t backed it up and there are important documents on it. Obviously, you would not try to recover a clerk’s system, and the majority of the systems would just be rebuilt, but there will always be certain systems that will be treated differently because of their function or their users. One would likely image any accounting systems that were sabotaged, due to their function and as possible evidence.

    1. decora

      thinking that clerks’ systems don’t have important documents on them, well, that’s about as dumb a thing as I think I’ve heard in a while.

  22. James

    How do you know it’s an inside job?

    I don’t care how tight your Windows security is, you will get hit eventually. It is not possible to lock down the Windows operating system. Yes, you can set NTFS permissions on C:\Windows\System32 so that no user or application is allowed to delete files, but this breaks many old, improperly written applications. And even then, the hackers can blow right past your fancy NTFS permissions with a simple privilege escalation hack to become the SYSTEM account (root on Windows).

    There are so many holes in Windows because it was never properly engineered to be a multi-user OS.

    I’ve found malware on users’ systems with no local admin rights and heavily locked down. All they did was surf the web, and an evil advertisement included exploit code that blew through an unpatched IE browser using a zero-day exploit, i.e. no fix from Microsoft till months later! The malware was a lot more than just spyware: it managed to get SYSTEM access, install a rootkit and keylogger, avoid detection by the AV software by using encryption, etc. It made secure encrypted connections to the outside world through browser ports and tunneled right through the firewall. It infected other nearby computers. It stole admin account passwords and infected files on the servers, etc.

    Time to abandon ship and get the heck off Windows! Run, don’t walk to Linux or a Mac. It’s your only hope!

    These sort of security vulnerabilities just don’t happen on other operating systems. It’s not because they are not targeted, it’s because Windows is so full of holes and therefore is a sweet target!

    1. Paris Hilton

      Here we are trying to help the city of Norfolk and all you can do is badmouth the system we love? How dare you!

      1. Paris Hilton

        The CNET article talks about bypassing ‘security protections in the operating system’. Right there you know this can’t be about Windows – Windows has no security protections whatsoever. And it was a compiler bug. As the readme said, there is no exploit if you’re just looking at the source code. And all you do is use the GCC flag -fno-delete-null-pointer-checks and your worries are over.

        OK, one bug/potential hack down, 299 thousand to go. Oh wait – that’s Windows! lolz

  23. Jason Vasquez

    There is no proof this was an inside job, or that it was even malware. It’s all speculation at this point. Unfortunately they may never really know what happened because they did not follow basic security incident response practices.

    Hopefully what the City of Norfolk takes away from this is that they recognize how important it is to have a well funded and properly trained IT staff. The only way users are going to do anything with their computers such as backups, proper and secure PC usage, etc. is through IT resources (user training, proper infrastructure, etc.), and that starts at the top.

  24. pepepecas

    photorec can recover ANYTHING… ANYTHING… too bad it takes too long for 800 machines… my idea… no idea if this would work… copy the /sys32 folder from a pretty machine to a USB… paste it onto the sad machine… reboot… could that be done????

    Every admin should either make people save to a server on the network OR go to the machine and do the backup themselves… no excuses for lost data… and yes, it DOES TAKE TIME… tuff…

  25. odf

    No coincidence these were windows machines.
    Ditch it and go with something better.

  26. sighingSadly

    Mr. Krebs, some more precise language would be a good thing. Apparently the PCs survived and were not destroyed. They did not melt down. Non-tech-savvy users are confused enough; you don’t need to be making it worse.

    To all the commenters slamming “lazy” users, a pox on you. Most shops are understaffed these days, and blaming users for not wanting to waste time on some poorly-implemented data storage scheme is dumb.

    And why in the HECK is anyone still using Windows in any networked environment. That is INSANE. Look at all the suggestions in these comments for locking systems down to prevent mishaps…it’s all about removing functionality. And it still doesn’t work. Any shop that really wants a high degree of central control should set up terminal servers, and reserve PCs for staff that really need them, like developers, multimedia producers, and IT.

    This is sad.

  27. Tash

    Everyone on this thread needs to start here on incident response –> http://csrc.nist.gov/publications/nistpubs/800-61/sp800-61.pdf this is a link to NIST 800-61, which will give you a basic idea on what you should be doing on incidents like this because you all suck at it and after all of you read that, you should join Cluff in reading 800-83 and then pop over to the 800-86 and give that a whirl AND THEN everyone sign up for this class right here http://www.sans.org/security-training/curriculums/incident-handler and learn some stuff cause ya’ll really suck at this… but hey, what would I know? I’m just a public high school English teacher…

    1. Solo Owl

      You want us all to read 369 pages of tech stuff and then take a couple of 1-week out-of-town courses? You think time grows on trees?

  28. Dalmatian90

    2 Comments.

    First, regarding everyone storing all files on the server through techniques like roaming profiles:

    A lot of city computers are likely to be used in remote locations with only relatively narrow pipes back to the data center.

    Second, to re-emphasize what a few folks above posted…recovering files from individual machines simply doesn’t scale well when you need to recover 800 machines. Re-image and move on.

    You’d be looking at adding weeks to the recovery effort otherwise, or hiring a scad of techs at $75+/hour, neither of which is a good option.

    The city already budgeted for someone’s salary; they can sit there and recreate what they truly need, and it doesn’t cost the city a dime extra. The sooner they get a working computer, the sooner they start recovering. The city didn’t budget for outside help to recover the files.

    That’s a difference between a soft and hard cost that is very real in most municipal budgets.

Comments are closed.