April 8, 2011

When it’s time to book a vacation or a quick getaway, many of us turn to travel reservation sites like Expedia, Travelocity and other comparison services. But when cyber crooks want to get away — with a crime — they increasingly turn to a less well-known kind of booking service: underground sites that make it easy to rent hacked PCs, which help crooks ply their trade anonymously.

We often hear about hacked, remote-controlled PCs or “bots” being used to send spam or to host malicious Web sites, but seldom do security researchers delve into the mechanics behind one of the most basic uses for a bot: to serve as a node in an anonymization service that allows paying customers to proxy their Internet connections through one or more compromised systems.

As I noted in a Washington Post column in 2008, “this type of service is especially appealing to criminals looking to fleece bank accounts at institutions that conduct rudimentary Internet address checks to ensure that the person accessing an account is indeed logged on from the legitimate customer’s geographic region, as opposed to say, Odessa, Ukraine.” Scammers have been using proxies for ages, but it is striking how easy it becomes to find victims once you are a user of such an anonymization service.
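
To make the mechanics concrete, here is a minimal sketch in Python of the kind of rudimentary region check such a proxy is designed to defeat. The lookup_country() function below is a hypothetical stand-in for whatever commercial geo-IP database a bank might license; it is not a real API.

    def lookup_country(ip_address):
        """Hypothetical geo-IP lookup; returns an ISO country code."""
        raise NotImplementedError("plug in a real IP-to-country database")

    def login_looks_suspicious(ip_address, customer_country):
        # Flag the session if the source address resolves outside the
        # customer's home country. A proxy on a hacked PC in the
        # victim's own region sails right past this test.
        return lookup_country(ip_address) != customer_country

A crook who rents a bot in the victim’s city appears, to a check like this, to be logging in from down the street.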

Here’s an overview of one of the more advanced anonymity networks on the market, an invite-only subscription service marketed on several key underground cyber crime forums.

When I tested this service, it had more than 4,100 bot proxies available in 75 countries, although the bulk of the hacked PCs being sold or rented were in the United States and the United Kingdom. Also, the number of available proxies fluctuates daily, peaking during normal business hours in the United States. Drilling down into the U.S. map (see image above), users can select proxies by state, or use the “advanced search” box, which allows customers to select bots based on city, IP range, Internet provider, and connection speed. This service also includes a fairly active Russian-language customer support forum. Customers can use the service after paying a one-time $150 registration fee (security deposit?) via a virtual currency such as WebMoney or Liberty Reserve. After that, individual botted systems can be rented for about a dollar a day, or “purchased” for exclusive use for slightly more.

I tried to locate some owners of the hacked machines being rented via this service. Initially this presented a challenge because the majority of the proxies listed are compromised PCs hooked up to home or small business cable modem or DSL connections. As you can see from the screenshot below, the only identifying information for these systems was the IP address and host name. And although so-called “geo-location” services can plot the approximate location of an Internet address, these services are not exact and are sometimes way off.
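
The host names themselves come from simple reverse-DNS (PTR) lookups, which anyone can reproduce. Here is a minimal sketch using only the Python standard library; the example name in the comment is made up.

    import socket

    def reverse_dns(ip_address):
        # Ask DNS for the PTR record associated with this address.
        try:
            hostname, _aliases, _addresses = socket.gethostbyaddr(ip_address)
            return hostname
        except OSError:
            return "(no PTR record)"

    # A generic cable-modem record identifies nothing, but a name like
    # "mail.example-company.com" points straight at its owner.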

I started poking through the listings for proxies that had meaningful host names, such as the domain name of a business. It wasn’t long before I stumbled upon the Web site for The Securities Group LLC, a Memphis, Tenn.-based privately held broker/dealer firm specializing in healthcare partnerships with physicians. According to the company’s site, “TSG has raised over $100,000,000 having syndicated over 200 healthcare projects including whole hospital exemptions, ambulatory surgery centers, surgical hospitals, PET Imaging facilities, CATH labs and a prostate cancer supplement LLC with up to 400 physician investors.” The proxy being sold by the anonymization service was tied to the Internet address of TSG’s email server, and to the Web site for the Kirby Pines Retirement Community, also in Memphis.

Michelle Trammell, associate director of Kirby Pines and president of TSG, said she was unaware that her computer systems were being sold to cyber crooks when I first contacted her this week. I later heard from Steve Cunningham of ProTech Talent & Technology, an IT services firm in Memphis that was recently called in to help secure the network.

Cunningham said an anti-virus scan of the TSG and retirement community machines showed that one of them was hijacked by a spam bot that was removed about two weeks before I contacted him, but he said he had no idea the network was still being exploited by cyber crooks. “Some malware was found that was sending out spam,” Cunningham said. “It looks like they didn’t have a very comprehensive security system in place, but we’re going to be updating [PCs] and installing some anti-virus software on all of the servers over the next week or so.”

Other organizations whose IP addresses and host names showed up in the anonymization service include apparel chain The Limited; Santiam Memorial Hospital in Stayton, Ore.; Salem, Mass.-based North Shore Medical Center; marketing communications firm McCann-Erickson Worldwide; and the Greater Reno-Tahoe Economic Development Authority.

Anonymization services add another obstacle on the increasingly complex paths of botnets. As I have often reported, tracing botnets to their masters is difficult at best and can be a Sisyphean task. And as TSG’s experience shows, it’s far easier to keep a PC up to date with the latest security protections than it is to sanitize a computer once a bot takes over.

[EPSB]

Have you seen:

Reintroducing Scanlab (a.k.a. Scamlab)…Many sites and services require customers to present “proof” of their identity online by producing scanned copies of important documents, such as passports, utility bills, or diplomas. But these requests don’t really prove much, as there are a number of online services that will happily forge these documents quite convincingly for a small fee.

[/EPSB]


130 thoughts on “Is Your Computer Listed ‘For Rent’?”

  1. me

    Nice sleuthing, as always great postings Brian.

    1. DeborahS

      Well, I can’t even find the posts I’m replying to in all this mess. I only know what they said because I got their comments in email, so I guess I’ll reply to them all here.

      @JBV
      “You just don’t get it, do you, Deborah? Ed isn’t expressing his opinion, he’s quoting Brian’s post from last September, and pointing to another of Brian’s security advisories. If you weren’t so quick to post – and had looked at the links, you would have seen this. Ed is trying to do damage control, as are many other commenters here, including me, who are concerned that some nonprofessional reader of this blog may believe that you are well-versed in the current state of computer security. You aren’t!”

      What links was I supposed to look at? I didn’t see any. And if you think that any casual reader of this blog is going to go back and read through a year’s worth of posts and all their comments before they say a peep, well here’s a reality check for you. I am a good and faithful reader of things in general, and I can just barely keep up with the few threads here that I’ve posted on in the past week or so. To expect me to read and understand everything that’s gone on in the past 6 months is just plain unreasonable.

      And if you think that some poor innocent might be struck blind and deaf by reading the words of an infidel, here’s another reality check for you – they won’t be. People as blind, silly, and uncaring as the ones you are so concerned about won’t wade through all this verbiage and take it seriously enough to be persuaded one way or the other. Heck, I’ve been reading the email notices to this blog and the smattering of the few comments that post to them in the first few hours for months. And I had no idea that there was this depth of discussion going on until I took the plunge and threw my two bits into the ring.

      I’m not a security professional, and it has done nothing but astonish me that such a furor has been raised because a knowledgeable, intelligent person has dared to stand up and say something in this crowd.

      Honestly, if you think that calling me a moron and “disliking” me into oblivion is your best strategy to deal with me, I’m tempted to think that all of you are simply morally and intellectually bankrupt. Because I am knowledgeable, I am not stupid, and perhaps most significantly, I am not alone. Perhaps something that distinguishes me from my peers is that I have this quixotic bent to my personality that makes me say what I mean and mean what I say, and I’ll stand by it even if it seems like the whole world is against me.

      @Chad
      ” Two motorcycle riders. One wears a helmet and one doesn’t. Just cause the one without the helmet is a good driver does not mean that he/she should recommend riding bikes without helmets because he/she likes the breeze on his/her hair.”

      Oh, there is so much wrong with this little story on so many levels – where should I begin?

      In the first place, there are people, and I would be one of them, who think that the times when Americans liked to feel the breeze in their hair, and knew that they’d better be good drivers or something terrible would happen, were better times, with better people, than we have now.

      Actually, I think I’ll leave my response at that, even though I could really tear into the subject of what pampered, bullied babies Americans have become (and how falsely glorified the bullies think they are). Either you get it or you don’t.

      @Nick P
      “Windows 7 uses about 350MB of RAM and works pretty quick on a formerly WinXP machine. It’s not just a Vista service pack: it’s Vista done *right* and one of few versions of Windows that people loved right out the door. I’ve switched all my relatives over to a hardened Windows 7 and have fewer malware- or reliability-related tech support calls.”

      I’ve said before that I’ll give Win7 a fair chance when I’m ready to. But times are hard now, in case you haven’t noticed, and rationalizing breaking free a couple hundred bucks for a new operating system that I don’t think I need is, well, a hard sell at best.

      And I’m not one of your relatives, who needs to call someone for help whenever their computer goes gunny bag. In fact, I haven’t had a serious problem of any kind for a lot of years now, and the ones I have had in the past 10 years I’ve been more than capable of dealing with myself. And I’m no freak. There’s lots of people like me.

      I do however look forward to a day when I can afford Windows 7 and take it for a test drive.

    2. DeborahS

      Well, if you ever thought you would persuade me of anything, you can forget that. This was an interesting little byway for me, but really all I found here was a bunch of snakes, ready and waiting to strike anyone who ventures too close.

      Maybe I should have just used my initials, or lied and identified myself as a male. I keep thinking that’s what I should do in the future, but it just rubs me the wrong way. I am female, and I disagree with you.

      Live with it.

    3. DeborahS

      And do you think that, after the bashing I’ve gotten on this forum, I would trust any baboon who calls himself a Security Professional? Believe me, I will be hiring someday.

      Fat chance, suckahs.

      1. DeborahS

        Sure, you can all dance your Tra-la-la dance now that I’ll be gone, but you’d better believe that I now have a very bad taste in my mouth for anything smacking of “Security Professionals”.

        Nice going guys. See you on the other side (of wherever that is.)

  2. Al Mac

    The Securities Group should give Brian a reward for the heads-up.

  3. CS

    I’m still waiting to hear of bots being auctioned off or traded, etc., for the access the machines represent: i.e., being behind the network perimeter of a targeted organization. An attacker might specifically seek out bots in small business networks for better spear-phishing opportunities, for example. Or perhaps use the bot as a foothold to gain access to private corporate networks. If they matched up the access properties to those who find them valuable, they could presumably up the profit, but they’d have to streamline discovering what access a given bot represents. There are a lot of commoditization opportunities that the botmasters seem to have overlooked. Seems they have graduated from smash-and-grab tactics though, from what you report here.

    1. Norskie

      I heard about that some years ago while conversing with a hacker.
      He had been following a Russian online BBS, and he said an individual announced that he had xxxx computers available in his botnet if anyone needed some power.
      Some other individuals then responded; they “hackled” and agreed on a price, then went “offline” to finalize payment, passwords, etc.
      So it is done, probably more often than not in gated communities.
      The problem arises when these criminal bands start to cooperate, or syndicate themselves.
      Sometimes I wish I was a farmer…. :o)

      Great website BTW

      1. lurker

        “hackled” = haggled ?? Wonderfully descriptive, love it!

  4. JCitizen

    Reminds me of a service made available by download for oppressed persons in Iran, where supposedly they could redirect the mission of several bot servers to remain anonymous in their opposition communications.
    I understood followers of the Dalai Lama were using it as well.

    The link is changed often so mine is obsolete.

  5. brucerealtor

    Customers can use the service after paying a one-time $150 registration fee (security deposit?) via a virtual currency such as WebMoney or Liberty Reserve.
    —————————————————
    And the REAL purpose of these virtual currency brokers IS ???

    So only CIA, NVD & Mossad monitor these guys ?

  6. DeborahS

    “And as TSG’s experience shows, it’s far easier to keep a PC up to date with the latest security protections than it is to sanitize a computer once a bot takes over.”

    Perhaps. But then again, what could be easier than simply restoring your most recent known-good system image, assuming you keep your system backed up and know within a reasonably short time that your system has been made a ‘bot.

    I’m not disagreeing with you, but here again it seems like there is at least a potential difference between best practices for business (and maybe mobile) computing and best practices for home computing. (And of course for people who really don’t want to think about anything. They should be geared up with all the latest and best security protections too.)

    In a business computing environment, quite likely there is more than one user per machine, and none of the users are particularly paying attention to it, thus none are especially likely to notice odd behaviors. Indeed, in a work environment this would probably be considered an unproductive use of time. And in some mobile computing environments, where multiple networks are being accessed, what’s happening where and when may simply be too much to keep track of. In these types of environments I can see the wisdom of relying on the latest security protections.

    But in the single user home computing environment, particularly if the goal is maximizing the usefulness of one’s computing resources, the argument can be made for substituting alertness and brain power for security protections.

    I’m still fairly new to this blog, and never have been terribly familiar with the security concerns of businesses, organizations or mobile users, though I’m very much interested in learning about them. But I see a lot of people “disliking” what I have to say, and honestly, I frequently can’t figure out what it is that they “dislike”. I’ve found that the best personal security strategy for me is to forego “canned” security solutions and do-it-myself. I’m basically satisfied with the outcomes I’ve had, but would like to learn more and share experiences with others.

    So if you are of the persuasion that no one should forego the latest security protections and you see me say something you disagree with, please don’t just hit the “Dislike” button. Say a few words about what you “Dislike” or disagree with and why.

    1. grumpy

      What’s to dislike is the assumption that more than a very small fraction of computer users are capable of doing what you do. The canned solution is so much better than anything 99% of users – and organizations too which is a scary thought – can do on their own. The last 1% is irrelevant to any discussion of security – pwned or not, they’re not going to make a difference in the grand total.

      Besides, just re-imaging when pwned is like rebuilding the house when it has been broken into. So much better to do a few things to make it harder to break in and steal whatever it is that has value. If the cost of breaking in (including risk to self) exceeds the value of the loot a rational thief will not bother. Most thieves are rational.

      1. Terry Ritter

        @grumpy: “The canned solution is so much better than anything 99% of users … can do on their own. The last 1% is irrelevant to any discussion of security”

        This statistical view of security is so right that I have to wonder why it avoids naming names:

        Most browsing occurs in Microsoft Windows, so, naturally, ALMOST ALL malware is designed to run ONLY in Windows. That means the “malware resistant computer” exists right now, and it is any computer which does not run Microsoft Windows online.

        Sadly, “canned solutions” (to malware) have been rendered generally ineffective. Can they stop bot malware from getting in? Probably not. Can they find a hiding bot? Almost certainly not.

        “Besides, just re-imaging when pwned is like rebuilding the house when it has been broken into. So much better to do a few things to make it harder to break in and steal whatever it is that has value.”

        As other comments mention, recovering a clean image assumes that one CAN know when a bot is present (and that it is not in the image). There is no such test and no such tool.

        But the response also assumes that a “few things” CAN be done to “make it harder to break in.” The time has long passed when significant improvement was easy:

        * Anti-virus scanners cannot keep up with new malware production.
        * Polymorphic malware “encrypts” its files so they cannot be found by scanners.
        * “Rootkit” technology prevents the OS from showing malware files, or showing modified file contents.
        * “White Hat” bots might possibly be helpful, but then you have to agree to live with an active bot.

        “If the cost of breaking in (including risk to self) exceeds the value of the loot a rational thief will not bother. Most thieves are rational.”

        That confuses the decision the attacker makes to take a particular distribution approach with what happens after the approach succeeds. Yes, of course, attackers will choose the most profitable approach to get their malware to run, which often will put a bot in place. But only AFTER the bot is in place can they look around to see what they have.

        Most bots are not going to stumble on a pot of gold, although some will. But a bot has worth of its own, for hiding identity, for distributing spam and malware, for DDOS attacks, for distributed computation and so on. The attacker who finds no gold does not simply remove the bot and slink away to find somebody else. The bot has worth anyway, and maybe gold will appear someday.

      2. DeborahS

        @ grumpy

        “What’s to dislike is the assumption that more than a very small fraction of computer users are capable of doing what you do. The canned solution is so much better than anything 99% of users – and organizations too which is a scary thought – can do on their own.”

        Fair enough. Unfortunately there aren’t any hard statistics to tell us what the computer savvy breakdown is out there.

        And it could be a matter of perspective. I’ve been in university and technical environments my entire adult life, and personally know a lot of people who can do what I do, so it might be hard for me to see outside of the box that I live in. And if you hang out on Arstechnica, Slashdot, Techdirt, the How-To Geek, Overclockers, etc., you’ll see an amazing number of people who truly understand their computers and technology to quite a profound degree. Really, I often feel like a very small fry in those crowds. (Which might be why I have the time and interest to hang out here, while they’re too busy and absorbed in their technical projects… 😉 )

        So maybe we can compromise on those numbers a bit. Say, the technically capable being quite a bit more than 1%, (reaching for a tail to pin on the donkey) – ooh, maybe 10%? 15%? And a lot of those tech-jockeys are for hire, as I soon will be. Perhaps this number is still too low for you (a security professional?) to consider being significant. That I don’t know, but you’re probably safe in assuming that this technically savvy minority would have less need of professional security services and advice than the remaining majority does. And it’s also reasonable to assume that the longer we live with the newer technologies, the larger this pool of technically savvy people will grow.

        “Besides, just re-imaging when pwned is like rebuilding the house when it has been broken into. So much better to do a few things to make it harder to break in and steal whatever it is that has value.”

        Absolutely agreed. An ounce of prevention is worth a ton of cure, particularly if you have something of great value to steal. But one of the discussions going on is about the different forms that good prevention can take.

        As to whether there is such a thing as a known-good system backup, I think that’s been addressed in later comments. Ultimately that’s up to the system owner to decide, hopefully on a solid basis of knowledge about the system’s history, and a timeline of when and how the system was attacked. If you don’t know those things and cannot risk reinfection, then a clean install on a reformatted hard drive is most likely your only good solution. But that would be the case whether you had “canned” security in place prior to the attack or not.

        1. prairie_sailor

          @DeborahS

          Let’s put it this way – in 4 years of doing virus removals (usually a reformat, as I don’t know what else may be lurking in the computer), I have yet to see a fully patched computer – Windows, Adobe, Java, etc. – that was pwned. Though that’s not to say I might not have missed one or two, or that the owner of such a computer just hasn’t brought it in because he doesn’t know it has a problem.

    2. InfoSec Pro

      @Deborah, my general dislike is that your posts seem to prescribe simplistic solutions that evidence a lack of first hand familiarity with the environments for which you prescribe. btw that’s a hot button for me, I’ve seen it too often in politics – when I was an elected local government official every time someone new joined discussions about significant problems the same thing happened, usually revisiting solutions that were already flogged to death multiple times. Good advice for newbies is to listen and learn, and remember that ’tis better to remain silent and be thought a fool than to speak and remove all doubt. Sorry to be so harsh, but you asked for it.

      Specifics in your post this time: you said “what could be easier than simply restoring your most recent known-good system image, assuming you keep your system backed up and know within a reasonably short time that your system has been made a ‘bot.”

      Both assumptions are false most of the time. Since only one has to be false to invalidate your suggestion it is seldom applicable.

      There is another assumption that you totally missed. You implicitly assume that a regular backup can be known to be good. It can be difficult to establish when a system was first compromised. Restoring a compromised backup is worse than useless. Read the separate response I will be posting to comment on Brian’s story for more insight about that.

      I could go on, but don’t want to dump on you. You are trying to understand and learn and contribute, and those are all laudable. If you would like some private email exchange, let me know and I will set up a one-shot address to establish contact.

      -ISP

      1. Veronica

        Infosec Pro – It’s rare to encounter someone, in real life or in cyber-ia, gifted with the ability to assertively, rather than aggressively, point out the errors in someone’s submission while at the same time treating that person with dignity and respect. In addition, your offer to review the subject in greater detail with Deborah offline was a generous and kind gesture. You are either the parent of teenagers, someone with a lifetime prescription for xanax or just a truly nice person with no agenda. Maybe all three. But anyway, just thought I would say thanks for keeping it a safe zone here for the girls (or guys) who are not as knowledgeable as this group in general but want to participate. Comments like JBV’s just serve to belittle people and ultimately create a hostile environment. What can possibly be gained from using the word stupid in response to someone who is admittedly less than knowledgeable on the topic discussed?
        I’ve learned quite a bit from following the discussions on this blog and find most of you intelligent, articulate and refreshingly facetious. Thanks for all the knowledge and sharing. If I do post soon, I’ll formulate it as a question and not a statement. 🙂
        As a side comment, I hold a primary position in a well-known hospital’s emergency dept, and I always tell people I meet to be nice to everyone you cross paths with in life, because I’m consistently amazed at how small a degree of separation exists between people who have little to no knowledge of one another. Most of the time, my job puts me in the position of providing a cushion-like barrier between someone going home happy and healthy or buying a first-class ticket on the reaper express. The person you insulted yesterday could be the person who will be saving your life tomorrow. I think it’s better to have that person WANTING to help you rather than BEING REQUIRED to help you. While it’s true that professionally the quality of care will be equal, the delivery can be noticeably different. Whether people in the medical community, or people in general, choose to acknowledge that fact, well, that’s another story entirely. The bottom line is that positive energy is a powerful force, and the same can be said for negative energy. Peace & Love. V

        1. JCitizen

          Good post Veronica;

          I should be so bold as to be more considerate of folks. It takes a bold personality to do that. Probably the opposite opinion of most. :p

        2. Yar

          Good post, Veronica. This goes for everyone from the ER employee to the person serving or preparing your food.

          You never know if the person you belittle will be preparing or handling your food, washing your shiny car, or even interacting with your children later. Being kind as a rule can have a great effect on your life.

      2. DeborahS

        “my general dislike is that your posts seem to prescribe simplistic solutions that evidence a lack of first hand familiarity with the environments for which you prescribe. btw that’s a hot button for me, I’ve seen it too often in politics – when I was an elected local government official every time someone new joined discussions about significant problems the same thing happened, usually revisiting solutions that were already flogged to death multiple times. ”

        First of all, I’m not aware of having “prescribed” any solutions, though I believe I have proposed some topics for discussion that I haven’t seen brought up in the time that I’ve been “lurking” here.

        I too have been heavily involved in local politics, plus other highly charged social situations, but perhaps I take another view. The notion that any important topic can be “flogged to death” strikes me as a jaded view of public discourse. Almost always, anytime a subject is reopened for discussion, new facts and perspectives can come to light that are well worth the time and trouble. Or people previously unexposed to them can be brought into the discussion. You could say that the subject of whether big government is good or bad has been “flogged to death” a thousand times, yet it continues to be valuable for us to talk about it.

        Regarding the use of system backups to recover from an attack, I think the subject is being discussed in other posts. I was merely suggesting that if one has a known-good backup, restoring it is simple enough. Of course if you don’t have one or you don’t know that it’s good, the point is moot.

    3. helly

      In addition to the great responses already given, I would add that this particular statement bothers me: “the argument can be made for substituting alertness and brain power for security protections”

      Your general approach seems to rely on noticing odd behavior created by an infection. This is seriously flawed when you’re dealing with an infection that doesn’t elicit odd behavior. If a trojan is sitting on your machine logging keystrokes and quietly sending them out to the internet, how are you supposed to use alertness or brain power to detect that? I know I certainly couldn’t do that.

      You also advocate forgoing the latest security protections; that is extremely poor advice to anyone. It’s akin to a soldier taking off a bulletproof vest and going into combat, thinking he can reason out where bullets will be fired and simply avoid them that way. On the internet, use the security tools made available to you first, then use common sense and alertness. Both concepts are flawed without the other.

      1. DeborahS

        @ helly

        “Your general approach seems to rely on noticing odd behavior created by an infection. This is seriously flawed when you’re dealing with an infection that doesn’t elicit odd behavior. If a trojan is sitting on your machine logging keystrokes and quietly sending them out to the internet, how are you supposed to use alertness or brain power to detect that? I know I certainly couldn’t do that.”

        No, I might not be able to detect a keylogger’s behavior either, but I can stop it from sending its data out on the internet (that’s where the “brain power” part comes in), or at least be able to detect its attempts as something to look into. And that’s how I’ve caught the 3 or 4 malwares (is that a word?) I’ve been infected with in the last 8 years.

        “You also advocate forgoing the latest security protections; that is extremely poor advice to anyone. It’s akin to a soldier taking off a bulletproof vest and going into combat, thinking he can reason out where bullets will be fired and simply avoid them that way. On the internet, use the security tools made available to you first, then use common sense and alertness. Both concepts are flawed without the other.”

        First of all, I do not and have not “advocated” foregoing the latest security protections. I’ve merely stated that it’s an option that some people successfully use.

        I think the analogy of the bulletproof vest doesn’t quite hold up here. Or at least the implication that the latest security protections are equivalent to a bulletproof vest is highly suspect. There’s plenty of testimony in comments on this blog that the latest security protections are unable to stop many of the bullets. And the idea that each and every “bullet” (malware) can mortally wound you seems a little hyperinflated to me. Nobody here is suggesting that you go out on the battlefield with no protection or precautions of any kind. What’s being discussed is what the options for protection might be.

        1. helly

          I am impressed that you were able to discover and stop outbound traffic from your PC related to a trojan without security tools. If you wouldn’t mind sharing how you did so, I would be curious to hear it.

          I can certainly think of ways to do so, but I don’t know how I could communicate that to the average computer user as an ongoing recommendation. Which gets back to the basis of my response to your approach. Your recommendations, while they definitely can work, are not good for the average computer user. I spent a long time working with the general public on computer issues, and in that time I met only a handful of competent individuals who could do what you do successfully. My point is that if an average user is coming to this site looking for advice, your strategy could seriously mislead them.

          My examples are usually pretty bad, agreed. But I think this one holds up. A bulletproof vest is part of a layered protection system. I agree it won’t stop every type of attack and it’s not perfect, but if it stops attacks it should be used.

          The root of my argument is that you are discussing an optional security approach that works for you, one that does not include using the general security tools available today. Common sense is a great way to stop threats, but it should only be one layer in a security strategy. If your approach works for you, that is excellent, but don’t put it out as a fail-proof recommendation for others less informed to follow.

          As a personal example I run a virtual machine for all of my browsing. In 8 years I have never had an unintentional infection to my computer. I don’t advocate my approach to everyone because it really isn’t functional for everyone.

          1. DeborahS

            @ helly

            “I am impressed that you were able to discover and stop outbound traffic from your PC related to a trojan without security tools. If you wouldn’t mind sharing how you did so, I would be curious to hear it.”

            My first line of defense is ZoneAlarm, v6.5. When properly configured and functioning, it will notify you whenever a process asks for internet access, and suspend the request until you click “Allow” or “Deny”. It also has an Internet Lock, which among its other uses will trap packets and data on your system so you can take a closer look at them. Essential Net Tools has a monitor of all internet-active processes and connections, and a neat right-click function to open SmartWhoIs, which allows you to look up the IP address of anyone connected to your system. And, much less often, I use CommView to take a look at packets and what exactly is in them. Of course it helps that I tested network servers at Microsoft and basically know how to read the packets, and where to find parsers, etc., but the knowledge is available for anyone who knows to look for it. Tamosoft, the makers of ENT, SmartWhoIs and CommView used to have forums where such things are discussed. I don’t know if they still do, but I’m sure other such forums exist for anyone who wants to seek them out.

            Whether the malware can establish and use your internet connection is key to its success. Of course I could be wrong, or someone else could have a better answer, but a primary cornerstone of my strategy has been to guard that door.
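
            For anyone who would rather script that kind of audit than sit on firewall prompts, here is a minimal sketch of the idea, assuming the third-party psutil library (my hypothetical choice here; any connection enumerator would do): list every process holding an established connection so the odd ones can be investigated.

                import psutil

                # Enumerate established connections and the owning process.
                for conn in psutil.net_connections(kind="inet"):
                    if conn.status == psutil.CONN_ESTABLISHED and conn.raddr:
                        try:
                            name = psutil.Process(conn.pid).name() if conn.pid else "?"
                        except psutil.NoSuchProcess:
                            name = "(exited)"
                        print(name, "->", "%s:%s" % (conn.raddr.ip, conn.raddr.port))

            Of course, malware that injects itself into a trusted process will look legitimate to a listing like this, just as it can to a firewall.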

          2. Terry Ritter

            @DeborahS: “My first line of defense is ZoneAlarm, v6.5. When properly configured and functioning, it will notify you whenever a process asks for internet access, and suspend the request until you click “Allow” or “Deny”.”

            I vividly recall hearing about these issues some years ago. At that time, malware had learned to LIE to the outgoing firewall about what process it was, and to run code in other processes, to get around being stopped.

          3. DeborahS

            @ Terry Ritter

            “I vividly recall hearing about these issues some years ago. At that time, malware had learned to LIE to the outgoing firewall about what process it was, and to run code in other processes, to get around being stopped.”

            Well, I was definitely not paying attention to discussion of security issues until relatively recently, so I missed out on that.

            Either I’ve been incredibly lucky, or ZoneAlarm’s method of identifying processes is not what the malware writers expect, or one of my other tactics prevented that particular type of malware from getting into my system.

            In principle that could be done, but the success of that type of malware would depend on how well it anticipated each outbound firewall’s method of identifying and vetting processes. I do know that ZoneAlarm looks at the components of processes, and you can configure it to vet each one before allowing the primary process to access the internet. I don’t have anything terribly restrictive set up, but I still see entries in the log where ZoneAlarm has blocked a process automagically because it didn’t have the component dlls and child processes it’s supposed to have. I suppose if I was really paranoid I’d investigate every one of those log entries, but I’ve done that so many times before and it just turns out to be something innocuous. The ones I’m looking at for earlier today turn out to be the FiOS wireless router (in my new apartment – just moved here) begging for internet access by piggybacking on svchost.exe. I don’t know why it wants internet access, and it’s good to know for future reference that ZoneAlarm is blocking it, but it’s definitely not malware.

            And just because there once was malware designed to lie to firewalls doesn’t mean that it always succeeded. Nor does it mean that those types of malware are still in use – I wouldn’t know. But if they are no longer being used, it could well be because they didn’t work, or didn’t work often enough. Anybody know if this type of malware is still in use?

    4. JBV

      Deborah: There are many readers of this blog who are new to the concept of computer security, in addition to business people, and the pros who frequently post here. You are doing them all a great disservice with your sweeping generalities, which really are saying “Who needs all this security anyway?”

      The answer is obvious – everyone needs as much security as they can have in today’s internet environment. The home user needs regular program updates, antivirus protection, a firewall, and some basic rules for staying protected and surfing safely. The business person needs to protect the system, its users, and environment.

      Your position is as stupid as if you were on a sinking boat with room left in the lifeboats, and you were saying, “Oh, don’t bother taking me along, I can swim to shore.”

      1. DeborahS

        @JBV

        “Deborah: There are many readers of this blog who are new to the concept of computer security, in addition to business people, and the pros who frequently post here. You are doing them all a great disservice with your sweeping generalities, which really are saying “Who needs all this security anyway?””

        Well, well. Oversimplification is perhaps an unavoidable evil in public discussions, and in this case I have to reject your reformulation of my position. I’ve never said that security isn’t needed; I’ve only proposed that there are options to using the “canned” security measures, and reasons why one may not want to use them. I haven’t said that one should use nothing at all, but the options that I use may not be, and probably aren’t, suitable for everyone.

        If the assumption is that there is one and only one security strategy that’s going to work for everyone, I’m suggesting that this assumption is flawed.

    5. Faust

      DeborahS, your advice is perfectly reasonable. In fact, on my own machines, I regularly (every 6 mos or so) reinstall from known safe images.

      Ignore the dislikers. In addition to fanboys of other methods, there are plenty of black hat hackers on this forum who don’t want people to view good solutions.

      1. Tony Smit

        I agree. There is no reason for the huge number of “thumbs down” on DeborahS comments from the legitimate folks who regularly post here.

        Fully erasing the hard drive and reinstalling the operating system and subsequent updates and programs is the only sure way to get rid of malware. Making a disk image after all those installations gets one a “known good” image that can be re-installed in the future to save time, unless there have been hardware changes. It’s also a good way to recover from data corruption caused by hardware failures. An intermittently failing power supply does mean things to a hard disk.

        Malware can hide in any operating system, but cannot evade a full disk erasure.

        But the future is not good with all the counterfeiters and other criminals active in China; counterfeit routers have already been found by the military in their systems, and eventually we will be finding motherboards and add-in cards with embedded malware or backdoors on consumer equipment. That’s one good reason for moving the manufacturing of semiconductor chips and circuit boards back to the US.

        1. DeborahS

          @ Tony Smit

          “But the future is not good with all the counterfeiters and other criminals active in China; counterfeit routers have already been found by the military in their systems, and eventually we will be finding motherboards and add-in cards with embedded malware or backdoors on consumer equipment. That’s one good reason for moving the manufacturing of semiconductor chips and circuit boards back to the US.”

          Even though I was somewhat glib earlier today about finding all the blocked requests in ZoneAlarm from my new FiOS wireless router, later I wondered if in fact it might be malware. True, I just moved here and haven’t set up my network, so the router may have some functionality that I don’t need to connect one computer to the internet. But it could also be a virus or some kind of malware embedded in the router. For now, ZoneAlarm is blocking it and I don’t understand enough about what it’s trying to do, but malware is going to be high on my list of possible explanations when I do look into it.

          Completely agree that we should bring hardware manufacture back into the US, and for more reasons than just security. But so far as security goes, even hardware manufactured in the US has turned up with malware in it out-of-the-box.

          1. Al Mac

            @ DeborahS wrote

            “But so far as security goes, even hardware manufactured in the US has turned up with malware in it out-of-the-box.”

            The first virus which ever infected me was the “I love you virus.” I got it from a software vendor, right after I wrote a review praising one of their products, a few hours before warnings came out about that virus.

        2. Nick P

          Good points on malware and counterfeits. But, I might be able to clarify why Deborah’s posts get so many thumbs down. It’s just the things she’s said. She claims strong inside experience in the development processes of mainstream software like Microsoft’s products. Then she makes claims that nobody with inside experience would make, claims that seem nonsensical. The best example was her strong argument against patching: her claim that she has an old OS that she doesn’t patch and that she’s better off for it. She said her experience showed patches are extremely damaging/risky and she’s safer never patching her systems. Most people say patching can cause damage and must be done carefully, but that doing it is usually better (because being a victim is easy & worse).

          It’s hard for me to hear her claims of experience and work at companies like Microsoft, then hear stuff like that where a tester should know better. So, if she makes nonsense claims, I give a thumbs down on it. Note that I usually don’t do that if I merely disagree with a point or someone seems unknowledgeable about the subject. In that case, I offer my views in hopes that we all might contribute and learn something. But, when people say they’re quality assurance professionals, then say we’re safer never patching, I have to give a humongous thumbs down.

          1. DeborahS

            @Nick P

            You raise a number of issues that seem to be quite simple to you and quite complex to me. I’ll do my best to do justice to that in hopes that we can understand one another.

            First of all, it’s been 8 years since I left Microsoft, so by no stretch of anyone’s imagination should I be thought of as a currently employed tester. I am a free agent now, and I can think whatever I want to think. I sometimes still jokingly call myself a tester, because I think that in many ways I’m a “tester” at heart, regardless of my current job title.

            Likewise, “quality assurance professional” is a label you put on me, not one that I ever claimed myself. My official job title at Microsoft was “Software Test Engineer”, and even when I was a tester at Microsoft I scoffed at that job title and the label of “quality assurance professional” that some software testers arrogate to themselves. After 2 years of formal education in Electrical Engineering and 7 years in Engineering Systems at Ingersoll Rand, I think I know what both an engineer and a quality assurance professional are. And I don’t think that I, or any of my Microsoft tester colleagues, could honestly claim either label. True, there was a class of testers who could probably claim the title of being engineers, but at least when I was there they were an elite minority.

            As for software testers being quality assurance professionals. “Quality Assurance” is a job function that only has meaning when there is an engineering specification that is assumed to be a good design, and the QA job is to verify that the finished product conforms to the specification. That’s what you do in a factory. The engineering team designs the product, the manufacturing team produces the product according to the design, and QA verifies that it was done and to what precision.

            That’s not what happens in software development, or at least that isn’t what happened at Microsoft in the years I was there and in the development teams I worked on. For one thing, the “product”, the code, is generated by the engineers, the developers, presumably to the exact degree of implementation they had in mind for it. But the testers’ function is not to verify that the product conforms to the specification. No, the testers’ function is multi-pronged, and whether the code performs according to the developer’s original specification is only one task out of several, and arguably not the most important one. Actually, it would take me a very long time to detail all the prongs in the tester function, but they all revolve around the question of “does it work?”. So at its very basis, software development is radically different from manufacturing, because the only way to verify whether the design of a general purpose operating system is a good one is to test it. Sure, Computer Science degree programs attempt to give their students, our future developers, a huge array of tools to produce good design with, but in the end the crucible is in testing.

            Do I have a jaded opinion of BVTs (build verification testing), the ones who test the patches? Yup, I do. I don’t know if this is the place to air all my observations and prejudices, but yes, I’ll own up that I didn’t and don’t have much confidence in the ones that I used to work with. And I haven’t seen anything to convince me that anything substantial has changed since I was there. In fact, I’ve pretty much proved to my own satisfaction that, at least in Vista, they’re just as slap-dash, get-it-out-the-door-now as they ever were. BVTs and production testers are two different critters. Most people outside of software development don’t know what either one of them are.

            However. I’ve said before on this thread and I’ll say it again. For people who just use their computers relatively lightly, and for Windows users who basically only run Microsoft applications, patching poses no big risks, and is probably a good idea if they don’t want to pay attention to security problems. Their systems will eventually need to be reinstalled and they won’t know why, but so long as that doesn’t happen too often, and they don’t get hit with a zero day, it will all be ok.

            I could keep writing, but I’m beginning to wonder if all those “dislikes” that puzzle me so much are really more indicators that people don’t understand where I’m coming from than anything else. Well, I hope this fills the gap somewhat, though I’m not at all sure that it does.

            I can see and understand why end users of software, particularly operating systems, want and even need to feel that it’s all completely under control and it always works right, they just have to trust it. I just don’t have that confidence. We’ve raised a few of the issues in this thread alone, particularly DLL Hell, as structural reasons why relying on patching is a bad idea. I could elaborate further, but I think I’ll leave it at that for now.

          2. Nick P

            @ DeborahS on Apr 10

            I appreciate your reply. I’ll consider the new information during future conversations.

    6. john

      The reason why canned security is pushed is that most users think doing anything with a computer requires a great deal of skill and a high level of education. Canned security does help make working with a computer easier. You don’t have to have an advanced degree to be smart online. Also, most users don’t want to be bothered with being safe online. All they want the computer to do is work. They don’t care how it works or why, so long as it does. On the network I run, I try to educate my users on why certain security measures are used or are now being added. If you do it in small doses they are more likely to listen. The best is if you can give them an example of why they need to be this way. Over the past year I have been making some real progress with my users. Many are starting to understand that there is no one fix to security, and that it must be layered.

      We IT professionals need to do a better job communicating to the end user why security is so important and why things are this way. Too many times we get frustrated when the user does not do it our way. We must remember they are not in the trenches every day like we are. They do not know how insecure computers really are.

      I have no problem with your approach, but not every user can take this approach. We must do a better job educating the end user. Education is the only approach that will help reduce the risk to our networks.

  7. Clive Robinson

    @ Brian,

    “… it’s far easier to keep a PC up to date with the latest security protections than it is to sanitize a computer once a bot takes over.”

    On an end user machine, maybe; however, patches etc. are known to make systems fail horribly, especially when mission critical.

    The simple fact is that unless you know it’s safe, installing any software onto many modern OSes is a gamble.

    In reality the only way to know you are not going to break something is by testing…

    And this is the rub: testing takes resources and time, neither of which is generally readily available in medium-sized or smaller enterprises.

    And time can be a real issue, as zero day attacks have shown.

    Many organisations cannot just down their systems as and when a patch becomes available. They have to wait for an opportune moment (such as early Sunday morning), which leaves systems in a vulnerable state for longer.

    We have seen the time between the discovery of a previously unknown exploit and its use in attacks shrink from weeks to just hours, whilst the time to produce patches appears in most cases to be rising.

    At some point in the near future, installing the latest patch will “be too late” for a significant number of people, no matter how fast they install it. Therefore we can expect the number of botted systems to rise.

    Thus we really need to consider how we “build” systems so that they can be quickly re-built from scratch and data re-loaded.

    The default installs of most purchased systems unfortunately do not make this easy to do (if anything, they appear to take a perverse pleasure in making it as difficult as possible).

    It is way past the time the software industry should have started sorting this out. Unfortunately, like Nero, in many cases the companies are fiddling and just watching to see who gets burned before they do.

    1. InfoSec Pro

      @Clive, I think that’s a self-serving rationalization.

      The risk of patching causing problems is greatly overstated in my experience, and conversely the risk from not patching is underestimated.

      The main driver is that the cost of patching is felt by the system owner; even if there are no unforeseen side effects, it takes time and effort, so even problem-free patching causes problems for the owner.

      On the other hand the pain of unpatched systems is felt more by others than by the system owner (who probably neither knows nor cares that his system is part of a botnet).

      So how does the incentive align? It supports Clive’s argument.

    2. Terry Ritter

      @Clive Robinson: “we really need to consider how we “build” systems so that they can be quickly re-built from scratch and data re-loaded.”

      Allow me to point out that a LiveDVD boot can be considered a “quick re-build,” and storing data in the clouds can minimize the need for data re-load. So we can already do what you want, just probably not in the way you want to do it. But is it even possible to do what you want in the way you want?

      “It is way past the time the software industry should have started sorting this out. ”

      I guess that would depend upon what “this” is. I take that to be “malware.”

      Unless we want to redo the complete design for the Internet, this time including security, and also repeal the law that large, complex systems inevitably have faults, software is not the solution. The fundamental problem is our HARDWARE, which does not offer protection for a stored system. That lack supports malware infection and bots. Then, of course, the OS must change to use that protection, which will in fact impact the way users see the system. No mere software patch can do what is needed.

      1. Clive Robinson

        @ Terry,

        No, it’s not malware; the problem is the way we build software, with dependencies on bits all over the system.

        One aspect of this is known as “DLL Hell” (which is not just an MS problem), whereby dynamic-link libraries (DLLs) get updated and break some, but not all, of the earlier software dependent on them.

        You end up with the choice of sticking with the old (possibly insecure) DLL and working software, or switching to the new DLL and having broken software.

        DLL Hell is just one small aspect of this problem of software and patches / updates.

    3. Nick P

      Decent point, Clive, but I think that point is overused. In my experience, patching rarely disrupts the most typical applications. And lots of small to midsized enterprises just use run-of-the-mill apps like Exchange, Office, and Sharepoint. Updates rarely break such apps compared to app updating in general. So, unless they have many custom or 3rd party apps of questionable design, they should be updating by default because the risk is lower.

      If they are worried, they could update a small portion of the computers first, make sure everything works with minimal testing, then do a widespread deploy.

      1. DeborahS

        @ Nick P

        “In my experience, patching rarely disrupts the most typical applications. And lots of small to midsized enterprises just use run-of-the-mill apps like Exchange, Office, and Sharepoint. Updates rarely break such apps compared to app updating in general. So, unless they have many custom or 3rd party apps of questionable design, they should be updating by default because the risk is lower.”

        Again, the issue is about what environment security protection is needed for. I’m sure you’re right that in an enterprise environment, and/or one where only the standard issue Microsoft applications are used, patching may not cause any serious problems. But I’m here to tell you how much havoc they can wreak if you really use your computer with tons of 3rd party apps to do tons of different things. Those patches were never designed or tested for that kind of (ab)use, and they will break your system sooner rather than later. Just as an experiment, when I bought my last PC with Vista installed, I brought it up to date and let it automatically patch itself whenever its little heart desired, and it was irretrievably broken 2 months later. Maybe you could say that I shouldn’t be running all the software I was running on the poor thing, but heck – what are computers for if you can’t use them?

        1. Kooberfacer

          Most folks don’t seem to update their programs. It’s exploits in older programs that become vulnerabilities. Sure, some hotshot will find a way into a newer OS or newer programs, and there’s not much we can do about that until the exploit is known.
          Many also DO NOT report errors in their OS to Microsoft. This, too, is flawed at the user level.
          I never had Vista, but I did run XP since its inception, almost 10 years. Before that, Win 98, and I kept that one patched up too. My last virus was a ripper virus on Win 95. That’s going way back.
          Sorry, Deborah, I still have to disagree. As a user I reformat out of choice and back up my data offline, but from a company perspective that isn’t always a valid option, especially with servers up 24/7 nowadays.
          I run Win 7 now as well. Unlike others, I’m not paranoid about Microsoft. The more reports they get, the better their OS will interact with the third-party software I use.

    4. Tony Smit

      “The default instals of most purchased systems …”

      The default installs of most consumer systems contain unwanted programs, called junkware, shovelware, and other unflattering names. These programs are deeply embedded in the computer and are difficult to remove completely, as their uninstall routines don’t clean them out of the registry. Some have hidden executables that poll for updates, and those aren’t uninstalled either; it takes a lot of digging to get them out. Worse, the computer manufacturer provides a “restore disk” or “restore partition” that, when used, reinstalls all those unwanted programs, which frustrates attempts to start over with a clean system. Hard disks have huge capacities these days, and manufacturers should instead provide a recovery partition that installs only a clean OS and minimal device drivers.

      The OS being Windows.

    1. InfoSec Pro

      Hmm, interesting, could be applied to locating dissidents stuck behind the Great Firewall of China.

      One countermeasure would be to tweak the driver to artificially delay the SYN-ACK handshake, but that would have to be done at the perimeter, and the routing info still gets you pretty close.
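
      In user space you can only approximate that countermeasure: the kernel answers the SYN itself, so a true SYN-ACK delay needs driver or perimeter support (e.g. a traffic shaper). A Python sketch of the approximation, delaying each connection’s first response instead (the addresses and delay value are placeholders):

      import socket
      import threading
      import time

      DELAY_S = 0.050                  # artificial 50 ms per connection
      LISTEN = ("0.0.0.0", 8080)
      UPSTREAM = ("192.0.2.10", 80)    # hypothetical protected host

      def pipe(src, dst):
          # Copy bytes until one side closes.
          while True:
              data = src.recv(4096)
              if not data:
                  break
              dst.sendall(data)

      srv = socket.create_server(LISTEN)
      while True:
          client, _ = srv.accept()
          time.sleep(DELAY_S)          # distort the timing a prober measures
          upstream = socket.create_connection(UPSTREAM)
          threading.Thread(target=pipe, args=(client, upstream)).start()
          threading.Thread(target=pipe, args=(upstream, client)).start()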

      Tor is the obvious answer. Botnets also serve the same purpose, as Brian explained. The reported technique would locate the ’bot, not the crook, unless the ’bot can be monitored successfully (there are countermeasures against that, btw).

      1. Nick P

        Tor isn’t the obvious or best answer to situations like the Great Firewall. It’s just one of them. Tor traffic has a signature, so to speak, that makes it more obvious that someone is using Tor. Iran’s recent temporary blocking of Tor users, enabled by unusual packet parameters, is an example. So, if you want deniability & reliable traffic, then Tor isn’t always the best option. Regular relays or proxies are often better.

        In the US, the best options are offshore proxies, hacked wifi networks, botnets and relays accessed via hacked wifi networks (in that order). The last option provides a much higher privacy profile than Tor. That protocol is also much simpler than Tor: you can be sure that the only way you can get caught is if they track you or your computer in that physical area. With Tor, new attacks surface every year; the recent BitTorrent source-IP attack is a good example. Anyone wanting assurance of untraceability should avoid Tor or use Tor on top of a better scheme.

        1. JCitizen

          @Nick P;

          True – I once had a link to software that would take over a botnet, much the way competing criminals do to each other, to provide a way for dissidents to communicate anonymously. The link changes every so many days, so mine is now dead.

          But it was an interesting way to communicate! However, the same innocent folks whose machines were infected with bots became targets of the police during some of the investigations against dissent in those countries.

    2. Clive Robinson

      @ David,

      The system actually only provides an improvement if you are a leaf off of a branch node where the geo-location of the branch node is known (such as a uni campus server room).

      Most IP-address-to-geo-location methods rest on an assumption about the distance data travels in a given time (i.e. the round-trip time).

      For many reasons that can be hellishly inaccurate, not just because the propagation speed in a data cable varies wildly with the cable type (compare, say, optical fiber to Cat 5 twisted pair), but also due to buffering in unknown devices such as bridges (which could actually be gateways on and off an underlying network, such as a globe-spanning ATM or older X.25 network).
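
      As a rough sketch of the naive estimate (in Python, with made-up numbers; the velocity factor and buffering figures are assumptions for illustration):

      # Naive RTT-to-distance estimate of the kind simple IP geolocation
      # relies on. All figures are illustrative assumptions.
      SPEED_OF_LIGHT_KM_S = 299_792  # km/s in a vacuum

      def max_distance_km(rtt_ms, velocity_factor=0.67, buffering_ms=0.0):
          """Upper bound on one-way distance implied by a round trip.

          velocity_factor: fraction of c in the medium (~0.67 for fiber,
          lower for copper). buffering_ms: queuing delay in bridges and
          gateways, which the naive estimate cannot see.
          """
          one_way_s = max(rtt_ms - buffering_ms, 0.0) / 2.0 / 1000.0
          return one_way_s * SPEED_OF_LIGHT_KM_S * velocity_factor

      print(round(max_distance_km(40)))                   # ~4017 km
      print(round(max_distance_km(40, buffering_ms=30)))  # ~1004 km

      The same 40 ms round trip can imply anything from roughly 1,000 km to 4,000 km, depending on unseen buffering alone.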

      Further, the likes of mobile phone companies overload IP addresses quite severely, and there might in reality be over 200 actual smartphones using the same IP address.

      The same issues apply to ADSL networks in many places as well.

  8. Maizill0

    This is nothing. There is a popular hacking site where you can pay $5 for a lifetime subscription to a bot network. Some of these people crypt their viruses so they go undetected, and if one does get detected they renew the stub and push an update, so installing antivirus against a smart botnet owner isn’t going to free up your computer.

  9. David Shroyer

    Question for the group… should we as an industry take a proactive stance and identify and notify these targeted companies at the ISP or organization level, or allow the bots to continue to exist, since known bots are easier to detect and monitor? I’ve always found this to be an interesting debate, but when does it become such a hazard that action is taken (or has it already come to that)?

    1. helly

      It’s tempting to just want to go out and notify the ISP for each individual bot, and maybe if there were some legal requirement for ISPs to respond it could be effective. I’ve never had luck contacting small organizations about any security-related issues, so I’m not certain that could work.

      Given the current state of things, I think the practical answer is to let these bots continue to exist for the moment. The efforts of industry, and to a greater degree law enforcement, should go into identifying botnet operators and C&C servers. Shutting down those nodes (and making arrests where possible) seems more likely to make an impact. Microsoft is a good example of the gains that can be made this way, and for the moment it seems like the best path for the industry too.

  10. GekkeHenkie

    “…installing some anti-virus software on all of the servers over the next week or so.”

    This remark horrifies me in so many ways and on so many levels. No sense of urgency, only servers are updated, and no mention of any other hardening measures, user education or prevention.

    I sure hope they do more than that, because otherwise I wouldn’t trust Cunningham within a 10-mile radius of my systems…

    1. InfoSec Pro

      True, but if Cunningham tried to do it right there is a good chance his client would look at the price tag, fire him, and hire someone to do what he described. It’s just enough to provide a good alibi or court defense; that’s the goal, not fixing the problem.

  11. InfoSec Pro

    Interesting part of Brian’s story is that the network had been scanned and a spambot removed a couple of weeks earlier, and their IT consultant “had no idea the network was still being exploited”.

    Obviously the consultant is sufficiently knowledgeable to pass as a professional, and yet they didn’t detect or prevent the compromise. This speaks to the difficulty of establishing a known clean baseline.

    Furthermore, if they are “updating [PCs] and installing some anti-virus software on all of the servers” they may or may not eradicate the malware successfully, and they may or may not remediate whatever vector was used to infect the network originally. In other words it is pure chance whether they fix the problem and it stays fixed.

    And that is using a professional IT consultant!

    It is really hard to go back and do forensics on a compromised network, especially if it was compromised long ago in an unknown fashion and the attacker has had warning to cover their tracks.

    The only way they can be sure they have a clean secure network is to rebuild from scratch and totally harden before introducing any user data that might have been exposed to the compromise – any bets on how practical that might be?

    GekkeHenkie said it perfectly well. Problem is that Cunningham is not atypical.

    1. Terry Ritter

      @InfoSec Pro: “Obviously the consultant is sufficiently knowledgeable to pass as a professional, and yet they didn’t detect or prevent the compromise. This speaks to the difficulty of establishing a known clean baseline.”

      Even taking just the comments on this blog, there seems to be a vast conventional-wisdom consensus that bots can be: 1) found, and 2) removed. Oddly, I dispute BOTH parts:

      * There is no test which guarantees to find a hiding bot. There is no way to certify being bot-free.

      * The only way to recover from infection is to re-install the system (or recover a known clean image), including applications.

    2. Nick P

      I partly agree with you and one-upped the post accordingly, but I can’t really believe he was a very competent professional. I know what passes for an IT professional in places like Memphis. You will get plenty of people who have basic experience, certifications and know how to talk to managerial types like they know shit. Then, you will get really technical people who can root out bots. Most of the people who get contracts and jobs in big companies in Memphis are the former type, which is why I sometimes have to drive out there and clean up their messes.

      There are numerous techniques for identifying bots. You have normal scans. Then, you have software like Strider Ghostbuster that can detect hiding rootkits. (I doubt he employed such an approach.) Then, there are whitelisting firms that have huge databases of known good files that should be on a system and can automatically highlight questionable files. This approach has been used with great success by a few firms.
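
      A minimal sketch of that whitelisting idea, in Python (the hash database entry and the scan root are hypothetical; real vendors ship databases of millions of known-good hashes):

      import hashlib
      from pathlib import Path

      # Hypothetical known-good SHA-256 digests (a real database is huge).
      KNOWN_GOOD = {
          "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
      }

      def sha256_of(path):
          # Hash in chunks so large files don't exhaust memory.
          h = hashlib.sha256()
          with path.open("rb") as f:
              for chunk in iter(lambda: f.read(1 << 20), b""):
                  h.update(chunk)
          return h.hexdigest()

      def scan(root):
          """Yield files whose hashes are absent from the known-good set."""
          for p in Path(root).rglob("*"):
              if p.is_file() and sha256_of(p) not in KNOWN_GOOD:
                  yield p

      # Ideally run from clean boot media against the mounted, suspect disk.
      for suspect in scan("/mnt/suspect/Windows/System32"):
          print("not in whitelist:", suspect)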

      In all likelihood, the “professional” just ran an AV scan from a boot disc, looked at some attributes of the system from within the corrupt system, made some obvious changes and then declared it “clean.” Rooting out today’s botnets takes a much more thorough approach, possibly utilizing every technique I mentioned above. Hence, I have my doubts that he was really skilled at botnet removal. I mean, the Best Buy Geek Squad kids I know remove infections like this all the time. So, why couldn’t this guy pull off a basic job? (Occam’s Razor: lip service skills.)

      1. Terry Ritter

        @Nick P: “There are numerous techniques for identifying bots.”

        But they would seem to be more appropriately described as “wishful delusions” instead of effective bot-detection techniques:

        * “You have normal scans.”
        But polymorphic malware “encrypts” malware files differently on each machine, making scans useless.

        * “Then, you have software like Strider Ghostbuster that can detect hiding rootkits.”
        The issue is not whether an occasional specific rootkit can be detected, but whether EVERY rootkit can be detected, because that is the only way to avoid having a bot. That answer is no, and it will always be no.

        * “Then, there are whitelisting firms that have huge databases of known good files that should be on a system and can automatically highlight questionable files.”
        I agree with the whitelisting approach, actually, but Windows presents problems:

        First, all such file comparisons need to be done outside the OS, since the OS itself may have been subverted. In general, that means by LiveCD, because even code which directly implements the file system may not be allowed to load correctly.

        Next, Windows goes out of its way to change essential files, like the Registry, which means whitelisting cannot help. Can whitelisting scan the Registry to assure that no malware is being started? I do not think so.

        Last, but not least, Windows has traditionally included a range of apps which seemingly could have no security implications at all. Consider the video and music players, and the word processor, all of which have in fact been used to start and run attack code. Or consider .PDF files. Can whitelisting scan the data files for those apps to show a lack of infection? I do not think so. The issue is not what we have found, since that should have been fixed, but what remains for malware to exploit.

        In summary, let me repeat, and louder this time, so people can hear me:

        THERE EXISTS NO TOOL WHICH CAN GUARANTEE THAT A BOT IS NOT PRESENT.

        THE ONLY CORRECT RESPONSE TO BOT SUSPICION IS A FULL OS INSTALL WITH APPS (or the recovery of an uninfected image).

        BOOTING A LIVECD IS ALMOST AS GOOD AS A FULL OS INSTALL BEFORE EVERY SESSION.

        1. Nick P

          I think wishful delusions is an unfair term. I’d call them “techniques for identifying rootkits.” A given technique may or may not work on a given rootkit. You’re certainly right in saying that there’s no tool that will guarantee that a machine isn’t subverted (well, at least on x86 & windows ;). But when’s the last time users, admins or even security gurus paid for near-perfect assurance? The market demands solutions which work well most of the time. So, that’s what I’m focusing on here because the repair guys I know rarely fail during a removal and offer money back or free service call for restoring to clean slate. That’s good enough for most users.

          So, on to the specifics. A normal, signature-based AV scan will certainly fail to catch polymorphic malware. A whitelisting scan on a LiveCD that compares what’s on the system to a database of known good files will usually find things that aren’t supposed to be there. A technology like Strider compares what things look like on the inside with what a LiveCD sees to identify discrepancies and the presence of a rootkit. Properly configured trusted boot and disabling VT in the BIOS can prevent other kinds of attacks.
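
          The cross-view idea reduces to a set difference. A minimal Python sketch, assuming you have already captured one file listing from inside the running OS and one from a LiveCD boot of the same disk (the listing filenames are placeholders):

          def load_listing(path):
              # One file path per line, captured e.g. with `dir /s /b`.
              with open(path, encoding="utf-8") as f:
                  return {line.strip() for line in f if line.strip()}

          inside = load_listing("listing_inside_os.txt")     # suspect view
          offline = load_listing("listing_from_livecd.txt")  # trusted view

          # Files on disk that the running OS does not show are the
          # discrepancies a Strider-style tool flags as likely rootkits.
          for entry in sorted(offline - inside):
              print("hidden from the running OS:", entry)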

          Many infections are also not quite as sophisticated and effective as a professional grade Zeus bot and delivery kit. The best make the news regularly, but many run-of-the-mill infections are cannon fodder. I’ve been able to clean quite a few infections without leaving the corrupted system because they weren’t very resilient or covert. The most sophisticated threats require a clean install, as I wouldn’t trust any removal techniques on them (or clean install is actually quicker than painstaking tracking + removal).

          So, I agree that there’s no tool that can guarantee a clean PC and up-to-date LiveCD (or other read-only boot media) is ideal for solving persistent malware issues. However, there are quite a few cases where taking down a system totally, reformatting, reinstalling and updating are extremely laborious and otherwise costly. An hour of probing, identifying and removing a threat we know we can handle is a better use of the customers’ time. If we are unsure, then we take more costly steps. I do prefer the clean slate method, but I mention the alternative because I’ve been in so many situations where clean slate wasn’t an allowed option.

        2. Clive Robinson

          @ Terry,

          Yes, it can be shown, to the level of a proof, that a system cannot detect whether or not it is infected by malware.

          The practical question, however, is what percentage of malware can be found, and how?

          First off you need to divide malware up into groups. First there is the obvious “known / unknown” grouping, that is, the zero-day issue. Second, there is the question of “stored / not stored” between reboots. And yes, there are further subdivisions.

          The point is over 90% of known malware that can run on a system can be caught by running software on the active system, even if it is already “infected”.

          Catching and removing this malware alone would make significant inroads into the issue, which I think is one of the points Nick P is making, likewise Brian and several others.

          Further, at least 90% of the remaining known malware (i.e. rootkits and the like) can be found by re-booting the system into a different OS that then scans the semi-mutable memory (i.e. HDs and other attached storage media).

          You are effectively detecting over 99% of the known malware that can be stored on a system, without doing anything overly complex.
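
          The arithmetic, spelled out in Python (assuming, as a simplification, that the two passes miss malware independently):

          live_scan = 0.90     # caught by scanning the active system
          offline_scan = 0.90  # of the remainder, caught after re-booting

          # The combined miss rate is the product of the individual misses.
          missed = (1 - live_scan) * (1 - offline_scan)   # 0.1 * 0.1 = 0.01
          print(f"combined detection: {1 - missed:.0%}")  # 99%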

          However, some systems cannot be just stopped and re-booted, because they are considered “mission critical”. There is a way around this, in that the same effect as the second method can be achieved by running systems in a virtualized manner. That is where the base OS is designed as a hypervisor that effectively runs the “commodity OS” in a “sandbox” and can thus monitor semi-mutable memory and communications paths during normal operation. This is a method which I know Nick P has made reference to in the past.

          Effectively, what you are doing is checking where the malware code is stored for the time when the system is not active.

          Whilst not all malware does this, a large amount of malware currently assumes it needs to survive a re-boot on a standalone system, and modifying semi-mutable memory is the only way it can do this.

          So the second method also allows you to pick up anomalies in the system, where files have changed that should not have. Effectively you are seeing the footprints of anomalous system operation, which may indicate malware on the system that is not currently known.

          Thus the second method is not just reactive to known malware but also sensitive to unknown malware.

          There are various other techniques that can be used to check other semi-mutable memory on the system (i.e. not just the Flash ROM of the BIOS and I/O cards, but also inside the CPU, etc.).

          Further, there are also techniques which will pick up anomalous behaviour in the fully mutable memory (i.e. RAM) that you would expect from the likes of network worms and other malware that doesn’t write code to the semi-mutable memory on a system.

          However, all of these extended techniques tend to be difficult to implement, and the reason is primarily the way we design and build software, including the OS.

          It is another reason why I said earlier it is well past time we started sorting this issue out.

          Because sorting out malware is a multistage process and stage one is detection. So if your OS and software is designed so that malware detection is not just difficult but extraordinarily difficult, you have to stop and ask if this is the way we should go…

          1. Nick P

            The categorization you mentioned is a key point here. Many people look at rootkits and malware like the system is totally a black box that doesn’t leak useful information and the malware authors are so good at stego they could get honorary Ph.D’s on the subject. In most cases, this couldn’t be further from the truth.

            Most rootkits use well-known techniques to hide their presence. They *must* hook into or emulate certain APIs or subsystems to achieve the proper level of control or invisibility. And these subsystems do have consistent patterns of behavior that are disturbed by such hooks. That’s one avenue of detection.

            The code must also have a method of executing. Overflows, low-level programming attacks, and poor permissions are common ways this occurs. This can be beaten with restricted execution setups, OSs with a reverse stack (e.g. SourceT), or higher-level languages for application building. Para-virtualization or MAC help here too. Give the rootkit no way to run and it’s suddenly not an issue.

            Rootkits have to maintain their code somewhere to be persistent. Many important system files don’t change except during updates. A “known-good” analysis or a Tripwire-style differential analysis of critical system files can often detect rootkits. A good backup and rollback strategy can undo such changes if an unauthorized modification is detected.
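
            A Tripwire-style differential check is simple enough to sketch in Python (the watched paths and baseline filename are illustrative; a real deployment keeps the baseline somewhere the attacker cannot write):

            import hashlib
            import json
            from pathlib import Path

            WATCHED = [Path("/boot"), Path("/sbin")]  # assumed critical trees
            BASELINE = Path("baseline.json")

            def snapshot():
                # Map each file path to the SHA-256 of its contents.
                out = {}
                for root in WATCHED:
                    for p in root.rglob("*"):
                        if p.is_file():
                            out[str(p)] = hashlib.sha256(
                                p.read_bytes()).hexdigest()
                return out

            if not BASELINE.exists():
                BASELINE.write_text(json.dumps(snapshot()))  # first run
            else:
                old = json.loads(BASELINE.read_text())
                new = snapshot()
                for path in sorted(set(old) | set(new)):
                    if old.get(path) != new.get(path):
                        print("changed or new:", path)  # rollback candidate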

            Then, of course, there’s the virtualization Clive mentioned. This is one of my primary methods for dealing with malware because there’s hardly any malware out there designed to outright defeat virtualization solutions. A properly isolated VM will restrict the malware from the start, can be analyzed freely by outside software (including volatile state), and can be rolled back to a clean state at the first sign of problems.

            With most malware, subversion is pretty easy to deal with. It’s those subversive “people” that one must worry about. 😉

          2. DeborahS

            @ Clive

            “…sorting out malware is a multistage process and stage one is detection. So if your OS and software is designed so that malware detection is not just difficult but extraordinarily difficult, you have to stop and ask if this is the way we should go…”

            I-am-not-a-developer, but I have worked closely with OS developers and tested OS code, so I have at least a passing familiarity with the subject you’re discussing. And it seems fair to say that the next generation of OSes (or a generation soon to come) needs to address several of the points you’ve raised. Not just for malware detection and management, but for overall system functionality, stability, maintenance over time, and performance.

            Our current OS paradigm is essentially one of a central kernel with DLLs, drivers and executables layered on top of it. (That’s a very loose way of putting it, but sufficient for this discussion.) As I understand it, this design keeps the most basic system operations in the kernel as simple and as abstract as is minimally possible, to maximize efficiency of communications with the CPU and all the hardware on the system. Then the vastly more detailed implementation of specific functions is farmed out to the DLLs, drivers and executables.

            This fundamental architecture is what produces not only the DLL Hell you mentioned previously, but also the difficulties in scanning for malware that you discuss in this post. In my experience, everyone who is aware of these problems would like to see them solved, but so far no one has had the brilliant flash of insight that would lead to a solution.

            I think the answer will be a more sophisticated kernel, one that can communicate with the CPU and hardware in more complex ways than they are designed to in the current paradigm. This would pull back many, if not most, core functionalities from the DLLs, drivers and executables into the kernel, where they would be inviolable. This would eliminate much of the need for developers to write new DLLs that break functionality for code dependent on the old DLLs, because it would introduce a built-in discipline of what functionalities would be modifiable and what wouldn’t be. This would not limit what developers could do; they would just have to do it in their own code, instead of messing with the system. And were such a kernel to be designed, that would be the time to design in capabilities such as you discuss, permitting many system maintenance and management tasks in addition to malware detection and removal – all locked away from access by anyone’s code, be they good or evil.

            As I understand it, the current generation in kernel design requires the degree of simplicity and abstraction that it does, basically due to hardware limitations. CPUs are getting to be very fast, but parallel to the increases in CPU speed that we’ve seen, we’ve also seen a leap in complexity of tasks we expect our computers to perform. So my guess is that the kernel continues to need to be as lean and mean as possible, so that the maximum of resources can be devoted to usable functionality. But in addition to CPU speed, there are also limitations in how fast the kernel can communicate with the other hardware components. So that’s why my guess is that it’s hardware limitations that constrain our current OS paradigm. Perhaps what needs to happen is that we reach some sort of plateau where we stop thinking up new things that we want our computers to do, let hardware catch up, and then revamp our OS paradigm.

            But what do I know – I’m just a tester.

          3. Nick P

            @ DeborahS

            On the Issues, Technical & Economic

            You’re right about the current paradigm. The solution, however, is to make things simpler, more modular, and better managed. The problems you discuss aren’t pervasive: they don’t affect many operating systems. The issue is security architecture and enforcement mechanism design. The Windows architecture lumps too much stuff together with too little restriction. A more sophisticated kernel would increase complexity and attack surface, resulting in more problems.

            It’s hard to get malware or rootkits to run on a capability-based system like EROS or Capdesk. Apps designed in a decomposed fashion can take advantage of mandatory access controls like these, SELinux and others to reduce damage (Chromium and djbdns are mainstream examples of this). The OSs that Secure64 and Hydra firewall appliances use are immune to buffer overflows AFAIK. INTEGRITY is immune to resource exhaustion of critical processes. The use of safer programming languages (like Ocaml) and static verification technology (like Astree) can greatly reduce bug counts. So, it’s not that we need new technology, or even that this is a common OS problem: it’s just a Windows, Mac and UNIX problem, due in large part to backwards compatibility and poor design.

            As for CPUs and complexity, the modern platforms are *not* very simple and abstract. In spite of increased modularity, you still have drivers, frameworks, kernels, OS functions, etc. lumped together with largely unrestricted privileges. Microkernels like QNX Neutrino and esp. Green Hills’ INTEGRITY don’t have these problems. Every system service is in its own process space, reachable through IPC, with restricted access wherever possible. And the systems are almost as fast as they are reliable when architected properly (see the BlackBerry PlayBook, which uses QNX).

            Modern systems need a fundamental redesign. The companies leading this effort are largely in the embedded space. Most working on PC solutions are building ultra-secure hypervisors and middleware for isolating certain components, but not entire OS’s. The only remaining OS in the high-robustness category is BAE’s XTS-400/STOP platform, but its availability is restricted. The truth is that there’s no serious market demand for manufacturers to sacrifice features, usability and low prices to create very resilient systems. Malware and DDOS attacks will be the norm unless people are willing to pay the price to stop them. Otherwise, they really need to stop griping about the choice they make every day.

          4. DeborahS

            @ Nick P

            “You’re right about the current paradigm. The solution, however, is to make things simpler, more modular, and better managed. The problems you discuss aren’t pervasive: they don’t affect many operating systems. … A more sophisticated kernel would increase complexity and attack surface, resulting in more problems.”

            Actually, I think you’re quite right. It was even nagging at the back of my mind while I was writing that a more sophisticated kernel is really just an incremental improvement on the current paradigm, when what is needed at this stage is a complete paradigm shift. Well, I was a Windows tester, and the idea of a more sophisticated kernel was a pet theory I developed then, and I’ve carried it along since then without much critical review.

            The concept of modularity in an OS is very appealing. Modularity has long been the ideal in writing object oriented code, but applying the concept to OS architecture could yield exactly the kind of next generation OS we’ve been talking about. I hadn’t previously heard of the kernels and systems you mention, but I’m saving this post and definitely will be looking into them.

            “As for CPUs and complexity, the modern platforms are *not* very simple and abstract. In spite of increased modularity, you still have drivers, frameworks, kernels, OS functions, etc. lumped together with largely unrestricted privileges.”

            Again, completely agree. In Windows, only the kernel is simple and abstract; everything else is complex and ornate. And this complexity is what invites and accommodates attack, as well as errors and inefficiencies of every description. Perhaps the “mistake” of the current paradigm of OS architecture was in assuming that you must have a “central command”, the kernel, to run the show, or you would have absolute, utter chaos. This isn’t all that surprising, if you think about it, since the first general-purpose OS was written in and modeled on machine language and C, not C++. The concept of distributed management and communication between parallel processes (via IPC) simply wouldn’t have occurred to the early OS developers.

            “Modern systems need a fundamental redesign. The companies leading this effort are largely in the embedded space. Most working on PC solutions are building ultra-secure hypervisors and middleware for isolating certain components, but not entire OS’s.”

            I hate to say it, but we may need another “IBM” or “Microsoft” to build the next generation general purpose OS. Evil as those organizations are, it’s simply a fact that you need armies of developers and testers to build a mainframe or PC OS from the ground up. Since no one, that I know of, is stepping up to that plate (although I do need to look into the systems you mentioned, since some of them may be general purpose systems, or have the potential to be), it’s not surprising that the biggest steps forward are being made in embedded space. Smaller, dedicated purpose systems are well within the reach of smaller companies, so that’s no doubt why we’re seeing them first.

            Good stuff. Thanks for all the steers in the right direction.

          5. Terry Ritter

            @Clive Robinson: “over 90% of known malware that can run on a system can be caught by running software on the active system,”

            But the topic of interest is ALL malware, INCLUDING UNKNOWN malware. We cannot know how significant the unknown ones are, because they are…unknown! We also cannot count them, and so cannot include them in our percentages or likelihoods.

            We need to know the number of unknown malwares before we can understand the probability of malware detection. We can reduce that number by finding them, but malware authors can increase that number, because they can use our anti-virus scanners and tools. “All” they need do is run those tools, see the result, and change their approach until their malware is not detected. Mere attacker diligence thus defeats your detection rate. Simply hoping for attacker incompetence and lack of dedication is not a plan for winning.

            We do not need to get rid of 90% of the malware in a system, we need to get rid of it ALL. And since looking for it with tools and then deleting malware code does not do that, we should not be seeing that promoted as a solution.

            “Further, at least 90% of the remaining known malware…can be found by re-booting the system into a different OS…. You are effectively detecting over 99% of the known malware that can be stored on a system, without doing anything overly complex.”

            That is like patching the 99 holes we can find in a boat hull, then being content to have the ship sink. Finding 99% is great, yet not good enough. We have to get them all. And if we cannot expect to do that, everyone depending on that necessarily takes on substantial risk.

            Even your handwave numbers do not address our real problems:

            1) It is great to find malware, but doing that really does not matter until we get the last one, and we have no tool to certify when or whether that happens. Without such a tool, claiming that manual removal is effective is no more than a breast-beating claim, with proof both lacking and impossible.

            2) Even if we do get all the malware, we still cannot know what a botmaster has done to the rest of the system, so we surely cannot claim to have put it back. In what way can allowing that be considered a solution?

            Then suppose we scan a system for malware: Either we find malware, or do not. But even if not, we KNOW there might be some which was not found (a “false negative”). So we end up with: “There may or may not be malware,” which is EXACTLY WHAT WE KNEW GOING IN. And that conclusion applies to any form of search which provides less than a guaranteed certification that no malware is present.

            Or, suppose we do find some malware, but just not the most-important and best-hidden malware. The stuff most easy to find and discard may be a deliberate distraction. This could turn out to be the attackers throwing computer techs some work, giving them something to find without having to look too hard. Going along with it amounts to cooperation.

            With current system designs, no form of testing can make us really sure of having a clean system. But we CAN become really sure by reformatting the drive and re-installing. We do not have to be smart to do that. We do not need expertise at teasing out hidden bots. We just have to bite the bullet and go through a re-format and re-install.

            The real expertise needed is to make this harsh reality easier on the user. User files will need to be saved, perhaps on DVD. Apps will need to be re-installed. Perhaps a new partition structure can make recovery easier for next time. Then make sure everybody has Microsoft Security Essentials, and that automatic updates are on during business hours. And that will still not solve their malware problem for the future.

          6. Nick P

            @ Terry Ritter’s reply to Clive

            “It is great to find malware, but doing that really does not matter until we get the last one, and we have no tool to certify when or whether that happens. Without such a tool, claiming that manual removal is effective is no more than a breast-beating claim, with proof both lacking and impossible.”

            That’s like saying a hammer isn’t an effective way to drive nails because it occasionally breaks one. That’s like saying a car isn’t an effective way to get from the house to the store because it occasionally breaks down. No process can be expected to work 100% of the time, especially in a computer. Your argument is a weak one because most people would accept something cheap and convenient that works 99% of the time. With modern OS’s and software, they effectively accept less.

            “We just have to bite the bullet and go through a re-format and re-install.”

            It’s the best approach, but people are often unwilling to do that. The main reason is that they might not have the original application discs or license keys. Losing nice apps on a PC might cost hundreds of dollars, whereas a manual removal often costs $120 or less out here. There’s also the issue of the user being in between backups, or not having a good backup plan and losing critical data. Ideal or not, manual removal is an option even if it only works as often as a car.

          7. Terry Ritter

            @Nick P: “That’s like saying a hammer isn’t an effective way to drive nails because it occasionally breaks one. That’s like saying a car isn’t an effective way to get from the house to the store because it occasionally breaks down. No process can be expected to work 100% of the time, especially in a computer.”

            Exactly. But you are hiding the requirement behind your process.

            Yes, by “removing” bots, the process does seem to approach the goal of zero bots. But in this case we need to know that we have actually REACHED the requirement, not just gotten closer. And there is no test to tell us that.

            The security requirement is to be 100 percent bot-free. Not 98 percent, not 99 percent, but absolutely, completely bot-free. If your process cannot guarantee that, your process does not produce a known secure result. Simple as that.

            “Your argument is a weak one because most people would accept something cheap and convenient that works 99% of the time. With modern OS’s and software, they effectively accept less.”

            Really? Do you think computer repair services advertise that they CANNOT really know whether they have gotten all the bots? If your argument is strong, revealing the awful truth would not be a problem. If your argument is strong, customers would be happy to pay for work that may or may not recover their security. But the truth is not even being accepted on this blog.

            When process is about producing a result, we can check that result and see if it is right. Then if it is not right, we can adjust to make it right. That is normal process, and does not have to be 100 percent perfect, PROVIDED WE CAN CHECK THE RESULT. But in this case there is no test. There is no test to guarantee that we have gotten the last bot.

            Since we cannot look at a system to see if it is bot-free, we cannot simply continue working at removing bots until we achieve our goal. And if we stop before removing even one last bot, the system is still insecure. Obviously. Yet for some reason insecurity is being accepted as perfectly reasonable.

            Unless and until there is some test which can reveal any possible infection, we do not have the luxury of process error, because we do not have a test which allows us to correct such errors.

            “Ideal or not, manual removal is an option even if it only works as often as a car.”

            That whole statement is a delusion: Absent a test which shows that manual removal has indeed “worked,” you have no idea how often it “works” and so cannot compare the results to other systems. All you know is that some time has been spent, and some money made. You do not know that the bot the attacker most wanted to be in place has been “removed”. You cannot know that your process is nearly as reliable as a car.

            Even the idea of “removing” bots is delusional: You have no idea whether the botmaster has changed other stuff in the system, yet you are perfectly happy to say the system “works” after removing bot code. That is deeply disturbing.

          8. Clive Robinson

            @ Terry,

            First off, I’m not sure whether this reply is going to come up in the right place, so my apologies if it does not.

            As I stated earlier, it is not possible to say whether a computer of the forms we currently use is infected with malware or not. You can actually build on proofs from before electronic computers existed to show that this is the case.

            Therefore, for the systems we currently use, there cannot ever be a test of the form you are asking for.

            So the level of security of your computer is probabilistic, not deterministic.

            What you are asking for by way of a test is thus impossible; tough, but that’s the way of the world currently.

            However, your subsequent removal-vs-reinstall argument fails, because all a reinstall does is take you back to a past point in time.

            So if your computer is infected with malware, it is because it WAS vulnerable; reinstalling to a time before that does not make it any less vulnerable.

            Now, there is some argument as to how long it takes a known-to-be-vulnerable computer to become exploited, but some people have said it is as low as twenty minutes. I don’t know how long it takes you to reinstall your system, but I cannot remember ever having installed just the OS in less than twenty minutes.

            Now, if you accept that the time window for reinfection is small, then you have to accept that a reinstall before every use is the limiting case.

            Currently there are two ways to do this effectively:

            1. Put the OS on non-mutable media (DVD/CD, etc.).

            2. Use virtual machine technology where the VM image’s integrity is protected in some way (crypto hash, etc.); see the sketch below.
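
            A minimal sketch of option 2 in Python (the image path, the expected digest and the QEMU invocation are placeholders; the known-good digest would itself have to live on read-only media):

            import hashlib
            import subprocess
            import sys

            IMAGE = "guest.img"
            EXPECTED_SHA256 = "0" * 64  # hypothetical known-good digest

            # Hash the image in chunks before allowing it to boot.
            h = hashlib.sha256()
            with open(IMAGE, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)

            if h.hexdigest() != EXPECTED_SHA256:
                sys.exit("image integrity check failed; refusing to boot")

            # -snapshot discards guest writes, so the verified image itself
            # is never modified by the running VM.
            subprocess.run(
                ["qemu-system-x86_64", "-snapshot", "-hda", IMAGE],
                check=True,
            )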

            However, both these solutions have their own specific problems, not least the ongoing installation of patches. Further, without wishing to bash Microsoft, the incessant need of their OS’s to write to disk at all times (including while booting) makes it virtually guaranteed that Windows cannot be made secure in an easily implementable way (i.e. via the standard write-protect mechanisms that work well with other OS’s).

  12. Gary H.

    I’m a security n00b. Your blog is a great source of information and training.

  13. finack

    InfoSec Pro has it: it’s difficult to blame the consultant, because real remediation steps are likely so far outside the company’s expected price range that they’d end up doing nothing at all, or just hiring a different consultant with a cheaper answer. It definitely doesn’t sound like there is a good chance of them clearing all their issues, but for better or worse that is the state of the industry right now, unless you’re hiring expensive niche professionals and/or reloading everything from scratch, etc.

    This is the biggest reason that notification is a red herring for now. Consider how much time Brian put into notifying this one firm – and he was wildly successful as far as those things go: they responded and hired a person to deal with it. The fact that even after all of that success they have a very good chance of not fixing the problem speaks volumes about why notifying Comcast et al. about zombies is mostly a non-starter. The ISPs don’t have the staff or equipment to deal with the issues and lack the motivation to disconnect anything but the worst-behaving malware hosts.

  14. Mike

    My experience goes like this. Outside of a business, most of the people who are most vulnerable to picking up a bad bug here and there are not gamers or highly technical people; they are kids, regular adults, grandparents. Their computer is a “family” computer: one login for everyone, always on, nearly always in use. If you look at the usage of these computers you’ll see a bit of word processing, lots of internet surfing (e-mail, YouTube, Facebook) and… …not much else. Often the virus makes its way onto the computer when someone is trying to install a crack or keygen for some pirated software^. Or perhaps your six-year-old clicks a flashy ad while playing online games and installs yet another IE toolbar, which eventually invites its friends along^^.

    The solution for many* of these people: invest the time you will eventually (and inconveniently) spend trying to mend a virus problem, blowing away your OS, re-installing everything, etc., and learn how to use Linux. It’s not going to be that easy or stupidly simple to set up and get used to. But it won’t break**, and once you get it set up and learn to live with the differences between it and Windows, you will save time. And of course money. How often do people get a virus trying to crack a piece of software that is free under Linux?

    No cracks, no viruses, no cost – just a learning curve, and then you are free. Seriously.

    ^ Yes, you probably shouldn’t pirate software. But why even bother arguing, if you need it you can probably get something equivalent and free under Linux.
    ^^ Yes, you can use multiple log ins with security settings to avoid this. But no-one seems to.
    * Many, not all, not most, just many.
    ** Of course you can break it, you can break anything.

    1. finack

      While I don’t want to disagree with you, as your advice has some significant merit for people looking to do something on their own to reduce their odds of getting owned, it’s problematic because it only remains good advice as long as not too many people follow it. All modern operating systems regularly have security issues; indeed, as we speak most consumer Linux installations are vulnerable to CVE-2011-0997, which allows remote code execution. Other issues, like persistent flaws in Adobe Flash or Java, can expose the host to exploitation no matter what platform it is running.

      In many cases Linux distributions lack the depth of defense that modern Windows platforms enjoy. The fact that Windows is still exploited more often just speaks to attackers having more motivation to target Windows. If you managed to get a significant portion of those folks running Linux, you’d just see the exploitation follow them there.

      That being said, for an individual (and not the industry) it is still good advice.

      1. JCitizen

        @finack;

        Your comments are especially valid when looking at the plethora of Apple, Unix, Linux, and Windows-based operating systems for smartphones and tablet PCs, all of which are being pwned regularly in my clients’ devices. Since they also regularly connect to PCs of all platforms, these devices hasten the spread of malware specific to such mobile solutions, malware that will likely be modified to pwn the host PC as well eventually.

        I saw my first virus in 1987, in a US Army Unix-based SASS computer, and it got hosed so badly we couldn’t even start the thing. This was in the bad old days of the sneaker net.

        To assume the mobile device threatscape will not affect OSX, Linux, and any other FOSS solution is folly at best in my opinion. Perhaps you might agree?

        1. MadVirgo

          I’d have to agree to a point. Many of the newer attacks are ‘man-in-the-browser’, and are designed to exploit data that is passed through it, no matter if the OS is Windows/MacOS/Unix/Linux. It would only make sense because you now can design ‘drop-in’ exploit code without having to worry about tailoring it to a specific OS.

          1. Terry Ritter

            @MadVirgo: “Many of the newer attacks are ‘man-in-the-browser’, and are designed to exploit data that is passed through it, no matter if the OS is Windows/MacOS/Unix/Linux.”

            Well, yes and no.

            All attacks depend on running attack code in a part of the user machine which will support that code. Even a link to a bad page (HTML code) can be an attack, and all machines will run it, but that is not the “man-in-the-browser” bot stuff.

            Bot code needs to run on some specific code interpreter and within specific OS resources. So a Windows bot that executes Intel code is still not going to run on Linux, even on an Intel processor.

            In contrast, a Java bot might well run on a wide variety of fundamentally different systems, including Windows and Linux and on different processor architectures. A Java bot might pull down bot code for the target system, but attackers will have to maintain that code, which will only happen if that is worth doing compared to what else they do, and I doubt it would be.

            The main mobile problem may be bots in Java itself. Unfortunately, we have decades of experience in trying to make software which cannot be subverted, and the results have not been good. Patching is not going to solve the problem.

        2. finack

          I absolutely agree. Mobile security is a bigger mess than desktop, and the access these devices provide to private networks is substantial. I look at my own phone and know it has a range of unpatched vulnerabilities, ranging from RCEs through WebKit to local privilege escalations. To the extent that we haven’t seen mobile-to-desktop attacks, some part of it may just be due to a less evolved set of tools for dealing with mobile threats. As you probably know, threats to mobile are already substantial – a command-and-control system for 150k infected Chinese smartphones was recently observed, and examples of Android botnets that can download arbitrary code are available publicly. Worse, aside from banning them from your networks or premises, there is little even the pros can offer in terms of mitigation.

          If you’re interested in mobile threats and happen to be in the DC area, you might be interested in attending a presentation on Smartphone Botnets being held Apr 19 downtown @issa-dc.org

        3. Nick P

          It’s funny you mentioned the Army. The first secure operating systems I got to look at were DOD-sponsored, starting around the ’70s. One was the Secure ADA Target, also known as the Army Secure Operating System (ASOS). Well, I think they were the same thing; programs went through so many name changes. They had two versions: a regular security version and a version to be certified at or above A1/EAL7.

          Dude, I’ve downloaded about a thousand papers from back then. I swear that nearly every kind of vulnerability and overall approach to building secure systems was in a paper in some form from 1960 to the 1980’s. It’s like we figured almost all of it out and just never applied the knowledge. Just keep reinventing the wheel over and over and never put it on a vehicle and RIDE IT. They really need to stop doing that. Looking back at Data Secure UNIX and LOCK/ix, we could easily build a secure operating system today. There’s just no real demand or incentives.

          1. DeborahS

            @Nick P

            “I swear that nearly every kind of vulnerability and overall approach to building secure systems was in a paper in some form from 1960 to the 1980’s.”

            Very interesting, and I believe it. That was an extremely lively and productive time in computing, and a number of projects were picked up then, only to be dropped. Work in artificial intelligence, one project in particular that I was involved with, made some impressive progress in that same time period, only to vanish some years later. My guess has been that significant pools of knowledge were lost in the transition from mainframes to PCs. Code that was developed to run on the old mainframe monsters was simply never rewritten for the PC. I’ve kept all my IBM 360/40, PL/1 and FORMAC manuals (4 crates of them) with the daydream that someday I would rewrite those programs to run on a PC, or at least the parts of them that I miss having.

            I don’t think it’s so much a matter of there not being the demand or incentives to resurrect these treasures, but more that the right person hasn’t come along with the knowledge, resources, and frankly, the passion to do it. If someone were to succeed at doing this for a secure operating system, I’d imagine that person would become as wealthy as Bill Gates, so there’s no lack of monetary incentives here. Perhaps one problem is that it might take a young person’s energy and vision to pull something like that off, and now everyone who has firsthand knowledge of those times is getting to be a little on the old side. But the party’s not over yet. I know my pet project is waiting until I can reach a stage in my life when I’ll have the time and freedom to do whatever I want to. Ha ha, we’ll just have to see if that ever happens.

      2. Terry Ritter

        @finack: “it only remains good advice as long as not too many people follow it.”

        That model does not represent reality as we see it.

        Normal, end-user malware distribution will always target the largest market it can reach. Microsoft Windows supports about 90 percent of browsing. It would take quite a lot of people changing to Linux to make Windows a lesser option.

        Microsoft Windows owns our malware problem, and by using something other than Windows we can move away from the target. In general, the attackers have no motive to follow until the strays become a major herd.

        1. finack

          @Terry – I was careful to say “for an individual (and not the industry) it is still good advice”.

          That being said, I know quite a few people who won’t run Linux because they feel it’s too commonly exploited, opting instead for *bsd platforms or other unixes.

          From a meta or industry view, Linux probably has just about as many structural changes to make as MSFT or AAPL before it would have a significant leg up on the competition. Shuttling people around from one less common platform to another just isn’t a solution.

          In the short term, if someone was particularly worried about malware I’d probably suggest they get themselves a ChromeOS netbook when they come out. Google is leading the charge here for now, it seems.

          1. Terry Ritter

            @finack: “I know quite a few people who won’t run Linux because they feel it’s too commonly exploited, opting instead for *bsd platforms or other unixes.”

            But I feel differently.

            In security, there is continual tension between absolute security (which is never available) and perfect ease-of-use (also not available). There are always tradeoffs. And somebody thinking of running a traditional Unix system must be prepared to provide substantial maintenance with deep technical insight.

            Simply using Linux should cut the probability of running malware (compared to Microsoft Windows) by something like a factor of 1,000x. Using a LiveDVD form all but eliminates malware infection, whereas pretty much anything booted from a hard drive can in fact be infected.

            “From a meta or industry view, Linux probably has just about as many structural changes to make as MSFT or AAPL before it would have a significant leg up on the competition.”

            For the most part, Linux is free, not a for-profit product, and the difference shows. But with respect to providing a malware-resistant platform, the OS mainly just needs to be different, yet reliable and run a modern browser, all of which Linux does very well.

            “Shuttling people around from one less common platform to another just isn’t a solution.”

            No, the solution is to move people from the massively-most-common platform (Microsoft Windows) to anything else, and so move them off of the malware target.

            I continue to believe that the best usable security currently available is the Puppy Linux LiveDVD, and it is free.

          2. JCitizen

            I agree, finack;

            And the ChromeOS netbook may be as good a solution as a LiveCD. That is, if I’m not wrong about its memory storage.

            At least I am sure it has no hard drive or writable disk installed as hardware.

            Each boot receives a fresh operating system/Chrome browser with the latest updates from the cloud. I would much rather let Google keep their cloud clean and let me do my computing with less security maintenance on my end! They have thousands of employees working worldwide, 24/7/365, to watch their stuff and leave me free to my web work!

            Business has learned this with dumb terminal/thin clients; why can’t the regular guy?

    2. Terry Ritter

      @Mike: “most of the people who are most vulnerable to picking up a bad bug here and there are not gamers or highly technical people; they are kids, regular adults, grandparents. Their computer is a “family” computer: one login for everyone, always on, nearly always in use. If you look at the usage of these computers you’ll see a bit of word processing, lots of internet surfing (e-mail, YouTube, Facebook) and… …not much else.”

      A family would seem to be a great environment for Puppy Linux LiveDVD’s.

      All the usual browsing stuff is already in the Puppy distribution or easily installed. Each user can have their own DVD, and adults get security for online banking.

      A high security system might have no hard drive at all (perhaps a laptop with the drive removed). Data could be saved by email attachment, elsewhere “in the clouds,” on a USB flash drive, or even on the boot DVD.

      A system with existing Windows drives could read and write Windows files, and Windows itself would be available simply by removing the LiveDVD and restarting. Even when the Windows drive is infected, adults can still use that computer securely from LiveDVD.

  15. Jim J.

    The scary scenario is the hospitals or their records contractors being compromised the way Epsilon was. All the medical records would be packaged for the intruders to sell off as a complete private-information portfolio.

    I suspect these medical entities are just as slack and complacent as Epsilon is.

  16. John

    “recent known-good system image”

    There’s a problem: as the column makes clear, most people don’t know when things are not good.

    And not plugging holes or changing behaviour doesn’t help at all.

    Also, antivirus is only part of the protection. A cracker making use of your computer may not be using a virus at all.

    I’m starting to think that an Internet drivers license could be a good thing.

    PS. Thanks Brian, this is great stuff.

    1. Tony Smit

      When Microsoft released Windows XP 64-bit, they required all drivers to be licensed – or, in their terms, digitally signed.

      Computers can be “licensed” to use the Internet but there is no way to require people to be licensed to use the Internet; an “unlicensed user” would simply use a licensed user’s computer.

  17. Zhu Sha Zang

    Is there any statistical study that determines the relationship between the number of machines affected and the operating system those machines run?

    1. Terry Ritter

      @Zhu Sha Zang:

      Statistics do exist about the proportion of browsing done in different OSes, as seen by various sites. One source is the Wikipedia article “Usage share of operating systems,” which shows Windows now in the 85 percent range, lower than it once was.

      Counts also exist for the number of Windows malware signatures versus others. In addition, a 13 September 2010 article in The Register titled “Windows malware dwarfs other viral threats” says: “The vast majority of malware – more than 99 per cent – targets Windows PCs, according to a new survey by German anti-virus firm G-Data.”

      An industry which lives on the Microsoft teat is not always eager to share these things, beyond a pinch here and a nip there.

  18. a problem with spam?

    Reverse SOCKS botnets aren’t particularly expensive (I’ve seen a few go for $400–500).

    Surely, for the dedicated carder/fraudster, sorting out their own proxy net would be a lot better.

    Not sure if it’s reverse SOCKS or not (I’m not a carder), but doesn’t ZeuS have a SOCKS add-on?

  19. gordon

    Interesting that North Shore was hosting a bot, because I thankfully get very little spam, courtesy of my local boutique ISP. I promptly forward what I do get to their abuse-tracking address. Two weeks ago I had one spam from North Shore.

  20. Lucy

    Seriously, Brian? Did you really need to create such a blog post, or, oh wait, it’s for driving more traffic.

    There is another service which has been up since 2008 IIRC, providing stable SOCKS proxies in different countries, even the most exotic ones. A pretty handy service for helping to empty hacked LR accounts 🙂 It’s called vipxx, with many proxies online and cheap prices. Services like the one described here pop up on the scene every now and then.

    1. BrianKrebs Post author

      Lucy, you’re talking about vip72? Different service, different model. But you know, not everyone on this forum is actually…ya know, using these services, or even aware that they exist. So while it might not be news to people like yourself, most normal folk find this stuff interesting and new.

    2. Clive Robinson

      @ Lucy,

      There is an issue with all services that work through Russia, which vip72.com and vip72.org appear to do.

      Under Russian law, the Russian secret service, the FSB, can simply request copies of all traffic, logs, etc. (no warrant required).

      Because of this law (which came to general attention when ICQ was sold to a Russian venture capitalist), it is very likely that, contrary to what they claim, some form of logs are kept by vip72.

      As has often been noted “you pay your money and you make your choice”.

  21. AlphaCentauri

    The only good news I see here is that it’s unlikely the bot herder knew what he had. He probably wouldn’t have rented out the bots with access to people’s medical records if he did, as they would be more valuable remaining under his sole control, not calling attention to themselves. But that doesn’t mean there aren’t lots of other computers in medical environments that are pwned by identity thieves and continuously hemorrhaging medical and financial information.

    That being said, have any of these companies notified people whose medical information was at risk that there has been a breach?

  22. Clive Robinson

    @ AlphaCentauri,

    “But that doesn’t mean there aren’t lots of other computers in medical environments that are pwned by identity thieves…”

    There probably are, but the bot herders probably don’t realise either.

    There is a reason for this and it’s simply the law of scarce resources.

    As was shown with the Spanish bot herder who had 1.3 million PCs in his herd: how do you know what’s on each machine?

    The answer is you don’t because you have neither the time nor the bandwidth to find out.

    Many bot herders don’t even know which machines they have zombified; all they know is that there are a lot of them looking at the control channel from various IP addresses or network segments. They just don’t have the time to find out who effectively owns each and every machine.

    Which brings me to a point I have been making for some years now, which is that bot herders undervalue their investment because they have no idea how to better capitalize on it.

    Let me introduce an idea into the collective melting pot, which is how you would do APT by botnet.

    Some time ago I posited that directed attacks were an inefficient utilisation of manpower, and also potentially the least rewarding. I noted that “fire and forget” malware was the way for APT to go.

    The difficulty was how to get a return on the investment; simply getting each zombie to return even as little information as a credit card number leaves a very, very noticeable trail of traffic, which is most definitely not covert.

    The solution is to devolve the control channel and decouple it through a common third-party system (one way was via search engines).

    Having done that successfully, and having also implemented “air gap crossing” techniques, you would then need to put some kind of search engine into each zombie, so that it could covertly “reach out and touch” all the data available to the zombie in various common formats.

    The APT bot herder would send out, on the control channel, a query string that was a very, very specific match, such that only a very few zombies would match any given query.

    However, a succession of even quite simple enquiries, such as “Domain=xxx.yyy.com & IPmask=0x000000ff & IPadd=0x01”, would flush out just a few of the potential respondents, and slowly cycling through the 254 possible last-octet IP addresses would eventually get all potential respondents.
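
    To make that concrete, here is a minimal sketch (in Python, with hypothetical field names modelled on the enquiry above; purely illustrative, not any real bot protocol) of why such narrow matching keeps the rest of the herd silent:

    ```python
    # Minimal sketch of selective query matching (hypothetical field
    # names, modelled on the enquiry above; illustrative only, not any
    # real bot protocol).

    def matches(query, host):
        """A zombie answers only if every field of the query fits it."""
        if "domain" in query and not host["domain"].endswith(query["domain"]):
            return False
        if "ipmask" in query:
            # Compare only the bits selected by the mask (e.g. the last octet).
            if (host["ip"] & query["ipmask"]) != (query["ipadd"] & query["ipmask"]):
                return False
        return True

    # A made-up host: a machine at 10.0.0.7 inside the target domain.
    host = {"domain": "xxx.yyy.com", "ip": 0x0A000007}

    # Sweep the 254 possible last octets, as in the enquiry quoted above;
    # each query flushes out only the handful of zombies that match it.
    for last in range(1, 255):
        query = {"domain": "xxx.yyy.com", "ipmask": 0x000000FF, "ipadd": last}
        if matches(query, host):
            print("host would answer the query for last octet %d" % last)
    ```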

    In practice, it is likely that a directed feedback process from the first few responders would allow an experienced bot herder to narrow things down to just a handful of potential machines.

    This allows them to act as “data brokers” and take “orders” for information, whereby a potential “data customer” supplies an organisation name, a target name and a few other details, and the bot herder then negotiates what data is available to the “data customer”.

    This sort of “ordered data copying” is potentially a very, very lucrative market, in which top dollar can be asked and obtained by the bot herder.

    It also allows most of the botnet zombies to remain effectively covert, not advertising their presence except when responding to very specific requests.

    For bot herders who can utilise their resources (i.e. zombies) in this way, access to a zombie could fetch tens if not hundreds of thousands of times as much money as just renting it out for DDoS/spam…

  23. Randall Svancara

    Nice work. It is important to keep in mind that it is not just MS Windows that is vulnerable to these kinds of exploits. However, there is something to be said for putting a smaller target on your back and using something like FreeBSD and/or Linux.

    1. Nick P

      NetBSD is the slimmest, most portable and most Linux-compatible of the BSDs. The NetBSD site also features a nice testimonial about how a server administrator greatly increased his team’s quality of life by switching to NetBSD, reducing all the support calls and nightmares. OpenBSD is a nice building block as well.

      I usually advocate the use of more obscure, but mature, alternatives to major services. Additionally, less complex software might be applicable to a given situation. An example would be using publicfile to publish non-sensitive files (like OSS-mandated source sharing) instead of a full-featured FTP server. Most of Bernstein’s tools, in particular, have much higher quality and a reduced trusted code base. Users have also written plenty of extensions for those who want to trade security for convenience.

  24. Mike

    A number of the commenters here are quite misled about security. It makes me sad.

    First:

    Switching operating systems in no way makes you more or less secure. It doesn’t paint a smaller or larger target on your back. It doesn’t make you less desired by bot herders. In fact, some switches can make you more desired.

    As an example, using any sort of *nix (Linux/BSD) generally means you’re on a dedicated machine. This also implies that you might have dedicated internet bandwidth, and that you’re always on.

    Because there are so many people out there who parrot the “SWITCH OFF OF WINDOWS, IT’S INSECURE!” mentality without truly understanding what they’re saying, the aforementioned Linux machines can also be very easy targets, since in most cases nobody will look for the malware, assuming, incorrectly, that their choice of OS makes them “more secure”.

    More importantly, not every flavor of Unix or Linux is handled similarly. As far as security patches go, for major software you’ll usually see a patch the same day. But for software where ideologies outweigh practicality, you aren’t going to see updates.

    An example of this is Ubuntu with ProFTPD. ProFTPD has often been updated and made more secure, particularly with its SSL renegotiation changes, but Ubuntu wouldn’t release a patch to the standard repositories. If you went into the IRC channels and mentioned FTP you’d get flogged for even considering using the protocol.

    So at the end of the day you’re left with potentially vulnerable software and you have to keep up with each individual package on each machine that you run.
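
    To give a feel for what that package-by-package tracking looks like, here is a rough sketch, assuming a Debian/Ubuntu system (the package names are real; the “fixed” versions in the watchlist are made up for illustration), that flags installed packages older than the release that carries a fix:

    ```python
    # Rough sketch: flag installed packages that are older than a version
    # known to carry a fix, on a Debian/Ubuntu system. The watchlist
    # versions below are made up for illustration.
    import subprocess

    # Hypothetical watchlist: package name -> version containing the fix.
    WATCH = {"proftpd-basic": "1.3.3a-1", "openssl": "0.9.8k-8"}

    def installed_version(pkg):
        """Return the installed version string, or None if not installed."""
        try:
            out = subprocess.check_output(
                ["dpkg-query", "-W", "--showformat=${Version}", pkg])
        except subprocess.CalledProcessError:
            return None
        return out.decode().strip() or None

    def older_than(a, b):
        """True if version a sorts before version b by dpkg's own rules."""
        return subprocess.call(["dpkg", "--compare-versions", a, "lt", b]) == 0

    for pkg, fixed in WATCH.items():
        cur = installed_version(pkg)
        if cur and older_than(cur, fixed):
            print("%s %s predates the fixed version %s" % (pkg, cur, fixed))
    ```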

    Keeping up with this sort of detail is extremely difficult, costly and time-consuming. On top of all the other IT duties we professionals have to deal with, it can be painful.

    Where security needs to change is in business. Companies need to care about having proper security. But they only care about money, so the penalties for not maintaining a best-effort security system should be steep.

    And even if you, as an end user, spend insane amounts of money to have secure systems with monitoring, you are still vulnerable. Yep, that’s right. There are always the infamous “0-day” attacks, a term that I think has lost its meaning these days.

    It’s an extremely complicated business, and I hope people read my comment after reading the others to get a better idea of what is required.

  25. Chad

    @Deborah Seriously, you are using ZoneAlarm 6.5? They are on version 9.2.102.000 for the free version. Update ZoneAlarm, or switch because of the scareware tactics by their marketing people. You are using technology from several years ago. What do you think updates are made for, just useless time-wasting activities so programmers can keep their jobs? Would love to know what versions of Flash and Java you are using.

    I think that some people just don’t update because of laziness and the fact that an interface might change.

    1. DeborahS

      @Chad

      Hey hey – yes, it is late! And depending on what time zone you’re in, your time might be even later than mine.

      Seriously though. I’m sure there are lots of people who don’t update out of sheer laziness, but that wouldn’t be me. In point of fact, I have tried v.9 of ZoneAlarm, and various flavors of versions 7 & 8. I even rolled back to v4.5, but it had some obvious functional problems, so I rolled forward (through v5) to v6.5, and that one was just right for me. (I keep all the setups of old versions of any software I use a lot. Just an old habit from my testing days. Kind of hard for the developer to beat me up for “imagining” that it used to work differently, when I could sit him in front of a machine with a build on it that did do it differently and exactly the way I said it did, about 20 builds previous. Testers do have to get a bit ornery with developers sometimes, and vice-versa, but that’s the way the game is played.) So this last time that I fresh installed my system, I cycled through ZoneAlarm versions to see which one I liked the best, and that was v6.5.

      Basically the “improvements” in the more recent versions of ZoneAlarm have been to make them more user-friendly, more hands-off. Do it all quietly and automagically and don’t bother the user about anything. That’s a good thing, I suppose, for a general market, but not a good thing for someone who really wants to know what’s happening with their internet connection, and have good control over it all.

      So no, I am neither lazy nor do I think that the developers and the company are wasting their time with updates. I just prefer the functionality and economy of the earlier versions. Same thing with my preference for XP over Vista. (Haven’t tried Win7 yet, but since it’s basically a fancy service pack on Vista, chances are I’m not going to be thrilled with it either. But I’ll give it a fair chance when I’m ready to.)

      But I’m curious. What scareware tactics from their marketing people are you talking about? I generally screen out all the marketing buzz, so I guess I missed that. It raises another interesting point about all these rapid-fire updates. How much of it is genuinely needed, and how much of it is just to make more money?

      1. Chad

        Since you are on version 6.5, you never saw it.

        Seriously, updates are a good thing. Sometimes at the cost of convenience. But programmers will be programmers. I actually hate that Firefox changes crap and the way stuff works, but I am not going to use Firefox version 1.0. Nor am I going to keep using Vista when Microsoft stops updates. And when XP finally dies it is gone from all my family computers. That is just comfort-level laziness, not switching when a program is updated. Windows 2000 got the boot; that machine switched to Linux. And yes, updates are about making money, but they are also about security. Do you think Microsoft could live off Windows XP forever?

        As for the scareware thing, ZoneAlarm started showing a “Global Virus Alert” popup as a scareware tactic to get users to switch to their paid security suite.

        1. Chad

          I should say marketing people will be marketing people, rather than programmers. The changes are all about market share. They change things to whatever is popular to get more users. I hate it, but I know that sometimes those changes come with security updates.

          Did you read this column when Brian talked about Secunia? Research that, and also get FileHippo’s update checker. God help you if you do any banking on your computer!

          But seriously, please tell me what versions of Flash and Java you have on your system. I need a good laugh.

      2. Nick P

        @ DeborahS

        You really should consider upgrading to Windows 7 Professional (Pro has XP Mode). Vista offered numerous improvements over XP: *way* better security features; drivers pushed to user mode where possible; a new driver model (the WinXP model was losing vendor support); software written under the new Security Development Lifecycle, with significant bug reduction. Vista also came with tons of bloat, though, and needed about 1GB of RAM to run full-featured, while XP SP2 needed around 300MB.

        Well, with Windows 7, they took Vista and trimmed all the fat off it. They also made a few improvements. Windows 7 uses about 350MB of RAM and works pretty quickly on a formerly-WinXP machine. It’s not just a Vista service pack: it’s Vista done *right*, and one of the few versions of Windows that people loved right out the door. I’ve switched all my relatives over to a hardened Windows 7 and get fewer malware- or reliability-related tech support calls.

  26. DeborahS

    @Chad
    Well, since this might just be drunken after-midnight rambling – I’ll play along.

    “Seriously, updates are a good thing. Sometimes at the cost of convenience. But programmers will be programmers. I actually hate that Firefox changes crap and the way stuff works, but I am not going to use Firefox version 1.0.”

    Yep, programmers will be programmers, all right. That’s why they need testers to keep them in line, and companies who think they can’t afford testers will suffer in the quality of their product – make no mistake about that. Software is not like hard, physical, mechanical products that follow the laws of physics and mechanics. The designers of software can be wrong, and it’s the tester’s job to tell them exactly what is wrong and how they must fix it. Delete the tester, and you will probably delete quality.

    Should you continue to use Firefox 1.0 if you don’t like the changes in the most recent versions? Well, that’s a highly subjective question that each person can only answer for themselves. And maybe there aren’t any versions between 1 and 4 that are exactly what you want. Tough titties, or make your own: those, unfortunately, are the options.

    “Nor am I going to keep using Vista when Microsoft stops updates. And when XP finally dies it is gone from all my family computers. That is just comfort-level laziness, not switching when a program is updated.”

    My, what a hard a$$ you are (and a bit closed-minded, I might add). Lots of people are still perfectly happy with their Win 9x and Win 2000 systems, regardless of whether they are currently supported or not. I personally think that XP beats them all hands down, but kudos to anyone who still thrives on even older systems. Unsung heroes, yes they are, if they can keep going to their own satisfaction on those outdated systems.

    “Windows 2000 got the boot; that machine switched to Linux. And yes, updates are about making money, but they are also about security. Do you think Microsoft could live off Windows XP forever?”

    Ha ha! That’s the funniest thing you’ve said yet. No, Microsoft couldn’t go on living off of XP forever, so that’s why they had to pretend to do something better. They had to invent something really super duper to start making money again, and voila – we present Vista to you! Oh, oops. You didn’t like that so much, so let’s make it real fancy and crank up the ol’ marketing machine and make you go out and buy Windows 7!!! (It’s just a few tweaks on the Vista engine with a whole lot of eye candy layered on top of it, but the gumps will never know that.) Whoopee!! We make lots of money and you all feel so much more secure – and oh, by the way, don’t neglect all those patches and updates that make it even better!!! Oh yes, now we’re making $$$money$$$ !!! And you poor slobs can “be a trend” (as a recent young Polish student of mine put it). All bow down and worship at the altar of “being a trend”. After all, isn’t that what the market and the media tell you that you should do and be?

    Well, maybe some of us don’t buy that.

    “As for the scareware thing, ZoneAlarm started showing a ‘Global Virus Alert’ popup as a scareware tactic to get users to switch to their paid security suite.”

    Doesn’t surprise me in the slightest. Personally, I paid in full for all the early versions of ZoneAlarm, so if those early versions still fully serve my purposes now, I don’t feel that I’ve cheated them out of anything. Maybe they could try setting up donation centers for us to show our appreciation for what they did once upon a time. I might go for that, but heck if I’ll install their crap and run it on my systems now just to pay them for what they did once upon a time.

  27. Chad

    You just do not get it. Your computer is not a TV. Just because you are perfectly happy with black and white and see no need for color does not mean the same holds for computer technology.

    Any idiot who surfs the internet with Windows 95 or Windows 98 and thinks they can do everything (i.e. banking) like people with newer operating systems is a moron and has no business being on the internet, let alone on a computer. Really, I am at my wits’ end for the night, so just shut it. Unless, that is, you are going to tell me what versions of Flash and Java you have on your system. I might pee my pants with laughter, but do tell.
