The Federal Communications Commission (FCC) may soon kickstart a number of new initiatives to encourage Internet service providers to do a better job cleaning up bot-infected PCs and malicious Web sites on their networks, KrebsOnSecurity has learned.
Earlier this year, the commission requested public comment on its “Cybersecurity Roadmap,” an ambitious plan to identify dangerous vulnerabilities in the Internet infrastructure, as well as threats to consumers, businesses and governments. Twice over the past few weeks I had an opportunity to chat with Jeffery Goldthorp, associate bureau chief of the FCC’s Public Safety & Homeland Security Bureau, about some of the ideas the commission is considering for inclusion in the final roadmap, due to be released in January 2011.
Goldthorp said there are several things that the commission can do to create incentives for ISPs to act more vigorously to protect residential users from infections by bot programs.
“Along those lines would be something like an ISP ‘code of conduct’ and best practice-oriented approach that ISPs could opt-in to or not, basically a standard of behavior for ISPs to follow when they find that a user of theirs has been infected,” Goldthorp said. “The goal of that would be to clean up the consumer and residential networks. We’re also very interested in trying to figure out if there are rules we have on our books that stand in the way of ISPs being more proactive and creating a safer environment for consumers online.”
In addition, Goldthorp said the FCC is considering ways to encourage ISPs to be more proactive in dealing with malicious Web sites.
“At the server level, we’re looking at doing things that would allow us in an operational role to apply our jurisdiction with ISPs and try to reduce the time to remediation of things like malicious hosts and phishing or spam sites,” he said. “That’s really an area that [the FCC is] doing nothing in right now. We don’t get any information now about what those sites are and what we could do about them. So, we expect that there will be specific things we’d propose on all those areas of the roadmap.”
Prompted in part by the FCC’s request for comment, I wrote a column for CSO Online last month in which I called on the commission to begin measuring the responsiveness of ISPs in quashing malicious threats that take up residence on their networks. One of the ways I suggested the commission could do that is by publishing data about badness on these networks – data that is already being collected by a myriad of mostly volunteer-led groups that monitor this type of activity.
Goldthorp said the commission has met with a number of folks from these groups, and is also considering what it could do to help these groups shine a light on ISPs that have a substantial numbers of problem customers that remain infected for long periods of time.
“The idea that the FCC could be in the middle of that and broker some of that awareness so that the time to remediation could be minimized is very attractive,” he said. “At a minimum, there are things we can do to shed light on this, and we don’t have to have a commission vote on that to do it.”
AN IDEA WHOSE TIME HAS COME, OR AN INVITATION TO BIG BROTHER?
A number of others are beginning to press the idea of ISPs becoming more proactive in cleaning up problematic customers. Comcast, the nation’s largest residential high-speed Internet provider, announced last week it had begun deploying a bot-notification service to all 16 million of its customers nationwide.
On Tuesday, a top executive at Microsoft called for the creation of ISP industry standards for dealing with botnet infections. Speaking at the International Security Solutions Europe conference in Berlin, Scott Charney, Microsoft’s vice president of trustworthy computing, suggested it was time to start viewing the bot epidemic through the lens of public health models used to combat the spread of human infectious diseases.
To achieve this, Charney said Internet-connected devices could be required to present a “health certificate” as a condition for Internet access. Conditions that might be attested in such a certificate include whether the customer is running up-to-date security software and anti-virus, and whether the device shows any obvious signs of infection. In cases where a machine’s health certificate reveals missing patches or out-of-date virus signatures, the ISP could provide a notice to that effect.
“If the problem is more serious (the machine is spewing out malicious packets) – or the user refuses to produce a health certificate in the first instance – other remedies, such as throttling the bandwidth of the potentially infected device might be appropriate,” Charney suggests in his paper titled Collective Defense: Applying Public Health Models to the Internet. “Simply put, we need to improve and maintain the health of consumer devices connected to the Internet.”
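Charney’s paper does not specify a format, but a toy sketch of how a certificate-driven policy might work is easy to imagine. All field names, thresholds and actions below are hypothetical illustrations, not anything from the proposal:

```python
from dataclasses import dataclass

@dataclass
class HealthCertificate:
    # Hypothetical fields a device might attest to; not a real spec.
    patches_current: bool          # OS patches up to date?
    av_signatures_age_days: int    # age of anti-virus signatures
    known_infections: int          # obvious infections detected on the device

def assess(cert):
    """Return a toy ISP action for a presented certificate (or none at all)."""
    if cert is None:
        return "throttle"          # user refused to produce a certificate
    if cert.known_infections > 0:
        return "throttle"          # machine is actively misbehaving
    if not cert.patches_current or cert.av_signatures_age_days > 7:
        return "notify"            # missing patches / stale signatures
    return "allow"

print(assess(HealthCertificate(True, 2, 0)))   # prints "allow"
```

The hard part, as the article goes on to note, is not this decision logic but trusting the certificate itself.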
Unfortunately, this model starts to break down pretty quickly if you can’t vouch for the integrity of these certificates, or if others can abuse the information in them for nefarious purposes. Herein lies what may be the most controversial component of Charney’s proposal: The inclusion of a hardware+software approach that is capable of creating “end-to-end trust” between the consumer’s PC and the service provider.
“Combining trusted software (that is, hypervisors) and hardware (that is, Trusted Platform Module) elements could further enable consumer devices to create robust health certificates and ensure the integrity of user information,” Charney writes.
In a phone conversation about his paper, Charney said such a system would need to be designed openly and transparently so that it cannot be used for other types of online policing activity, such as intellectual property protection or to hunt for child predators.
“These are things that by design and policy and rule can be put off-limits,” he said. “So with Windows Update, sure this is a feature that phones home to Microsoft, but we have that feature audited by a third party to make sure it works in the way we’ve described it and only in that way.”
As for who would pay for all of this? Charney acknowledges such a shift would not come cheaply, although he said he is neither suggesting new taxes nor opposing the idea. Rather, Charney said he’s hoping to stimulate public discussion and debate about the proposal.
“I made a comment at the last RSA conference about funding models for this idea, and all these articles came out saying ‘Charney called for taxation of the Internet.’ I did no such thing, but I do believe you’d have to think about how you sustain these kinds of efforts,” he said. “The market might actually do this because it’s good to get rid of malware and it’s a better customer experience. But if the market doesn’t do it, we have to be prepared as a society to say this is a serious enough issue.”
So how about it readers? What do you think of where the FCC is headed? About Charney’s ideas? Are there aspects of his proposal you like or positively despise? Sound off in the comments below.
I think the only sensible bit in all those ideas, Brian, is based on your suggestion: create incentives for ISPs to disconnect and refuse service to parties (not individual residential customers) that are harboring criminal activity in their infrastructure (as happened in the McColo case). This whole “health certificate” “idea” proposed by Mr. Charney will only help Microsoft strengthen their monopoly, as I’m pretty sure various Windows versions will be supported from the beginning while support for other OSes, BSD and various Linux flavors, will be scarce at best.
What incentive could an ISP possibly need to disconnect zombies from their network? These guys throw tantrums and accuse their legitimate, paying customers of sucking up more than their “fair share” of bandwidth, claiming, obtusely, that their only hope of surviving is tiered, capped pay-to-play Internet access.
Botnets are used for Spam and DDoS. If you spend the time and effort to keep botnets off your network, you will no longer be piping Spam and DDoS. That is an enormous incentive for any ISP!
I hope the links in these comments work properly, because I’ve got some supporting evidence from a quick and dirty Google search; none of this stuff is a well kept secret:
2% of all traffic is DDoS packets:
90% of email is Spam:
Why can’t the Verizons, Comcasts, Cablevisions, AT&Ts and all other service providers find the motivation to do this for themselves? They themselves would reap direct benefits in the form of retrieved resources. Are they really willing to deal with this chaff on their networks because it justifies hawkish anti-consumer price gouging?
If they have so much motivation, why don’t ISPs put infected users in “walled gardens,” or deny internet access to those who consistently fail to secure their computers? Because if any ISP did that, not only would those customers go to a different ISP, the ISP might be subject to bad press about internet censorship. There might even be sob stories in the press about their unkindness to the poor elderly folks with pwned computers who were just trying to check their email to see pictures of their grandchildren.
What the government can do is set ground rules for all the ISPs, so that none of them is at a competitive disadvantage for doing what all of them know is the best course of action. Have everyone respond to zombified customers the same way at the same time, and no one of them looks bad.
As far as all the proposals for identifying insecure machines — hel-looo? It’s not exactly difficult. The computers at risk of being used for spamming, DDoS’s, malware distribution and C&C centers would be the ones who are currently being used for spamming, DDoS’s, malware distribution and C&C centers. If we ever clear out the backlog of obviously infected computers, I suspect it will be pretty easy to pick out any still remaining.
Sorry — messed up the formatting. It should say, “What the government can do is set ground rules for all the ISPs,”
Obviously, our service providers (Comcast, Verizon, AT&T, etc.) are looking at how much money they will make off of their spam and botnet clients if they get tiered, capped pay-to-play Internet access.
Can I presume it’s George Ou? 😉
I agree with your implicit suggestion, George, that it would be difficult to hold ISPs accountable for certain individuals. Keeping the C&C’s under control would go a long ways to cleaning things up.
I don’t like Krebs’s idea that the government come up with metrics. There are lots of third parties that measure an ASN’s or IP’s health (Google Safe Browsing, Spamhaus, PhishTank, UCEPROTECT, Virbl, etc.), and there’s no way that the gov’t should or could replicate these diverse systems. If anything, I’d rather see them aggregate these into some kind of metric.
One last note — the only *incentive* that would drive ISPs and data centers to keep their networks clean is to tie a credible tax credit or deduction to their performance. Then the abuse desk would be staffed and policies enforced!
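A minimal sketch of the kind of aggregation described above, assuming each feed publishes a normalized 0-to-1 badness score per ASN (the feed names are from the comment; the scores are made up):

```python
def aggregate_badness(feeds):
    """feeds: {feed_name: {asn: score in [0, 1]}}.
    Returns an unweighted mean across the feeds that report each ASN."""
    totals, counts = {}, {}
    for scores in feeds.values():
        for asn, s in scores.items():
            totals[asn] = totals.get(asn, 0.0) + s
            counts[asn] = counts.get(asn, 0) + 1
    return {asn: totals[asn] / counts[asn] for asn in totals}

feeds = {
    "safebrowsing": {"AS1": 0.9, "AS2": 0.1},
    "spamhaus":     {"AS1": 0.7},
    "phishtank":    {"AS2": 0.3},
}
print(aggregate_badness(feeds))
```

A real aggregate would obviously need per-feed weights and some normalization of what each list actually measures; the point is only that combining existing feeds is mechanically simple.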
Frank — Maybe you should actually read what I wrote instead of putting words in my mouth. I never said the government should come up with the metrics. To the contrary, I said there are plenty of third party organizations already collecting this information, which the government could simply make some effort to verify and then put its stamp on.
I’d encourage you to read the article(s) I have written about this proposal:
Thanks for the clarification. My impression came from the following two paragraphs; the first is a quote from an FCC person and the second is your recommendation.
“At the server level, we’re looking at doing things that would allow us in an operational role to apply our jurisdiction with ISPs and try to reduce the time to remediation of things like malicious hosts and phishing or spam sites,” he said. “That’s really an area that [the FCC is] doing nothing in right now. We don’t get any information now about what those sites are and what we could do about them. So, we expect that there will be specific things we’d propose on all those areas of the roadmap.”

Prompted in part by the FCC’s request for comment, I wrote a column for CSO Online last month in which I called on the commission to begin measuring the responsiveness of ISPs in quashing malicious threats that take up residence on their networks. One of the ways I suggested the commission could do that is by publishing data about badness on these networks – data that is already being collected by a myriad of mostly volunteer-led groups that monitor this type of activity.
You suggest that the FCC publish the data, but don’t describe the source. The way it’s phrased, it sounds like the FCC would be replicating what volunteer-led groups already do. It appears I misread what you meant.
One approach is not enough. Just as the banking scams don’t work without the mules, the spammers can’t succeed without the bots.
Bandwidth throttling sounds like a way to make sure a residential customer can continue to email or view websites, but which would slow down spam output by factors of 10 or 20.
ISPs might also consider the clear cloud DNS approach that makes it hard to get to “phone home” sites.
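The throttling idea above is essentially rate limiting, and one common way to implement it is a token bucket, which caps sustained output while still permitting short bursts. A toy sketch (the rate and capacity numbers are arbitrary):

```python
class TokenBucket:
    """Toy token bucket: 'rate' tokens refill per second, up to 'capacity'."""
    def __init__(self, rate, capacity):
        self.rate, self.capacity = rate, capacity
        self.tokens, self.last = capacity, 0.0

    def allow(self, now):
        # Refill proportionally to elapsed time, then spend one token if we can.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# A bot attempting 10 sends/sec gets squeezed down to ~2/sec sustained.
bucket = TokenBucket(rate=2, capacity=5)
sent = sum(bucket.allow(t * 0.1) for t in range(100))  # 10 seconds of attempts
print(sent)
```

Out of 100 attempts only a couple dozen get through, which is exactly the factor-of-10-or-so slowdown the comment describes, while a legitimate user’s occasional email never hits the limit.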
This is good news. Comcast is being proactive by sending messages to bot infested clients. ISPs must manage their infrastructure better. It also bothers me that they’ll disconnect torrent users but allow bots to spam the crap out of everyone. Hopefully, we’ll be seeing infested machines disconnected from the internet and people taking security seriously.
I have a problem with the potential slippery slope this can go down. The last thing we need is government or monolithic companies such as Comcast or Microsoft dictating hardware or software requirements for Internet access. It’s MY computer and I have control over what hardware and software are used, NOT anyone else!
It’s like the flu shots; I don’t have a problem with them as long as they’re voluntary. But, as soon as the government attempts to make them mandatory, no way, there’s going to be a problem! It’s MY body! Same thing with healthcare reform and its requirement that we the people have to purchase it!
This is all a slippery slope and eerily similar to stuff in Atlas Shrugged! 🙁
Not all things lead to the slippery slope. For instance, having a driver’s license doesn’t lead to being put in a re-education camp because the government now knows what you look like and where you live.
The slippery-slope argument is very much an extraordinary claim, and as such the onus of proof is on you. Where is your extraordinary proof that this is, or will be, part of some conspiracy?
Currently, ISPs are already logging your URLs per federal statute. Currently, there are interfaces for snooping for federal and state authorities built in to networking equipment. IDSes are analyzing all your packets. You’re already in a walled garden and have been since the day you first got Internet access, yet where’s the big slippery slope into fascism? Oh right, it’s not here. Last I checked you’re not being put in a cattle car to end up in a work camp.
As far as health care goes, Medicare is one of the most successful programs in the history of US government. Just because something is government run doesn’t mean it’s somehow intrinsically evil; if anything, the lack of a profit motive makes it inherently less evil than a corporate solution.
Lastly, ISPs do have requirements for usage. Up until recently most ISPs wouldn’t allow Linux clients, so Linux users had to borrow a Windows machine or spoof browser headers just to get authorized. Where’s the outrage for this?
I do think this is a tempest in a teapot. The health certificate nonsense is something someone from Microsoft is suggesting, not the FCC. The FCC is more or less saying “Look, we have all these lists of these bad domains and IP addresses hosting malware and we want to share it.” Its time to get serious with cybersecurity. The status quo is absolutely not working.
Unfortunately, reactionary stuff like “BIG BROTHER” and “MICROSOFT IS GOING TO MAKE US GET CERTIFIED” sells ad impressions and makes people crazy, but you should be making an attempt to rationally understand this stuff as opposed to having some kind of thoughtless knee-jerk reaction.
I did get a little knee jerk there, too much coffee I suppose. 🙂
I’ll refer to the boiling frog metaphor though to back up my slippery slope argument.
For those not familiar with Atlas Shrugged and the context in which I speak, see below for a quick breakdown:
Relevance to current events:
“In “Atlas,” Rand tells the story of the U.S. economy crumbling under the weight of crushing government interventions and regulations. Meanwhile, blaming greed and the free market, Washington responds with more controls that only deepen the crisis. Sound familiar?
The novel’s eerily prophetic nature is no coincidence. “If you understand the dominant philosophy of a society,” Rand wrote elsewhere in “Capitalism: The Unknown Ideal,” “you can predict its course.” Economic crises and runaway government power grabs don’t just happen by themselves; they are the product of the philosophical ideas prevalent in a society — particularly its dominant moral ideas.
Why do we accept the budget-busting costs of a welfare state? Because it implements the moral ideal of self-sacrifice to the needy. Why do so few protest the endless regulatory burdens placed on businessmen? Because businessmen are pursuing their self-interest, which we have been taught is dangerous and immoral. Why did the government go on a crusade to promote “affordable housing,” which meant forcing banks to make loans to unqualified home buyers? Because we believe people need to be homeowners, whether or not they can afford to pay for houses.
The message is always the same: “Selfishness is evil; sacrifice for the needs of others is good.” But Rand said this message is wrong — selfishness, rather than being evil, is a virtue. By this she did not mean exploiting others à la Bernie Madoff. Selfishness — that is, concern with one’s genuine, long-range interest — she wrote, required a man to think, to produce, and to prosper by trading with others voluntarily to mutual benefit.”
Some of us rely on Wikipedia and the WSJ to bolster our opinions, others prefer to take a more reasoned approach.
Individuals cannot control the flow of data over the internet, but government and private enterprise (an Ayn Rand favorite) can.
You may want to take individual responsibility for keeping your systems clear of malware and viruses, and perhaps you have found a perfect solution to keep from becoming part of a botnet.
But, the problem is larger than that. Just as I would prefer to rely on government road building standards and a government-installed traffic light when crossing the street, so also, I would prefer that private enterprise or government oversee traffic on the internet to stop botnets. That doesn’t mean that anyone may inspect the inside of my car or my computer without a warrant – the constitution and privacy laws preclude that closer look.
I know; you can find anything on the Internet to justify an opinion. Just providing sources to back mine up! Take it for what it is. 🙂
I’m just cautioning on tilting too much toward government or monolithic corporations to deal with the issue, which I’m afraid is what too many are doing these days.
The real solution involves all parties (government, private and personal). I didn’t state it very well, but I’m trying to remind people about the private part (personal responsibility) and not buying into this idea that government is the solution.
swhx7 and Simon (see below) are much more succinct in their comments which I completely agree with. 🙂
I should’ve added this from the Wikipedia page I previously referenced and which I wholeheartedly agree with when it comes to personal responsibility:
“In Rand’s view, morality requires that we do not sanction our own victimhood. She assigns virtue to the trait of rational self-interest. However, Rand contends that moral selfishness does not mean a license to do whatever one pleases, guided by whims. It means the exacting discipline of defining and pursuing one’s rational self-interest. A code of rational self-interest rejects every form of human sacrifice, whether of oneself to others or of others to oneself.”
I don’t know if that makes sense to most, but if you ponder it enough, it’s a very powerful statement! 🙂
Eh? Where do I get a “security certificate” for my NetBSD machine? My linksys router? My Vonage phone? My kid’s gameboys? My Tivo? My PCS phone? Most of those devices would just issue static and useless certificates – assuming you could ever connect to one through my firewall to query the status.
Is the certificate handed out by a server on the device? The history of server apps is clear: they get hacked and have to be constantly patched.
The history of hacked machines is clear: they do what the hacker tells them to do, including handing out fake certificates claiming not to be hacked.
If I weren’t locked into a monopoly provider, I would use published metrics as one factor in my Internet connectivity purchase.
Many ISPs still allow unrestricted access to outgoing port 25 (SMTP). If they restricted it to valid mail servers, much of the spam would never leave their network and could be caught by AV/spam filters on their servers.
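As a toy model of that policy (not a real firewall configuration; the addresses are documentation-range examples, and a real ISP would filter in the network, not in application code), the decision is just an allow-list check on destination port 25:

```python
import ipaddress

# Hypothetical allow-list of the ISP's own mail relays (example address).
AUTHORIZED_RELAYS = {ipaddress.ip_address("203.0.113.25")}

def egress_allowed(src_ip, dst_port):
    """Toy model of 'outbound port 25 only from valid mail servers'."""
    if dst_port != 25:
        return True                      # all other traffic is unaffected
    return ipaddress.ip_address(src_ip) in AUTHORIZED_RELAYS

print(egress_allowed("198.51.100.7", 25))   # residential host -> False
print(egress_allowed("203.0.113.25", 25))   # ISP mail relay  -> True
```

Residential customers would still send mail normally through the ISP’s relay; only direct-to-MX spamming from bots gets cut off.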
Restrict SMTP access to “valid email servers”? Does that mean that I don’t get to run a self-compiled, carefully configured and regularly monitored Sendmail on my basement linux server?
Who gets to decide what’s “valid” or not? As a persecuted minority linux user (and hey, a shout out to the NetBSD guy above, who’s even more of a persecuted minority) I’m immediately suspicious that “valid email servers” will all have to run on Windows, and have a license code bought from Microsoft at top dollar. Despite the fact that the Exchange/Outlook combo is responsible for spamming me with Klez, Snow White and innumerable other email worms.
This “police the customers’ machines for them” idea may be well-intentioned on the part of some people, but it is an extremely bad idea.
As soon as ISPs are allowed to filter “malicious” sites – meaning those which might “infect” Windows users – there is no longer any way for it to be limited to those sites. If you can’t access the site, you can’t verify that it isn’t suppressed for political reasons. The whole initiative would become a pretext for political censorship.
A requirement of ISP-mandated or government-mandated software or hardware/firmware in computers would be even worse. Security doesn’t mean only that “the pc is not sending viruses or spam”, it also requires that the owner be in control. If the mandated police-ware would overrule the computer owner, then the owner could not verify what it was doing, and the whole “security” angle would become a pretext for spyware, censorship and remote control. It would merely transfer effective control of citizens’ computers to corporations and government, and freedom of communication would be dead in the US.
Notifying subscribers whose PCs are causing problems on the network is sensible. The rest of the proposals are radically dangerous, and I believe, intended for ulterior and anti-democratic purposes.
I agree; the road to hell is paved with ‘good’ intentions. And this idea is pretty much anti-freedom; IMO it should be decided in the courts.
Also, this will eventually cause the creation of more sophisticated botnets, which is progress, in a way. If you build huge city walls, the enemy will eventually bring catapults and siege towers.
Microsoft can send their security experts to give lectures at schools on how to avoid malware, young people today are more internet savvy than before. IMO trying to educate people out of the problem is better than taking aggressive measures which will force people get ‘certificates’ to use the internet, huge disturbing annoying hassle.
If they implement it in the US, they might as well start issuing IQ tests and require FBI ‘not a terrorist’ certification before allowing people to respond online.
ISPs are “allowed” to filter content on their own private networks already. One common example is the DROP list published by Spamhaus. A number of networks blackhole netblocks contained therein. It’s a small list of netblocks controlled “entirely by professional spammers.” For more information, see:
Try traceroutes to various netblocks in the DROP list to determine if your upstream providers allow connections. (It seems off and on for me, personally.)
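Checking whether an address falls inside a DROP-listed netblock is a simple prefix-membership test. A sketch using Python’s `ipaddress` module (the netblocks below are illustrative documentation ranges, not entries from the actual DROP list, which Spamhaus publishes as a text file of CIDR prefixes):

```python
import ipaddress

# Illustrative netblocks only; fetch the real list from Spamhaus.
drop_list = [ipaddress.ip_network("192.0.2.0/24"),
             ipaddress.ip_network("198.51.100.0/24")]

def in_drop(ip):
    """True if the address falls inside any DROP-listed netblock."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in drop_list)

print(in_drop("192.0.2.55"))   # True  -- inside 192.0.2.0/24
print(in_drop("203.0.113.9"))  # False -- not in any listed block
```

A network that honors the DROP list would simply null-route every prefix on it, which is what the blackholing mentioned above amounts to.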
Given that the FCC is already pursuing censorship by trying to implement the “Fairness Doctrine” once again on the airwaves, and that the FCC is trying to start regulating the Internet by classifying it as a “utility” (presumably so they can levy more taxes on its usage as well), I do not trust those SOBs farther than I can spit.
I do agree that something needs to be done about the botnet problem and customer notification about symptomatic behavior emanating from the customer’s PC would be a start – depending on how snoopy the ISP gets while analyzing that traffic.
A voluntary start couldn’t hurt. Comcast’s initiative might make a dent, but if there were voluntary initiatives from all the ISPs, then at least the FCC would have a foundation to start from in the U.S.
The Internet is not an American commodity. Infected computers exist all over the world. ISPs everywhere need to work on eliminating bots.
Then again, every little bit helps.
The Microsoft position is nuts. Microsoft is avoiding their rightful responsibility for the malware problems they have largely created and failed to control.
“On Tuesday, a top executive at Microsoft called for the creation of ISP industry standards for dealing with botnet infections.”
More than 91 percent of browsing occurs in Microsoft Windows, and more than 99.9 percent of malware only infects Windows, yet Microsoft calls for “industry standards”? That is crazy! They ARE the standard! THEY are the problem!
We have experienced years of Windows patching, and as a result have the real experience to ask if we are better off now than we were. In fact, the malware problem has gotten much worse despite those efforts. Yet Microsoft continues to pursue their delusions rather than produce effective practical responses.
“…Scott Charney, Microsoft’s vice president of trustworthy computing, suggested it was time to start viewing the bot epidemic through the lens of public health models used to combat the spread of human infectious diseases.”
The term “computer virus” can be misleading: Computers are not life. We do not know how to improve life to prevent biological infection, but we DO know how to improve computers to prevent program infection.
Infection could be prevented almost completely with new hardware designs (with software support) to protect against changes to the OS boot data. A crude but effective example is a “LiveCD” which simply cannot be infected, because it does not allow changes to the boot data.
So where is the Microsoft Windows “LiveCD” to guarantee users an uninfected OS for each online banking session? Perhaps Microsoft thinks a LiveCD would be too stark a contrast against their infectable Windows product.
“To achieve this, Charney said Internet connected devices could be required to present a “health certificate” as a condition for Internet access.”
Coming from Microsoft, the “health certificate” proposal is a joke: Microsoft does not even provide a program for their own users to certify a Windows installation as uninfected and thus suitable for online banking. Clearly, Microsoft cannot produce a health certificate FOR THEIR OWN PRODUCT, which also just happens to be, BY FAR, the most bot-infected operating system in the marketplace.
“Conditions that might be presented in such a certificate include whether the customer is running up to date security software, anti-virus and whether the device has any obvious infections.”
Those are feel-good journalistic delusions instead of reality: Anti-virus cannot find and stop zero-day attacks. Anti-virus cannot find locally-encrypted malware. Serious malware is not obvious and will not even appear in directory or process listings.
Neither hypervisors nor a TPM nor cryptography (which I know fairly well) is going to solve the problem of a resident bot. The problem is the bot! The answer is to get rid of the bot!
We are beyond being able to reliably remove bots, so the only real solution is to re-install the OS (or recover it from a saved pristine image). The better approach is to innovate new hardware designs or use DVD-based systems which are difficult or impossible to infect, and just that alone would make things much better for all of us.
Either Microsoft is breathtakingly uninformed from top to bottom, or the whole exercise is a public-relations ploy. It seems aimed at wrongly deflecting their sole responsibility for producing vulnerable products, while not requiring them to do anything at all. And they propose to use both government and technical journalism to help them get away with it.
I whole-heartedly agree.
Err, a Live CD does not prevent infection. At best it prevents *persisted* infection. They also aren’t a “safe quick boot” solution. The normal cost triple is “Fast, Good, Cheap: pick two” (http://en.wikipedia.org/wiki/Project_triangle). But with a Live CD you are taking Cheap as one of your two; that means you can pick one of Fast (“boots fast”) or Good (“secure”). Historically people used Live CDs for “fast,” but the people who have advocated Live CDs to Brian’s readers somehow think of Live CDs as “Good,” which means that someone will have to pay for !Fast.
Please recall http://blogs.chron.com/techblog/archives/2008/07/average_time_to_infection_4_minutes_1.html
If your computer can be infected within 4 minutes of going online, then the fact that the infection doesn’t last when you restart the computer is irrelevant.
Now, if your Live CD is actually configured to somehow *safely* update all critical components and all infectable components before it allows you to do anything, then *perhaps* it’s useful.
The problem is that it means your Live CD will take longer to boot each time. Imagine if Microsoft had issued a Windows XP (SP0) Live CD. The SP2 update for it (266 MB) would take:
* 10 hours over dialup
* 48 minutes over cable
* 25 minutes on a T1 (no consumer gets one of these, but it’s probably on par with some old flavor of DSL)
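Those figures are straightforward transfer-time arithmetic. A quick sketch (the cable throughput is my back-solved guess of roughly 750 kbps effective, since no rate was stated; dialup and T1 use their nominal line rates, and protocol overhead is ignored):

```python
def download_minutes(size_mb, rate_kbps):
    """Transfer time in minutes for size_mb megabytes at rate_kbps kilobits/s.
    Decimal units, no overhead, so these are rough lower bounds."""
    return size_mb * 8_000 / rate_kbps / 60

SP2_MB = 266
print(round(download_minutes(SP2_MB, 56) / 60, 1))  # dialup (56k): ~10.6 hours
print(round(download_minutes(SP2_MB, 750)))         # cable (~750 kbps): ~47 min
print(round(download_minutes(SP2_MB, 1544)))        # T1 (1.544 Mbps): ~23 min
```

The results line up with the commenter’s 10 hours / 48 minutes / 25 minutes to within rounding and overhead.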
Had Microsoft actually shipped that Windows XP Live CD, it wouldn’t have helped *at all*.
Users who had been using such a Live CD would have noticed that their “boot time” would have doubled between sp1 and sp2 (because the size of sp2 is roughly twice as large as sp1) — which also wouldn’t have put many smiles on their faces….
People will say “But Linux comes with a firewall”, (and so does sp2, so perhaps a wXP sp2 live cd would be OK). That’s great.
Let’s temporarily ignore the fact that once in a while there are kernel exploits for Linux or other low level components.
If you’re trying to browse the web, you could be using the following components:
* Web Browser (10 MB)
* Flash (4 MB — Linux 32-bit size from Adobe Labs)
* PDF reader (10 MB — Foxit for Windows; I originally tried to calculate the size for Evince, but the sizes I found were either sources (3 MB, let’s not play the Gentoo ebuild world game) or Windows binaries (30 MB). My point is that often any component used by a browser is exploitable, and this includes font rendering, which is typically a component from a system layer, e.g. FreeType or Pango.)
* Java (Sun: 20 MB — you may claim you don’t need it, but my insane bank requires it for credit card transactions, including when I’m using a device which doesn’t have Java support…)
* Media Player (VLC: 18 MB)
Total cost: 62 MB. The download must be done each time you boot. This is about 1/4 the size of the SP2 update. That means that (using the SP2 numbers) it should take about 15 minutes before you can use your Live CD (12 minutes to download, and assume it somehow manages to install everything in 3 minutes).
Please remember that web browsers more than 3 months old are generally *not* safe. The same applies to Flash, PDF viewers, Java and Media Players. This means that if your Live CD is more than 3 months old, all of the components listed here will need an update.
Now, you might be lucky and be able to get a binary patch for your web browser, but most package managers don’t deal w/ binary patches, so the costs described herein are real.
This also assumes that the package manager is configured to ignore updates to all other packages in a system. In reality, it *should* update everything, as there’s really no way for it to predict which component or chain of components will be attacked.
At work I often take an Ubuntu install disk (old, because I’m lazy) and install it on a new/blank system. Then I get to see how many updates it wants and how much data it needs to download. It really isn’t much better than Windows. Remember that a Live CD works like my out of date install DVD, all components are crufty.
In my case, my bank:
1. requires a browser (10mb)
2. requires java (20mb)
3. uses pdf for documents (10mb)
So, perhaps we can skip the cost for Flash and VLC. That doesn’t help me much. And do you really want each end user to somehow be able to predict when they order their live CD which components will need updates? (If you do, you’re insane.)
People might wonder why one needs to worry about the scary web….
If people use Google to get to Facebook, is it possible they use Google to get to their bank’s login page?
Discover, for instance, has a login form on its non-https page. I called and complained about it; they insisted it was because search engines don’t index https content (yay, clueless first-level support), which means they really do expect users to search for their site. Given that their login is on discovercard.com and not discover.com (Discover the magazine owned that domain through 2007; it was somehow transferred to the credit card company before Jan 2008, according to web.archive.org), it’s hard to blame customers for this one.
My Finnish bank is worse (sorry, I keep using it and mentioning it): it’s Sampo, but http://www.sampo.com/ split off its banking service, so I can’t actually type in an address I can remember.
I also had an account with Washington Mutual. Thankfully the obvious domains (washingtonmutual.com and wamu.com) do redirect to chase.com. But as a confused user, or a semi-educated one, I might know that I’m not supposed to trust web sites. Chase doesn’t seem to have an EV cert, afaict (scary!).
As a quick survey http://nyjobsource.com/banks.html listed some banks:
* BoA is mostly good:
1. BoA.com currently goes to a page clearly indicating I got to the wrong place
2. bankofamerica.com redirects to https://www.bankofamerica.com/ with an EV cert (just to show I’m not going entirely crazy).
* Chase has a number of properties which will get you somewhere with various domains but no EV cert
1. jpmorganchase.com isn’t secure
2. chase.com redirects to secure
3. wamu/washingtonmutual redirect to chase.com
* Wachovia redirects to secure, but again without an EV cert
* Wells Fargo redirects to secure for both wf.com and wellsfargo.com, but again without an EV cert
* Citi is a mixed bag.
1. the main page isn’t secure (http://citi.com)
2. the bank page isn’t secure (http://citibank.com)
3. it does have an EV cert for https://online.citibank.com
Other than BoA, are there any banks which don’t really suck these days? I remember doing a survey about 2 years ago (I didn’t publish it), but it was rather depressing.
A few other bank notes:
Bank of America:
bofa.com redirects to bankofamerica.com, which defaults to an SSL-secured page using an EV cert.
(I’ve never seen their name abbreviated as BoA before, usually only BofA.)
The main page of usbank.com is not secured, and does have a login box on it for your username only. Once you put in the username, you are directed to an SSL-secured page using an EV cert for identity question and password prompt. Note that you _can_ access usbank.com with SSL on a standard certificate if you do so purposefully, so I’m not quite sure what prevents them from redirecting to at least that.
It seems that bankofamerica.com’s SSL is registered properly, but a scan shows cipher strength topping out at 192 bits (triple-DES), below the 256-bit standard set by NIST. See:
SSL information for bankofamerica.com:
* SSLv2: supported. Cipher specs:
1. SSL2_RC4_128_WITH_MD5
2. SSL2_RC2_CBC_128_CBC_WITH_MD5
3. SSL2_DES_192_EDE3_CBC_WITH_MD5 [0700c0]
4. SSL2_DES_64_CBC_WITH_MD5
5. SSL2_RC4_128_EXPORT40_WITH_MD5
(Connection ID: d50b3961e1a700f5b457e146853829fd)
* SSLv3: supported. Cipher spec: TLS_RSA_WITH_RC4_128_MD5 (128 bit)
* TLS 1.0: supported. Cipher spec: TLS_RSA_WITH_RC4_128_MD5 (128 bit)
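Scan output like that can be triaged mechanically. Below is a rough sketch that flags export-grade suites, single-DES, and anything offered over SSLv2; the `weak` heuristic is my own simplification based on the cipher names, not a standards-based strength calculation:

```python
# Flag weak cipher specs from a scan like the one above.
# Heuristic: export-grade suites and 64-bit single-DES are weak,
# and anything offered over SSLv2 is weak by protocol alone.
ciphers = [
    "SSL2_RC4_128_WITH_MD5",
    "SSL2_RC2_CBC_128_CBC_WITH_MD5",
    "SSL2_DES_192_EDE3_CBC_WITH_MD5",
    "SSL2_DES_64_CBC_WITH_MD5",
    "SSL2_RC4_128_EXPORT40_WITH_MD5",
    "TLS_RSA_WITH_RC4_128_MD5",
]

def weak(spec):
    return ("EXPORT" in spec
            or "DES_64" in spec
            or spec.startswith("SSL2_"))  # SSLv2 itself is broken

for spec in ciphers:
    print(("WEAK " if weak(spec) else "ok   ") + spec)
```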
But what are you suggesting, timeless, should people do to be reasonably safe when doing Internet banking ?
I personally disagree. You can theorize about how insecure a 3-month-old LiveCD is and how impractical on-the-spot updates would be, but the truth is, if people combined the LiveCD with the habit of restarting the system before doing any sensitive operation (Internet banking, CC purchases), none of the cases we read about so often in Brian’s column would have happened, even if such attacks remain theoretically possible.
Kudos to Terry, I share his disappointment in the way some vendors repeatedly promise to improve security in their products but their initiatives are again and again either ineffective or just ploys to increase market domination.
I take issue with the notion that one should avoid solutions that aren’t 100 percent perfect, but that are almost 100 percent better than the status quo.
Sure, criminals can defeat Live CDs. Are they doing that now? No. I see this comment a lot on this blog and elsewhere: Just because hackers aren’t going after Mac users and Live CD users, doesn’t mean things will stay that way. True, but that strikes me as a bit like saying to a man who can’t swim and is about to abandon a sinking ship for a life raft, “You know, that raft could spring a leak.”
I’ll see you and raise: Not only should Microsoft Windows be recalled from the market as Intrinsically Unsafe but the underpinnings of most insecure software, the “C” and “C++” programming languages, should be on the list as well. Poor type safety, insufficient bounds checking, the list goes on and on.
We have better tools. We have actual software engineering languages such as Eiffel and Ada (most notably the “Spark” dialect from Praxis) that were designed to be used in high-assurance environments. Even Microsoft has been doing quite a bit of stealth development in next-generation operating systems designed for high reliability (i.e. the “Singularity” and “Midori” OS projects).
The problem with all of this, however, is that we still consider software development and design to be some sort of “art” (or “geek craft”), rather than being recognized as an engineering discipline. Until that’s fixed, and until the mindset changes from “cool new features” to “verifiable and reliable”, we’re just going to keep having these failures, no matter how many band-aids we apply to the systems.
Which reminds me: Why don’t we have any sort of “Underwriter’s Laboratories” for software yet? Yes, I know, “money”, but still…
It has already been put into the plan in Australia, so what’s wrong with playing catch-up?
Certainly, if some kid at school is found to have a highly contagious virus I have no problem in the Health Authorities imposing a quarantine. Botnet viruses on computers should be quarantined from the Internet in an analogous manner. It’s not a question of “personal freedom” when it concerns the common good. People who use the slippery slope argument are themselves already on it.
Imagine M$, Comcast, and Verizon as “traffic cops”.
Imagine you are flagged, but it’s a “FALSE POSITIVE”.
Imagine all the people…
“Err, a Live CD does not prevent infection. At best it prevents *persisted* infection.”
The analogy to biological disease is imperfect, but it makes sense to say that malware which cannot get itself executed on future sessions has not really “infected” the computer.
The practical distinction is that an infected computer has malware running on *every* future session. A clean computer *might* contract malware during a session, and that malware *might* run, but only for the remainder of *that* session, which is a far lesser risk than a *guarantee* of malware running all the time.
“The normal cost triple is “Fast, Good, Cheap, pick two””
So what is the deal with Microsoft Windows? The Windows I know is not fast, not good (safe) online, and not cheap either.
“If your computer can be infected within 4 minutes of going online, then the fact that the infection doesn’t last when you restart the computer is irrelevant.”
That article actually makes my point: using Microsoft Windows online by itself can be so risky as to be almost useless.
Nowadays, I am willing to grant that everyone will use Windows XP SP2 or later with the firewall enabled, and an external hardware firewall / router. The hardware firewall may be almost incidental in the modern broadband installation. But we do not work without it, because we cannot trust Windows to be secure online on its own.
“Now, if your Live CD is actually configured to somehow *safely* update all critical components and all infectable components before it allows you to do anything, then *perhaps* it’s useful.”
For the vast majority of my online work, I use a Puppy Linux DVD without a hard drive. I am using that now. Puppy almost accidentally has the unique ability to update the DVD+RW with the latest browser and add-on updates. Since I live in the browser, I rarely deal with Linux, but the Linux part will fail to run almost all malware as it stands, with nothing special at all. A Windows LiveCD would not have that particular advantage, but it might be made different enough inside to crash most malware.
“The problem is that it means your Live CD will take longer to boot each time.”
I do not have to project this idea; I live with it. The Puppy Linux DVD boot does seem to take “a long time,” but in reality it is still faster than a Windows hard-drive boot with an online anti-virus update at the start. Additional sessions on the DVD each add a few seconds or so to the boot, and I have about a dozen sessions. I could cut that back to zero by burning another DVD from the current state, but the extra sessions are just not that big a deal.
“Users who had been using such a Live CD would have noticed that their “boot time” would have doubled between sp1 and sp2 (because the size of sp2 is roughly twice as large as sp1) — which also wouldn’t have put many smiles on their faces….”
If I were designing the Windows LiveDVD, I certainly would not put the full normal Windows on the DVD. It would be a different product, and would compete with existing LiveCD products.
Fortunately for Microsoft, many existing LiveCD products are quite large and awkward, requiring the CD to be kept in place. In contrast, Puppy Linux loads completely into memory, which allows the CD or DVD to be removed during operation.
“Total cost: 62mb. Download must be done each time you boot.”
No. On my system, download occurs once, provided I save the changes. For updates, I start a new session, get updates and save promptly before doing anything hinky online.
Running an older version of Flash is much less of an issue than it would be under Windows. The main malware problem is not an infection that we acquire upon going to the bank site; the problem is the infection we acquired elsewhere a few weeks ago which is still present when we go to the bank site. That sort of long-term infection does not occur on a DVD-boot system, and the malware does not run under Linux anyway.
“Now, you might be lucky and be able to get a binary patch for your web browser, but most package managers don’t deal w/ binary patches, so the costs described herein are real.”
I use the Firefox browser with extensive security add-ons, as documented in my articles in the computer security section of my site. Firefox updates itself, and if I save to a new DVD session, the next time the system comes up, I get the new browser. The add-ons update similarly.
The Firefox add-ons NoScript, Adblock Plus, Certificate Patrol, Google Docs Viewer, Perspectives, Safe, SSSPasswdWarning, WOT and others provide substantial added protection.
“In my case, my bank:
1. requires a browser (10mb)
2. requires java (20mb)
3. uses pdf for documents (10mb)”
None of these are a problem to get and use, but Java does have continuing security issues. For best security, it is important to avoid Java if at all possible.
I have no problem with I.S.P.s sending notices to infected users and downgrading their bandwidth. What I have a problem with is the Feds saying that I must use an anti-virus program or my machine will not get a PC Health Certificate that allows full speed internet access.
This dictate would be the same as if the Feds dictated that I must buy a particular health insurance or face a monetary fine.
It also assumes a fault on my part and requires that I prove otherwise. Something like a presumption of guilt.
As Krebs has pointed out on many occasions, anti-virus software isn’t the great protector it is advertised as.
I have not run anti-virus junk since 2005 and obviously have no intention of reversing course. Keep the O.S. and Apps up-to-date, run free malware scanners (yes, plural), and also run a firewall, and all should be well.
But how do you tell a bureaucrat that your PC is protected unless it has the be-all and end-all of anti-virus software with the associated paid annual subscriptions? Well, you can’t, and thus you must submit to the will of the ‘Great Father’ in Washington D.C. and do as you are told, you small person, or else.
What I feel I must do is bite the hand of the Federal Government whenever I feel it leaning heavily upon me.
Sure I could run a free anti-virus program to get a clean bill of health and then disable the stupid thing, but that’s not the point of this rant.
While I agree that common sense is by far the most important antimalware strategy, going without security software is just as short sighted as relying completely on security software and abandoning common sense.
All those patches you are dutifully installing were developed *after* vulnerabilities were found. In at least some cases, the bad guys found them first and were already using them for criminal purposes.
All your astute judgment about which sites are safe to visit and which links are safe to follow? Even the most trustworthy website/email account can be hacked if an attacker is determined. If Kaspersky, Trend Micro, and the Web Hosting Talk forum can be pwned, you can be sure that the other sites you visit are at risk, too. You may have visited a site a thousand times and be perfectly comfortable running its scripts, but it doesn’t guarantee you will never encounter an exploit there.
Having an AV program is an extra level of defense. The best ones detect about 95% of new malware. Yes, that means they miss one in twenty, but if an exploit is buried in a page’s code, it’s 19 chances in 20 you’ll get a warning.
Given the consequences to a computer of even a single failure (and the availability of free or low cost AV/antimalware programs), unless I were visiting websites in text-only mode, going bare is a risk that I wouldn’t consider taking.
As an IT security professional and cryptographer of quite a few years now, it is painfully obvious to me that ISPs AND IAPs have a self-interest in dealing with malware, bots of various sorts, and other malicious activity passing across their networks. Why they don’t do a much better job than they currently do is beyond my comprehension, unless they themselves are working with some of these organizations in some manner or another.
ISPs and IAPs don’t do a better job because keeping their customers’ PCs clean is a money-losing proposition. It has to be a matter of principle for an IxP, because if the firm were looking just at the bottom line, it would be almost totally ignored.
Frank, I disagree with you here entirely. It is in all ISPs’ and IAPs’ interest, as it reduces processor overhead at a minimum and, in doing more, reduces infection risk to the ISPs and IAPs themselves.
Can you clarify what you mean by processor overhead? Are you referring to a router’s CPU?
Malware, being passed through (not targeted at the routers themselves, to be sure), makes up a negligible percentage of an ISP/IAP’s router CPU load.
In regards to customers on an ISP infecting another customer on that same ISP, sure, that’s true, but if the ISP punts all those calls then it’s a lot cheaper than working with each customer to make sure they clean up. Have you operated an ISP helpdesk or abuse desk?
Frank, to answer your last question: yes, I have operated a help desk at an ISP/IAP many times. I was not responding to what is and what is not cheaper per se. But over time, given the increasing rate of malware, bots, etc. being launched, it is overall cheaper for everyone if the ISP/IAP addresses any and all such threats to the extent they can, not just to the extent they are willing to.
Ah, OK, so you do acknowledge that it’s more expensive.
I agree, it would be cheaper and benefit the common good if everyone cleaned up their act. How about this, you start first and I’ll start doing it when I see that it helps my bottom line. =)
You can see the problem of cost shifting and bad actors. Even if we resolve it in the USA, we still have the rest of the world sending us garbage.
Let’s be honest and admit that keeping a clean network is going to raise our costs and move forward because it’s the right thing to do.
No, Frank, I don’t agree. Evidently you didn’t understand my last reply; maybe it was too direct. Sooner or later everyone is going to have to at least make a much better effort to clean up their act, or the cost to users will exceed most current users’ ability to pay. And with new regulation not only in the US but elsewhere, especially the EU, significant pressure to get their acts cleaned up will be, or already is, mandatory.
Thanks for clarifying — you’re predicting that regulation and/or things will get so bad that ISPs will come out ahead financially by keeping their own networks clean.
If past performance is any predictor, things aren’t bad enough to force most ISPs to keep their whole network clean. Sure, there are a few bad apples that almost every ISP has that they’ll clean up, but as you already acknowledged, little is done today.
I have a severe problem with this. I provide filters that block many known and sometimes unknown bad web sites (the unknown caught with the PAC filter patterns):
Both myself and Mike Burgess (MVPHosts) are connected to the Internet through Comcast. We have to connect through somebody. Hey, myself, Mike and others have to detect and block this stuff. But I finally abandoned using OpenDNS for my DNS batch runs (one of the tests to see if hosts that were bad are still bad; they are no longer bad if they get parked or drop out of DNS) because OpenDNS informed me my machine (actually that should be plural) was the worst bot on the Internet. Now I suppose that Comcast could analyze and see that the vast majority of my packets were actually initiated with wget rather than a browser, and from the headers determine that most of the time I am using Linux or BSD. That doesn’t help the MVPHosts author though; Mike uses Windows and IE with Fiddler most of the time.
Unless we can get exclusions for security researchers, this idea is probably going to cause more problems than it solves. I need some proof from ISPs that their bot detection will not give security researchers a FP ID (perhaps with an exclusion for our IP addresses). We also need proof that there won’t be a lot of FPs for connections where there is nothing but Macintosh and Linux machines on the user’s end. Now that may change if the crackers begin targeting Linux and Macintosh, but so far over 99% of the malware effort is directed towards machines running Microsoft Windows.
Comcast blocked me from sending email out on port 25 after some unknown person or people complained I was spamming. They did it with no proof. Now I have port 25 blocked in all directions at the firewall to prevent even the off chance of somebody bouncing port 25 packets off of my firewall. How many of you have a firewall blocking ports 25, 1025-1032, 1433, 1434 and other ports I have identified with worm activity in all directions? Botnet detection is more difficult because most of the time bots use port 80. If these bot detection efforts are heavy-handed and don’t have ways of excluding those of us doing the detection, then we won’t have a way to discover what is bad. And if there is no way to handle the FPs and reduce them, in addition to actually enhancing the TNs, it may end up being like the alarm in “How to Steal a Million”: after a while the botnet detection will be turned off.
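The blocking policy described above boils down to a simple port-set check. This is a sketch of the logic only; a real deployment would be firewall rules, and the port set is just the one named in this comment:

```python
# Sketch of the "block in all directions" policy described above.
# Port 25 plus ports historically associated with worm activity
# (the set named in the comment; adjust for your own network).
BLOCKED_PORTS = {25, 1433, 1434} | set(range(1025, 1033))  # 1025-1032

def allowed(port):
    """Return True if traffic to/from this TCP port should pass."""
    return port not in BLOCKED_PORTS

print(allowed(25), allowed(80), allowed(1030))  # False True False
```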
“We also need proof that there won’t be a lot of FPs for connections where there is nothing but Macintosh and Linux machines on the user’s end.”
We all can do without False Positives no matter what kind of machine we have, but FPs are an inherent problem with anti-virus scanners. FPs will happen. There will be no such “proof.”
“Now that may change if the crackers begin targeting Linux and Macintosh but so far over 99% of the malware effort is directed towards machines running Microsoft Windows.”
Malware will target only Windows for the foreseeable future.
Malware made for criminal profit lands on unknown web machines and must deal with what it finds there. About 91 percent of web browsing is done from Microsoft Windows.
Attackers get a choice: to prepare for Windows, and get a 91 percent chance of landing on a compatible machine. Or they can prepare for a secondary OS and get a 6 percent or a 1 percent chance of landing on that sort of machine.
The choice is not rocket science, it is a 91 percent hope of success versus a 6 percent hope of success. Malware targets Windows because Windows has the largest market share, and that will continue.
If you want to perform network security research, I’d recommend using a commercial internet connection rather than consumer broadband.
I swear by http://www.HostsFile.org/ and http://www.dnsstuff.com most of the time. No commercial connection is needed to do a pretty good analysis of overall security.
In general I am not sure that the FCC is yet capable of considering such a move, given their own questionable security status. See:
http://www.dnsstuff.com/tools/dnsreport?domain=fcc.gov&format=raw&loadresults=true&token=26f1e9eb58ffa1482516a8042f119014 for instance. Also see:
especially the fact that they are still at SHA-1 when NIST has set the standard to SHA-2 (256-bit).
The FCC’s DNS is not too far from being correct. The root and their own servers don’t match by one host name, ns.fcc.gov and ns1.fcc.gov, respectively, but those two host names resolve to the same IP address.
But you’re right, it’s amazing how many domain names don’t pass a basic DNS check.
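The mismatch described above is easy to check mechanically once you have both NS sets in hand. A sketch using the host names from this thread; the shared IP address below is a placeholder assumption, standing in for whatever both names actually resolve to:

```python
# Parent delegation vs. the zone's own NS records, as described above:
# the names differ by one host, but both resolve to the same address.
# The IP here is a placeholder assumption, not the real one.
parent_ns = {"ns.fcc.gov": "10.0.0.53"}
zone_ns = {"ns1.fcc.gov": "10.0.0.53"}

names_match = set(parent_ns) == set(zone_ns)
addrs_match = set(parent_ns.values()) == set(zone_ns.values())

# The "not too far from correct" case: names differ, addresses agree.
print(names_match, addrs_match)  # False True
```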
Yes, the FCC’s DNS config is not too far from being correct. However, my more important point was that because it isn’t correct, it seems to me the FCC shouldn’t be ‘calling out’ others’ security or configuration inconsistencies. AND yes, it is amazing to me how many .GOV domains’ DNS configs are very similar to fcc.gov’s.
BTW, along the line of this FCC consideration did anyone else see:
From that article:
“The government is reviewing an Australian program that will allow Internet service providers to alert customers if their computers are taken over by hackers and could limit online access if people don’t fix the problem.”
It seems hard to argue against the concept of an ISP detecting improper use of their own customer connections. The problem comes in the proposed remedy of “limiting online access” and the test “if people don’t fix the problem.” It is wrong to hold the user responsible for a problem they cannot detect, because “the problem” could be elsewhere.
A problem is pretty difficult to fix when the fixer cannot see it. The very first use of government authority ought to be to require manufacturers to make available free tests which expose the presence of malware. THEN people can be held responsible for fixing their problem, OR they can present evidence that it is NOT their problem.
“Obama administration officials have met with industry leaders and experts to find ways to increase online safety while trying to balance securing the Internet and guarding people’s privacy and civil liberties.”
We already know how to increase online safety. The solution is very simple: Just do not use Microsoft Windows online. No more is needed right now.
Those who have a problem with not using Microsoft Windows online need to take that up with Microsoft. Yes, Microsoft is working hard at patching faults; no, all their patching has not stopped the malware wave, nor even slowed it down. The fact that Windows does not provide the online security our society needs should be the basis for class-action lawsuits, bank loss lawsuits and government action.
This is not a case of government against a particular company, it is a case of a particular company having gotten society into such a hole that government needs to force a stop to continued digging.
If and when the Windows market share declines to something comparable to some other system, THEN we may see malware start to target other systems. That will take years. And we can prevent even that by using computer systems which are difficult or impossible to infect.
Government has a role in requiring mass-produced computers to meet standards for being difficult or impossible to infect. Basically that means systems must boot using data from hardware storage which independently prevents unauthorized content changes. Perhaps existing FCC type-acceptance rules would suffice.
I in part agree with most of your comments/remarks here. However I don’t know an OS or NOS that is infection/malware proof or even more resistant than another. I do believe that MS has come far in the past very few years, simply because it has had to in order to catch up with Apple or Linux in many security respects, but not all. MS still has a ways to go. But given that I have put on 85 patches for Cisco’s IOS lately and another 6 for Red Hat Linux 64-bit, I am not at all ready to get too into MS bashing, even though I am relatively sure that MS can do better, as could Google for that matter.
“I don’t know an OS or NOS that is infection/malware proof or even more resistant than another.”
Malware is not about technical weakness in the OS. All large, complex systems have errors and security flaws. All OS’s can be attacked with malware. Criminal attackers focus on Microsoft Windows because that gives them access to the largest number of vulnerable users. They cannot get to nearly as many users by attacking another OS, even if that would be technically easier.
(There are, of course, some limited malware attacks on secondary OS’s, which may be for development or propaganda instead of profit. And targeted attacks on specific businesses or groups can be effective no matter what the OS. But most malware is profit-oriented and “thrown to the winds” to arrive wherever it does. 91 percent of the time it will find itself on a Windows system, so Windows is what criminals prepare their malware to run on.)
Because technical quality has only a minor impact on malware, patching cannot solve the malware problem. Using a different OS online does.
Having a computer which is difficult or impossible for malware to infect is a more fundamental way to address the malware problem. This does not mean that malware cannot get to a system and run. Lack of infection just means that malware cannot install itself to run on every subsequent session. Without infection, we can be free of any operating malware every time we restart the system. Disallowing infection marginalizes the very concept of a bot.
I disagree with your statement “Malware is not about technical weakness in the OS.” Some malware is related to a particular OS and, of course, some is not. No OS is immune; some are more susceptible to malware than others, though this dynamic changes from time to time depending on how well and how proactively the OS vendor responds at any given span of time.
Otherwise, the remainder of your comments was VERY good! >;)