September 2, 2010

“Our dependence on all things cyber as a society is now inestimably irreversible and irreversibly inestimable.”

Yeah, I had to re-read that line a few times, too. Which is probably why I’ve put off posting a note here about the article from which the above quote was taken, a thought-provoking essay in the Harvard National Security Journal by Dan Geer, chief information security officer for In-Q-Tel, the not-for-profit venture capital arm of the Central Intelligence Agency.

The essay is well worth reading for anyone remotely interested in hard-to-solve security problems. Geer is better than most at tossing conversational hand grenades and then walking away, and this piece doesn’t disappoint. For example:

“Looking forward, without universal strong authentication, tomorrow’s cybercriminal will not need the fuss and bother of maintaining a botnet when, with a few hundred stolen credit cards, he will be able to buy all the virtual machines he needs from cloud computing operators. In short, my third conclusion is that if the tariff of security is paid, it will be paid in the coin of privacy.”

Geer’s prose can be long-winded and occasionally sesquipedalian (such as the phrase “Accretive sequestration of social policy”), but then he turns around and shows off his selective economy with words by crafting statements like:

“…demand for security expertise so outstrips supply that the charlatan fraction is rising.”

In the essay, Geer touches on a pet issue of mine: Accountability for insecurity. I recently wrote an editorial for CSO Online addressing a public request for advice by the Federal Communications Commission (FCC), which wants ideas on how to craft a “Cybersecurity Roadmap” as part of its $7 billion national broadband initiative.

In that column, I suggest that the FCC find a way to measure and publish data about the number and longevity of specific cyber security threats resident on domestic ISPs and hosting providers. I also suggest that the government could achieve this goal largely by collecting and analyzing data from the many mainly volunteer-led efforts that are already measuring this stuff.

Geer warns readers that “the demand for ‘safe pipes’ inexorably leads to deputizing those who own the most pipes.” But mine isn’t a “punish or regulate ISPs-for-having-lots-of-security-problems” approach. Instead, it’s more of a “publish a reputation score with the imprimatur of the federal government in the hopes that the ISPs will be shamed into more proactively addressing abuse issues” idea.

Who knows if my idea would work, but it wouldn’t be terribly risky or expensive to try. After all, as Geer said, “security is a means and that game play cannot improve without a scorekeeping mechanism.”

“These are heady problems,” he concludes. “They go to the heart of sovereignty.  They go to the heart of culture.  They go to the heart of ‘Land of the Free and Home of the Brave’.  They will not be solved centrally, yet neither will they be solved without central assistance.  We have before us a set of bargains, bargains between the Devil and the Deep Blue Sea.  And not to decide is to decide.”

Cue the music.

21 thoughts on “Toward a Culture of Security Measurement”

  1. Martijn Grooten

    I arrived at this page through a link in my RSS reader, which happens to be Google reader. Now you suggest I add this page to my RSS reader…

    1. Martijn Grooten

      (Sorry, without my off-topic SGML tags that looks even more like nitpicking. Please take it as an FYI and feel free to delete both comments!)

      1. Dominic

        Just a by-the-way: the Google Reader RSS feed is part of iGoogle. Would I be right in thinking that you’re checking where people have come from with the Referer header? In which case, the referer should end in /ig.

  2. AlphaCentauri

    The tricky thing is how ISPs can get control of all their net neophytes with compromised computers without unnecessarily restricting their more sophisticated users.

    AOL, despite having very naive users, has a very low rate of zombification. But I doubt too many of the people reading this blog would want to be forced to use AOL or to have their own ISPs become similarly paternal.

  3. Darren Reid

    What a fantastic paper. Don’t just rely on Brian’s blog post about it: go read the whole thing! It’s worth the time spent; you will be constantly stumbling over great quotable passages the whole way through.

  4. Deborah Lafky

    Great blog post and thanks for pointing us to this excellent essay.

  5. Andrew Johnson

    Here is my response to the conversational hand grenade mentioned above:

    I find it hard to believe that the business model of operating cloud servers will scale beyond 5 or 10 large companies. The economies of scale simply do not support many more players. Already the smaller guys seem to have trouble competing on price.

    It appears to me that the existing “big” players (Amazon, Rackspace, etc.) have robust security policies. I’ve had my own cloud instances compromised by hackers, and the response was swift.

    Publicly traded, western run companies don’t play around with security compromises. I suspect that the future will get better, not worse, as internet infrastructure consolidates into the hands of a few very large companies.

    There is room for government involvement, but it appears to me they are trying to insert themselves into the places that don’t need it.

  6. Terry Ritter

    In the cited article, we find Geer encountering an issue we have repeatedly seen on this blog, so it seems strange that he finds himself boxed-in:

    “I ask this: if it is not the responsibility of the end user to avoid being an unwitting accomplice to constant crime, then whose responsibility is it?”

    My answer: “Primary responsibility for avoiding malware crimes should be assigned to the system hardware and software manufacturers, since they have built and sold systems which allow their customers to be exploited.”

    From my article “The Banking Malware Mess”:

    “If operating a computer is like driving a car, we ought to be somewhat familiar with manufacturing problems and recalls. If a car, in its normal environment, suddenly became undetectably dangerous to operate, would we blame the driver, or would we blame the car and the manufacturer?”

    Geer’s oddly phrased “constant crime” seems to be what we call *infection*: that which activates a malware bot on each and every computing session. “Constant crime” invites us to see that *transient* crime also exists, in which malware may run yet not install, and so be criminal only during one session. But it is the *infection* which is the problem, because that may be hundreds of times more frequent. And *infection* can be absolutely prevented by hardware and software design.

    The obvious example is a Linux LiveCD, in which the operating system (OS) code absolutely cannot be infected and so starts clean on every session. But more conventional systems could be designed with the same security property. They just are not.

    Instead of demanding that users control what they cannot detect, government can and should require manufacturers to supply an easily-used package to *certify* the lack of infection, or re-install the system so it is no longer infected.
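    Ritter’s “certify the lack of infection” idea amounts to checking every static boot-time file against a vendor-published list of known-good hashes. A minimal sketch of that audit, assuming a hypothetical manifest mapping file paths to SHA-256 digests (no real vendor ships exactly this):

```python
import hashlib
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large binaries don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


def certify(root: Path, manifest: dict[str, str]) -> list[str]:
    """Compare each file named in the vendor's manifest against its
    known-good digest. Returns the files that are missing or altered;
    an empty list means the static files check out clean."""
    failures = []
    for rel_path, expected_digest in manifest.items():
        target = root / rel_path
        if not target.is_file() or sha256_of(target) != expected_digest:
            failures.append(rel_path)
    return failures
```

    The hard part Ritter points at is not the hashing, of course, but getting manufacturers to enumerate and publish what a clean system looks like in the first place; dynamic stores like the Windows Registry have no such known-good baseline.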

    1. Sile

      I agree that when you buy a new car, you have a reasonable expectation that it will not become a danger to you or others on the road.

      However, it does fall on the car owner to be responsible for the maintenance of their vehicle. I would be less inclined to sympathize with someone who crashed if it was because their brake pads had worn out and they’d neglected to repair it. The problem is that people don’t see computers as needing maintenance.

      We have to figure out some way to get people educated about keeping their computers in good repair. And here’s where I would join you in asking the software and hardware manufacturers for their work. The consumer needs a place they can go to receive trusted support to fix their PC. Part of that burden is being able to establish a trusted location (physical and digital) and maintaining that integrity (chasing down phishers who will undoubtedly love having a secure site to mimic).

      1. Terry Ritter

        “However, it does fall on the car owner to be responsible for the maintenance of their vehicle. I would be less inclined to sympathize with someone who crashed if it was because their brake pads had worn out and they’d neglected to repair it. The problem is that people don’t see computers as needing maintenance.”

        AFTER we have a vehicle which starts out safe, and can be kept safe, THEN we can hold an owner responsible for maintenance. Currently, computers are not like that.

        The malware problem is not the user or their lack of maintenance; the problem is the equipment. We know this because our computers do not even start out safe, before any maintenance is required. Nor could repair facilities solve the problem, since people would have to take in their computers every day before doing online banking.

        Malware is just not like it was a decade ago. Absent bandwidth logging from an external firewall (which also may not be definitive), detecting the worst modern malware by normal means may be almost impossible.

        On the other hand, the OS manufacturer knows every boot step and static file needed for run-up, and their exact contents. They can be checked and certified as uninfected. Dynamic files such as the Windows Registry may be a problem, but Microsoft designed those problems. Features which cannot be secured need to be removed from their design.

        Government regulation forces safe design in cars, but not in computers used online. It may not be a coincidence that our computers are unsafe.

    2. eCurmudgeon

      My answer: “Primary responsibility for avoiding malware crimes should be assigned to the system hardware and software manufacturers, since they have built and sold systems which allow their customers to be exploited.”

      Which touches on the fundamental issue: So long as a computer allows any arbitrary executable to run on the system, it will essentially be insecure.

      There is a solution to this, which is to design systems to run on an “application whitelist” model, where only executable code that has been marked as “approved” by the system is allowed to run. Problem is, who decides what that “approved” list is?
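      A hash-based allowlist is a simplification of what shipping systems like AppLocker or App Store code signing actually do (those use signed policies and certificate chains), but the default-deny decision described above can be sketched roughly like this (names and the digest-set policy format are illustrative only):

```python
import hashlib
from pathlib import Path


def approved_to_run(binary: Path, whitelist: set[str]) -> bool:
    """Default-deny gate: permit execution only when the binary's
    SHA-256 digest appears on the approved list. Unknown code,
    including a tampered copy of an approved program, is refused."""
    digest = hashlib.sha256(binary.read_bytes()).hexdigest()
    return digest in whitelist


def launch(binary: Path, whitelist: set[str]) -> str:
    """Stand-in for an OS loader: report the decision instead of exec'ing."""
    return "run" if approved_to_run(binary, whitelist) else "blocked"
```

      The sketch makes the comment’s open question concrete: the code is trivial, but someone still has to decide which digests go into the approved set, and re-approve every legitimate software update.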

      I suspect that we’ll see a solution arise based on the model of the Apple “App Store”. Under this model (say, for future versions of Mac OS and Windows) the operating system will only allow the execution of code that has been vetted and approved by the vendor (and subsequently signed with a manufacturer-specific digital signature).

      It’s a model that has been shown to work in the mobile-device space, and it would do wonders to clean up the software ecosystem.

      1. Jane

        Isn’t Windows’ “UAC” attempt a form of whitelisting? Only, instead of using a checksum or anything else to permanently whitelist, the user’s asked every time?

        1. BrianKrebs Post author

          I would liken Windows UAC more to a trigger lock on a gun than to a whitelist. There is not a lot of intelligence or whitelisting there (MS prompts even on its own executables).

  7. Curt Wilson

    Some want to end the menace of cybercrime, but many don’t, because it’s good business to keep a threat alive and even overhyped. We have seen the same technique used by governments to justify shady agendas, and with the increasing tendency toward cyberwar rhetoric, I suspect we’ll be subjected to even more creep toward police-state mentalities that may provide only partial relief.

    As long as cybercrime makes money, it will continue. If a group could somehow undermine the financial benefits of malware/crimeware, it would strike a decisive blow; but as long as companies look the other way (or worse, facilitate) while *they* are making money (bulletproof hosting comes to mind), the problem will just continue.

    Robust patching to eliminate known exploits and increased security awareness are two components that should help, but for logistical, economic and political reasons it’s easier said than done. As long as many perceive the cost of compromise as less than the cost of securing, the incentive is not strong enough where functionality and convenience are king. This is a thorny problem with roots in the social and economic construction of culture, and a solution won’t be quick. But we have to do our best in the meanwhile.

  8. Arctic Hare

    ” the charlatan fraction is rising.”

    That’s the understatement of the year. Reading Dan Geer after all the junk out there is like stumbling upon Thomas Jefferson after watching nothing but cable.

  9. Sy Burr

    Cyber cyber cyber cyber cyber cyber cyber cyber cyber cyber … argh!

    Let the nonsense term “cyber” be stricken from every book and tablet. Stricken from every pylon and obelisk. Let the term “cyber” be unheard and unspoken, erased from the memory of man, for all time!

    Dan Geer knows better, but I suppose he has to tailor his cyberspeak to his cyberaudience.

  10. stvs

    A nod to the tradeoffs between the dangers and benefits of Microsoft Windows’ ubiquity: “cascade failure is so much easier to detonate in a monoculture—when the attacker has only to write one bit of malware, not ten million. The idea is obvious; believing in it is easy; acting on its implications is, evidently, difficult. Despite what you might think, I am sympathetic to the actual reason we continue to deploy computing monocultures.”

    Compare to the Wine Project’s (“Wine is not a [Windows] Emulator”) justification: “Large homogeneous populations are a risk to society. … Microsoft Windows is run on an overwhelming proportion of personal computers. … One on which most governments, most businesses, and many households depend on. … This issue is now considered serious enough that security analysts are calling our reliance on Microsoft Windows a threat to national security.”

    I also liked the comments about these other tradeoffs:

    “Freedom, Security, Convenience: Choose Two”.
    “security and privacy are a zero sum game.”

    Recent WSJ reporting illustrates how monocultures are a threat to security and privacy alike; see “Microsoft Quashed Effort to Boost Online Privacy.” The only solution to the poor security and privacy built into web browsers by design is for people to abandon a good bit of convenience, which appears to be the real threat to security and privacy.

  11. xAdmin

    Overall, I like what Geer has to say, although it is at times dripping with unnecessary prose.

    Although he refers to them as “biases”, they are actually major tenets of security, “security is a means, not an end”, and “security is about risk management”. Later he refers to “defense in depth” (one of my favorites! :P).

    I also like the first part of the statement, “To move from a culture of fear, to a culture of awareness, to a culture of measurement.” Fear is good; it motivates us to take measures to protect ourselves. It is irrational fear that is destructive. A healthy fear of your computer getting compromised should create awareness that in turn causes you to learn about the potential threats and take appropriate countermeasures to reduce your risk to those threats, thus “Security is about risk management.” As with any type of security, it hinges on awareness! How many times have we been told to be aware of our surroundings when it comes to physical security? There’s a very good reason for that! The bad guys are hoping to prey on the unaware, as they are much easier to take advantage of. This applies to other types of security as well!

    I don’t agree with the “we are now faced with Freedom, Security, Convenience: Choose Two.” Why restrict ourselves to only two? In the right balance of each, all can be obtained at the same time. The trick is not to overdo any one of them. Too much security and you lose freedom and convenience, too much freedom or convenience and you lose security.

    In the end, to me Geer is basically saying that we need to find a balance between government, private sector, and personal responsibility regarding computer security. I’ll take that further and apply it to our society/culture as a whole! We are way off balance with many eschewing personal responsibility and looking to others, in particular government for answers, while greed dominates not only the private sector, but government and the individual! Is it any wonder we find ourselves in this current mess? 🙁

  12. Julie Squires

    I like his writing style; the line you excerpted reminds me of Steinbeck (“…demand for security expertise so outstrips supply that the charlatan fraction is rising.”).

    To Terry Ritter’s comment –
    [“I ask this: if it is not the responsibility of the end user to avoid being an unwitting accomplice to constant crime, then whose responsibility is it?”

    “My answer: ‘Primary responsibility for avoiding malware crimes should be assigned to the system hardware and software manufacturers, since they have built and sold systems which allow their customers to be exploited.’”]

    – SRA International, picked by AT&T (announced this morning), will provide secure, encrypted phone calls via off-the-shelf BlackBerry smartphones over the AT&T wireless network.

    If tampered with, the TrustChip in the phone self destructs. If lost, the TrustCenter turns it off.

    Read more here:

    Thanks, Brian. Best, Julie

    [Disclosure: We are a PR firm and SRA is a client. Hey, I am also a private citizen and this is cool; makes each one of us more responsible.]
