November 18, 2010

Once or twice each year, some security company trots out a “study” that counts the number of vulnerabilities that were found and fixed in widely used software products over a given period and then pronounces the worst offenders in a Top 10 list that is supposed to tell us something useful about the relative security of these programs. And nearly without fail, the security press parrots this information as if it were newsworthy.

The reality is that these types of vulnerability count reports — like the one issued this week by application whitelisting firm Bit9 — seek to measure a complex, multi-faceted problem from a single dimension. It’s a bit like trying to gauge the relative quality of different Swiss cheese brands by comparing the number of holes in each: The result offers almost no insight into the quality and integrity of the overall product, and in all likelihood leads to erroneous, even humorous, conclusions.

The Bit9 report is more notable for what it fails to measure than for what it does, which is precious little: The applications included in its 2010 “Dirty Dozen” Top Vulnerable Applications list had to:

  • Be legitimate, non-malicious applications;
  • Have at least one critical vulnerability that was reported between Jan. 1, 2010 and Oct. 21, 2010; and
  • Be assigned a severity rating of high (between 7 and 10 on a 10-point scale in which 10 is the most severe).

The report did not seek to answer any of the questions that help inform how concerned we should be about these vulnerabilities, such as:

  • Was the vulnerability discovered in-house — or was the vendor first alerted to the flaw by external researchers (or attackers)?
  • How long after being notified of the flaw, or discovering it, did it take each vendor to fix the problem?
  • Which products had the broadest window of vulnerability, from notification to patch? (A rough sketch of this calculation follows this list.)
  • How many of the vulnerabilities were exploitable using code that was publicly available at the time the vendor patched the problem?
  • How many of the vulnerabilities were being actively exploited at the time the vendor issued a patch?
  • Which vendors make use of auto-update capabilities? And for those that do, how long does it take “n” percent of customers to be updated to the latest, patched version?
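
To make the window-of-vulnerability idea concrete, here is a minimal sketch, in Python, of how such a measurement might be computed from per-flaw notification and patch dates. The vendors and dates below are entirely hypothetical, invented for illustration; they are not drawn from any real dataset.

```python
from datetime import date

# Hypothetical flaw records: (vendor, date vendor was notified, date patch shipped).
# All vendors and dates are invented for illustration only.
flaws = [
    ("Vendor A", date(2010, 1, 4), date(2010, 1, 12)),
    ("Vendor A", date(2010, 6, 1), date(2010, 6, 21)),
    ("Vendor B", date(2010, 3, 15), date(2010, 3, 17)),
]

# Window of vulnerability for one flaw: days from notification to patch.
windows = {}
for vendor, notified, patched in flaws:
    windows.setdefault(vendor, []).append((patched - notified).days)

# Summarize per vendor: the broadest window and the average.
for vendor, days in sorted(windows.items()):
    print(f"{vendor}: longest window {max(days)} days, "
          f"average {sum(days) / len(days):.1f} days over {len(days)} flaws")
```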

The reason more security companies do not ask these questions is that finding the answers is time-consuming and difficult. I should know: I volunteered to conduct this analysis on several occasions over the past five years. A while back, I sought to do this with three years of critical updates for Microsoft Windows, an analysis that involved learning when each vulnerability was reported or discovered, and charting how long it took Microsoft to fix the flaws. In that study, I found that Microsoft actually took longer to fix flaws as the years went on, but that it succeeded in convincing more researchers to disclose flaws privately to Microsoft (as opposed to simply posting their findings online for the whole world to see).

I later compared the window of vulnerability for critical flaws in Internet Explorer and Mozilla Firefox, and found that for a total of 284 days in 2006 (or more than nine months out of the year), exploit code for known, unpatched critical flaws in pre-IE7 versions of the browser was publicly available on the Internet. In contrast, I found that Firefox experienced a single period lasting just nine days during that same year in which exploit code for a serious security hole was posted online before Mozilla shipped a patch to fix the problem.
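
A figure like those 284 days is, presumably, the union of overlapping per-flaw exposure intervals rather than a simple sum; otherwise, stretches where two unpatched flaws overlapped would be double-counted. Here is a minimal sketch of that calculation, again in Python and again with invented intervals rather than the actual 2006 data:

```python
from datetime import date, timedelta

# Hypothetical exposure intervals: (exploit code published, patch shipped).
# A day counts as "exposed" if at least one known, unpatched flaw had
# public exploit code on that day. All dates are invented.
intervals = [
    (date(2006, 2, 1), date(2006, 4, 15)),
    (date(2006, 4, 1), date(2006, 5, 10)),  # overlaps the first interval
    (date(2006, 9, 3), date(2006, 9, 12)),
]

# Use a set of dates so overlapping intervals are counted only once.
exposed_days = set()
for start, end in intervals:
    day = start
    while day < end:  # exposure ends the day the patch ships
        exposed_days.add(day)
        day += timedelta(days=1)

print(f"Total exposed days: {len(exposed_days)}")
```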

Bit9’s vulnerability count put Google Chrome at the Number 1 spot on its list, with 76 reported flaws in the first 10 months of this year. I’d like to propose that — by almost any objective measure — Adobe deserves to occupy the first, second and third positions on this grotesque vulnerability totem pole, thanks to vulnerabilities in and incessant attacks against its PDF Reader, Flash and Shockwave software.

For one thing, Adobe appears to have had more windows of vulnerability and attack against flaws in its products than perhaps all of the other vendors on the list combined. Adobe even started this year on the wrong foot: On Dec. 15, 2009, the company announced that hackers were breaking into computers using a critical flaw in Reader and Acrobat. It wasn’t until Jan. 7 — more than three weeks later — that the company issued a patch to fix the flaw.

This happened again with Adobe Reader for 20 days in June, and for 22 days in September. Just yesterday, Adobe issued a critical update in Reader that fixed a flaw that hackers have been exploiting since at least Oct. 28.

True, not all vendors warn users about security flaws before they can issue patches for them, as Adobe, Microsoft and Mozilla do: in many ways, that openness makes these vendors easier to hold accountable. But I think it’s crucial to look closely at how good a job software vendors do at helping their users stay up-to-date with the latest versions. Adobe and Oracle/Sun, the vendors on the list with the most-attacked products today, both have auto-update capabilities, but these updaters can be capricious and slow.

Google and Mozilla, on the other hand, have helped to set the bar on delivering security updates quickly and seamlessly. For example, I’ve found that when I write about Adobe Flash security updates, Google has already pushed the update out to its Chrome users before I finish the blog post. The same is true when Mozilla issues patches to Firefox.

Marc Maiffret, CTO at eEye Digital Security, also took issue with the Bit9 report, and with Google’s position at #1.

“While many vulnerabilities might exist for Chrome, there are very few exploits for Chrome vulnerabilities compared to Adobe,” Maiffret said. “That is to say that while Chrome has more vulnerabilities than Adobe, it does not have nearly the amount of malicious code in the wild to leverage those vulnerabilities.”

There is no question that software vendors across the board need to do a better job of shipping products that contain far fewer security holes from the start: A study released earlier this year found that the average Windows user has software from 22 vendors on her PC, and needs to install a new security update roughly every five days in order to use these programs safely. But security companies should focus their attention on meaningful metrics that drive the worst offenders to improve their record, making it easier for customers to safely use these products.


31 thoughts on “Why Counting Flaws is Flawed”

    1. george

      Indeed a very good post. I liked the Swiss cheese analogy and I fully agree that Adobe and Sun/Oracle ought to share the first four positions on the list. I cannot understand how 2.4MB worth of software (Adobe Flash) manages to have a double-digit number of bugs and vulnerabilities discovered every two months, many of them exploitable and exploited in the wild, while no new features have been introduced in quite a while. Add to that the notorious stability and performance problems it introduces, and it pretty much sets a record in negative terms.
      Yet, despite being flawed themselves, these kinds of rankings based on counting flaws tend to be widely recirculated, even in mainstream media. I hope, possibly in vain, that some vendors on this top ten will want to avoid being in the same position next year and will decide to assign more resources to security and testing.

  1. Tim Benton

    Thank you Brian for succinct reporting of the daily fight against malware. This article helps to cut through all of the marketing hype and BS that folks in IT have suffered with for decades.

    Thanks again.

  2. Tom Cross

    Good report, BK. What caught my eye was the “software from 22 vendors” statement. How many of those applications were installed by the PC manufacturer (bloatware)?

  3. Michael Argast

    Brian – great article. I agree with your analysis of Chrome – personally, with its transparent auto-updating functionality I think it is staking out a leadership position in terms of browser security.

    I also agree Adobe should be at the top of the list – I’m hoping their new sandbox technologies really help reduce the very real security impact their ubiquity has had in the last couple of years. Also, better defaults around JavaScript would certainly help.

    Your analysis of how the security industry feeds this is spot-on – Top 10 lists, etc, don’t often tell the whole story, but they do get a lot of press pick-up because of the ability to easily consume numbers or create sensational headlines.

    I feel in this conversation it makes sense to mention Java as a baddie as well – broadly installed, broadly attacked, and a very real source of insecurity.

  4. W J Freeman

    Useful information. I lack the technical skills to make much use of this information, but it will make me view those “reports” more critically.

    This post does raise a question: Are you certain you should be posting a blog?

    I find your reports to be reasonable and even-handed. There must be a huge body of readers who do not take you seriously because of that.

    1. Wladimir Palant

      I have seen references to Brian’s articles in the press both before and after he left Washington Post – I don’t think that anybody takes him less seriously just because he is a “regular” blogger now.

      1. Joshua

        I receive 90% of my news and information from blogs nowadays. I don’t care for nor trust the mainstream media on many issues, as they are nothing but headlines and propaganda and have zero care for true reporting. On the other hand, blogs are maintained not as a job, but as a passion. I refer to passion in many of my posts because it draws an obvious line between them and those who only work for a paycheck. That is a blanket statement, I know, and on an individual basis that may not always be the case. Though when speaking of the MSM, you have to get by the editor. Who does Brian report to?

    2. BrianKrebs Post author

      I have no idea whether some percentage of people take my work less seriously because it shows up on an independent blog; certainly, there are some who don’t get their news beyond the big news sites, although I like to think that more and more news-gatherers shop at the big box stores AND the farmer’s markets, if you will 🙂

      You can certainly help, W.J., by referring your friends and family here. I try to keep this blog a mix of interesting and useful content, but a huge percentage of readers are first-time visitors.

      Your continued feedback and support are appreciated.

      1. Jim

        Brian, your blog is my main source for computer security issues. Most often, your advice/fixes precede those from the parties responsible for updates. You provide useful links and expert quotes to lend credibility to your advice.

        My personal website has proudly offered a link to this blog for some time now. As always, thank you for being Semper Paratus.

      2. JTW

        Some people think anyone not paid by a big publication must be paid by “big business” and therefore can’t be impartial.

        And that does indeed happen: there are publications that are little more than collections of advertising thinly veiled as “reviews” and “editorials”.

    3. KFritz

      Let me guess. You’re trying to “praise this blog w/ faint damning?”

    4. EJ

      I definitely detect a tongue-in-cheek sarcasm in WJ Freeman’s comment, aimed squarely at blogs and blog readers in general, in which he’s pointing out the stereotypical aspect of both (i.e. usually neither fair nor balanced). As a connoisseur of sarcasm, I’ve come to realize that it rarely translates well to the Net unless it’s not so subtle.

      I’m fairly certain s/he’s praising this item and blog as going against those stereotypes.

  5. Wladimir Palant

    Brian, thank you for this great article. I often have to argue with people who try to defend bogus claims with numbers from these “studies”. Unfortunately, very few people understand how little sense it makes to count security flaws – which makes it a perfect tool for spreading FUD. Your 2006 article comparing vulnerability windows was great, since it made “real” security visible to people who are less knowledgeable. Too bad that not many people are willing to do the work of putting these numbers together when counting fixed security flaws is so much easier.

  6. TenorBrian

    Excellent post…I was thinking exactly along the same lines. Many of Chrome’s vulnerabilities are discovered by Google themselves, and further, almost none reach their definition of critical; they don’t execute outside of the Chrome sandbox. In addition, because of their uber-fast auto updates, not only Chrome but also the embedded Flash player is always updated. And they ship regular updates every 45 days now, so nothing will go too long, and I’ve got to believe that if a true actively exploited zero day for Chrome popped up, Google would crush it in no time. I think that purveyors of malware will likely target other software because their window of opportunity with Chrome (or Firefox, for that matter) will be very short indeed. Why would someone take the time to develop a zero day when they know it will be closed shortly thereafter?

  7. Joshua

    Excellent post. I love to see passionate people being passionate about people who have no passion. So much of our world today is a headline. People have no depth to their reporting, and other people gullibly tweet it around the world. Everything in the world seems so fake. On that note, I’m guessing that’s the reason why scareware is so successful.

  8. Антон Гандоныч

    Krebs, everyone here is praising your post, but if you ask me, it’s complete garbage. You used to dig up interesting topics, but lately you’ve gone stale.

    1. BrianKrebs Post author

      Well, everyone has an opinion. I haven’t made up my mind about whether to delete your rude comment. Such hostility!

      The only reason I haven’t is that I’m fascinated by the internationalized domain in your sig.

  9. JohnP

    I’ve worked as a software developer on commercial software and in SEI Level 5 development environments for flight control and guidance software for spacecraft.

    Comparing the number of bugs of any kind between different programs isn’t completely useless, but it isn’t very useful either. It could mean the testers are better or the developers get bonuses for finding their own bugs, or the requirements really suck. There’s no way for an outsider to know.

    Over time, statistics can be gathered at different phases of development, and for a well-controlled development process, predictions CAN be made with over 90% accuracy. I’ve seen this over many years of development, both for teams with little process but expert programmers and for the most controlled development processes. Based on the statistics for the GN&C team, if 2 errors were found during a peer code review (desk check), then there was over an 80% chance that at least 1 more critical error was still in the code. The team decided any errors getting through were bad enough, so a re-review was forced when that happened – human lives were at stake. Schedules didn’t matter in this job – safety always trumped every other concern – but we didn’t miss deadlines, ever. In that development team, our bug level was less than 1 bug every 2 years by the time I moved on to a different job. It was very high quality software for one of the most complex machines ever made by man.

    When I see bug after bug after bug in Adobe software products, I’m convinced they do not have the development process they should and I don’t trust any software from that company. Sure, this is harsh. There are other options to accomplish the same results for almost every product they make.

    To me, a bug is a bug is a bug. They are all bad, and they all reflect on the quality (or lack thereof) of product management, testers, QA teams, developers, requirements writers and executive management.

  10. Ares

    FAIL.

    The study (yes, without quotes) is meant to show enterprises that it is important to keep the software they use updated and not to allow employees to install applications on their own.

    “Bit9’s Fourth Annual Report of Vulnerable Applications Reveals Importance of Continuous Monitoring, Patching and Application Control to Ensure Enterprises are Secure”

    “The “2010 Top Vulnerable Applications” report serves as a warning to enterprises about the risks of employees downloading unauthorized software and affirms the importance of staying current with software updates.”

    http://www.bit9.com/company/news-release-details.php?id=175

    This article is what happens when a fanboy sees his idol attacked and rushes to defend it without looking or thinking.

    1. F-3000

      “not allow employees to install applications on their own.” Amen! Yet a “study” like Bit9’s conveys that fact to employers in an extremely bad and misleading way – if it conveys it at all.

      I personally would rather use a program which is under active development and gets updates about once per week than a program that has existed for years but whose bugs just don’t get fixed.

      1. Ares

        “I personally would rather use a program which is under active development and gets updates about once per week than a program that has existed for years but whose bugs just don’t get fixed.”

        And I know of an application that gets updates very often, fixing recent vulnerabilities but leaving critical old vulnerabilities unfixed for months and years.

        Making a list or ranking of which applications are best at fixing bugs is very difficult, if not impossible, if we want to be fair and truthful.

        But Bit9 did not intend the ranking as a judgment about application security!!! I do not know why you insist on reading things that way!!! Is it fanboyism? Is it ignorance? I do not know. They complain that blogs misread the report, but you do too.

        Bit9 just wants to show that it is important to update, because popular applications have vulnerabilities; you shouldn’t be running the version from a year ago, because X vulnerabilities have been reported for “application Y” this year, so you should move to a newer version or to another application.

        Is that so hard to understand?

        1. AlphaCentauri

          “But Bit9 did not intend the ranking as a judgment about application security!!! I do not know why you insist on reading things that way!!!”

          Because they did rank them, from 1 to 12, in order. They didn’t group them by type of product, by manufacturer, by operating system, or by any other characteristic that would imply something other than judging the ones at the top of the list as being worse.

          And “Dirty Dozen” in English is a very disparaging term. If they were just intending to say, “Popular software products all have bugs; you should be sure to patch them regularly,” they would have used another term.

          1. Jane

            To keep hammering at the “Dirty Dozen” — dirty here isn’t meant in the sense “oops, you forgot to dust behind the TV” but rather “actively corrupt.”

          2. Ares

            “Because they did rank them, from 1 to 12, in order. They didn’t group them by type of product, by manufacturer, by operating system, or by any other characteristic that would imply something other than judging the ones at the top of the list as being worse.”

            Imagine an application with 100 vulnerabilities discovered and another with 0. Which application should draw more attention to the need to upgrade or change? IMO it is correct that the list is ordered according to the number of vulnerabilities.

            In any case, we can discuss what other sorting method could have been valid, and we can discuss why Bit9 ordered it that way, but the latter would be speculation, especially when their intention is very clear throughout the original study.

            “And “Dirty Dozen” in English is a very disparaging term.”

            Sorry, I did not know it was so bad. In any case that is a subjective assessment, but the number of vulnerabilities for each application is real data. Maybe “least ‘not dirty’ dozen” would be better XD

          3. AlphaCentauri

            The problem with their rank list is the assumption that the count is complete or that the raw numbers accurately represent the relative risk of using each of the products.

            As people have mentioned, bugs in open source software are generally all made public. Bugs in proprietary software may not be made public if they are found and fixed by their own developers, or if they are never fixed at all. The numbers are counts of different types of data.

            And the most commonly used products are simply the most likely to have their vulnerabilities publicized, though the same issue may affect multiple products. For instance, SeaMonkey and Firefox are basically the same browser. Why is only the more popular Firefox on the list? Because when a problem is found, it’s more likely to be counted as a Firefox bug, even though the fix is issued for both products.

            Not all bugs are equally dangerous. Given two equally dangerous bugs, the risk to users is further dependent on whether the user is aware of the danger, whether there are active exploits against it, and how long it takes to get the bug fixed. But this list isn’t even expressed in “vulnerable days,” i.e., how many days users of each product were using software with a known vulnerability, or how many days the vulnerability was present from the time it was coded to the time it was corrected.

            The “dirty dozen” list implies a single assessment tool — the raw number of reported vulnerabilities — is meaningful. It implies that by choosing products lower on the list, you are safer. And you simply can’t draw that conclusion from the evidence.

          4. Ares

            “The problem with their rank list is the assumption that the count is complete or that the raw numbers accurately represent the relative risk of using each of the products.”

            The list shows how many reported “high severity” vulnerabilities each application had. There are no lies and no hidden phrases. No products are judged riskier than others.

            The problem is that people imagine that the study is saying what they want to believe it says. But they want to believe it says something that makes them angry. (That is the trouble with popular applications: they have fans, fans with much anger and **** lives.)

            “As people have mentioned, bugs in open source software are generally all made public. Bugs in proprietary software may not be made public if they are found and fixed by their own developers, or if they are never fixed at all. The numbers are counts of different types of data.”

            For this reason the study says REPORTED vulnerabilities. Again, there are no lies.
            And sorry, English is not my native language, so I cannot discuss the typical Open Source vs. Closed Source fallacies, but people have very wrong ideas about it.

            “Not all bugs are equally dangerous”

            The study only counted “high severity” vulnerabilities.

            “Given two equally dangerous bugs, the risk to users…
            …or how many days the vulnerability was present from the time it was coded to the time it was corrected.”

            As I said before, the study does not intend to say which application is more dangerous, so everything you mentioned is irrelevant. It aims to report which vulnerabilities were disclosed this year in popular applications, and how many each had.

            “It implies that by choosing products lower on the list, you are safer. And you simply can’t draw that conclusion from the evidence.”

            No, you cannot draw that conclusion, because the original study is not saying that the applications at the top are more dangerous than those below. It is saying that applications “should be updated/monitored,” because “App 1 had X vulnerabilities, App 2 had Y vulnerabilities, blah blah” this year.

  11. Jason

    Maybe you’re already having an impact on the mainstream reporting, Brian. The article you linked to as an example of the security press parroting these reports had this added at the end of it:

    “The method of just focusing on the number of reported vulnerabilities is not without controversy. As Mozilla pointed out two years ago, the Bit9 study ignores issues like how quickly the bugs are fixed, and it punishes companies like Google and Mozilla that publicly disclose all vulnerabilities while other companies disclose only publicly discovered holes and not those found internally. It also fails to recognize that some companies lump multiple vulnerabilities into one report in the vulnerability database. In addition, there have been concerns about the quality and presentation of data in the vulnerability databases themselves, as mentioned by Google earlier this year.”

    “Updated at 10:15 a.m. PT with information on complaints about studies based solely on numbers of reported vulnerabilities.”

    I think we have to give author Elinor Mills from CNET a few points for adding this to the article. Maybe she saw this blog post.

  12. Hank

    Incredible. The number of bugs isn’t as important as the type of exploit that can take advantage of them. Drive-bys and remote exploits are the ones I worry about.

    Flash is a nightmare. It’s overused on the web. Why Adobe’s Flash has the ability to take over a user’s webcam and microphone is something I’ve always wondered about. That scares me.

    After paying far too much money for CS4, then having to deal with renaming and pasting files to update core components, as well as the overall lack of updates for it, I feel like I’ve been ripped off. By far, most of the free open source software I use has better update functionality than Adobe’s flagship products.

    I am sick and tired of both hardware and software that doesn’t perform as advertised. I can think of no other industry that can issue an EULA that absolves the manufacturer of any responsibility or liability for selling a product with known defects.

    It wouldn’t surprise me if car manufacturers started selling cars with EULAs. I can see it now: While you may own your vehicle, we are only issuing you a license to use the underlying technology that enables its operation. Certain technologies used in this vehicle may fail to function, or those functions may be disabled at any time without warning. Our liability in this area is limited to…

  13. TemporalBeing

    You missed one aspect of the analysis – how often does a vulnerability recur?

    For instance, Microsoft has been known to fix a vulnerability in one patch, only to have it unfixed by another. This too should be an important metric in the analysis, as it shows how likely it is that software will remain fixed once fixed.
