Dec 13

The Case for a Compulsory Bug Bounty


Security experts have long opined that one way to make software more secure is to hold software makers liable for vulnerabilities in their products.  This idea is often dismissed as unrealistic and as a recipe for stifling innovation in an industry that has been a major driver of commercial growth and productivity over the years. But a new study released this week presents perhaps the clearest economic case yet for compelling companies to pay for information about security vulnerabilities in their products.

Before I delve into this modest proposal, let’s postulate a few assumptions that hopefully aren’t terribly divisive:

  • Modern societies are becoming increasingly dependent on software and computer programs.
  • After decades of designing software, human beings still build imperfect, buggy, and insecure programs.
  • Estimates of the global damage from cybercrime range from the low billions to hundreds of billions of dollars annually.
  • The market for finding, stockpiling and hoarding (keeping secret) software flaws is expanding rapidly.
  • Vendor-driven “bug bounty” programs which reward researchers for reporting and coordinating the patching of flaws are expanding, but currently do not offer anywhere near the prices offered in the underground or by private buyers.
  • Software security is a “negative externality”: like environmental pollution, vulnerabilities in software impose costs on users and on society as a whole, while software vendors internalize profits and externalize costs. Thus, absent any demand from their shareholders or customers, profit-driven businesses tend not to invest in eliminating negative externalities.

Earlier this month, I published a piece called How Many Zero-Days Hit You Today, which examined a study by vulnerability researcher Stefan Frei about the bustling market for “zero-day” flaws — security holes in software that not even the makers of those products know about. These vulnerabilities — particularly zero-days found in widely-used software like Flash and Java — are extremely valuable because attackers can use them to slip past security defenses unnoticed.

Frei’s analysis conservatively estimated that private companies which purchase software vulnerabilities for use by nation states and other practitioners of cyber espionage provide access to at least 85 zero-day exploits on any given day of the year. That estimate doesn’t even consider the number of zero-day bugs that may be sold or traded each day in the cybercrime underground.

At the end of that post, I asked readers whether it was possible and/or desirable to create a truly global, independent bug bounty program that would help level the playing field in favor of the defenders and independent security researchers. Frei’s latest paper outlines one possible answer.


Frei proposes creating a multi-tiered, “international vulnerability purchase program” (IVPP), in which the major software vendors would be induced to purchase all of the available and known vulnerabilities at prices well above what even the black market is willing to pay for them. But more on that in a bit.

The director of research for Austin, Texas-based NSS Labs, Frei examined all of the software vulnerabilities reported in 2012, and found that the top 10 software makers were responsible for more than 30 percent of all flaws fixed. Frei estimates that had these vendors purchased information on all of those flaws at a steep price of $150,000 per vulnerability — an amount well above what cybercriminals or vulnerability brokers typically offer for such bugs — the total would still come to less than one percent of those firms' annual revenues.


Frei points out that the cost of purchasing all vulnerabilities for all products would be considerably lower than the savings from the expected reduction in cybercrime losses, even under the conservative assumption that those losses would fall by only 10 percent.
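Frei's cost-benefit argument reduces to back-of-the-envelope arithmetic. The sketch below uses the $150,000-per-bug price from the paper; the annual vulnerability count and global loss figure are illustrative assumptions chosen to fall within the ranges quoted above, not numbers from Frei's study:

```python
# Rough comparison of a purchase-all-vulnerabilities program against
# expected cybercrime savings. Only the $150k price comes from the paper;
# the other inputs are illustrative assumptions.

BOUNTY_PER_VULN = 150_000          # top-dollar price per vulnerability (USD)
VULNS_PER_YEAR = 5_000             # assumed vulnerabilities disclosed annually
GLOBAL_CYBERCRIME_LOSS = 100e9     # assumed annual global losses (USD)
LOSS_REDUCTION = 0.10              # Frei's conservative 10 percent reduction

program_cost = BOUNTY_PER_VULN * VULNS_PER_YEAR             # $750 million
expected_savings = GLOBAL_CYBERCRIME_LOSS * LOSS_REDUCTION  # $10 billion

print(f"Program cost:       ${program_cost / 1e9:.2f}B")
print(f"Expected savings:   ${expected_savings / 1e9:.2f}B")
print(f"Savings/cost ratio: {expected_savings / program_cost:.1f}x")
```

Even with the loss estimate cut by an order of magnitude, the assumed savings would still exceed the program's cost.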

In the above chart, for example, we can see Oracle — the software vendor responsible for Java and a whole heap of database software code that is found in thousands of organizations — fixed more than 427 vulnerabilities last year. It also brought in more than $37 billion in revenues that year. If Oracle were to pay researchers top dollar ($150,000) for each vulnerability, that would still come to less than two-tenths of one percent of the company’s annual revenues (roughly USD $64 million).
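The Oracle example is simple to check. A minimal sketch using the figures cited above (427 vulnerabilities, $150,000 apiece, $37 billion in revenue):

```python
# Oracle example: total bounty cost as a share of annual revenue.
vulns = 427        # vulnerabilities Oracle fixed that year
price = 150_000    # top-dollar bounty per vulnerability (USD)
revenue = 37e9     # Oracle's annual revenue (USD)

total_bounty = vulns * price             # $64.05 million
share = total_bounty / revenue * 100     # percent of revenue

print(f"Total bounty cost: ${total_bounty / 1e6:.2f}M")
print(f"Share of revenue:  {share:.3f}%")
```

At these prices the bounty bill works out to about $64 million, or roughly 0.17 percent of revenue, well under the two-tenths of one percent ceiling the article cites.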

Frei posits that if vendors were required to internalize the cost of such a program, they would likely be far more motivated to review and/or enhance the security of their software development processes.


Likewise, Frei said, such a lucrative bug bounty system would virtually ensure that every release of commercial software products would be scrutinized by legions of security experts.

“In the short term, it would hit the vendors very badly,” Frei said in a phone interview with KrebsOnSecurity. “But in the long term, this would produce much more secure software.”

“When you look at new innovations like cars, airplanes and electricity, we see that security and reliability was enhanced tremendously with each as soon as there was independent testing,” said Frei, an experienced helicopter pilot. “I was recently reading a book about the history of aviation, and [it noted that in] the first iteration of the NTSB [National Transportation Safety Board] it was explicitly stated that when they investigate an accident, if they could not find a mechanical failure, they blamed the pilot. This is what we do now with software: We blame the user. We say, you should have installed antivirus, or done this and that.”


In my challenge to readers, I asked for thoughts on how a global bug bounty program might maintain its independence and not be overly influenced by one national government or another. To combat this potential threat, Frei suggests creating a multi-tiered organization that would consist of several regional or local vulnerability submission centers — perhaps one for the Americas, another for Europe, and a third for Asia.

Those submission centers would then contract with “technical qualification centers” (the second tier) to qualify the submissions, and to work with researchers and the affected software vendors.

“Most critical is that the IVPP employs an organizational structure with multiple entities at each tier,” wrote Frei and fellow NSS researcher Francisco Artes. “This will ensure the automatic and consistent sharing of all relevant process information with all local submission centers, thus guaranteeing that the IVPP operates independently and is trustworthy.”


According to Frei and Artes, this structure would allow researchers to check the status of a submission with any submission center, and would allow each submission center to verify that it possesses all information — including submissions from other centers.

“Because the IVPP would be handling highly sensitive information, checks and balances are critical,” the two wrote. “They would make it difficult for any party to circumvent the published policy of vulnerability handling. A multi-tiered structure prevents any part of the organization, or legal entity within which it is operating, from monopolizing the process or the information being analyzed. Governments could still share vulnerabilities with their agencies, but they would no longer have exclusive access to this information and for extended periods of time.”


Frei’s elaborate system is well thought-out, but it glosses over the most important catalyst: The need for government intervention. While indeed an increasing number of software and Internet companies have begun offering bug bounties (Google and Mozilla have for some time, and Microsoft began offering a limited bounty earlier this year), few of them pay anywhere near what private vulnerability brokers can offer, and would be unlikely to up the ante much absent a legal requirement to do so.

Robert Graham, CEO of Errata Security, said he strongly supports the idea of companies offering bug bounties, so long as they’re not forced to do so by government fiat.

“The amount we’re losing from malicious hacking is a lot less than what we gain from the free and open nature of the Internet,” Graham said. “And that includes the ability of companies to quickly evolve their products because they don’t have to second-guess every decision just so they can make things more secure.”

Graham said he takes issue with the notion that most of the losses from criminal hacking and cybercrime are the direct result of insecure software. On the contrary, he said, most of the attacks that result in costly data breaches for companies these days stem from poorly secured Web applications. He pointed to Verizon‘s annual Data Breach Investigations Report, which demonstrates year after year that most data breaches stem not from software vulnerabilities, but rather from a combination of factors including weak or stolen credentials, social engineering, and poorly configured servers and Web applications.

“Commercial software is a tiny part of the whole vulnerability problem,” Graham said.

Graham acknowledged that the mere threat of governments imposing some kind of requirement is often enough to induce businesses and entire industries to self-regulate and take affirmative steps to avoid getting tangled in more bureaucratic red tape. And he said if ideas like Frei’s prompt more companies to offer bug bounties on their own, that’s ultimately a good thing, noting that he often steers clients toward software vendors that offer bug bounties.

“Software that [is backed by a bug] bounty you can trust more, because it’s more likely that people are looking for and fixing vulnerabilities in the code,” Graham said. “So, it’s a good thing if customers demand it, but not such a good thing if governments were to impose it.”

Where do you come down on this topic, dear readers? Feel free to sound off in the comments below. A copy of Frei’s full paper is available here (PDF).



  1. Would the NSA and other global intelligence agencies try to stop any such plan? Insecure software is a tool for their activities.

  2. Sounds like death for small companies (eg most recognisable web brands), individual programmers (eg Minecraft, bitcoin) and open source software (most of the infrastructure of the web).

  3. This also has the potential to increase the number of vulnerabilities. A software developer may be enticed to introduce new bugs, pass them along to his buddy and split the bounty. This has happened before when companies have tried internal bug fix bounties. The code got worse because the developers were creating more bugs so they could “find” them and get paid more.

  4. While this sounds like a great idea, Robert Graham is right. The vast majority of losses / compromises are not from zero-day exploitation. Back in 2011, 99.88% of all compromises were the result of known vulnerabilities being exploited. What does that tell you? It tells you that people aren’t able to protect themselves from known vulnerabilities and because of that most hackers don’t need 0-days.

    Netragard’s High-Threat Penetration Testing services do not (usually) use zero-day exploits and yet we have a 99.4% success rate at penetration with total infrastructure compromise. What does that mean? It means that from the internet we are able to penetrate into a company, compromise the domain controller and from there compromise the rest of our targets without using a single zero-day.

    Finally, we published an interesting article on zero-days and just how risky they really are. You can read all about that here:


  5. I am an attorney who used to be a software architect. What many people (and attorneys) do not realize is that if a software company negligently develops software, then the software company is liable for damages caused by that negligence. In other words, existing law can apply.

    As it continues to become more standard to develop secure software, it helps define the duty that software companies must live up to. A lot of software licenses disclaim liability, which helps to insulate the software companies from liability.

    However, licensees who use this substandard software are subject to liability for damages that may result from bugs in that software. (Think Target having potential liability for exposing all of that credit card data.) As these licensees start to get sued, then they will force the software companies to change their licenses. Once the software companies understand their exposure, they will manage their development processes better to develop more secure code.

  6. Hypothetical: I wrote a little game app, and used a 3rd party library in doing so. But there’s a flaw in my game that lets a miscreant alter high score records.

    So two issues here. First is scale: Am I on the hook for the same $150k Oracle would owe for a massive hole in Java? And a sub-issue to that: Does it matter if I was giving the game away for free or selling it? What if it was free, but ad supported?

    Second, what if I say the flaw is in the 3rd party library? What if the library developer says I used their code improperly? Who arbitrates that wrangle? Again, does it matter if the library was free/open source?

  7. This is quite possibly the dumbest thing I’ve heard out of our community. This ranks right up there w/the guy who thought bringing in the iPads w/Retina displays was going to increase network bandwidth usage. Not only are the financial losses not in line w/who creates the software AND pays the bounties, but as has been pointed out, most incidents occur as a result of non-zero-day bugs or social engineering. Not to mention that offering 150k per bug would simply drive up the black market prices. It’s an almost pure capitalist market out there.

    This paper is a shameless attempt at publicity through half-assed content generation for content-generation’s sake. The NSS “findings” listed near the beginning are mostly fact, assumptions, or generally accepted ideas. I can list my findings today as well: the sky is blue, publicity is king, most people don’t do actual research to get their “data”, they just google for it… I really expect more from Brian Krebs than blind cross-promotion of this drivel.

  8. Both ends against the middle

    Crazy idea: what if credit card companies were liable for how insecure their entire industry is, instead of forcing their customers to create systems to handle their insecure, 30-year-old system and make it work on the modern web?
