11 Jan 10

Firm to Release Database & Web Server 0days

January promises to be a busy month for Web server and database administrators alike: A security research firm in Russia says it plans to release information about a slew of previously undocumented vulnerabilities in several widely used commercial software products.

Evgeny Legerov, founder of Moscow-based Intevydis, said he intends to publish the information between Jan. 11 and Feb. 1. The final list of vulnerabilities to be released is still in flux, Legerov said, but it is likely to include vulnerabilities (and in some cases working exploits) in:

-Web servers, such as Zeus Web Server and Sun Web Server (pre-authentication buffer overflows; a sketch of this bug class follows the list);
-Databases, including MySQL (buffer overflows), IBM DB2 (local root vulnerability), Lotus Domino and Informix;
-Directory servers, such as Novell eDirectory, Sun Directory and Tivoli Directory.
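
For readers unfamiliar with the bug class, here is a minimal, hypothetical C sketch of what a pre-authentication buffer overflow typically looks like. The packet format, field names, and handler below are invented for illustration; they are not code from any of the products listed above.

    /* Hypothetical pre-auth login handler: parses a packet of the form
     * [1-byte name length][name bytes]. It runs before any credentials
     * are checked, which is what makes the bug reachable pre-auth.    */
    #include <stdio.h>
    #include <string.h>

    static void handle_login_packet(const unsigned char *pkt, size_t pkt_len)
    {
        char username[64];

        if (pkt_len < 1)
            return;

        size_t name_len = pkt[0];  /* attacker-controlled length byte */

        /* BUG: name_len is never checked against sizeof(username), so a
         * packet claiming a name longer than 63 bytes smashes the stack.
         * The fix is a bounds check before the copy, e.g.:
         *     if (name_len >= sizeof(username) || name_len > pkt_len - 1)
         *         return;                                               */
        memcpy(username, pkt + 1, name_len);
        username[name_len] = '\0';

        printf("login attempt from '%s'\n", username);
    }

    int main(void)
    {
        /* A benign packet: length byte 5, then the name "alice". */
        const unsigned char pkt[] = { 5, 'a', 'l', 'i', 'c', 'e' };
        handle_login_packet(pkt, sizeof(pkt));
        return 0;
    }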

In an interview with krebsonsecurity.com, Legerov said his position on vulnerability disclosure has evolved over the years.

“After working with the vendors long enough, we’ve come to conclusion that, to put it simply, it is a waste of time. Now, we do not contact with vendors and do not support so-called ‘responsible disclosure’ policy,” Legerov said. For example, he said, “there will be published two years old Realplayer vulnerability soon, which we handled in a responsible way [and] contacted with a vendor.”

At issue is the pesky ethical and practical question of whether airing a software vendor’s dirty laundry (the unpatched security flaws that it knows about but hasn’t fixed yet) forces the affected vendor to fix the problem faster than it would have if the flaw had remained a relative secret. There are plenty of examples showing that this so-called “full disclosure” approach does in fact prompt vendors to issue patches faster than when a researcher notifies them privately and permits them to investigate and fix the problem on their own schedule. But in this case, Legerov said he has had no contact with the vendors, save for Zeus.com, which he said is likely to ship an update to fix the bug on the day he details the flaw.

Intevydis is among several vulnerability research firms that sell “exploit packs,” snippets of code that exploit vulnerabilities in widely used software (others include Gleg, Enable Security, and D2). The company’s exploit packs are designed for users of CANVAS, a commercial software penetration testing tool sold by Miami Beach, Fla.-based Immunity, Inc.

While organizations that purchase CANVAS along with exploit packs from these companies may have better protection from newly discovered security vulnerabilities while they wait for the affected vendors to fix the flaws, Immunity does not report the vulnerabilities to the affected vendors (unless the vendors also are customers, in which case they would have access to the information at the same time as all other customers).

That approach stands apart from the likes of TippingPoint’s Zero-Day Initiative and Verisign’s iDefense Vulnerability Contributor Program, which pay researchers in exchange for the rights to their vulnerability research. Both ZDI and iDefense also manage the communication with the affected vendors, ship stopgap protection for the vulnerabilities to their customers, and otherwise keep mum on the flaws until the vendor ships an update to fix the bugs.

Legerov said he’s been an anonymous contributor to both programs over the years, and that it is not difficult for good researchers to make between $5,000 and $10,000 a month selling vulnerabilities and exploits to those companies. But he added that he prefers the full disclosure route because “it allows people to publish what they think without being moderated.”

Dmitri Alperovitch, vice president of threat research at McAfee, called Legerov’s planned disclosure “irresponsible,” given the sheer number of businesses that rely on the affected products. Alperovitch said the responsible way to disclose a vulnerability is to send the information to the vendor and let them know you plan to release it within a reasonable amount of time (usually 60 to 90 days).

“If they ask for more time (again, reasonably, not a year out) you try to accommodate. If the vendor doesn’t respond, you release and move on,” he said. “But to give them no advance notice just because some vendors don’t take security seriously is irresponsible.”

Charlie Miller, a former security researcher for the National Security Agency who now heads up the Baltimore-based Independent Security Evaluators (and is a co-founder of the No More Free Bugs meme), also has earned tens of thousands of dollars from vulnerability management firms, most famously by competing in ZDI’s Pwn2Own contests, which carry a $10,000 first prize.

“These programs are good because they allow researchers to get something for their effort, and you don’t have to deal with the back-and-forth with the vendor, which is not fun,” Miller said.

Still, Miller said he’s sympathetic to researchers who react to vendor apathy with full disclosure.

“The thing is, finding critical security bugs in widely used software should be rare if vendors are doing their job. But the sad part is, finding a serious bug in something like Adobe Reader is not a very rare event, and it seems to happen every month almost now,” Miller said. “The conclusion we can draw is that some vendors aren’t doing enough to make their software secure. It should be rare enough that vendors should be so surprised and concerned that they’re willing to do what they need to do to get it fixed.”

Setting the full disclosure debate aside for the moment, it has been fascinating to watch the development of the vulnerability management industry. I can recall a heated panel discussion back in 2006 at the CanSecWest conference in Vancouver, B.C., Canada, in which ZDI and several supporters of that effort took some heat for the program from a number of folks in the security industry.

These days, ZDI and iDefense are responsible for pushing software makers to fix an impressive number of software flaws. Take Microsoft, for example: By my count, Microsoft fixed approximately 175 security vulnerabilities in its Windows operating systems and other software last year. Of those, the ZDI program is responsible for reporting 32, while iDefense’s program contributed 30 flaw reports. Put together, the two programs accounted for 62 flaws, or roughly 35 percent: more than a third of all vulnerabilities Microsoft fixed in 2009.

Got strong feelings about this article, or about the issue of vendor responsibility or vulnerability disclosure? Please drop a note in the comments section below.


82 comments

  1. I’d like to add my two cents to this thread…

    Having been on the receiving end of the disclosure process for a very large AV/security firm, I can tell you that we as a company did our best to work with researchers and treat them with the respect and understanding that is commensurate with their position/role/background/personality. In fact, there are still some researchers that I keep in contact with because they are pretty smart/cool/funny people. And as far as I know we never paid any researcher, mainly because there is a secondary market for that (i.e., TippingPoint). But for all of those individuals who complain that there is no testing or QA (or very little, or poorly done, etc.) on some of these vendors’ products, you are very sadly mistaken.

    Speaking from direct experience as a developer for said AV company, the QA staff was larger than some entire companies, demonstrating a huge commitment to quality code and secure products. I know first hand how hard it is to write code that is not exploitable, period, especially if you don’t own the entire “stack.” I also know what it takes to “pen test” a product, since that was my role for a while.

    This all leads up to the statement that Responsible Disclosure for the most part works; however, you have to be patient, for the “big machinery” of a large corporation doesn’t turn on a dime. It takes time to communicate, verify, schedule, prioritize, correct, and test the potential fix and insert it into the product without introducing a new issue or unacceptably delaying the existing work, work that may be performed by dozens and dozens of people and be worth millions of dollars.

    As for Full Disclosure of a defect, it just makes the researcher look amateurish. As a company, we still needed to go through the same process regardless of how the “defect/exploit” was submitted. We would watch the various “exploit outlets” or press our personal contacts to give us a heads up, if possible. Once we had the minimum info that we needed, we would go ahead and put the bug into our system and assign it to someone to manage through our entire process. All the Full Disclosure did was tarnish the reputation of the company/product, which then required collaborating with the PR department so it could disseminate the relevant information in the most positive light possible. We as a company have much experience with the entire process, as do most of the other security firms, including Microsoft.

    If you can’t tell, I am not a huge fan of Full Disclosure and believe it isn’t as effective as it might appear.

    • Consumers who buy a product in good faith do not have to understand or have patience. But they have a right to know immediately if there is a risk. Imagine your car manufacturer was aware of a fatal flaw – you think it’s all right for the company to withhold that information? Of course it’s not!

  2. Vendors hate bugs. They want to push products out as fast as possible with as many features as possible. Security is usually an afterthought and an expense. They complain about security researchers who exploit their software, but really it’s just market economics. Vendors push their software out like they do to make money. Many researchers try to do the right thing and responsibly disclose bugs, but as it turns out they’re given a hard time and little or no pay for their trouble. However, there’s a market for bugs that carries a huge premium. After all, if you’re a company, what would you pay to protect your databases? What would you pay to get insider data or see your competitor be embarrassed?

    So long as there is a market imbalance between what software vendors will pay for bugs and their real market value, researchers and companies will sell or publicly release exploits.

    To anyone who views such a sentiment as immoral: I’ve never seen a moral corporation, and, generally speaking, a corporation has a legal responsibility to make a profit for shareholders. Good luck, Intevydis.

  3. Just because one can elicit a desired outcome, does not make it right. I don’t subscribe to the end justifies the means argument … two wrongs don’t make a right. In other words, making the computing ecosystem at large pay the price for the reputed sins of software makers is just wrong.

    And for the sake of argument, let’s drop the word ‘reputed’. Is making the ecosystem at large pay the price the only way to engender change? Are we not smarter than this?

  4. You can’t assume the hacker making the disclosure is the first one to discover the exploit. It’s entirely possible it has been in use by malicious entities for months by the time he discovers it. His disclosure gives vulnerable users a warning that the software they are using could put them at risk, so they can act accordingly while waiting for a patch. Everyone ought to think that way all the time, but we’re human, so a little wake-up call helps.

  5. Software defects are inevitable. Good programmers introduce fewer bugs than bad programmers, but nobody ever writes 100% error-free code. Some of the comments in this thread assert that software vendors don’t care about bugs. That may be true in some circumstances, but where I work these issues are taken very seriously.

    I work as an OS developer for a large company. Our product has been under continuous development since 1983. Our customers have expectations about compatibility between software releases. As RobotDog observed, the development cycle of a large software project has many steps. This process exists to prevent quality problems for our customers. Changes get extensively tested and reviewed prior to integration. Release Engineering tests the product again before it is released. In order for a fix to be back-ported to a maintenance branch, it must remain in the development branch with no errors for a certain period of time. All of this work gets done to prevent the customer from receiving a product that has additional bugs, regressions, and feature incompatibilities.

    Full disclosure may get a vulnerability fixed faster, but there doesn’t seem to be any data on the underlying quality of the patched product. Is the vulnerability actually fixed? Did the release introduce other bugs for the customer? Were there functional or performance related regressions? If I were a customer, I would be curious to know if full disclosure increases the likelihood that I will get a low quality solution to my security problem.

    • If you are an OS developer for a large company with a product dating back to 1983, then you must work for Microsoft.

  6. I consider myself new to the security field. In reading this thread, I have gone back and forth several times being swayed by comments to and from full disclosure or not.

    Looking at things from a network perspective, a lot of the vulnerabilities have something to do with remote code execution and the like. From a network-eye view, any network on which these applications reside should have some kind of intrusion prevention system, network anomaly detector, or, at bare minimum, an intrusion detection appliance.

    I say this because ultimately a lot of these exploits come back to the network level. The IPS should give the network security folks a way to add new information on how to block the exploits as they are coming in, or at least as they are trying to leave the network (an example of such a stopgap rule appears at the end of this comment).

    So, responsible disclosure or not, end users within a corporate network SHOULD BE armed with a way to defend themselves against 0-day exploits. The average home user is still stuck relying on his or her own AV products to handle this.
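
    To make that concrete, a stopgap signature in something like Snort might look roughly like the following. The port, payload-size threshold, and rule ID are made-up placeholders for illustration, not a tested rule for any of the bugs discussed above:

        alert tcp $EXTERNAL_NET any -> $HOME_NET 3306 (msg:"HYPOTHETICAL MySQL overlong handshake"; \
            flow:to_server,established; dsize:>512; sid:1000001; rev:1;)

    The idea is just that a coarse length check on traffic to the database port can buy time while the vendor works on a real patch; it won’t catch a cleverly fragmented or obfuscated attack.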

  7. 60-90 days is not reasonable. How much damage could be done in just one day by organized stump heads? I’m feelin’ ya, Fred.

    The Tramp

  8. J.StevenLivacich

    From my first view, I see this as a good effort toward better computer security; I shall follow this further. You were well recommended by knujon@coldrain.com.

  9. Does anyone know of any vendors that have been sued for civil (financial) liability by hacked victims with respect to products or software with known security bugs? Brian has highlighted the recent bank litigation in Texas and Maine, which holds considerable interest. The right vendor claim would be even more powerful and could reshape the disclosure debate and landscape overnight.

    As a practicing lawyer and recent entrant into the industry (Calyptix, where we focus on IT security for SMB customers), I see the imposition of liability as a very powerful motivator to fix code. Just look to the auto industry: uncapped liability ultimately drove the manufacturers to seek regulation as a liability shield rather than resist it as a burden, because everyone faces the same costs. As long as the status quo is less expensive than the proposed alternative, the vendors will resist change (e.g., regulation, disclosure, etc.), and any regulation is likely to offer limited value. Civil litigation is a wild card whose cost is hard to calculate, and thus it can become a game changer. All you need is the right victim, the right facts and the right jurisdiction.

  10. Intevydis follows through on their pledge.

    See “Oracle Breaks Regular Patch Cycle Because of Zero-Day Bug”; URL: http://news.softpedia.com/news/Oracle-Breaks-Regular-Patch-Cycle-Because-of-Zero-Day-Bug-134246.shtml

    Interesting quote:
    “The company was forced to take this step after exploit code has been publicly released by a security research company without any notification in advance.”

  11. I used to work for a company that developed software-intensive consumer electronics products. When I questioned some security issues in a joint project we were contemplating with a competitor (the market leader), I was told that the competitor’s official position on security was that “security is a market disabler”. They would rather ship a known-insecure product to get a capability into the market quicker and make revenue, even if they had to use a portion of the profits to fix the bugs later, than to delay getting to market (risking having a competitor beat them) and make it more secure. There were people in our company who thought that their position was “enlightened”, and wondered out loud why we didn’t emulate them.

    Does anyone doubt that this is common in for-profit companies? There are only a few things that will cause companies to change where they draw the line between spending time and effort enhancing security vs. time-to-market and profits: 1) Bad press. Bad for business, bad for stock price. 2) Regulation. Most companies, if reluctantly, obey the law, at least partly because failing to do so triggers #1. 3) Customers and sales. If customers told companies they wanted better security in their products, they might get it; or if lots of people told companies that they aren’t buying a product because it’s not secure, then it might have an effect. But people ask for product features and ASSume that the product is secure, so this virtually never happens in practice.

    In the early days of cellphones, cloning was quoted as causing billions of dollars in lost revenue. The dirty little secret was that most of that “lost revenue” was calls that would never have been made if the caller had had to pay for them, and until the “real losses” caused by denying access to paying customers got big enough, the carriers really didn’t care enough to pay for fixing it. Fast-forward to the credit card industry now: as long as they can charge people 30% annual interest, why worry about a few percent of that being spent on fraud? Or to the Microsofts of the world: as long as people buy their software, and have to pay for upgrades to get higher security going forward, it’s full steam ahead on features and minimum spending on improving security. As long as companies are making a profit, and especially if they can make their customers pay for the fixes to their mistakes, they have little incentive to fix their problems.

    I don’t particularly like the idea of being hit by a 0day because my daily-use software is buggy, but my guess is that, statistically, early disclosure is likely to reduce the long-term damage caused by a 0day exploit more than it increases it (though it will do both) because it gets it out in front of the software vendor and the detection vendors without back-room rigamarole.