January 11, 2010

January promises to be a busy month for Web server and database administrators alike: A security research firm in Russia says it plans to release information about a slew of previously undocumented vulnerabilities in several widely-used commercial software products.

Evgeny Legerov, founder of Moscow-based Intevydis, said he intends to publish the information between Jan. 11 and Feb. 1. The final list of vulnerabilities to be released is still in flux, Legerov said, but it is likely to include vulnerabilities (and in some cases working exploits) in:

-Web servers, such as Zeus Web Server and Sun Web Server (pre-authentication buffer overflows);
-Databases, including MySQL (buffer overflows), IBM DB2 (local root vulnerability), Lotus Domino and Informix;
-Directory servers, such as Novell eDirectory, Sun Directory and Tivoli Directory.

In an interview with krebsonsecurity.com, Legerov said his position on vulnerability disclosure has evolved over the years.

“After working with the vendors long enough, we’ve come to conclusion that, to put it simply, it is a waste of time. Now, we do not contact with vendors and do not support so-called ‘responsible disclosure’ policy,” Legerov said. For example, he said, “there will be published two years old Realplayer vulnerability soon, which we handled in a responsible way [and] contacted with a vendor.”

At issue is the pesky ethical and practical question of whether airing a software vendor’s dirty laundry (unpatched security flaws the vendor knows about but hasn’t yet fixed) forces the affected vendor to fix the problem faster than it would have if the flaw had remained a relative secret. There are plenty of examples showing that this so-called “full disclosure” approach does prompt vendors to issue patches faster than when they are notified privately by the researcher and allowed to investigate and fix the problem on their own schedule. But in this case, Legerov said he has had no contact with the vendors, save for Zeus.com, which he said is likely to ship an update to fix the bug on the day he details the flaw.

Intevydis is among several vulnerability research firms that sell “exploit packs,” or snippets of code that exploit vulnerabilities in widely-used software (others include Gleg, Enable Security, and D2). The company’s exploit packs are designed for users of CANVAS, a commercial penetration testing tool sold by Miami Beach, Fla.-based Immunity Inc.

Organizations that purchase CANVAS along with exploit packs from these companies may have better protection from newly discovered security vulnerabilities while waiting for affected vendors to fix the flaws. But Immunity does not report the vulnerabilities to the affected vendors (unless the vendors also are customers, in which case they would have access to the information at the same time as all other customers).

That approach stands apart from the likes of TippingPoint's Zero-Day Initiative and Verisign's iDefense Vulnerability Contributor Program, which pay researchers in exchange for the rights to their vulnerability research. Both ZDI and iDefense also manage the communication with the affected vendors, ship stopgap protection for the vulnerabilities to their customers, and otherwise keep mum on the flaws until the vendor ships an update to fix the bugs.

Legerov said he’s been an anonymous contributor to both programs over the years, and that it is not difficult for good researchers to make between $5,000 and $10,000 a month selling vulnerabilities and exploits to those companies. But he added that he prefers the full disclosure route because “it allows people to publish what they think without being moderated.”

Dmitri Alperovitch, vice president of threat research at McAfee, called Legerov’s planned disclosure “irresponsible,” given the sheer number of businesses that rely on the affected products. Alperovitch said the responsible way to disclose a vulnerability is to send the information to the vendor and let them know you plan to release it publicly within a reasonable time (usually 60-90 days).

“If they ask for more time (again, reasonably, not a year out), you try to accommodate. If the vendor doesn’t respond, you release and move on,” he said. “But to give them no advance notice just because some vendors don’t take security seriously is irresponsible.”

Charlie Miller, a former security researcher for the National Security Agency who now heads up the Baltimore-based Independent Security Evaluators (and is a co-founder of the No More Free Bugs meme), also has earned tens of thousands of dollars from vulnerability management firms, most famously by competing in ZDI’s Pwn2Own contests, which carry a $10,000 first prize.

“These programs are good because they allow researchers to get something for their effort, and you don’t have to deal with the back-and-forth with the vendor, which is not fun,” Miller said.

Still, Miller said he’s sympathetic to researchers who react to vendor apathy with full disclosure.

“The thing is, finding critical security bugs in widely used software should be rare if vendors are doing their job. But the sad part is, finding a serious bug in something like Adobe Reader is not a very rare event, and it seems to happen every month almost now,” Miller said. “The conclusion we can draw is that some vendors aren’t doing enough to make their software secure. It should be rare enough that vendors should be so surprised and concerned that they’re willing to do what they need to do to get it fixed.”

Setting the full disclosure debate aside for the moment, it has been fascinating to watch the development of the vulnerability management industry. I can recall a heated panel discussion back in 2006 at the CanSecWest conference in Vancouver, B.C., in which ZDI and several supporters of that effort took some heat for the program from a number of folks in the security industry.

These days, ZDI and iDefense are responsible for pushing software makers to fix an impressive number of software flaws. Take Microsoft, for example: By my count, Microsoft fixed approximately 175 security vulnerabilities in its Windows operating systems and other software last year. Of those, the ZDI program is responsible for reporting 32, while iDefense’s program contributed 30 flaw reports. Put together, the two programs accounted for more than a third of all vulnerabilities Microsoft fixed in 2009.
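
For anyone who wants to check that back-of-the-envelope math, here is a quick sketch in Python. The totals are my own rough tallies from the past year’s advisories, so treat the exact percentage as a ballpark figure rather than an official statistic:

    # Rough 2009 tallies cited above (approximate, by my count)
    total_fixed = 175   # vulnerabilities Microsoft patched in 2009
    zdi = 32            # flaws reported through TippingPoint's ZDI
    idefense = 30       # flaws reported through Verisign's iDefense VCP

    share = (zdi + idefense) / float(total_fixed)
    print("Combined share: %.1f%%" % (share * 100))  # roughly 35 percent, i.e. more than a third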

Got strong feelings about this article, or about the issue of vendor responsibility or vulnerability disclosure? Please drop a note in the comments section below.


63 thoughts on “Firm to Release Database & Web Server 0days”

  1. lembark

    Commercial vendors who cry about immediate release have an interesting double standard: none of them complain about vulnerabilities in Open Source products (e.g., Apache’s httpd) being released immediately into the public forum. Users appreciate the quick disclosure so that they can plug holes themselves if necessary; even if they can’t, disclosure prompts maintainers to look at workarounds and fixes.

    One place I worked at made a point of monitoring forums for dirt on other products, our competitors did the same, and the only real reason anyone budgeted money to fix bugs or security holes was to avoid public disclosure of problems that would affect our marketing.

    So, perhaps disclosure is just Capitalism’s way of making sure that vendors compete on the basis of secure, workable products.

  2. Andreas

    I applaud the opposition to responsible disclosure. Vendors don’t treat security responsibly either. Software products built upon inherently unsafe technology (and yes, this means C/C++/PHP and the like) just don’t deserve to be treated lightly. If you use them, you’re vulnerable, no matter what disclosure looks like.

    More to the point, disclosing such bugs at all, instead of just selling them to the highest bidder, can already be considered responsible behaviour on the side of the researcher.

    1. BrianKrebs Post author

      How come there are six people who voted that they didn’t “like” this comment, but not one responded directly to the comment? I’d like to hear from readers who have a beef with the pro-full disclosure folks.

      1. fish

        I have no beef with full disclosure. Vendors have made their choices on where best to invest their money and those who have avoided patching exploits will get their due. I say bring on full disclosure now. Maybe the world would be a little more careful with technology if they knew just how bunged up some of it is.

        But I’m not going to sit by while a FUD statement like “inherently unsafe technology (and yes, this means C/C++/PHP and the like)” floats by. Maybe we should be using Visual Basic and .NET instead? The choice of programming language does not make for “inherently unsafe technology” any more than the choice of car tires makes for “inherently unsafe driving habits”.

      2. Gerry

        > How come there are six people who voted that they didn’t “like” this comment, but not one responded directly to the comment?

        Because we all thought it should be obvious that his “unsafe technology” comment was complete BS and so decided to thumb down and not leave a comment, which would only feed the troll.

      3. Paris Hilton

        Full disclosure is the only way. Full disclosure implies a belief software systems can and will be made secure. I haven’t seen many exploits for the Linux kernel recently, nor of Apple’s XNU, nor of OpenBSD. These are all relatively open systems and so they are being inspected all the time, especially OpenBSD.

        The only people afraid of full disclosure are those who don’t believe their software systems can ever be made secure. It’s easy to figure out who they are and what OS they support.

    2. Jeff

      “Software products built upon inherently unsafe technology”? WTF. Come on, are you really telling everyone that it’s because you use C, C++, or PHP (insert language here)? You have to be kidding me. Your “technology” has little to do with writing secure and clean code, and yes, any language can have a security problem, even the new ones.

    3. TheGeezer

      I really don’t buy the “inherently unsafe technology” comment. An axe handle used to beat someone’s head in is only ‘unsafe’ in the wrong hands. Same thing with the software mentioned. I would not call the technology either inherently safe or unsafe.

      1. Paris Hilton

        Yes and no. Java is safer than C, but you can’t build an OS with Java (and you probably use C to make your Java tools).

    4. Doug

      Opposition to responsible disclosure? That is base and conceited, and I’m disgusted.

      So you think it’s right to punish end users for their vendors’ business model.

      Let’s examine your position from a goals perspective. You want vulnerabilities to be “outed” for… what purpose? To try to get your way and force someone to do something you want. When you want. And I don’t see you offering to pay for it, either.

      I make my living helping organizations secure their source code. I help them when they ask, and they pay me when I help. What’s my goal? I don’t want anyone to have to worry about malware and breaches. I get a lot of satisfaction working for people who simply want to do the right thing; or at least they want to manage their vulnerabilities rather than be managed by breaches.

      If you’d like to restate this in a different way, illuminate me please.

      1. Paris Hilton

        The responsibility is the vendor’s, no one else’s. If there’s a vulnerability then I want to know about it. Put another way: no vendor has a right to keep that from me.

        This is only an issue with closed systems anyway. And one of those closed systems has 9/10 of the market and hundreds of thousands of exploits, and according to McAfee is responsible for a lot of the botnets and spam. Could there possibly be a connection?

        Helping companies try to improve closed source is working against the common good for solely personal pecuniary purposes.

      2. dc0de

        Doug,

        I applaud you for your efforts. It sounds as though you’re fighting the battle of Sisyphus: you’re pushing that boulder uphill, and making progress with a FEW customers.

        What about all of the vendors out there who will not responsibly review their own software/hardware/appliances before shipping them to the general IT Public? Who’s being irresponsible?

        I’m 100% in favor of full and open disclosure, just as I am about finding out about tainted food.

        Using your argument, I “should” go and sell my services testing everyone’s beef/chicken/pork in their home for contaminants before they eat it.

        I say fix the problem at the source. Throw these lazy and self-righteous vendors under the bus.

        Once the news gets out that your personal data has been released overseas due to a vendor who didn’t even test their own software, people will begin voting with their money.

        It’s time to clean the IT Gene pool.

  3. Evan Francen

    Over the years, I have been won over by vulnerability full-disclosure. If you have worked for software development firms, you gain a good understanding of the economics at work. Developers are under pressure from two fronts; one is to get the software to market as soon as possible so they can make money (businesses are in business to make money, after all), and two is to make the software secure and reliable. The pendulum swings from side to side. In an ideal world, we could have software that is released quickly AND secure. Some developers can deliver both, but in reality, the two forces often work against each other.

    If we have an understanding of the economics of software development, we can see that there is no immediate, direct revenue generated in patch development. If there is no immediate return, people have a tendency to wait until they are forced to act. Full disclosure forces developers to act. The faster a developer acts, the sooner a patch is released, and the sooner the patch is installed in companies with good patch management programs. There are more moving parts than simply disclosure.

    These vulnerabilities are known. They are known by the researcher, and they may be known by people with less than admirable intentions. As an information security professional, I want to know. I disagree with Alperovitch, although I see his point. Alperovitch “called Legerov’s planned disclosure “irresponsible,” given the sheer number of businesses that rely on the affected products.” These businesses have defective products. Don’t you think the responsible companies want to know? Responsible companies have mature patch management programs and are ready to deploy patches in a timely manner. It doesn’t matter who discloses the vulnerability, it matters how soon a patch is released. For the irresponsible companies without patch management programs, none of this matters. They might patch, or maybe they won’t. Their risks (and the risks to their customers and partners) will increase significantly either way.

    The vulnerabilities exist with or without disclosure, and my preference is to know as soon as possible. It doesn’t matter who tells me.

    1. dc0de

      Perhaps it’s time for software companies to come up with a different set of “economics” for building and creating software.

      They could start with learning how to code securely, it’s not a mystery.

      As well, having a constant security testing cycle in the SDLC helps considerably.

      Lastly, hiring outside companies to perform full penetration and exploit tests against the codebase prior to its release would also help.

      But this would cut into the bottom line, and keep the board from receiving their extra .01% of share value.

      No, no, can’t have that….

      Pluheeze… I say make the software / vendor legally responsible for their code. That should shake things up a bit.

  4. CG

    For less than $10K, all those companies can just buy CANVAS and all the exploit packs and keep up to date with the bugs/0day as they come out. Or pay more for the Immunity Early updates. All less than the price of a junior security “whatever” on staff.

    1. Rick

      CG is right. If they really gave a damn (which they don’t) then a $10K investment is nothing on their books. Absolutely nothing. The sad truth is they don’t care. The sad truth is that programmer quality is getting worse and worse. The sad truth is very few of them even care about QA and proper vetting and testing anymore. The sad truth is they’ve decided it’s just not profitable. Perhaps a flurry of 0days will force them to change their minds. At least for a short while. But most likely they’ll just call in the spin doctors. Spin doctors are still cheaper than good programmers and standardised security routines.

    2. Prefect

      Always nice when a vendor drops by and leaves the comment:

      “If you really cared about security, you would buy x product.”

      Does this ever actually result in sales?

  5. Mike Larkin

    What occurs to me, aside from the various rights and wrongs of releasing this information, is: why does it continue?

    Aren’t these attacks often the same sorts of things, over and over… buffer overflows, SQL injection? Is checking software not available, not used, what?

    1. Doug

      There’s lots of software available. The good stuff is very expensive, and the free stuff is only somewhat helpful. It’s an immature field. Some buyers of source code analysis products are motivated by doing the right thing; others take a “risk based approach” which is something I’ve never fully understood.
      Sometimes, the “risk management” conclusion is to do nothing until they are breached.

  6. Curt Wilson

    I concur with CG’s sentiment. 0day is a fact of life; bugs are not going away any time soon and those with malicious motives are also hunting bugs and have their own black markets to do so. Unfortunately there is nothing stopping those with malicious motives from purchasing all of the exploit kits with the spoils from their crimeware, but at least with the commercially available kits the “good guys” stand a chance to have a better idea of what’s out there. For a commercial vendor to expect free QA doesn’t make sense to me. I do think that open source software should be held to a different standard, and if I were actively bughunting all day and found a bug in some prominent piece of open source I would notify the developers without any expectation of compensation. And as mentioned, ZDI and other legitimate markets exist to reward researchers for their efforts and let them focus on the actual research, while ZDI does the other administrative legwork.

    1. Paris Hilton

      It doesn’t matter if bugs go away. What matters is if they can hurt me. If one of my applications crashes and nothing else happens, then it’s not a catastrophe. If one of my Web applications crashes and nothing else happens, then it’s not a catastrophe. But if one of my applications crashes because it’s been targeted and the attacker is able to poke a hole right through my application and into the innards of the operating system itself, then something is very wrong. Operating systems should never allow that. Good ones don’t, and this isn’t front-page news.

  7. Al

    People should wake up: responsible disclosure was a “favor” to give vendors the benefit of the doubt. Since they haven’t “woken up,” maybe consumers and IT professionals will realize simply reducing the footprint of products on their servers/clients is the way to go. Adobe Acrobat Reader, Adobe Flash, etc…remove this junkware and stick to standard based formats that don’t require addins/plugins/junkware. Enough with these craplications!

    1. Daniel Hall

      Al wrote: Adobe Acrobat Reader, Adobe Flash, etc…remove this junkware and stick to standard based formats that don’t require addins/plugins/junkware. Enough with these craplications!

      While I cannot say that you are wrong about these applications that have holes… I would say that if you applied your same logic to OS’s like M$ Windows, then your same solutions should apply.

      I use Adobe PDFs and I use Adobe Flash… I like them… I admit they have security problems (like most software does) and so I think they just need to make sure they fix their bugs as soon as possible.

      If a commercial software company doesn’t have a priority to fix their security bugs, I vote for giving them a reason to fix it as fast as possible.

      1. KFritz

        Somebody ‘chooses’ to use Adobe Flash? I’m an extremely under-qualified commentator compared to the experts who are commenting here, but is there another widely available, usable Flash application, comparable to Foxit Reader?

  8. b en

    Why don’t these prima donnas do what everyone else does when they find a bug (because security holes are just glorified bugs)? File the bug in the bug tracker and supply a patch.

    1. Steve

      Where exactly does one go to submit a bug report and patch for Acrobat Reader or Flash Player?

  9. Fred in IT

    Sitting here on the business end of the firehose – in the corporate world with some of the aforementioned products in my environment, my choice is (strangely) Full Disclosure immediately.

    If the hole is big enough and bad enough I can isolate and limit as I see fit. If necessary, take the system down until it’s patched.

    If I don’t have full disclosure then I’m at the mercy of the vendor, iDefense, etc. on what I can hear and when. And while they sip their chai tea latte and make a decision, I am at the mercy of some stump-headed hacker and don’t even know it.

    If some researcher knows it, then odds are a hacker does too. There are many more monkeys trying things on the black-hat side than the white-hat side. And the incentive is much higher as well.

  10. BrianKrebs Post author

    Once again, there are just some really fantastic comments here. I wanted to reply to one of them in a threaded fashion but then couldn’t decide which one.

    Even Microsoft is swayed pretty convincingly by the full disclosure route. Granted, this data is pretty stale, but back when I was measuring patch times for Microsoft, I found they consistently fixed stuff a lot faster when faced with full disclosure.

    http://blog.washingtonpost.com/securityfix/2006/01/a_time_to_patch.html

    “In cases where Microsoft learned of a flaw in its products through full disclosure, the company has indeed gotten speedier. In 2003, it took an average of 71 days to release a fix for one of these flaws. In 2004 that time frame decreased to 55 days, and in 2005 shrank further to 46 days.

    The company also seems to have done a better job convincing security researchers to give it time to develop a patch before going public with their vulnerability findings. In 2003, Microsoft learned of at least eight critical Windows vulnerabilities through full disclosure. Last year, this happened half as many times. ”

    From what I know of the security industry, researchers like to be acknowledged and maybe even compensated for their work. If FD advocates tend to go easy on Redmond, it may be because Microsoft has done a better job than any other vendor in courting the security research community.

    Again, old data. Probably worth revisiting sometime soon.

    1. Ken H

      Actually, you end up with the old ‘squeaky wheel gets the grease’ type of response. So you may have a low-to-medium-risk vulnerability getting patched, since it’s been disclosed, over a more serious vulnerability that hasn’t been disclosed. If you have open disclosure for all of X vendor’s product vulnerabilities, you will just end up back at the same issue of some vulnerabilities getting fixed faster than others, except now more vulnerabilities are known and can be exploited by a larger group of hackers/script kiddies.

  11. Liz

    By “widely-used” I presume they were referring only to MySQL? The rest are more of the ‘rarely-used’ variety in any sort of external server environment.

  12. Bob

    I heard this statement over forty years ago. It is more valid now than ever.

    “If builders built buildings the way programmers write programs, the first woodpecker that came along would have destroyed civilization.”

    1. KFritz

      I’m a builder. I enjoyed the analogy. BUT. A program is a much more labyrinthine affair. There are just as many bozos in construction, but it’s simpler. Remember that the building codes were called forth by an epidemic of electrical fires early in the 20th century. The insurance industry midwifed the electrical code, the oldest one.

    2. Paris Hilton

      I disagree that programmers build poorly. I know the programmers I work with vary in ability. But they’re not the problem and never will be. It’s those who tell us what to write and how to write it.

  13. SuperMario

    One thing everyone seems to gloss over with respect to the entire security “research” paradigm and “discussions” with affected vendors:

    Where do you draw the line between reporting an exploit and extortion/blackmail?

    With the DMCA and other draconian efforts being pursued by the powers that be, almost any kind of disclosure or even discussion where disclosure is implied can lead to legal repercussions – especially when attempting to deal with a large company.

    I say “screw them”. They should write proper, bug-free software in the first place. If they can’t do that, then they deserve to be exploited and lose their customers, much as the first poster eloquently described: capitalism put into effect.

    SM.

  14. Darryl

    This should make the vendors respond faster. The biggest thing is that even with so-called “responsible disclosure,” who’s to say some other non-security firm isn’t already exploiting the found bugs?

    If the public knows of bugs workarounds can be applied faster (if applicable) and the public can also pressure the vendors to respond faster rather than relying on one or two security firms.

  15. choombak

    In my experience, responsible disclosure of an issue affecting a large number of products takes a lot more time, since multiple product teams are required to coordinate on the fix and patch release, which is scary. If the researcher had the information for so long, chances are high that someone else also did.

    Full disclosure does put a lot of pressure on the management of the company, and at times also affects its stock, which gets the ball rolling pretty fast. My experience suggests that full disclosure gets things patched faster than responsible disclosure, and all the responsible-disclosure thingy is simply a joke.

    Most of the large organizations (except a few) do not take security seriously; there are enough egg-headed managers who simply don’t understand the seriousness of it. The intangibility of software, and the lack of insight into the real-life black-hat community, are partially responsible, but being an idiot more so. Engineering is more than willing to fix the issue, but management takes ages to schedule it for a release.

    Any sensible and reasonable developer understands and appreciates the importance of security vulnerabilities. Anyone who does not should not be programming, and would be better off selling bread.

  16. AlphaCentauri

    Software manufacturers know their products have vulnerabilities. They apparently aren’t losing sleep over the possibility that their customers are at risk during the interval they let vulnerability reports sit in a queue.

    Now they have to worry about all hell breaking loose if a major problem is disclosed to every hacker on the planet, causing a public relations meltdown as all their customers suffer attacks while they scramble to fix the hole. The only logical way to deal with that possibility is to revamp budget priorities to have a lot more of their own employees testing and patching their product first.

    And that’s a good thing.

    1. robotdog

      Your understanding of the process companies use to handle “disclosure” from the community is lacking.

      Companies that have been through this “fire drill” before Responsible Disclosure became the norm have learned to manage the PR aspect pretty well, including having enough support from the community (bloggers, journalists, power users, etc.) that it isn’t a big deal any more.

      Priorities _might_ change in regards to the existing queues but all of the steps of the process still need to happen, including verifying, scheduling, communicating, fixing/developing, testing, documenting, and inserting into the product update process. And they still have the same people and resources that they had the day before disclosure was made…

  17. TheGeezer

    Again, great article Brian!

    Two comments actually.
    1. The same thing should apply to registrars who fail to take down domains which are clearly being used for fraud.
    Botnets have their favorite registrars and for a reason. As I pointed out in a previous post, most of these vulnerabilities are taken advantage of after the installation of the malware via email campaigns referencing these fraudulent domains.
    Several registrars are good at taking these domains down, but most are like the software vendors. Their revenue and focus is on registering domains. Publishing a list of registrars with the most fraudulent domains might be in order also.
    In fact, maybe even a Cisco award for the most responsible registrars would be appropriate.

    2. There is a lot of blame in the posts on incompetent programmers. I doubt that incompetence is the problem. Every vendor probably has its share of competent programmers, but if the IT manager puts priority on software availability before software reliability, that is what you are going to get.
    I have lost count of the number of projects that I felt were needed but had to take on on my own time. Although I got praise from other department managers, the IT manager just wanted to be assured that I didn’t use company time for the project.
    So, again, give the IT/Programming department the financial incentive and you may find that there are many more competent programmers on staff than you realized!

  18. Fnord

    It’s pretty annoying to be on the receiving end of some juvenile “look momma, I broked it!!1!” so-called “security advisory” any way you look at it. To me, if the “researcher” finding the problem didn’t even contact the vendor first, that’s a black mark. This gets worse when you’re purposefully using software from vendors (or worse, open source projects) that have a track record of responsiveness to security problem reports and quick patching.

    Of course, as a “security researcher”, working with irresponsible, unresponsive, abusive vendors is no fun. But that doesn’t mean all of them are. Dropping “responsible disclosure” entirely just because some vendors annoyed you is being just as irresponsible and abusive as the bad vendors.

    In the end it’s not about bad vendors or some “researcher’s” fifteen minutes of fame, but about improving software, and no longer notifying vendors deprives users of the chance to vote with their feet; they get to enjoy the fallout unfiltered, directly from the “researcher”.

    One could argue that finding more of the same types of security bugs is basically a waste of time: once the type of attack is known and automated tools exist to find them, it’s a programmer’s job to run them and fix their software, not the not-so-“whitehat” skiddies’. What do you mean, “research”?

  19. Pierre

    As U.S. government agencies such as the FBI (“magic lantern”) officially exploit software “bugs” to infiltrate end-user systems, there is little wonder why U.S. software vendors have an incentive to release buggy code (and, sadly, this includes “security” tools such as firewalls, antiviruses, VPNs, IDS, etc., which expose new security holes monthly):

    http://www.wired.com/print/politics/law/news/2007/07/fbi_spyware

    The real question is: why so much hypocrisy?

  20. Kune

    From reading the comments on Intevydis’ blog, it appears that Intevydis’ problem with responsible disclosure is not the “responsible” part but the economic incentive. He apparently expects companies to pay for the unsolicited research. So these full disclosures could be seen as a kind of marketing of Intevydis’ services.

    Personally, I believe that products marketed to the public should be penetration tested regularly, and I wouldn’t even mind making it a regulation. While standard products have on average a higher quality than custom-developed software, they still have enough vulnerabilities that should be fixed by the vendors in the first place. This is particularly true for all the web services that businesses are now integrating into their processes.

  21. Tamas Feher from Hungary

    The Russian ruffians are only sour and full of hatred towards the USA. In the 1970s IBM cooperated with the CIA to sell the Soviets a supposedly black-market mainframe, which was used to control Siberian pipeline networks. The machine was in fact rigged with a backdoor, which made it cause a prearranged overpressure situation after months of operation. The pipeline exploded with the force of a small atomic bomb, causing billions of rubles in damages. This greatly humiliated the Russians and they didn’t dare to trust stolen free-world electronics any more, thus helping the West win the Cold War against the evil empire. Now these Russians want revenge, but their effort is simply pathetic. NSA can listen in real time to hear what Putin says while in the bathtub!

    1. Paris Hilton

      The Soviet Union and the Eastern Bloc countries mostly used clones of VAX computers with VMS, so there goes your conspiracy theory.

  22. robotdog

    I’d like to add my two cents to this thread…

    Having been on the receiving end of the disclosure process for a very large AV/security firm, I can tell you that we as a company did our best to work with the researcher and treat them with the respect and understanding that is commensurate with their position/role/background/personality. In fact, there are still some researchers that I keep in contact with because they are pretty smart/cool/funny people. And as far as I know we never paid any researcher, mainly because there is a secondary market for that (i.e. TippingPoint). But for all of those individuals that complain about how there is no testing or QA (or very little, or poorly done, etc.) on some of these vendors’ products, you are very sadly mistaken.

    Speaking from direct experience as a developer for said AV company, the QA staff was larger than some entire companies, demonstrating a huge commitment to quality code/secure products. I know firsthand how hard it is to write code that is not exploitable, period. Especially if you don’t own the entire “stack”. I also know what it takes to “pen test” a product, since that was my role for a while.

    This all leads up to the statement that Responsible Disclosure for the most part works; however, you have to be patient, for the “big machinery” of a large corporation doesn’t turn on a dime. It takes time to communicate, verify, schedule, prioritize, correct, test, and insert the potential fix into the product without introducing a new issue or unacceptably delaying the existing work. Work that may be performed by dozens and dozens of people and worth millions of dollars.

    As for Full Disclosure of a defect, it just makes the researcher look amateurish. As a company, we still needed to go through the same process either way the “defect/exploit” was submitted. We would watch the various “exploit outlets” or press our personal contacts to give us a heads up, if possible. Once we had the minimum info that we needed we would go ahead and put the bug into our system and assign it to someone to manage through our entire process. All the Full Disclosure did was tarnish the reputation of the company/product, which then required collaborating with the PR department in order for them to disseminate the relevant information in the most positive light possible. We as a company have much experience with the entire process, as do most of the other security firms, including Microsoft.

    If you can’t tell, I am not a huge fan of Full Disclosure and believe it isn’t as effective as it might appear.

    1. Paris Hilton

      Consumers who buy a product in good faith do not have to understand or have patience. But they have a right to know immediately if there is a risk. Imagine your car manufacturer was aware of a fatal flaw – you think it’s all right for the company to withhold that information? Of course it’s not!

  23. Matthew Wollenweber

    Vendors hate bugs. They want to push products out as fast as possible with as many features as possible. Security is usually an afterthought and an expense. They complain about security researchers that exploit their software, but really it’s just market economics. Vendors push their software out like they do to make money. Many researchers try to do the right thing and responsibly disclose bugs, but as it turns out they’re given a hard time and little or no pay for their trouble. However, there’s a market for bugs that has a huge premium. After all, if you’re a company, what would you pay to protect your databases? What would you pay to get insider data or see your competitor be embarrassed?

    So long as there is a market imbalance between what software vendors will pay for bugs and their real market value, researchers and companies will sell or publicly release exploits.

    To anyone that views such a sentiment as immoral: I’ve never seen a moral corporation, and generally speaking, a corporation has a legal responsibility to make a profit for shareholders. Good luck, Intevydis.

    1. Paris Hilton

      Citing legal responsibilities is not an excuse for sidestepping moral issues.

  24. William

    Just because one can elicit a desired outcome, does not make it right. I don’t subscribe to the end justifies the means argument … two wrongs don’t make a right. In other words, making the computing ecosystem at large pay the price for the reputed sins of software makers is just wrong.

    And for the sake of argument, let’s drop the word ‘reputed’. Is making the ecosystem at large pay the price the only way to engender change? Are we not smarter than this?

  25. AlphaCentauri

    You can’t assume the hacker making the disclosure is the first one to discover the exploit. It’s entirely possible it’s been in use by malicious entities for months by the time he discovers it. His disclosure gives vulnerable users a warning that the software they are using could put them at risk, so they can act accordingly while waiting for a patch. Everyone ought to think that way all the time, but we’re human, so a little wake up call helps.

  26. johansen

    Software defects are inevitable. Good programmers introduce fewer bugs than bad programmers, but nobody ever writes 100% error free code. Some of the comments in this thread assert that software vendors don’t care about bugs. That may be true in some circumstances, but where I work these issues get taken very seriously.

    I work as an OS developer for a large company. Our product has been under continuous development since 1983. Our customers have expectations about compatibility between software releases. As RobotDog observed, the development cycle of a large software project has many steps. This process exists to prevent quality problems for our customers. Changes get extensively tested and reviewed prior to integration. Release Engineering tests the product again before it is released. In order for a fix to be back-ported to a maintenance branch, it must remain in the development branch with no errors for a certain period of time. All of this work gets done to prevent the customer from receiving a product that has additional bugs, regressions, and feature incompatibilities.

    Full disclosure may get a vulnerability fixed faster, but there doesn’t seem to be any data on the underlying quality of the patched product. Is the vulnerability actually fixed? Did the release introduce other bugs for the customer? Were there functional or performance related regressions? If I were a customer, I would be curious to know if full disclosure increases the likelihood that I will get a low quality solution to my security problem.

    1. Paris Hilton

      If you are an OS developer for a large company with a product dating back to 1983, then you must work for Microsoft.

  27. DaFyre

    I consider myself new to the security field. In reading this thread, I have gone back and forth several times being swayed by comments to and from full disclosure or not.

    Looking at things from a network perspective, a lot of the vulnerabilities have something to do with remote-code execution, and the like. From a network-eye view, any network on which these applications reside should have some kind of intrusion prevention, network anomaly detector, or at bare minimum, an intrusion detection appliance.

    I say this because ultimately a lot of these exploits come back to the network level. The IPS should have a way for the network security folks to add new information on how to block the exploits as they are coming in — or at least as they are trying to leave the network.

    So responsible disclosure or not, end-users within a corporate network SHOULD BE armed with a way to defend themselves against 0-day exploits. The {average} home users are still stuck relying on their own AV products to handle this for them.

  28. PhantomTramp

    60-90 days is not reasonable. How much damage could be done in just one day by organized stump heads? I’m feelin’ ya, Fred.

    The Tramp

  29. J.StevenLivacich

    From my first view, I see this as a good effort towards better computer security; I shall follow this further. You were well recommended by knujon@coldrain.com.

    1. Paris Hilton

      Let’s hope Ryan doesn’t mind you posting an e-mail address in the clear like that.

Comments are closed.