April 8, 2014

Researchers have uncovered an extremely critical vulnerability in recent versions of OpenSSL, a technology that allows millions of Web sites to encrypt communications with visitors. Complicating matters further is the release of a simple exploit that can be used to steal usernames and passwords from vulnerable sites, as well as private keys that sites use to encrypt and decrypt sensitive data.

Credit: Heartbleed.com

From Heartbleed.com:

“The Heartbleed bug allows anyone on the Internet to read the memory of the systems protected by the vulnerable versions of the OpenSSL software. This compromises the secret keys used to identify the service providers and to encrypt the traffic, the names and passwords of the users and the actual content. This allows attackers to eavesdrop communications, steal data directly from the services and users and to impersonate services and users.”

An advisory from Carnegie Mellon University’s CERT notes that the vulnerability is present in sites powered by OpenSSL versions 1.0.1 through 1.0.1f. According to Netcraft, a company that monitors the technology used by various Web sites, more than a half million sites are currently vulnerable. As of this morning, that included Yahoo.com, and — ironically — the Web site of openssl.org. This list at Github appears to be a relatively recent test for the presence of this vulnerability in the top 1,000 sites as indexed by Web-ranking firm Alexa.

An easy-to-use exploit that is being widely traded online allows an attacker to retrieve private memory of an application that uses the vulnerable OpenSSL “libssl” library in chunks of 64kb at a time. As CERT notes, an attacker can repeatedly leverage the vulnerability to retrieve as many 64k chunks of memory as are necessary to retrieve the intended secrets.
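At its root, the flaw is a missing bounds check: the heartbeat handler echoes back however many bytes the request's length field claims, even when the actual payload is far shorter. The sketch below simulates that logic in Python purely for illustration (the real flaw is in OpenSSL's C code; the function name and buffer layout here are invented):

```python
def heartbeat_response(process_memory: bytes, payload: bytes, claimed_len: int) -> bytes:
    """Echo back `claimed_len` bytes starting at the payload, trusting the
    attacker-supplied length field -- the missing bounds check, simulated."""
    start = process_memory.index(payload)
    return process_memory[start:start + claimed_len]

# The request payload sits next to other data in the process's memory.
memory = b"PING" + b"user=admin&password=hunter2"

# An honest request echoes back only its own 4-byte payload...
print(heartbeat_response(memory, b"PING", 4))   # b'PING'

# ...but an inflated length field leaks the adjacent secret.
print(heartbeat_response(memory, b"PING", 31))  # b'PINGuser=admin&password=hunter2'
```

An honest request gets back only its own payload; a request that lies about its length walks off the end into adjacent memory, which is how the 64kb leaks work.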

Jamie Blasco, director of AlienVault Labs, said this bug has “epic repercussions” because it not only exposes passwords and cryptographic keys: to ensure that attackers cannot use any data compromised through this flaw, affected providers must also replace the private keys and certificates, after patching the vulnerable OpenSSL service, for each of the services that use the OpenSSL library [full disclosure: AlienVault is an advertiser on this blog].

It is likely that a great many Internet users will be asked to change their passwords this week (I hope). Meantime, companies and organizations running vulnerable versions should upgrade to the latest iteration of OpenSSL – OpenSSL 1.0.1g — as quickly as possible.
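Administrators can get a first-pass answer locally from the version string their OpenSSL build reports (1.0.1 through 1.0.1f are affected; 1.0.1g and the older 0.9.8/1.0.0 branches are not). A rough sketch in Python; the regex here is my own illustrative heuristic, not an official check:

```python
import re
import ssl

def in_affected_range(version_string: str) -> bool:
    """Heuristic: OpenSSL 1.0.1 with no patch letter, or patch letters a-f."""
    return re.search(r"OpenSSL 1\.0\.1(?:[a-f]\b|\s|$)", version_string) is not None

# Version of the OpenSSL library this Python runtime is linked against.
print(ssl.OPENSSL_VERSION, "->", in_affected_range(ssl.OPENSSL_VERSION))
```

Note this only covers the library the checking runtime is linked against; servers, proxies and appliances each need their own check.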

Update, 2:26 p.m.: It appears that this Github page allows visitors to test whether a site is vulnerable to this bug (hat tip to Sandro Süffert). For more on what you can do to protect yourself from this vulnerability, see this post.


171 thoughts on “‘Heartbleed’ Bug Exposes Passwords, Web Site Encryption Keys”

  1. Andrew Hay

    It should be noted that the potential number of exploitable targets is far less scary than the potential number of users-per-exploitable-target.

    1. BrianKrebs Post author

      No question. The worst part is, an attacker with this exploit can just run it over and over and suck down credentials left and right.

      1. Wladimir Palant

        Luckily, it seems that you will be getting the same memory areas most of the time (at least that’s what I have seen in my testing). For most larger websites the window of opportunity was at most a day (yes, even Yahoo), so the chances of hitting the private key of a website during that period seem low. Then again, who knows – maybe the bad guys have been abusing this bug for months already. So there is definitely nothing wrong with replacing certificates and revoking the old ones.

        It’s similar with sensitive data like passwords or credit card numbers – they go over the wire relatively rarely and the chances of seeing them are rather low. Yet session cookies are something that one definitely has to consider compromised; for all we know, the bad guys might have spent this day mining session cookies for high-profile websites.

  2. Roger

    Has anyone seen the actual memory dump code in the wild yet? I know the test code is out, but I’m looking for code to dump memory contents and not just test for vulnerability. Have yet to see it. This test is out: http://www.exploit-db.com/exploits/32745/ and I am assuming you just send a call to /dev/kmem through the socket in s.send(hb). Sorry, too lazy (busy) at the moment to write it.

    1. BrianKrebs Post author

      Yes, I have seen the exploit in action and it’s nasty. If the site is vulnerable, it dumps 64kb of cache. If not, it says either connection refused or “server returned error, likely not vulnerable.”

    2. Roger

      Ok I got the exploit to work. Just imported pprint and then pprint(pay) and you get all of the goods. Wow this beast is bad, very bad.

  3. Steve

    Brian,

    “OpenSSL versions 1.0.1 through 1.01f.”

    Should the latter version be 1.0.1f?

  4. Ann

    Can you please clarify for us non-technical users what this vulnerability means? If I simply visit a website on my browser that has this vulnerability, could a hacker penetrate my computer and steal my information? Or do I have to actually sign into a vulnerable site in order to have my account info and password stolen? Thank you.

    1. BrianKrebs Post author

      It means it might be a good day to stay off the Internet 🙂

      Just kidding…sort of. The problem isn’t in your browser. It’s with software that some web sites use to encrypt/scramble/protect sensitive information that users submit to the sites. If you happen to log in (submit username/password) at a site that is vulnerable, there is a decent chance that someone will steal your password unless and until the site upgrades to the latest (non-vulnerable) version of OpenSSL.

      The problem is that short of consulting lists like this one (https://github.com/musalbas/heartbleed-masstest/blob/master/top1000.txt) it’s not readily apparent to the end user which sites are fine and which are still vulnerable.

      1. Steve

        I’m getting conflicting results from some sites promising a read on which sites are vulnerable. For example:

        http://filippo.io/Heartbleed/

        Tells me that http://www.suntrust.com is “All Good” and seems not to be affected.

        While
        http://www.lastpass.com/heartbleed

        tells me

        “The SSL certificate for http://www.suntrust.com valid 7 months ago at Aug 27 03:51:52 2013 GMT.
        This is before the heartbleed bug was published, it may need to be regenerated.”

        And when I go to Suntrust’s site and check the certificate by hand, I see it is running TLS 1.2 – which is vulnerable.

        Does anyone have an explanation? Are the folks behind

        http://filippo.io/Heartbleed/

        running a test exploit?

        Does their “All Good” mean that Suntrust has disabled the heartbeat function?

        1. SteveSteve

          The FAQ says that they use a live exploit. But none of these tests or lists will tell you if there was a vulnerability the day before yesterday.

          “Is this a live test? Is it a full exploit?

          Yes, when you hit the button I actually go to the site, send them a malformed heartbeat and extract ~80 bytes of memory as proof, just like an attacker would. I don’t check versions or make assumptions, I look for the bug.”

  5. Ralph Daugherty

    The POS memory scans used to grab data may not have been done as cleanly as with a memory request API – I have no idea – but it’s the same concept.

    That is one reason why I and others said in earlier comments that IBM i (formerly AS/400 and iSeries) has never been involved in these data thefts, be it web serving, socket serving, database serving, app serving, etc. Security is built throughout the operating system.

    Commodity systems seem cheap until you end up being a Target.

    Thanks again for all the great reporting and explanations, Brian.

    1. bob

      If you’ve got anything running in the linux environment on your AS400, you’re probably using OpenSSL and are just as vulnerable as anyone else.

      The reason there are few attacks against OS400 is simply that there are too few for anyone to care about.

      That being said, I’m sure there are targeted attacks against it but we’ll probably never hear about them because they’ll only be spotted by dedicated security teams tracking down a compromise against the sort of company that’ll do its best to keep schtum about it.

      I haven’t worked on AS400 for some 10 years but security just wasn’t considered by anyone I worked for. I wouldn’t be surprised if the vast majority of software had basic Little Bobby Tables, etc issues.

  6. Mike

    Can you confirm that this will mitigate the bug?

    -DOPENSSL_NO_HEARTBEATS.

    We have servers with this already set.

    1. Carl 'SAI' Mitchell

      Yes. Without the ability to send heartbeat packets, the bug can’t be triggered. Of course this will cause sessions to terminate when data is not being transferred, so it’s best to update to the latest version ASAP anyway, unless you would have turned off heartbeats regardless.

  7. Ralph Daugherty

    Yes, posting on the Heartbleed SSL bug thread, which says app memory was obtained in 64k chunks via an API call. The POS memory scan was similar in that other code accessed the private app memory and, like I say, may not have used as simple a memory request API – I don’t know – but it accessed the app memory to obtain the data in a similar manner.

    At least that’s what I got out of it.

    thanks

  8. Bobby

    Just think how many banks are vulnerable. It’s going to be so bad; not sure it gets much worse than this.

    1. W Sanders

      A lot of banks run Windows, so this is one case where Windows is a win :-P.

      Also, if you have a proprietary firewall that is the endpoint for your SSL, and it’s not based on one of the vulnerable versions of SSL, you’re OK.

      A lot of those firewalls are probably running a patched-up 0.9-something….

  9. Silemess

    I’m not seeing a good timeline for how long this vulnerability has potentially been known. Using the known vulnerable version date, I can see pain.

    Using OpenSSL’s change log, I can see that they moved from version 1.0.0h to 1.0.1 back in May of 2012. Ouch.

      1. Silemess

        2 years for the vulnerability, but my question is more about how long the vulnerability has been known and exploited?

        Meanwhile, proceeding with the worst-case scenario: assume that most log-ins made in the last two years are vulnerable. So any site you currently use needs to be checked to see if it’s secure (now?), and then the password used there updated. Lather, rinse, repeat.

        1. Jon Marcus

          No way to know for how long the vulnerability has been exploited. Using it wouldn’t leave any trace. And the bad guys aren’t likely to tell you what they’ve done…

  10. Chris Gervich

    I have found fixes for Linux distributions, but is there a Microsoft IIS fix yet?

    1. Dalmatian90

      IIS is not affected (longer comment to follow)

    2. Dalmatian90

      IIS with more explanation:

      IIS doesn’t use OpenSSL, so it is not affected DIRECTLY by this bug.

      Now you could have one of several scenarios that Microsoft ends up being an innocent bystander (for once) that gets caught in the carnage:

      — Windows server running a non-Microsoft web server, like Apache, which is using OpenSSL.
      — Windows server sitting behind a Linux proxy or load balancer which is vulnerable
      — A shop with a mix of Linux & Windows. Private key is exposed on a Linux box, you can now setup a Man-in-the-Middle attack that affects both Linux and Windows servers.

      Might be some other ways I haven’t thought of!

      1. E.M.H.

        You seem to have covered the ones I can think of.

        My colleague here at work and I were noodling around thinking how this could conceivably have an effect in a virtualized environment, and so far we’ve come up with nothing unusual or unexpected. The VM instance is still either a Windows host that runs IIS and shouldn’t be vulnerable or a *nix host that runs something with OpenSSL and is susceptible, and we couldn’t come up with scenarios where the hypervisor introduces this vulnerability. If anyone else can, feel free to chime in, but neither of us came up with any decent scenarios. Admittedly, though, I’m not versed in Xenserver or VSphere administration to any degree, so this was only amateurish noodling at best on my part.

    3. timeless

      As noted, Microsoft’s software isn’t directly vulnerable to this.

      More amusingly, because of an incompatibility between IIS and some part of Heartbeat, Node.js hasn’t been vulnerable for a while:
      https://github.com/joyent/node/issues/5119

      So, I, as a Node.js consumer thank Microsoft for their bug. 🙂

  11. Chris

    We are a Microsoft IIS shop, and when we go to these testing sites listed above (http://rehmann.co/projects/heartbeat/ – thank you Luke), and enter our web site in this link – it comes back saying that our site is vulnerable. There is no Linux involved at all. We are running SSL 3/ TLS 1.2 on this site.

    There is no mention at all on Microsoft.com about the Heartbleed bug – I was hoping there would be a Microsoft patch issued to take care of this issue.

      1. Dalmatian90

        “The hacker would only get data from the proxy not your site, so these are in this context false positives.”

        What is the difference? I don’t believe there is one, so it’s a legitimate problem, not a false positive.

        They would have access to just as much data from a proxy as they would from a web server with this vulnerability.

        For the proxy to work at Layer 7, it takes care of the Layer 5 and 6 functions of TLS/SSL. It has to decrypt the data to examine it in order to apply content rules, handle SSL offloading, and do all those other good things you use proxies for.

        If you have a TCP/IP load balancer functioning at Layer 3 it wouldn’t be running OpenSSL since that works up at Layers 5 & 6.

  12. Simon Cousins

    This issue makes it even more critical to have a different password for every online system and service. Using a common password means that when (not if) that service is exploited then the attackers have the credentials for use elsewhere.

    For me, LastPass continues to be a critical tool for existing on the Internet today.

  13. LRC

    I just checked yahoo.com with the git tool and it registers as not vulnerable.

    I was one of those “lucky” Yahoo mail users whose account credentials were hacked. That was (and still is) a disaster. Could this have been the exploit that was used to gain access to our accounts?

    1. Sasparilla

      Possibly, since it’s been out there for 2 years…but it’s extremely hard to pick out afterwards that “it was this thing” that led to your accounts being hacked (as there are so many ways for this to happen)…we’d just be guessing.

  14. Chris

    Dalmation90 – thanks for your help. It turns out that we think it’s our Barracuda Web Application Firewall that is vulnerable (not our IIS site). We’re contacting Barracuda support now.

    1. Christian Kopacsi

      Barracuda WAFs are vulnerable; support states a new firmware is coming within 24 hrs.

  15. Chris R.

    GrubHub.com is vulnerable still, guess I will order my food somewhere else for today

  16. Alex

    Question for tech people: what about Yahoo mail accessed via an iphone or smartphone device? Would you recommend turning off push today?

    1. Mike

      It does not matter what device is used to connect to the vulnerable site. If your password is in the site’s memory at the time a hacker hits, then it can be stolen. That is what is so bad here. It is a server-side attack. So, yes, in an abundance of caution, I would turn off that feature.

  17. reedmb

    It is also “epic” on the client side. If every password that needs changing takes say 5 minutes (at best 3), then changing 25 passwords will mean 1-2 hours in front of a computer with no salable work created! It does not matter that passwords are in a wallet; they still need to be changed.

    1. Chriz

      The point is: with a wallet, you can use a different password for each and every web account. Therefore, you don’t have to replace all of them if one gets compromised. Of course, not knowing whether a website was affected in the past by this vulnerability may require that you change most of your passwords.

  18. SK

    Thanks for your posting every time.

    SK from Korea.

  19. TheOreganoRouter.onion.it

    Even Easydns (dot) com is warning about this issue to all of its customers. When they put up warnings like this, you really start to take notice.

  20. Mike

    Let the fun begin… let’s think of how many home routers and devices are vulnerable. I tested my NAS and it was vulnerable, as is my DD-WRT router… then I looked up my OpenVPN version and they say it’s also vulnerable.

    Turning everything off and going to bed. Sigh.

    1. Infosec Geek

      If any of those home devices are accessible from outside your private enclave you’ve got worse problems than this.

      Sleep tight. Pleasant dreams.

  21. Infosec Geek

    Just finished several hours of OT working this issue. Thanks, OpenSSL team!

    It may not be quite as bad as the hype machine would have it. For one thing, any site using Akamai is shielded. IIS is not vulnerable, of course. And the alarmist fearmongers are stressing that even client side could be vulnerable, but most common browsers aren’t. NSS is a different implementation so Firefox and Chrome are safe, and this time MSIE is an advantage.

    So yeah. Pretty bad. But not the end of the world as we know it. Tomorrow will be business as usual. Another bug, another exploit. All in a day’s work.

  22. Paranoid

    Thanks Brian for all the work you do–genuinely! A question: being my paranoid self, I removed all certificates and issuers from my trusted/accepted lists. How will I know when and which certificates I can start accepting again (ie. which certificates are new and take into account the threat of heartbleed)? Right now, of course, I can’t use the internet, (using a different computer to post) because all websites are regarded as having untrusted certificates. But I recognize that a trusted site does not equal a trustworthy certificate and I do not want to blindly accept a compromised key for all sites simply because I trust the single site I am trying to access. (please correct me if there’s something I’ve misunderstood). Thanks!

    1. timeless

      You can’t.

      Imagine that you are dealing with a private bulletin board system.

      Imagine that there are five administrative accounts, each of which have had their password stolen using this exploit. Now, one of the admins goes out and:
      1. Upgrades OpenSSL on all their computers. (You can test the OpenSSL version for individual computers to which you connect, although short of connecting to all computers on the Internet at all ports, you can’t prove that there isn’t another affiliated computer which isn’t still affected – and even connecting to all computers could miss out on load balanced servers)
      2. Creates a new certificate for all their computers. (You can check the issue date for each certificate and ask Netcraft if a service once used OpenSSL – although this isn’t a great check)
      3. Changes his / her password.

      That sounds good, right?

      What’s missing?
      4. Your password (and any session cookies issued to you) could be available.

      OK, so you reset your password (you can control this)

      5. The four other administrative accounts could have had their passwords stolen. There’s no way for you to know if those accounts have had their passwords reset since the breach.

      A site could assert that all administration accounts have had new passwords created.

      6. Say there’s a sixth account which as of today isn’t an administrator, but whose password hasn’t been reset. One of the current admins promotes it to admin. Now, what was after the previous step a “safe” system, is no longer safe.

      A site could retain information about the last date an account’s password was changed (some systems do this), and admins could force prospective admins to reset their password before promotion.

      But, you can’t know this has been done.

      7. At best, you can wait for an email or similar notice indicating that ALL account passwords have been reset.

      8. Even this isn’t a perfect all clear. Lazy people can pick their old potentially stolen passwords again, at which time, the site is vulnerable again.

      You’ll have to trust the site not to have silly users for this.

      A site could in theory retain hashes of old passwords and refuse to allow new passwords if their hash matches a previous password hash. But…
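      That last idea can be sketched roughly as follows; this is a hypothetical illustration (the class name and parameters are made up), not any particular site’s implementation:

```python
import hashlib
import os

def _slow_hash(password: str, salt: bytes) -> bytes:
    # Salted, iterated hash so the stored history is not trivially reversible.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

class PasswordHistory:
    """Hypothetical sketch: refuse any new password whose hash matches an old one."""

    def __init__(self) -> None:
        self._history: list[tuple[bytes, bytes]] = []  # (salt, digest) pairs

    def set_password(self, password: str) -> bool:
        if any(_slow_hash(password, salt) == digest for salt, digest in self._history):
            return False  # recycled (possibly stolen) password: refuse it
        salt = os.urandom(16)
        self._history.append((salt, _slow_hash(password, salt)))
        return True

h = PasswordHistory()
print(h.set_password("hunter2"))  # True: first use is accepted
print(h.set_password("hunter2"))  # False: reuse after a reset is refused
```

      A real deployment would also cap the history length and use a vetted password-hashing library.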

    2. JasonLA

      You shouldn’t need to remove the Trusted Root CA certs. They wouldn’t be impacted by this (professional CA organizations don’t keep their root private key connected to the internet so it wouldn’t be subject to scraping).

      Really, there’s no way to know when each site is safe without knowing a) when they patched the vulnerability and b) if they have reissued their cert since then. If you knew that, you could write a browser plugin to block any site that was vulnerable and has either not fixed it or whose cert is older than the fix date. But that would be a pretty huge effort to track the vulnerability fix date for the entire Internet.

      Also, any site that does its SSL operations in an HSM using a hardware-protected private key wouldn’t be vulnerable to having the private key stolen even if it were otherwise vulnerable.

        1. Paranoid

          Good info and points, thanks to all three of you!

  23. John Eichinger

    You’re promoting Trusteer on this webpage.
    I would never install Trusteer on a computer.
    It interferes/disables your previously installed anti-virus, prevents your browsers from working properly and is very difficult to uninstall.
    If you are capable of making the connection that Trusteer is the culprit for your non-functioning products then when you go to uninstall it, up pops a message “If you are uninstalling because your browser is not working properly, please contact us so we can connect to your PC and correct the problem.”
    Who foists a product like that on an unsuspecting public??

    1. Justin Bonar

      You have no idea what you are talking about. I have worked in internet security and financial technology for most of my career. I joined Trusteer and worked closely with the core team there from the company’s beginnings through to its acquisition by IBM. Okay, you are one of the few people who had some kind of system configuration issue. Your anomalous experience doesn’t qualify you to dismiss one of the most effective security defenses ever provided, particularly when you make your irrelevant comment in the context of a discussion about the Heartbleed OpenSSL vulnerability. Should I start talking about my cousin Ernie’s problem with his seat belts on his old Ford pickup truck? I mean that’s kind of a security problem too!

  24. Josh K

    Do you know what the story is for sites such as Facebook, which are not listed as vulnerable, but which have a login widget/API used by thousands of sites that may be vulnerable? For instance, many sites’ comment sections allow users to log in via Facebook to comment. Could the Facebook login information have been leaked through this process?

    1. timeless

      I can’t speak authoritatively, but in general, assuming no MITM, what’s issued by such systems isn’t a password, but a per site token. Such tokens could have been stolen, just as per site session cookies could be.

      Your Facebook password wouldn’t be exposed directly, but the ability for someone to impersonate you would exist until you convinced that site to expire the relevant tokens. The only obvious way to do this is to change your password (and pray that this expires any current credentials – it should, but there’s no guarantee).

      I’d be more worried about administrative accounts whose credentials are stolen – as they could be used to change the site and make it do whatever – including not use the real Facebook login system but instead provide something similar to which you naively provide your real password.

      Basically, until everyone rebuilds their systems and changes all passwords (remember that many people share passwords across sites), the world isn’t really safe.

      Even if a site admin changes his site password, if his email account password was stolen, someone could reset his account using the compromised email account and delete the email evidence. Until this person tried to log in with the previous password, there’d be no indication of a problem (most people don’t log into all systems for which they have credentials every day…).

    2. JasonLA

      No, those sites don’t have your Facebook login. That’s done using OAuth. You might Google that for an overview. It’s a clever solution that allows two sites to share your identity without sharing your password.

  25. rvdm

    I would be interested in a story that highlights how these top1000 sites might be impacted in a different way. Assuming a third party used the exploit yesterday to retrieve part of the private key (which is perfectly possible), this opens up the possibility to decrypt traffic encrypted with this key in the past.

    Suppose you have a datacenter with some sniffed but encrypted traffic somewhere, this might just have been the key/exploit you were waiting for. In that sense, this is a rare example of an exploit supporting time travel; things you do today, allow you to decrypt what you dumped a year ago.

    Also consider organisations often swap out their certificate, but do not replace the private key. Such sites could even be vulnerable to decryption of traffic that was dumped before the last SSL certificate renewal.

    1. Lee Church

      I agree with your concern. In the past, breaches have been discounted with such rationale as “it was encrypted” (by perhaps the compromised SSL lib?), so the user data was considered safe.

      What this breach shows is that there is future risk even to past data transactions (as you put it, ‘time travel of the exploit’).

      I had a post (that discussed the re-insurance aspect) that I don’t think made it on the blog (perhaps a technical issue) so I won’t go through all of it, but the accounting that is being done simply doesn’t consider the risk of handling past data. We are piling that tail risk up until events like the SSL bug make those ‘unrealized risks’ all too ‘real’. That can be well after the folks taking the risk have exited (with their profit).

      More generally, the combination of various exploits is basically the inverse of the ‘detection by intersection’ (the common point of purchase) approach used by banks to identify compromises. This isn’t a quirk, it’s the inverse of the relationship. Instead of card A and card B being compromised at a merchant over some period of time, it’s exploit A and exploit B join to compromise a merchant over some period of time.

      This particular breach may or may not actually be a good example of the proverbial ‘time travel’ exploit (I’ll note that folks may dismiss your term as ‘tin foil’ because you used ‘time travel’), but the SSL exploit does serve as notice that future exploits can be coupled with past exploits in various combinations (and more than two), which then results in compromised information.

      Right now, I suspect folks are too busy running around patching and resetting pws to think about the past data that was thought safe because it was encrypted.

      1. rvdm

        Well, unfortunately it doesn’t even matter what openssl library was used in the past – as long as the key was recovered.

        I fully agree with you on the ‘combination of exploits’ thing – but then again, this is true for most compromises; they build on information gathered from multiple sources or services. Combining exploits can generate attack vectors that are difficult to predict.
        I guess in a way, vulnerability of larger systems is a function of their complexity – any sufficiently complex system becomes too chaotic to manage. Food for thought, and one of the basic reasons to make simple designs – which is often not easy.

        With regards to the tin-foil-hat statement, here’s another one: 64k of memory is turning out to be more than enough for quite some sysadmins right now 🙂

        1. Lee Church

          I’m thinking the same thing. As long as the data stream was encrypted the same, a breach of library version A which renders the key will compromise a data store encoded by library version B.

          If they used the vulnerable library at any time, then there is some chance the key was obtained. If the certificates remain in place during a library update, then even sites that check out fine now, may be compromised (if they didn’t update certificates).

          If the libraries could be updated without updating certificates (that is my understanding), then we could have a lot more potential data security compromises than currently estimated.

          It becomes “any site that ran the bad code library at any time in the past” rather than “any site running the bad code library”.

          Thus these ‘checker’ apps may give folks a false sense of security (yet I do think they have the right idea).

          As Brian pointed out, there is going to be a whole lot of password changing going on. And that will make it harder to see certain other exploits such as password resets (like the recent @N twitter account hack). I would not be surprised to see a damped sine wave of exploits (unless things get really out of control).

          Anyway, thanks for making it clearer that previous data encrypted with a non-vulnerable lib also has risk if the vulnerable lib was ever used. (That won’t stop the PR though… “we don’t use that lib, so don’t worry.” 🙂)

          Taking it one step further, if a site ever used the vulnerable library, then users should change their passwords (when fixed) even if the sites have already changed their certificates. To be even more secure, changing the logon ID credential (user name) and other data associated with the system would make residual information of logon IDs less valuable as well.

          Unfortunately, there is point where most folks will say “good enough”, but the SSL exploit even with a change of PW will mean accounts are less secure than they were before (actually they were less secure before, but folks just didn’t know they were less secure).

          Any way I look at it, it’s a mess.

      2. timeless

        So, a note about cryptography and wires…

        While users typically think of something that is encrypted as “uncrackable,” that really isn’t what it means. Often it means “not practically crackable with a significant portion of the world’s total computing power for the next few years.”

        There used to be contests which showed this property. Distributed.net amongst others pooled resources to crack various sample messages.

        I don’t have any real numbers for this, but given cryptography used for wire messages (i.e. things sent from one computer to another over an unsecured path), we like to hope that a message won’t be crackable for say 10-20 years. But given advances in computing power and drops in cost, it’s likely that 20 years after a message is sent, someone could have enough computing power to crack it.

        While I’m not a big fan of password expiry policies, this is one of the reasons passwords should have an expiration – and should never be recycled.
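        The back-of-the-envelope arithmetic behind that hope is simple enough to sketch; the guess rates below are invented round numbers, purely for illustration:

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def years_to_exhaust(key_bits: int, guesses_per_second: float) -> float:
    """Worst-case time to try every key in a key_bits-bit keyspace."""
    return (2 ** key_bits) / guesses_per_second / SECONDS_PER_YEAR

# A 64-bit keyspace falls in months at a (made-up) 10^12 guesses per second...
print(years_to_exhaust(64, 1e12))
# ...while 128 bits stays unreachable even if hardware improved a trillionfold.
print(years_to_exhaust(128, 1e24))
```

        Each added key bit doubles the work, which is why advances in computing power steadily erode the safety margin of old captured traffic.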

        1. Lee Church

          Very true, it comes down to playing for time (practical).

          One note: resetting PWs still leaves the account ID compromised (the bad guys should not advance their knowledge horizon, or it’s a breach). Folks who wish to be a bit more secure should really change their account name as well, to reset attempted access (impractical at present).

    2. John Austin

      There are a lot of ifs associated with this issue. The vulnerability would expose 64k of data on servers which support the Heartbeat protocol. If heartbeat support was compiled out, nobody would be able to exploit the vulnerability.

      One big if is the layout of memory on the server. If there is no useful data in the 64k region, nothing is exposed. There is no public evidence that anyone has written and employed an exploit. Your private data would only be exposed if it were present in memory when the exploit was run.

      Should software developers who took the trouble to erase in-core passwords be rewarded for it? Isn’t it a best practice to clean up? You can’t overwrite a string in Java until garbage collection, but you can erase confidential data stored in Java arrays, and in many other languages.

      1. JasonLA

        OpenSSL is a C library, not Java, so garbage collection isn’t a concern. And I read elsewhere that they were taking a performance shortcut and not freeing the memory and re-mallocing it, so it doesn’t get cleared properly.

        1. John Austin

          Yes, but that does not preclude the use of Java elsewhere in the application stack. A Java application might use OpenSSL through JNI, but who knows what could be in the memory before the response was generated.

          A private key is the Holy Grail of penetration. Session keys are less important because they are created uniquely for every session. In days to come we will hear a bit about the key generation functions used. Only the inputs to key generation would help an attacker.

          Hopefully the people handling passwords don’t leave them around in the clear for collection by an attacker.

          I just completed Dan Boneh’s Crypto I on Coursera so now I know what to look for. …

          1. JasonLA

            I would think that using OpenSSL via JNI would be very rare. JNI has lots of disadvantages and Java has usable native crypto so I don’t see a compelling case to do it.

            And if you’re talking about authentication code (which is very reasonable with OpenSSL), then it is very likely that some cleartext password would be in memory. Passwords don’t arrive from the client hashed. Generally, you collect the password and do one of: 1) hash it in app code (e.g. if you’re using a database for user data), 2) connect (hopefully over SSL) to an LDAP (or similar) user store or authentication API that hashes the password on its side before comparing it to the real password, or 3) encrypt the password with a symmetric key if you have a compelling case to reverse the password back to cleartext (this should be rare but isn’t as rare as I’d like).
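            Option 1 might look roughly like this sketch (illustrative only; a real system would use a vetted library and a tuned work factor):

```python
import hashlib
import hmac
import os

def register(password: str) -> tuple[bytes, bytes]:
    # Store only a salt and a slow, salted hash; never the cleartext password.
    salt = os.urandom(16)
    return salt, hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def verify(password: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    # Constant-time comparison, so timing doesn't leak how much matched.
    return hmac.compare_digest(candidate, stored)

salt, stored = register("correct horse battery staple")
print(verify("correct horse battery staple", salt, stored))  # True
print(verify("wrong guess", salt, stored))                   # False
```

            Even so, the cleartext still transits memory between the TLS layer and the hash call, which is the window Heartbleed exposes.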

        2. Julio

          Freeing memory doesn’t clean it up. The only way to clean up memory is by writing blank or new data over the memory block you want to clean up.
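          In Python terms, that means keeping a secret in a mutable buffer and overwriting it when done; immutable str/bytes objects can’t be scrubbed in place. A toy sketch:

```python
def scrub(buf: bytearray) -> None:
    """Overwrite a secret in place, so memory freed later doesn't retain it."""
    for i in range(len(buf)):
        buf[i] = 0

secret = bytearray(b"p@ssw0rd")
# ... use the secret, then scrub before the buffer goes out of scope ...
scrub(secret)
print(secret)  # bytearray(b'\x00\x00\x00\x00\x00\x00\x00\x00')
```

          (Even this is best-effort in a garbage-collected language, since earlier copies of the secret may already exist elsewhere in memory.)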

Comments are closed.