Late last month, hackers allied with the Syrian Electronic Army (SEA) compromised the Web site for the RSA Conference, the world’s largest computer security gathering. The attack, while unremarkable in many ways, illustrates the continued success of phishing attacks that spoof top executives within targeted organizations. It’s also a textbook example of how third-party content providers can be leveraged to break into high-profile Web sites.
The hack of rsaconference.com happened just hours after conference organizers posted several presentation videos from the February RSA Conference sessions, including one by noted security expert Ira Winkler that belittled the SEA’s hacking skills and labeled them “the cockroaches of the Internet.”
According to Codero CEO Emil Sayegh, the attackers spoofed several messages from Codero executives and sent them to company employees. The messages led to a link that prompted the recipients to enter their account credentials, and someone within the organization who had the ability to change the domain name system (DNS) records for Codero fell for the ruse.
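The DNS-record tampering described above can be caught early by polling resolver answers and comparing them against a known-good list. Below is a minimal, stdlib-only sketch; the domain and addresses are placeholders, and a real monitor would also query the registrar's API and multiple independent resolvers:

```python
import socket

# Known-good A records for domains we control (values are placeholders).
EXPECTED = {
    "www.example.com": {"93.184.216.34"},
}

def check_domain(domain, expected_ips, resolve=socket.gethostbyname_ex):
    """Return the sorted list of unexpected IPs the domain resolves to."""
    try:
        _, _, ips = resolve(domain)
    except OSError:
        return []  # treat resolution failure as a separate alert path
    return sorted(set(ips) - set(expected_ips))

def audit(expected=EXPECTED, resolve=socket.gethostbyname_ex):
    """Yield (domain, unexpected_ips) for every domain that has drifted."""
    for domain, ips in expected.items():
        drift = check_domain(domain, ips, resolve)
        if drift:
            yield domain, drift
```

Run from cron and page on any yield; a check like this could have flagged the redirect within one polling interval, though it cannot help if the attacker also controls the monitoring host's resolver.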
Sayegh said the attackers followed the script laid out in Winkler’s talk, almost to the letter.
“Go look at minute 16 from his talk,” Sayegh said. “It’s phenomenal. That’s exactly what they did.”
Amit Yoran, senior vice president of products and sales at RSA, said the SEA often finds success by exploiting trust relationships between content providers on large Web sites. In short: targets are only as strong as their weakest link.
“Unfortunately, complexity is very often the enemy of security,” said Yoran, emphasizing that he was speaking for RSA and not for the RSA conference Web site, which is a separate entity. “If it’s a content-rich and interactive Web site, it only takes one simple slip for the site to be hacked.”
The SEA has had great success by spoofing the boss and by targeting weaknesses in third-party content providers. Last year, the group claimed credit for defacing the Web sites of Time, CNN and The Washington Post after gaining administrative access to Outbrain, a third-party system that provides “Other stories from around the Web” recommendations at numerous sites. Outbrain later acknowledged that the incident was the result of a phishing attack sent to Outbrain employees that spoofed the company’s CEO.
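A crude first-line filter for the boss-spoofing pattern described above simply checks whether a message's display name claims to be a known executive while its address sits outside the corporate domain. A minimal sketch follows; the executive list and domain are illustrative, and real mail filters would also verify SPF/DKIM rather than rely on header strings:

```python
from email.utils import parseaddr

# Illustrative values -- a real deployment would pull these from a directory.
EXECUTIVES = {"emil sayegh"}
COMPANY_DOMAIN = "codero.example"

def looks_spoofed(from_header):
    """Flag a From: header whose display name claims to be an executive
    but whose address is outside the company domain."""
    name, addr = parseaddr(from_header)
    claims_exec = name.strip().lower() in EXECUTIVES
    domain = addr.rsplit("@", 1)[-1].lower() if "@" in addr else ""
    return claims_exec and domain != COMPANY_DOMAIN
```

This catches only the laziest spoofs, but those are exactly the kind that keep working.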
From my perspective, what’s truly remarkable about these attacks from the SEA is that they could be so much more damaging, and yet this group appears to do little more than use each attack to spread a propagandist message. Unfortunately, malware purveyors don’t care about propaganda, and frequently abuse trust relationships between and among Web sites to spread malicious software.
All websites are vulnerable, some more than others; what separates a successful attack from a failed one is usually the human factor.
ps: never call me cheap ……..
It seems that the human element is the weakest link – if we could just convey to our fellow employees that all it takes is ONE person to compromise the system, we may have a fighting chance!
I know that this article will be read by a few of us here at the office, so hopefully it makes them more aware of their actions both in the office and at home.
Keep up the work, Brian!
And yet it only gets worse: interdependencies among websites keep increasing, and nobody seems to care.
Thank you for your work on Adblock Plus!
Ira Winkler may be a recognized expert on computer security, but he proved himself to be rather average with respect to world and KOS affairs.
Somewhere between 100,000 and 160,000 people are estimated to have died in Syria’s civil war, a great tragedy to be sure. However, that figure pales in comparison to that of the DPRK which has around 200,000 people imprisoned in labor camps, with millions having died since Kim Il-sung founded the country after WWII. Not to mention its eccentric YouTube channels.
And surely the skids who have swatted and otherwise annoyed our fearless leader would be more deserving of that moniker.
For more on Winkler’s SEA dalliance, see the Computerworld story referenced below:
Remember the name of the blog you are reading?
Maybe that should have been KoS.
Would someone please replace java? Please? Please? Please? Please?
From the primer linked in the article:
Both are vulnerable to risks.
They look similar, but inside they work very differently.
Make them only able to use a guest account in Windows. Sure, they may be duped into giving up identifying information, but at least you don’t have to remove malware from the machine.
Good advice ahazuarus! As long as all updates for the operating system and all applications are present!
Spock: Beauty is transitory, Doctor. However, she was evidently highly intelligent.
James T. Kirk: Kirk to Enterprise. Five to beam up. I don’t agree with you, Mr. Spock.
Spock: Indeed, Captain?
James T. Kirk: Beauty… survives.
If I come up with a solution and it is not beautiful I know it is not the right solution. Stop weird machines.
I’ll give you a plus 1, even if I’m not sure the TV script is accurate – it sure sounds accurate! 🙂
“The attack, while unremarkable in many ways,”
That really says it all.
The SEA is unremarkable in many ways.
The death toll caused by the conflict is remarkable, in many ways.
Maybe they can wear some wristbands to help spread the word about their scause.
Complexity as the enemy of security….. Indeed. But it is also interesting how complexity is becoming a threat to the criminal element as well.
Taking one mule down can lead to entire operations (that have run silent for some time) to collapse.
“A typical hacker may use features in software which are not obvious security holes, in order to probe the system for useful information to form an attack. Source code complexity can mask potential security weaknesses, or seemingly innocuous software configuration changes may open up security gaps. An example would be a change to the permissions on a system directory. The implications of such changes may not be appreciated by a system administrator or software engineer, who may not be able to see the whole picture due to the complexity in any typical operating system. A reduction in this complexity may improve security.” http://www.ieee-stc.org/proceedings/2009/pdfs/TJM2273a.pdf
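The directory-permissions example from that paper is easy to check for programmatically. A minimal sketch flagging world-writable directories follows (Unix permission bits are assumed; Windows ACLs need a different approach):

```python
import os
import stat

def world_writable(path):
    """True if `path` grants write permission to everyone -- the kind of
    innocuous-looking configuration change the quoted paper warns about."""
    return bool(os.stat(path).st_mode & stat.S_IWOTH)

def audit_tree(root):
    """Walk a directory tree and return the world-writable directories."""
    hits = []
    for dirpath, _dirnames, _files in os.walk(root):
        if world_writable(dirpath):
            hits.append(dirpath)
    return hits
```

A periodic run of `audit_tree("/etc")`, for example, surfaces exactly the quiet permission drift an administrator would otherwise never see.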
Spend less to gain more security. Don’t bother trying to explain complexity to justify increased spending. You can’t get weird machines to do normal things. Stop weird machines.
I agree, not just for the non tech savvy, but also for anyone who actually wants to view content that requires scripts to be allowed on certain webpages. It becomes a joke trying to figure out which scripts out of dozens to allow, not knowing if they’re safe anyway, and especially after finding out you have to allow everything for certain content to be viewed anyway… which seems to be happening to me more and more.
There is Web of Trust, and similar addons, but that’s not too reliable either….
Regardless, after many years, I’ve finally switched from Firefox and NoScript to Chrome and ScriptSafe, which lists more scripts than NoScript does for some reason, and also shows fuller domain names.
Interesting. I’m hesitant to switch to Chrome because of Google’s information-gathering capabilities. Firefox seems more benign, but perhaps I’m fooling myself.
I think everyone gathers information, not just Google.
But I first started installing Chrome for all my family just because pages look better, and it is a much, much faster browsing experience. Firefox seems to get bogged down after a while on my PC and also on many other PCs I look at. Chrome also auto-updates Adobe Flash.
Also, in Chrome some content just doesn’t work at all with ScriptSafe even when allowing all the listed scripts. I’m still forced to use Firefox sometimes to allow some things on a page that ScriptSafe blocks regardless. Also, you might remember BKrebs’ article on how the FBI used Firefox exploits to go after Tor users.
I think Chrome is safer nowadays, although that’s probably not saying much.
Try Comodo Dragon – it is an odd doppelganger of Chrome, but has a better EULA, and is faster in many ways. CoooIAC is right – getting used to ScriptSafe’s quirks is a learning curve, and of course ABP is a requirement, but you will have to do a lot of work putting your favorite URLs on allowance lists to get full functionality. I say it is well worth it. I will warn you that it has a lethargic update schedule, if that is an important factor for you.
Brian, I love what you do for all of us–so please follow your own advice and stop using Feedburner/Google Proxy for your RSS feed!
What’s wrong with Feedburner, exactly?
Complexity–and the whole Google thing.
Without crime there would be no need for government. Without complexity there would be no need for Google.
After more thought, I withdraw my concern. You need to monetize your site in all reasonable ways, and that’s what Google does. This isn’t just some dude’s security blog but a business. Keep on.
Complexity does not make Google necessary. Complexity makes indexes (or indices if you prefer) necessary. What Google DOES with that is another question entirely.
It’s owned by Google, 2nd largest spy agency in the world. Or first.
“Online security is a horrifying nightmare. Heartbleed. Target. Apple. Linux. Microsoft. Yahoo. eBay. X.509. Whatever security cataclysm erupts next, probably in weeks or even days. We seem to be trapped in a vicious cycle of cascading security disasters that just keep getting worse. Why? Well — “Computers have gotten incredibly complex, while people have remained the same gray mud…” http://techcrunch.com/2014/05/24/the-internet-is-burning/
Put it out… “One series of tests using empty houses at Vandenberg Air Force Base compared [this new] system with a 20-gallon-per-minute, 1,400 pound-per-square-inch (psi) discharge capability (at the pump) versus a standard 100-gallon-per-minute, 125 psi standard hand line—the kind that typically takes a few firemen to control. The standard line extinguished a set fire in a living room in 1 minute and 45 seconds using 220 gallons of water. The [new] system extinguished an identical fire in 17.3 seconds using 13.6 gallons—with a hose requiring only one person to manage.
In other words, this new system put out a fire more quickly, using less water, and — critically — with fewer firefighters needed to operate the hose. This frees up needed firefighters to do other important tasks on the job, and therefore makes fighting fires faster and safer.” http://blogs.discovermagazine.com/badastronomy/2012/03/21/this-is-why-we-invest-in-science-this/#.UxfmPoWmj6c
The clean rooms are used for growing lettuce using hydroponics. All the complexity on a chip didn’t work out.
There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies.
— C.A.R. Hoare
This sums up the question of complexity.
Ironically, Hoare greatly simplified the problem. Complexity appears in software for many reasons (not in any order and not inclusive):
– developers want to use the latest tools before they are proficient with them, in order to pad their resumes
– management demands that an arbitrary date be met, causing design/test shrinkage, i.e. they have never read “The Mythical Man-Month”
– some developers do not appreciate the dangers of memory leaks and other bugs
– marketing demands that features be included regardless of the security considerations
– management is too cheap to buy the necessary tools
– tools have flaws
– the organization does not have a competent test division, possibly because management does not understand the need
– some bugs are just not obvious at first
– management is more concerned with compensation than delivering a quality product
“The future of digital systems is complexity, and complexity is the worst enemy of security.”
— Bruce Schneier, Founder and CTO, Counterpane Internet Security, Inc., 2000
“In the 1970s, the average car had 100,000 lines of source code. Today it’s more than a million lines, and it will be 100 million lines of code by 2010. The difference between a million lines of code and 100 million lines of code definitely changes your life.”
The ignition lock switches were defective.
“Consumer Reports says the recent recall by GM, which has linked 12 deaths to a defective ignition switch, raises concerns about warning systems for auto safety.”
consumerreports.org
Too much code?
Being a layperson in the midst of all these security nightmares I often wonder exactly how much I unintentionally give away about myself. I always have my iPhone’s Safari browser set to private but I still wonder. Do I leave trails? Any advice on how I can find out?
It’s not an issue of being tracked and having no privacy. That went out the window years ago. iPhone? That means you trust Apple and your cellular carrier. They know where you are and a lot more, at most times. Apple is notoriously secret, which is why I do not trust them in any way.
Antivirus is dead?
An anti-pattern (or antipattern) is a common response to a recurring problem that is usually ineffective and risks being highly counterproductive. The term, coined in 1995 by Andrew Koenig, was inspired by a book, Design Patterns, in which the authors highlighted a number of design patterns in software development that they considered to be highly reliable and effective.
The term was popularized three years later by the book AntiPatterns, which extended its use beyond the field of software design and into general social interaction and may be used informally to refer to any commonly reinvented but bad solution to a problem. Examples include analysis paralysis, cargo cult programming, death march (software development), groupthink and vendor lock-in.
A story about very precise engineering is given in the 1858 story The Deacon’s Masterpiece or, the Wonderful “One-hoss Shay”: A Logical Story by Oliver Wendell Holmes, Sr., which tells of a carriage (one-horse shay)
That was built in such a logical way
It ran a hundred years to a day,
went to pieces all at once, —
All at once, and nothing first, —
Just as bubbles do when they burst.
Because it had been engineered so that no single piece failed first – no piece was over-engineered relative to the others, and they thus all collapsed at the same time. https://en.wikipedia.org/wiki/Overengineering
Two years of Heartbleed and no warnings, just more insecure overengineered waste.
When corporate web sites make use of layered, “no trust allowed or expected” security systems, as banks often employ, the human element may be mitigated somewhat.
Multiple authentication techniques for access that avoid direct one-step access would help significantly.
Also, working from a “total loss” assumption, being prepared for the security systems to be overwhelmed, further mitigates loss and improves recovery.
Separating data elements “in time” and “distance” and in authentication, for example, so that data is useless unless the hacker waits around for it. Using a “deep key” system.
Basically, an “ablative” security system that expects to be hacked.
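Multi-step authentication of the sort described above is commonly built on time-based one-time passwords. Here is a minimal sketch following RFC 6238 (SHA-1 variant, 30-second step); the secret shown in the usage note is the RFC's published test value, not something to deploy:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret, counter, digits=6):
    """RFC 4226 HMAC-based one-time password."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                  # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret, for_time=None, step=30, digits=6):
    """RFC 6238 time-based variant: the counter is the current time window."""
    now = time.time() if for_time is None else for_time
    return hotp(secret, int(now // step), digits)
```

With the RFC's test secret `b"12345678901234567890"` at time 59, this yields the published 8-digit vector 94287082. Even this second factor does not defeat a phish that relays the code in real time, but it kills the "reused credentials" class of attack outright.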
Great article Brian… extremely relevant to today’s IT security climate
“…yet this group appears to do little more than use each attack to spread a propagandist message.”
Appearance is not reality.
I have firsthand experience with an attack attributed to the SEA that was otherwise.
Also, there is some possibility they have support from friends engaged in more pernicious activities (nukes?).
“Also, there is some possibility they have support from friends engaged in more pernicious activities (nukes?).”
You must be referring to North Korea.
That said, would you prefer that Syria implodes, only to be replaced by an Iranian-style Islamic government?
I have no special knowledge, just speculation. But since you mentioned Iran, they were goaded into serious cyber warfare when they got stuck, er stux, awhile back. No telling who Assad’s friends and playmates might be. I just hope something better than a rock and a hard place might develop.
As far as trying to guess who might be helping the SEA, if anyone is, I’d be hard pressed to decide whether my money’s on the DPRK, Iran, the KGB or the PLA. They all would be happy making mischief for us. I’d bet there is some assistance to the SEA, from somewhere, based on details not to be discussed here.
Given the organization of the Internet you can’t stop people from using these methods to do devilish things. What is needed is some way to vet when a site changes in critical aspects, so the change can be caught and knocked down fast. In the old days there was production control to anticipate the impact of any changes before they went live. It seems the Internet needs more human eyes on the problem.
Complexity is an enemy of password creation as well. Sites are increasing their password complexity requirements, making the average user remember multiple passwords and differing rules. Thus people write them down. I access one secure confidential government system that requires a minimum of 12 characters, a combination of caps, lowercase, numbers and special characters, which changes every month and cannot be repeated for 24 months. So all one need do is look under most users’ keyboards, where they have written this month’s silly password because few can remember it. Complexity achieved. Security defeated.
I don’t know if I am doing the right thing, some reader input would help, but I now use Dashlane and let it generate my passwords. Other than the main password that lets me manage Dashlane I no longer even know what my passwords are to most sites, Dashlane generates and manages them for me. My using it is a direct result of ever-increasing password complexity. I hope I am not fooling myself into believing in a product that in fact does not protect me but other than writing them down I can’t keep track of all of my passwords!
It partly helps – mainly for the case where someone steals the equivalent of the passwd file, it becomes devilishly hard to brute-force crack the passwords when the passwords become this long and complex.
But from the standpoint of someone within the organization, it clearly makes no sense – it is easy to steal anyone’s password (assuming that they write it down correctly without subtle changes between what is written down and what you need to enter).
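The brute-force arithmetic behind these comments is easy to make concrete: the search space grows exponentially in length, so length beats per-character complexity. A short sketch (the 7,776-word list size is the usual Diceware figure; all numbers assume passwords chosen uniformly at random, which human-chosen passwords are not):

```python
import math

def search_space_bits(charset_size, length):
    """Bits of entropy for a password chosen uniformly at random."""
    return length * math.log2(charset_size)

# The 12-character, four-class policy described above (~94 printable chars)
policy = search_space_bits(94, 12)        # ~78.7 bits

# A four-word passphrase from a 7,776-word Diceware-style list
four_words = search_space_bits(7776, 4)   # ~51.7 bits

# Six words approaches the 12-character policy, with no sticky note required
six_words = search_space_bits(7776, 6)    # ~77.5 bits
```

The comparison is the point: a memorable six-word passphrase and the unmemorable 94-symbol policy land within about one bit of each other.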
Windows does allow the use of smart cards or eTokens to log in, but only in domain environments and not for standalone workstations. I have used them in testing labs, and the theory seems sound enough. But I don’t know of many sites out there that use these to log into their corporate computers.
And this is of course no use for securing public-facing websites.
Another complexity issue is encryption. I use PureVPN and Firefox to connect to the web. PureVPN supposedly encrypts data and routes it through servers which hide my IP. All I can say is that I haven’t been hit so far (luck, or PureVPN?). If it in fact works, I am not sure why everyone is not using it.
Back in the day when “Big Iron” was king, products like ACF2 and Top Secret fought the good fight and won! Is the new breed of “security experts” missing lessons learned back in the day? Is there, again, a place for the old to help the new? Just a thought.
The advice i give to all my less technical friends who use windows is simple: create an account without administrator privileges and always use that account when computing. Sign in as administrator ONLY when you need to install software and software updates that you know and trust. It eliminates 95% of the chances that malware will affect your machine.
Trust models (Eve, the apple, and the serpent)
Insider threats (Samson and Delilah)
Decipherment (the writing on the wall of Babylon) – See more at: http://www.rsaconference.com/podcasts/125/how-joshua-dosed-jericho-cybersecrets-of-the-bible
Security is abundant. You can’t run out of it.
It’s not complexity. It’s complexity plus convenience. You can secure a complex system, but the more complex it is, the less convenient it is to do so. Eventually convenience wins out and security gets broken.
Web sites are making it more and more difficult for us end users to secure our machines. Two things that many websites do that are particularly problematic, IMO:
1. Several sites deliberately obscure all the content with an unscrollable covering element demanding you turn on JS, and (if you use NoScript’s delete-element feature to bypass it) turn out to work fine without JS anyway. (For the end user, but not, presumably, for their advertising partners. So they want to force end users to potentially compromise their own security for totally selfish reasons.)
One egregious example I saw had page source that consisted essentially of just a script to do a setCookie with my IP address and then page.reload, where presumably the server would send a different response, with actual page content, when the cookie was present. I manually created a cookie with “127.0.0.1”, which didn’t work, then changed it to my actual IP, which did. This seems like it exists *solely* to harass users who either disable JS or have a frequently changing IP address (e.g., TOR users).
2. They load many scripts from third party sites. It used to be that you could usually get away with at most temporarily whitelisting the JS on the same domain, while blocking everything else, and the only “functionality” lost would be tracking and advertising, plus things like Facebook “like” buttons and other social-networking-integration features.
Now, though, I frequently run into sites where there are easily two dozen or more miscellaneous things shown by NoScript and just the site’s own domain (and obvious variants, e.g. both huffingtonpost and huffpost) does not suffice to make something important work. As an example, I went to a site and images were missing. Enabling the site’s own domain’s JS produced placeholders caused by iframe click-to-play, which when clicked came up blank. Mouseover showed them to come from an instagram.com. There were scripts from there that could be enabled, but enabling those didn’t seem to do the job. Right click “inspect element (Q)” and expanding showed that the image placeholder area contained a load of script invocations from something like df38dfm4j23783ejdfol.cloudfront.net (the exact alphanumeric garbage being both almost-certainly wrong and utterly irrelevant to my thesis). Enabling that (while leaving out another very similar domain, differing only in the alphanumeric gobbledegook) made the images finally render.
Not using a plain-jane img tag = bad.
Requiring viewers run your image host’s scripts and not just your own scripts = worse.
Requiring viewers enable scripts from a hostname that’s totally f*cking opaque and *exactly the kind of meaningless names malware writers are wont to use* = *inexf*ckingscusable*.
I can whitelist Instagram if I think they are relatively trustworthy and unlikely to get hacked. I can’t whitelist *.cloudfront.net without totally compromising the whole point of using NoScript at my end, because it’s pretty clearly some sort of nameless self-storage boxes of the net, which anyone can rent to store any material. It would be scarcely worse to whitelist *.com.
I therefore ask all web developers everywhere:
2. PLEASE make JS functionality “gracefully degrade” if at all possible. Techdirt does this well: the story-expanding links expand stories in place with JS enabled; they load a page with the article by itself in its entirety with JS disabled, rather than just not work.
3. PLEASE DO NOT force JS on just to read or navigate your site. Users that disable JS have good reasons for doing so, and you should respect them. If you do not respect your users do not be surprised if they go elsewhere. Their machines’ security means much more to them than your advertising revenues do.
4. PLEASE host ALL scripts for user-centric functionality on your own domain. I see a lot of you needing scripts on third party image hosts like the Instagram example above, and a lot more needing jquery.something and/or .cloudfront.net. I DON’T EVER WANT TO SEE user-centric functionality that requires OPAQUE mystery-meat domains whitelisted for scripting EVER AGAIN, including but NOT limited to these jquery and cloudfront examples.
Limit your use of other-domain-hosted scripts to the tracking and advertising stuff that users won’t miss when it’s nonfunctioning. As the above Krebs article clearly demonstrates, the biggest risks to end-user security (and to your own site appearing defaced to users) comes from relying on such scripts. The surface area of attack for getting at my machine is much greater if I whitelist yourdomain.com, instagram.com, facebook.net, bigadhosting.net, tracking-cookie.com, ultrabehavioralprofiler.com, dizzygobble.cloudfront.net, moregarbage.cloudfront.net, and half a dozen more mostly unknown-to-me domains than if I whitelist just yourdomain.com, and is virtually zero if I can browse your site usefully without enabling *any* JS.
But I especially hate those opaquetoken.cloudfront.com ones, because those look *exactly* like the sorts of domain names where you’d expect some blackhat to tuck away a copy of the blackhole exploit kit to point to from the maliciously-altered script they altered after compromising instagram.com or some other third party component you used.
Don’t do that. Copy all user-functionality scripts onto your own servers and point to those local copies. Then the blackhats can’t get your users by hacking any other servers than your own, whether those users disable third-party JS or enable it all. And your site’s functionality relies solely on your own servers’ availability, and won’t degrade or go missing just because some other host such as that jquery one goes down. As it is, your site goes down if *either* your servers *or* theirs goes down!
And yes, maybe that should include your site’s ads’ code too. NoScript users without AdBlock will see your ads if they whitelist your site; otherwise they probably won’t, unless they love playing Russian roulette with “temporarily enable all” and the possibility that someone you outsourced user-tracking to got hacked.
Incidentally, I will note that not only is yoursite.com a smaller surface area of attack than yoursite.com plus a dozen or more third-party hosts, but yoursite.com also has a bigger incentive to avoid being hacked in the first place. If YOU get hacked and your users get hit with something nasty like Cryptolocker, YOU take a reputational hit. If you use a third-party provider for something, force your users to enable scripts from that provider or half your site’s functionality doesn’t work, and THEY get hacked and your users get hit with something nasty like Cryptolocker, YOU take a reputational hit. So, those third-party sites have little incentive to secure their systems. THEY won’t get a bad reputation if their machines spend a stint infecting end-users with something, their CUSTOMERS will.
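A site owner who wants to follow the advice above can audit their own pages for third-party script dependencies. Here is a minimal sketch using Python's stdlib HTML parser; the hostname comparison is deliberately naive (it treats subdomains as third-party), which is a simplifying assumption:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class ScriptSourceAuditor(HTMLParser):
    """Collect the hostnames of all external <script src=...> tags."""
    def __init__(self):
        super().__init__()
        self.hosts = set()

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            src = dict(attrs).get("src")
            if src:
                host = urlparse(src).netloc  # empty for same-origin paths
                if host:
                    self.hosts.add(host.lower())

def third_party_script_hosts(html_text, own_host):
    """Return the sorted hosts serving scripts on this page other than own_host."""
    auditor = ScriptSourceAuditor()
    auditor.feed(html_text)
    return sorted(h for h in auditor.hosts if h != own_host.lower())
```

Every hostname this returns is another server whose compromise defaces or infects your site, exactly the surface-area argument made above.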
Totally agree with your sentiments!
Me too! I definitely simply dump sites that won’t work because they just want to get into your shorts! I have one or two sites that are too valuable to me to block everything, and I get attacked constantly with PUPs and downloads, and redirects to other spurious sites. I haven’t got around to complaining to them, because my defenses are doing a good job nabbing the miscreants, but I have to wonder why they put their valuable customers through this. I hate to admit it, but I occasionally get redirects here at KoS wanting me to download some ridiculous media/flash player (not Adobe), or a page redirect that says “It Works!”. I’m not worrying about it here too much, but MBAP has so far nabbed or blocked all miscreants before any damage can be done.
Did not know about NoScript’s delete-element feature, thanks for bringing that up.
Do you have any other Noscript tips or tricks to share?
I have been using it since Steve Gibson mentioned it but never took the time to investigate all it does.
I find that setting Pale Moon’s View / Page Style to No Style gets around some JS-blocked content.
There has been a huge increase in the number of new vulnerabilities in web and mobile applications, leading to frequent online attacks. 5,291 new vulnerabilities were discovered in 2012, and 415 of them were in mobile operating systems. Cyber-attacks are now more sophisticated and focused, and the financial loss, as well as the loss to a brand’s reputation after such an attack, is in most cases very hard to recover from. Such times call for an exhaustive range of application security solutions which not only provide total application security to a company’s business but also aid them in meeting regulatory compliance requirements.
The head of IAD at the NSA, Debora Plunkett, has said about 1 MILLION viruses are released every month, and they don’t even stay in the wild long. The NSA wants to find a commercial partner to manufacture its own hardened cell phones, because they assume theirs are already virused. I guess similar to how they hardened Obama’s iPad after it got hacked, lol.
Some people call this propaganda, but it’s actually very elusive, and I believe it. It’s also amazing how many so-called computer gurus, in this day and age, still don’t believe a BIOS can be infected. They always have been, since I was a kid, since before Chernobyl, and nowadays it’s even easier to infect, not harder like the industry claims. I mean, it’s common sense that if more things are wireless, and can be updated and tweaked right from the OS, they are more vulnerable. Kids can blackscreen your PC with the push of a button; sometimes when your PC doesn’t turn back on, you have to cross your fingers that you can reset the CMOS and that it fixes any checksum errors. And every time you do this you shorten the lifespan or possibly the performance. And then you have to worry that the nasty virus isn’t in some other device on your network that will just reinfect it again.
It’s a losing battle, and the only thing that’s going to help is society changing these kids’ minds, so they don’t end up with bad people, or working for people trying to destroy other people, for whatever reasons, which is wrong. Online communities themselves have to fight back against malicious hackers like they used to. Teen porn and the American government are not the real issues. People just need to grow up and get a life.
Hackers who want to blame users and say it’s their own fault, or who want to pretend these issues don’t exist, are suspect, and automatically deemed just as malicious and guilty by society.
A system needs to be as simple as possible, yet as complex as necessary, or it doesn’t hold up to Occam’s razor.
Complexity is just one arrangement of scale.
A simpler system will have fewer potential modes of failure, but it pushes all those extra event risks into a smaller number of events.
It seems to me that being concerned about complexity in isolation is misguided and perhaps dangerous, as it generates false confidence.
At some point, folks are going to have to address scale. Some may be blinded by greed, in that more scale is always good.
So the RSA guy is way off; complexity without scale is meaningless. A database, complex as it may be, is safe with zero records. These blind spots are very dangerous, and it’s this stuff that scares me the most.