The recent zero-day vulnerability in Internet Explorer caused many (present company included) to urge Internet users to consider surfing the Web with a different browser until Microsoft issued a patch. Microsoft did so last month, but not before experts who ought to have known better began downplaying such advice, pointing out that other browser makers have more vulnerabilities and just as much exposure to zero-day flaws.
This post examines hard data that shows why such reasoning is more emotional than factual. Unlike Google Chrome and Mozilla Firefox users, IE users were exposed to active attacks against unpatched, critical vulnerabilities for months at a time over the past year and a half.
The all-browsers-are-equally-exposed argument was most recently made by Trend Micro’s Rik Ferguson. Ferguson charges that it’s unfair and unrealistic to expect IE users to switch — however briefly — to an alternative browser. After all, he says, the data show that other browsers are similarly dogged by flaws, and that switching offers no additional security benefit. To quote Ferguson:
“According to this blog post, in 2011 Google’s Chrome had an all time high of 275 new vulnerabilities reported, the current peak of an upward trend since its day of release. Mozilla Firefox, while currently trending down from its 2009 high, still had a reported 97 vulnerabilities. Microsoft’s Internet Explorer has been trending gradually down for the past five years and 2011 saw only 45 new vulnerabilities, less than any other browser except Apple’s Safari, which also had 45. Of course raw numbers of vulnerabilities are almost meaningless unless we consider the respective severity, but there again, of the ‘big three’ the statistics favour Internet Explorer. If zero-day vulnerabilities have to be taken into consideration too, they don’t really do much to change the balance, Google Chrome 6, Microsoft Internet Explorer 6 and Mozilla Firefox 4. Of course different sources offer completely different statistics, and simple vulnerability counts are no measure of relative (in)security of browsers, particularly in isolation. However, it cannot be ignored that vulnerabilities exist in every browser.”
Looking closer, we find that this assessment does not hold water. For one thing, while Ferguson acknowledges that attempting to rate the relative security of similar software products merely by comparing vulnerability counts is not very useful, he doesn’t offer much more perspective. He focuses on unpatched, publicly highlighted vulnerabilities, but he neglects to ask and answer a crucial question: How do browser makers fare in terms of unpatched vulnerabilities that are actively being exploited?
Part of the problem here is that many security pundits rank vulnerabilities as “zero-day” as long as they are both publicly identified and unfixed. Whether there is evidence that anyone is actually attacking these vulnerabilities seems beside the point for this camp. But I would argue active exploitation is the most important qualifier of a true zero-day, and that the software flaws most worthy of worry and action by users are those that are plainly being exploited by attackers.
To that end, I looked back at the vulnerabilities fixed since January 2011 by Google, Microsoft and Mozilla, with an eye toward identifying true zero-day flaws that were identified publicly as being exploited before the vendor issued a software patch. I also queried both Mozilla and Google to find out if I had missed anything in my research.
As Ferguson mentioned, all browser makers had examples over the past 19 months of working or proof-of-concept exploit code available for unpatched flaws in their products. However, both my own investigation and the public record show that of the three browsers, Internet Explorer was the only one with critical, unpatched vulnerabilities that were demonstrably exploited by attackers before patches were made available. According to Microsoft’s own account, there were at least six IE zero-days actively exploited in the past 18 months. All but one of them earned Microsoft’s most dire “critical” rating, leaving IE users under zero-day attack for at least 152 days since the beginning of 2011.
If we count just the critical zero-days, there were at least 89 non-overlapping days (about three months) between the beginning of 2011 and Sept. 2012 in which IE zero-day vulnerabilities were actively being exploited. That number is almost certainly conservative, because I could find no data on the window of vulnerability for CVE-2011-0094, a critical zero-day flaw fixed in MS11-018 that Microsoft said was being attacked prior to the release of a patch. This analysis also does not include CVE-2011-1345, a vulnerability demonstrated at the Pwn2Own contest in 2011.
For that same time period, I couldn’t find any evidence that malicious hackers had exploited publicly disclosed vulnerabilities in Chrome or Firefox before those flaws were fixed. Nevertheless, I put the question to both companies. A Google spokesperson said the company has never observed a Chrome zero-day in the wild against any of its stable versions since Chrome was first released. The company noted that while there have been Flash zero-days (and Chrome ships with a bundled copy of Flash), none of those Flash flaws have been unique to Chrome. What’s more, Google said, most attacks on Flash explicitly did not target Chrome because the attackers didn’t have a sandbox bypass to pair with the Flash exploit.
A Mozilla spokesperson said the last true zero-day that was used by attackers to install malware via a vulnerability in Firefox came in October 2010, from miscreants who’d stitched an exploit for an unpatched Firefox flaw into the Nobel Peace Prize Web site.
Microsoft and other major software vendors like to point out that a majority of the early attacks against true zero-day flaws are targeted and not widely disseminated. While that is sometimes the case, the bigger the vulnerable software’s install base, the greater the likelihood that exploits for its zero-day flaws will be loaded into automated exploit kits sold in the hacker underground. And while Microsoft was relatively quick to issue a fix for its most recent IE zero-day (although there is evidence that the company knew about the vulnerability long before its first public advisory on it Sept. 17), the company’s 42-day delay in patching CVE-2012-1889 earlier this summer was enough time for code used to exploit the flaw to be folded into the Blackhole exploit kit, by far the most widely used attack kit today.
Below is the data I used to arrive at the Internet Explorer vulnerability numbers above:
==
Patch Date: Sept. 21, 2012, MS12-063
Flaw: CVE-2012-4969 (execCommand Use After Free, Critical)
Initial identification: Sept. 14, 2012
Minimum window of active attack: 8 days
==
Patch Date: July 10, 2012, MS12-043
Flaw: CVE-2012-1889 (XML Core Services, Critical)
Initial identification: May 30, 2012
Minimum window of active attack: 42 days
==
Patch Date: June 12, 2012, MS12-037
Flaw: CVE-2012-1875 (Same ID Property, Critical)
Initial public identification: June 1, 2012
Minimum window of active attack: 12 days
==
Patch Date: April 12, 2011, MS11-026
Flaw: CVE-2011-0096 (MHTML Mime-formatted XSS, Important)
Initial public identification: Jan. 28, 2011
Minimum window of active attack: 74 days
==
Patch Date: April 12, 2011, MS11-018
Flaw: CVE-2011-0094 (Layouts memory handling, Critical)
Initial public identification: N/A
Minimum window of active attack: N/A
==
Patch Date: Feb. 8, 2011, MS11-003
Flaw: CVE-2010-3971 (CSS Memory Corruption, Critical)
Initial acknowledgment: Dec. 20, 2010
Minimum window of active attack: 51 days
==
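As a quick sanity check, the exposure figures cited above can be reproduced by merging these windows as date intervals. Here is a minimal sketch, assuming inclusive day counts at both endpoints, clipping at Jan. 1, 2011, and omitting MS11-018 (whose window is unknown, as noted above):

```python
from datetime import date

# Windows of active attack from the table above:
# (initial public identification, patch date).
critical = [
    (date(2010, 12, 20), date(2011, 2, 8)),   # CVE-2010-3971, MS11-003
    (date(2012, 5, 30),  date(2012, 7, 10)),  # CVE-2012-1889, MS12-043
    (date(2012, 6, 1),   date(2012, 6, 12)),  # CVE-2012-1875, MS12-037
    (date(2012, 9, 14),  date(2012, 9, 21)),  # CVE-2012-4969, MS12-063
]
important = [
    (date(2011, 1, 28),  date(2011, 4, 12)),  # CVE-2011-0096, MS11-026
]

def exposure_days(windows, clip_start):
    """Days covered by the union of the windows, counting both
    endpoints and ignoring anything before clip_start."""
    merged = []
    for start, end in sorted((max(s, clip_start), e) for s, e in windows):
        if merged and start <= merged[-1][1]:
            # Overlaps the previous interval: extend it.
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    return sum((end - start).days + 1 for start, end in merged)

start = date(2011, 1, 1)
print(exposure_days(critical, start))              # 89 (critical only)
print(exposure_days(critical + important, start))  # 152 (all windows)
```

The interval union matters because the MS12-037 window sits entirely inside the MS12-043 window; naively summing the per-flaw windows would double-count those days.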
I realize that browser choice is often a personal matter, and that people still get emotionally and habitually attached to browsing the Web in a certain way. However, I hope the above information makes clear that temporarily switching browsers to avoid real zero-days is a very sane and worthwhile approach to staying secure online. Although it is true that all software has vulnerabilities, the flaws we should truly be motivated to act on are those that are actively being exploited.
Brian,
I think it may be well past time to seriously suggest to the average person on the street that they need to use a set of VMs with isolated browsers.
When sites that would normally be trusted, like the Nobel Peace Prize site, are attacked, what hope does the average person have of protecting themselves?
I would suggest:
The base OS is used for most uses except for browsing.
A non-disposable ChromeOS VM or similar for daily browsing that forces updates and remembers state, with minimal host integration limited to the “Downloads” folder
A highly protected ChromeOS VM that can only go to Internet banking and e-commerce sites over HTTPS via a whitelisted firewall policy, preferably with no or minimal integration to a statements folder
A disposable live CD image without any host integration for let’s call it “adult and dodgy sites”
It would be great if someone could write a front end to Chrome for the base OS to farm the rendering to the VMs in a very transparent fashion, so basically a super duper sandbox to isolate damage when (not if) it occurs.
Thoughts?
For a “super duper sandbox” see:
http://theinvisiblethings.blogspot.com/2012/09/how-is-qubes-os-different-from.html
Your description seems to fit exactly what a startup called Bromium seems to be doing…
@Andrew van der Stock: “I think it may be well past time to seriously suggest to the average person on the street that they need to use a set of VMs with isolated browsers.”
That would suggest a belief in the strength of VMs which I do not share. All large software systems (like VMs) have flaws which destroy security guarantees. “Sandboxes” are similar. Then, if malware does get through the VM (or sandbox), even once, we can expect it to infect the hard drive (or flash drive) essentially forever. That is not a success, and should not be a “serious suggestion.”
When possible, we are much, much better off with a boot DVD than VMs or sandboxing, because harsh experience has taught us that software cannot be trusted for security. In contrast, we *can* trust that a DVD which is not in the drive is *impossible* for malware to infect.
As an alternative, I suggest having one ordinary machine for general browsing, which we call “insecure.”
And then another machine, without a hard drive, but with hardwired LAN, booted from DVD immediately before online banking or other secure use, with the DVD immediately removed. This “secure” machine should be powered off or physically disconnected from the LAN when not online.
Possibly the insecure machine could be just a different DVD with less-restrictive browsing. That would not, however, reduce the probability of BIOS or “hardware infection” which would then also be present in the secure machine. We pay a large price in confused reasoning by not being able to guarantee the detection of an arbitrary bot infection.
I need to disagree on this one. Security by isolation using a bare-metal (type 1) hypervisor such as Xen has a far smaller attack surface than anything else. Simply look at Xen’s security vulnerabilities. This is real, high-quality sandboxing, and it brings real value if you look at the Qubes OS implementation.
Reduced attack surface is nice. The problem with Xen is Dom0, which typically must be considered part of the TCB. Alternatives such as the Nizza Security Architecture, the Perseus Framework, OKL4 Hypercell, INTEGRITY Workstation and the MILS kernels didn’t have this problem. Recently, Flicker claims to have gotten the TCB down to around a few hundred LOC on an untrusted system. Quite a few more projects like that have come out of academia in the past year or two. Xen is better than native Windows or Linux plus browsers, but not “secure” by far. (Note: the project to get Xen certified even to EAL5 with respect to assurance evidence has been going on for years and still isn’t finished.)
Qubes is a nice project, though. They reinvented the wheel a bit, but even I gave them props for what they were able to accomplish. Ease of use, hardware support and power management come to mind. My desktop + KVM + a little ARTIGO-style (or nettop) machine for web surfing is much more effective for isolating web stuff. Plus it’s close to provable isolation without putting in the work: just buy the stuff, wire it up, turn it on, and use it. Using Qubes, SecVisor, L4Linux or something like that on the browsing machine can still help with damage limitation or speedy (more trusted) recovery.
Nick P
schneier.com
I’ll just remark that I used to have a job at a non-profit agency where the Powers That Be decided we must all use Popular Alternative Browser. Maintaining it was the cause of many late nights going from office to office, manually patching one system after the next after the employees had gone home. Unlike Internet Explorer, Popular Alternative Browser had zero central-manageability capabilities.
I quit that job.
If you’re writing for just the home/SOHO audience with a few computers and a few user accounts, this may not matter to them. In a business environment, one would think long and hard before deciding to swap browsers on hundreds or thousands of users and their computers as a knee-jerk response to a zero-day.
Firefox has this problem, and not only in corporate environments. I maintain a multi-user computer at home for my family, and it is impossible to, for example, enforce use of certain plugins (like NoScript) from the administration account, or to enforce central policies for those plugins.
IBM switched their employees to Mozilla Firefox in 2010 and the U.S. Department of State switched their employees to Google Chrome earlier this year (2012). And if Microsoft Windows remains the default operating system for most users in these organizations, then they also have access to Internet Explorer which is, presumably, patched on Patch Tuesdays and out-of-band.
One wonders if email alerts are sent to employees when there are exploits in the wild for Internet Explorer (similar to what the German government does for its citizens). Ditto for Mozilla Firefox, as the in-the-wild exploit in October 2010 post-dated IBM’s switch to Firefox. One also wonders about the specifics of the enterprise deployments of alternate web browsers at IBM and the State Department.
At least two very large — global, in fact — organizations are supporting both Internet Explorer and an alternate web browser.
http://news.softpedia.com/news/IBM-Switches-to-Firefox-as-the-Default-Browser-145970.shtml
http://midsizeinsider.com/en-us/article/us-state-department-switches-to-google-c
I use Comodo’s version of Chrome, which is Dragon, and I like it way better than IE9, as it opens crazy fast and loads pages like lightning; but I must admit, I don’t use it in my honey pot lab. I use IE because all my clients use it, and that way I can stumble around just like them, to hit a minefield.
Amazingly though, the browser blocks or otherwise foils about 85% of the zero-day threats I purposely expose it to! And this is before even thinking of implementing an improvement like EMET.
This is not even considering the other mitigations and solutions I use to thwart attacks. At least IE and Firefox can both run Rapport; I’ve never done a test to see if Dragon can prevent session riding, but it is difficult to set up such a test, so it will be a while before I try it.
Since I don’t know when you were using the Popular Alternative Browser: there is a GPO add-on for a certain Popular Alternative Browser that does allow you to manage it within an Active Directory environment.
At the time, the PAB that can be managed via GPO didn’t exist, and we were on a WinNT domain anyway, so no Active Directory for me. I did have telemetry on detected virus attacks via McAfee ePolicy Orchestrator. Browser-driven threats actually went up, not down.
I second RHM’s pondering of whether dual-browser enterprises instruct their employees to flip-flop between browsers in response to known zero-days. Personally, I’d be more apt to mitigate. Does my gateway’s antivirus recognize these exploits? Does our desktop antivirus recognize these exploits? Are there other practical, effective mitigations in place, or that I can put into place with a temporary GPO?
Taking the recent zero-day as a test case, where I work now, we already had two strong mitigations in place (ActiveX filtering and EMET), and antivirus detection within a day. Tangentially, it would be interesting to investigate how long it took security vendors to cover the various exploits mentioned in the article.
I agree – it would be interesting indeed!
You should have suggested they move to something like FM Firefox, which features full AD integration.
BTW, were login scripts some kind of big unknown at that organization? You can script quiet installations. You can even create your own MSI from the official FF installer if you don’t care about central management. It would have saved you from walking from office to office at the very least.
That’s not a bad idea, and we did in fact attempt to create our own custom .MSI as you’re suggesting. It was a big FAIL.
You suggested using FM FireFox. My take: given a choice between FireFox, which runs at Medium integrity with no sandboxing or other limitations to constrain its as-yet-unrecognized zero-days; or IE9 with Protected Mode, Low integrity, and a known zero-day that I can easily mitigate against, I’ll still pick IE9. Mozilla is overdue for some sort of sandboxing technology if they want to compete with Chrome or IE on actual mitigation features.
Oh, and
“BTW, were login scripts some kind of big unknown at that organization?”
Well, keep in mind this was a WinNT domain and all my users were non-Admins. Their log-on scripts lacked the Admin privileges required to install software. Map network drives and printers? Sure.
The ulterior motive for cramming PAB down my users’ throats was that the Powers That Be were edging towards a Linux thin-client environment, and figured they’d need to start migrating the employees off anything that tied them to Microsoft platforms, starting with the browser and then our evil, evil Exchange server.
The users largely shunned PAB in favor of IE (which they figured out how to launch without any IE shortcuts), and I couldn’t blame them… IE launched about 15 seconds faster. After I quit, the Exchange server did get killed, but their planned alternatives didn’t work out, and nowadays they use… Gmail! Well whatever, not my problem anymore 🙂
I think there is a bias in the measurement: Microsoft has much better telemetry into what gets exploited. I don’t think Mozilla has anything close to Microsoft’s telemetry capabilities. I think Google has a better view than Mozilla, but I don’t think they reach Microsoft’s level.
Not to say that the results would be any different; but I think your article should note this bias.
” I would argue active exploitation is the most important qualifier of a true zero-day”
Yup, 89 non-overlapping days since 2011 in which IE was being actively exploited is a worrisome number. IT departments might love using IE because of the delayed and infrequent Windows/IE patch days, but it is a disaster for browser security.
Also could you comment on the NSS labs test results? Not sure what to think of it.
I agree that IE is definitely the browser most at risk. Just the fact that it runs ActiveX controls is sufficient to take it out of the “secure” category.
But it’s also quite clear that it is the browser most targeted, just as Windows is the most targeted OS. One can argue that other browsers and OS are equally insecure, or would be if they had the same market share, but that argument is weak and will remain so until they DO have the same market share.
I tell my clients to run Firefox with NoScript and AdBlock – and now I also tell them to not install Java or disable Java in the browser. These steps alone will vastly improve security for the average HOME end user.
If he’s part of an organization that is being targeted by competent hackers, however, all bets are off. There are just too many ways a hacker can compromise someone who’s on the Internet. Organizations should be adopting policies similar to what Andrew recommended above: completely isolated, sandboxed — and monitored — browsers.
In fact, frankly, I think corporations should prohibit ANY Internet access for end users except those who NEED to access specific sites to do their jobs. Those sites should be whitelisted and everything else blocked. For those employees who have to have greater access, they should be doing it on machines unconnected to the rest of the corporate network — the same sort of dual-machine use, one for “classified” work, one for “unclassified” work, that the military uses.
I know this flies in the face of “employee empowerment” – and I for one regret that more than most – but it’s the only answer. By far the majority of breaches today come from manipulated Web sites, browser exploits, Internet technologies like Flash and Java, and email phishing. Corporations are going to have to amend their policies to mitigate these attacks.
I guess it depends on the business, but where I work, we have to access web based programs daily for required work. Now, the company does try to limit what employees access, but there are sites that many users need to hit to do their jobs.
One of our company’s web-based applications will not allow employees to access it from any computer other than the one they are assigned. Employees who travel cannot access this application unless they are using a company laptop assigned to them. On a trip a few years ago, I discovered a back door and have used it whenever I travel. The head of our IT dept saw me use it last week, and I thought he was going to have a heart attack. I hadn’t realized that it was an unknown backdoor; I thought it was one put in place by IT for when they need to access our computers.
An interesting point is whether companies should be developing Web-based apps that are accessible from outside the network. Given the poor track record of SDLC, I’d say there’s an issue there.
Before the Internet came along, companies had remotely-accessible applications. You logged in via a remote-access device, but the app was written in whatever language or database script was the standard for the company.
Might be time to go back to that. I suspect it’s harder to fuzz an in-house app written in a compiler language than it is to compromise one written with Web scripting languages and Web technology – especially if it’s done by programmers with little experience in secure app development. I can’t prove that, but I suspect it’s true.
But it’s also quite clear that it is the browser most targeted, just as Windows is the most targeted OS.
Except that IE is no longer the most used browser, unlike Windows still being the most used OS.
Probably true; I haven’t been following the browser usage stats. But I suspect neither Chrome nor Firefox individually exceeds IE use, though the two together probably do. However, that may vary by country.
Yes, it depends heavily on the country.
Also, if you include tablets, mobiles and other devices in the stats (remember, users keep sensitive information there too, do online banking, etc.), then Microsoft’s IE falls behind Google’s offering. Only if you exclude everything else and look solely at desktop PCs (but then please don’t count Win8/Win8 RT on tablets like Surface, or mobiles running Windows Phone) does IE still lead in market share. I think from a security point of view there is no valid reason to make such arbitrarily exclusive comparisons.
I see one flaw in your argument Brian.
How does one accurately deduce whether a browser with a zero day vulnerability is being actively exploited or not? For sure, many zero-day vulnerabilities get exploited quickly & publicly with chatter on hacking sites, inclusion in Metasploit/exploit kits and submissions to VirusTotal. Those are easy to spot.
However, you’re never going to know whether an elite lone-wolf hacker sitting in Russia (or US government sponsored hackers for that matter) is surreptitiously exploiting a zero-day vulnerability in Firefox in a highly targeted manner.
For all we know, there may be multiple zero days in Firefox right now which are being actively exploited and we know nothing about any of it.
You must acknowledge that this is a possibility, and thus it negates your argument to some degree.
What say you?
Of course there may be unknown unknowns. It’s logically hard to argue with that. My post was based purely on data that’s available. The sad (and probable) truth is that we live in an era in which there seem, fairly constantly, to be multiple zero-days for multiple broadly used software products.
I downloaded the plugin but still can’t play games. I only go on Facebook and play there. Any suggestions? Thank you 🙂
Uneducated users: the longest 0-day, and unpatchable.
To add to that – mobile OS and apps are vastly outnumbering Microsoft installations, so the browsers that inhabit the mobile space will be a fat juicy target for the up and coming cracker.
As long as their kit runs on Atom processors they will be in a golden age soon – if not already! B-)
Re: Ferguson’s argument that it is OK to do nothing. Ferguson’s argument holds — at best — only when one considers the statistics in isolation. But we know more in this situation than just the statistics. We know that there is an exploit against a 0-day in IE. This fact is in addition to the statistics, and it defines the probabilities. During the time the IE exploit was known to be active, the probability that I was susceptible to an IE exploit against which I was unprotected was 1. For any alternative browser — about which we know only the statistics — the probability that I’m susceptible to an exploit against it is less than 1. For example, if Browser A had 180 days in the last twelve months when there were active exploits against 0-days, the probability on any given day that I am susceptible to such an exploit is only about 0.5. Therefore, we should change browsers, even to one that on average puts users at greater risk. The German Government was right: anything but IE.
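The arithmetic behind this comment can be sketched directly; note that the 180-day figure for “Browser A” is the commenter’s hypothetical, not measured data:

```python
# Known, active zero-day in the current browser: on any given day,
# exposure is certain.
p_current_browser = 1.0

# Hypothetical Browser A with 180 active-exploit days in the last
# twelve months: the chance that any given day falls inside an
# exploit window.
active_days = 180
p_browser_a = active_days / 365

print(round(p_browser_a, 2))  # 0.49
```

Even a browser with a poor statistical track record beats one with a currently active, unpatched exploit, since 0.49 < 1.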
Thanks for the good work, Brian, excellent reasoning.
@Tom:
Talking about “highly targeted” attacks: do you agree that those targets could have enough information and guidance at hand not to fall for 0-days? At least Microsoft is a multi-billion, multi-national mass-market company … and I really hope they (further) improve to serve that market. 😎
@Uzzi (@Tom), No, I do not agree. Your phrasing implicitly blames the victim (“fall for a 0-day”), which assumes a model that is not true in reality. If you are not familiar with the state of the art in malware, there are exploits that use pre-fetch and preview features (such as are part of many modern software packages) to implement drive-by downloads that require no user interaction and are quite possibly completely transparent to the user.
If you’ve watched (as I have) the Adobe window flicker and go blank for a second or two before displaying the attachment you just opened — while the instrumentation on your malware analysis system shows the file creation dropping a clean copy, the process creation as the compromise executes, and the network activity reaching out to the C2 channels — you’ll know what I mean. If not, just trust that users aren’t “falling” for 0-days any more than KIAs are “falling” for a sniper.
Has anyone produced a table of browsers by months by 0days, by active versus supposedly less exploited? I’d have to do a lot of work to extract that from these commentaries.
Is Microsoft taking more flak because its 0-days tend to get more actively exploited, and more actively studied, due to the large number of folks who don’t switch to other browsers?
I saw one, but the information is obsolete by now, as that was several years ago. Such articles do get published at Tech Republic occasionally. Just watching the stats at Secunia can give a person at least a foggy idea what is happening.
True, but at the same time, it’s security by avoidance. That sort of security tactic is valid, but it is not without its flaws. It is not impossible to envision a scenario where a massive IE zero-day forces many organizations to make their user bases switch, and attackers then specifically target those organizations with a Firefox or Chrome vulnerability.
My point is not just to say that people shouldn’t switch at all. Again, it’s valid and reasonable to avoid problems. However, that must come in conjunction with pressure on the alternate browser developers to make sure they’re vigilant as well in closing their own vulnerabilities in a timely fashion. Rik Ferguson has his own valid point about vulnerabilities existing in other browsers, and the avoidance technique will only work until those other browser vulnerabilities start to be exploited.
It’s a very simple and obvious evolutionary move to employ alternate browser exploits. It would be incumbent on Firefox, Chrome, Opera, and all other developers to make sure those problems are addressed when they’re discovered. They’re safe now, but that’s only because they’re not yet targeted. That can all too easily change, hence the need for vigilance on their part as well.
Whoops. I just realized that my post may read as if I’m saying that the Firefox, Chrome, and other browser developers are not on the ball about security. That’s not what I meant. Rather, I was simply saying that they need to remain vigilant and continue to patch and address vulnerabilities — remain, or better yet increase, their efforts. I’m definitely not saying they’re bad at this point in time. What I’m warning against is a relaxation of security efforts. Thankfully, that doesn’t appear imminent at this time, but it still warrants mention.
This is the point I keep making: ANY security measure has a limited use, because sooner or later it will be bypassed by someone figuring out a way to do it, or some new technology which makes it obsolete.
Security is something that is never fixed. Switching from IE en masse WILL merely cause the alternative browsers to become the new target. That’s obvious.
But it’s still valid as a short-term – however one measures “short-term”, which should be measured in terms of how long it will take hackers to switch tactics – measure.
The overall point: you’re dealing with intelligent adversaries. It’s impossible to adopt ANY security tactic or technology and then rest on one’s laurels. Security is not a technology.
I try to look at IT security with the eye of a junk yard dog. I think that is pretty healthy practice in-and-of-itself! ]:)
That’s stating the obvious isn’t it – why do you feel the need to keep repeating it again and again?
It’s a habit of his. He does it everywhere.
Because it bears repeating.
I keep seeing security advice being given, even by the “gurus” – and certainly by many security blogs – which tends to ignore these facts.
I’m not a computer expert, so my problems with Firefox may be due to that. Every time I use Firefox I have problems. The font on web pages is very small, and pages look very different. And forget about using Firefox to access Hotmail — Hotmail looks really weird in Firefox.
Chrome is even worse – they do have extensions for that – but then you open yourself up to another exploit with every plug-in/extension you place in the browser.
Is there data for 32- vs. 64-bit vulnerabilities? IE9 vs. IE10?
Good question — as for the last vulnerability patch, I don’t believe the 64-bit browser even needed it; only the embedded 32-bit version.
I need to see if I can get LastPass to work on the x64 version of IE9; I’m sure it does.
I assume that any browser has zero-day vulnerabilities, and therefore I don’t count on it as my only layer of protection. Since using Sandboxie for the last two years, I have been insulated from all the drama concerning browser vulnerabilities. My non-technical wife can safely surf any dodgy site without worries. When the browser starts acting strangely, we know to close it, and Sandboxie will automatically delete all the changes made since the browser was opened. When going to any site to do financial transactions, we always close and reopen the browser to ensure a clean browser.
Sandboxie is inherently immune to most browser attacks when configured with a white list of applications allowed to run in the sandbox (Start/Run access restrictions) and another white list of sandboxed applications allowed to access the internet. Sandboxie has many other protections such as forcing sandboxed apps to drop admin rights.
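The wipe-on-close and run-whitelist behavior described above can be sketched generically in Python. This is only an illustration of the pattern, not Sandboxie’s actual mechanism, and the `ALLOWED_APPS` whitelist is hypothetical:

```python
import os
import shutil
import tempfile

# Hypothetical whitelist, mirroring Sandboxie's Start/Run restrictions:
# only these programs may be launched inside the sandbox.
ALLOWED_APPS = {"firefox", "notepad"}

class DisposableSandbox:
    """Give programs a scratch directory, then destroy every change on exit."""

    def __enter__(self):
        self.root = tempfile.mkdtemp(prefix="sandbox-")
        return self

    def launch(self, app):
        if app not in ALLOWED_APPS:
            raise PermissionError(f"{app} is not on the sandbox whitelist")
        # A real sandbox would redirect the app's filesystem writes here.
        return os.path.join(self.root, app)

    def __exit__(self, *exc):
        # Closing the sandbox deletes everything written since it opened,
        # which is why dormant malware inside it is wiped along with it.
        shutil.rmtree(self.root, ignore_errors=True)

with DisposableSandbox() as sb:
    workdir = sb.launch("firefox")
    os.makedirs(workdir)          # the "browser" writes some state
    assert os.path.isdir(workdir)
# After the with-block, the sandbox and all its contents are gone.
```

The key design point is that the deletion is unconditional: nothing written inside the sandbox, benign or malicious, survives the close.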
The only problem I see with Sandboxie is that if Brian started promoting it, it would take most of the drama out of internet security, and his blog would become less relevant. Even though I have gotten off the roller coaster of anxiety about internet security, I still find Brian’s blog interesting to read every week.
However –
http://radlab.cs.berkeley.edu/w/upload/3/3d/Detecting_VM_Aware_Malware.pdf
Sandboxie is like a VM – if not, in fact, exactly like a VM. If malware can work around that, you could still be financially impacted. Please remember that even legitimate sites, even some bank sites, can be infected and load VM-aware malware into the virtual environment.
As long as one is observant and follows up with the bank or other shopping institution, a complete disaster can, of course, still be avoided. The best mindset is one where you always assume you are, in fact, infected, and mitigate with tools that work in an infected environment. This also increases the odds that you will notice attempts to subvert those tools.
If malware detects that it is within a Sandboxie sandbox, and it lays dormant, it will then be deleted when the browser is closed (since the browser is a leading process that signals Sandboxie to delete the whole sandbox). With Run/Start restrictions, even if the malware detects Sandboxie, it is nearly impossible for the malware to escape the sandbox and cause security problems in the PC.
I agree that it is good to assume that infections exist inside Sandboxie’s sandbox. That is why we close and re-open the browser before doing a financial transaction.
If a web site, such as a banking site, gets hacked, Sandboxie protects the PC’s security. Technically, having bank account logon credentials stolen by malware on a hacked site is a privacy threat, not a security threat. Even a browser with no vulnerabilities doesn’t protect against such privacy threats. Sandboxie helps protect against them by allowing the user to block sandboxed processes from accessing certain folders or files. But that is not complete privacy protection.
Most of the time, the malware payload is hosted on a different site than the hacked site. This is why whitelisting allowed script sites with NoScript successfully blocks most malware even on hacked sites. Thus, NoScript is an important part of privacy protection.
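The default-deny, per-origin whitelisting idea behind NoScript can be sketched in a few lines of Python (a toy illustration of the concept, not NoScript’s actual code; the whitelisted hosts are made up):

```python
from urllib.parse import urlparse

# Hypothetical whitelist of origins the user has approved for scripting.
SCRIPT_WHITELIST = {"mybank.example", "www.mybank.example"}

def scripts_allowed(script_url):
    """Default-deny: run a script only if its host is explicitly whitelisted.

    Because the decision is per-origin, a payload that a hacked page loads
    from a different site (e.g. evil-cdn.example) is still blocked, even
    though the page itself is trusted.
    """
    host = urlparse(script_url).hostname
    return host in SCRIPT_WHITELIST

assert scripts_allowed("https://www.mybank.example/app.js")
assert not scripts_allowed("https://evil-cdn.example/payload.js")
```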
But we are diving into the details here. Let’s get back to the big picture… Brian’s blog post points out that none of the leading browsers can provide complete protection against zero-day security threats. The average reader then, naturally, feels helpless. My point is that readers would be well served to learn about Sandboxie so that they can at least solve the security problem and greatly improve upon privacy protection. If we point out all the privacy threats while discussing Sandboxie, then the average reader will still feel helpless, and therefore may not take any action. I would rather encourage readers to take action by installing Sandboxie (for free) and learning how to use it. Once that is accomplished, the reader can further improve privacy by installing NoScript.
I encourage readers not to get overwhelmed by all these blog posts and reader comments involving fear, uncertainty and doubt (FUD). Sandboxie offers such powerful security protection that many Sandboxie users don’t even run anti-virus software as a secondary layer. Since I scan all newly downloaded executables at VirusTotal.com, my anti-virus hasn’t found a single piece of malware since I installed Sandboxie a couple of years ago, even though we have stumbled on many rogue and hacked sites.
Cool story bro
“If malware detects that it is within a Sandboxie sandbox, and it lays dormant, it will then be deleted when the browser is closed (since the browser is a leading process that signals Sandboxie to delete the whole sandbox).”
You’re missing why he probably mentioned sandbox-aware malware. One key reason malware developers use this feature is to make you think a file is harmless. An important, seemingly-harmless file is more likely to stay on the system. In Sandboxie, if I recall correctly, you can keep certain files you download. So, sandbox-aware malware is a threat in that the user is more likely to get hit by it when they manually release it from the sandbox.
This doesn’t negate the protection Sandboxie offers, nor diminish the case for using it. However, it IS a problem that would affect many users of a program like Sandboxie if attackers start targeting that sandbox. The Sandboxie users have enjoyed an extra security through obscurity benefit for a while now. Let them continue enjoying it for now.
Realistically, Sandboxie is a good solution until it becomes specifically targeted. It’s quite easy to use, imposes little loading overhead compared to VM’s, stops most malware, & clears persistent state upon close. It also works for apps besides web browsers. It’s also cheap.
I know many people who have used it for years without getting an infection. We can’t ignore solutions that work now because they might be subverted later.
@Nick P: “I know many people who have used it for years without getting an infection.”
On normal systems, people may *claim* to not be infected, but the reality is almost never *known*. Absent an instrumented machine and extensive analysis, an infection may hide too well to be found. Tools simply do not exist which even claim to find an arbitrary bot infection. It is easy to claim to have avoided infection when almost everyone is incapable of detecting the infection which is there.
If we really could detect bots in practice, our bot problem would be over: We would just ask everyone to check for bots before getting online to their bank. Then, if they had a bot, they could re-install their OS (or do whatever) and check again, repeating until clean. The fact that we still have a bot problem *means* that bots cannot be detected in practice with reasonable effort. That is why extreme solutions remain in play. (Of course, if our computer hardware did not support infection, we would not need to look for it.)
@Terry Ritter, your wish has come true! See https://www.techsupportalert.com/content/how-know-if-your-computer-infected.htm
After running Sandboxie+Firefox+NoScript+Thunderbird for a couple of years, no infections detected using all the above tests.
@OutOfBox: “See https://www.techsupportalert.com/content/how-know-if-your-computer-infected.htm”
Before a malware attack, it is possible to instrument a computer to detect the data modifications which constitute infection. But we cannot expect programs running on a pwned OS (or BIOS) to report truth after the fact. And, in any case, malware is very clever at avoiding detection.
We know that tools which guarantee to find bots do not exist, because if they did people could use them before banking and we would not have the problems we have. Anti-vi companies would trumpet a complete solution to bots, yet we still have bots. The continuing bot problem *means* that no bot-detection tools are sufficiently powerful and practical to stop the progress of the problem.
@Terry Ritter
I applaud you for offering a hypothesis. But if you don’t offer solutions that readers can understand, you are spreading FUD. What specific solutions do you offer fellow readers?
@OutOfBox: “I applaud you for offering a hypothesis.”
I cited facts in support of my analysis. If you can imagine any way that effective bot tests can exist for you without having been co-opted for profit by the anti-vi makers, let us know. If you have contradictory facts, trot ’em out.
“But if you don’t offer solutions that readers can understand, you are spreading FUD.”
First of all, Truth does not become FUD just because it disagrees with you. You claim to have found tests to detect malware. While they may work for some older malware, they have somehow been unable to solve our current problems. As I pointed out, comprehensive and effective malware detection *cannot* exist, because if it did, anti-vi makers would get it, everyone would be told to use it, and there would no longer be a problem.
“What specific solutions do you offer fellow readers?”
Many times, on this blog, and in my articles:
http://www.ciphersbyritter.com/COMPSEC/
and in my recommendations to the US Government:
http://www.nist.gov/itl/upload/Ritter_ADVISING-THE-GOVERNMENT-ON-BOTS.pdf
I have suggested using Puppy Linux instead of Microsoft Windows on the web. I recommend using a machine without a hard drive, booting from the Puppy DVD immediately before any banking, and removing the DVD after booting. I recommend using Firefox, specifically because it supports a range of security add-ons (such as Adblock Plus, Certificate Patrol, Ghostery, LastPass, NoScript, Perspectives, Safe, ShowIP, and URL Tooltip). And since Firefox in Puppy looks and works exactly like Firefox in Windows, I can use the same browser in the same ways on both my diskless machine and the insecure disk-based Win7 media system in the living room.
Hope this helps.
@Terry Ritter
Thank you for detailing your specific solution(s). With the solutions offered, readers can choose the balance between security and cost/convenience that is right for them (and their family).
G Chrome vs. Safari or Why almost nobody can count
Quote Rik: “Google’s Chrome had an all time high of 275 new vulnerabilities reported … Apple’s Safari, which also had 45.”
Safari uses the same engine – WebKit – as Chrome. G Chrome adds some features, and another JS engine, but also security protections. Saying that Safari fares best and G Chrome fares worst with more than 6 times (!) as many vulnerabilities is simply unrealistic.
Most statistics are based on what the vendor or other people report about a browser, and they don’t actually test exploits. A vendor that is lying to its customers by hiding bugs will fare much better in statistics than one that rather fixes and reports any bug that might be exploitable. The latter is far more secure, though.
Browser security comparisons can’t be merely counting third party data. That’s a charade.
It needs tests from an independent, competent party. And yes, it needs to consider severity of the bug and days of exposure for end users.
Yes, and as a matter of fact, Apple has been getting caught more and more doing just what Microsoft does: sitting on vulnerabilities known inside the organization and patching them much later. Just do a search for old Apple vulnerabilities, and you will see what I mean.
Whether any exploit exists in the wild for these is beside the point in my book, because it is always just a matter of time before the criminals decide that coding for it is worth the time and trouble.
No, Safari and Chrome are not using the same engine, just as KHTML and Chrome are not – even though Chrome is based on WebKit, which is based on the KDE web engine KHTML. “Based on” != “the same”.
“many security pundits rank vulnerabilities as ‘zero-day’ as long as they are both publicly identified and unfixed. … I would argue active exploitation is the most important qualifier of a true zero-day”
I argue that there is no difference between the two. If a security hole is publicly known, it will be exploited.
I know the focus of your article was on browsers and zero-days, but you touched on a topic that I’ve been giving a lot of thought to lately: vulnerabilities that are ACTIVELY being exploited. As someone else mentioned, all software has vulnerabilities. But if I were to line up all 52,705 vulnerabilities currently in the CVE database and try to prioritize which software products I must spend my time keeping patched, I would focus first on the ones being actively exploited... the ones getting hammered the hardest. Especially if they involved products my organization relied on to protect critical data.
When I look at the daily ATLAS summary of global attacks and see how actively NON-zero-day, totally ancient, vulnerabilities are being attacked (vulnerabilities that have had fixes for like, a decade), I’m always momentarily stunned that so many organizations are getting attacked using exploits of old vulnerabilities. I think that if we did as good a job warning people about the importance of patching ALL their software, not just those with zero-days, we could bring even more sunshine to the problem.
The only problem with the “actually in the wild” argument is that it means little for my clients who become victims of organized IP thieves, and there is little news of it. Since money is not involved, the FBI won’t help, and the media couldn’t care less (except for Brian).
So many of these vulnerabilities are in fact being exploited, but there is no publicity for it. If you have a heavy hitter after you in the world of industrial espionage, it is little comfort that no “known” exploit is supposedly in the wild. Only a few targeted individuals suffer under these circumstances, but it is still tragic!
I agree with you that coders do have to prioritize, or they will never catch up. My only complaint is that the big money makers like Microsoft and Apple need to step up to the plate, put a little more money and effort into it, and stop using us as guinea pigs to test their patch cycle.
@JCitizen: “I agree with you that coders do have to prioritize, or they will never catch up.”
One of the largest computer security lessons of the past decade, at least for me, is the demonstration that patching does not lead to security. I had expected security patching to be like bug patching, to start out massive and then trail off toward perfection. That has not happened. Security patching does not trail off. There will be no catching up. No form of conventional production programming produces systems which can be patched into security.
“I had expected security patching to be like bug patching, to start out massive and then trail off toward perfection. That has not happened. Security patching does not trail off.”
That is because many of these issues stem from a legitimate use of the code, something that was put in on purpose and does have some legitimate uses. Until every single program is custom built for a particular one use purpose that can be controlled from start to finish, you will always have things being used in unintended ways that require perennial bandaids to fix.
That can cause the problem, but usually doesn’t. The underlying problems leading to the bugs we must patch are many. They include imprecise/incorrect requirements/specifications (major issue), unexpected bad interactions of components in complex software, coding errors, documentation errors, poor configuration, dependence on untrustworthy external entities (major issue), & lack of domain knowledge in construction of secure or reliable systems.
There’s more, but these are all exceedingly common and won’t go away. This is primarily due to economic and psychological forces. I expound on the reason in the comment below directed at Bruce Schneier re who’s to blame for USB stick issues.
Manufacturer or OS developer’s fault because “The problem is that the OS will automatically run a program that can install malware from a USB stick.” (Schneier)
ME:
“…that’s surely a problem. Why does this problem exist? Because manufacturers don’t focus on building secure systems. Why don’t they build secure systems? >>BECAUSE USERS DON’T BUY THEM!
Most users want the risk management paradigm where they buy insecure systems that are fast, pretty and cheap, then occasionally deal with a data loss or system fix. The segment of people willing to pay significantly more for quality is always very small and there are vendors that target that market (e.g. TIS, GD, Boeing and Integrity Global Security come to mind).
So, if users demand the opposite of security, aren’t capitalist system producers supposed to give them what they want? It’s basic economics, Bruce. They do what’s good for the bottom line. The only time they started building secure PCs en masse was when the government mandated them. Some corporations, part of the quality segment, even ordered them to protect I.P. at incubation firms and reduce insider risks at banks. When the government killed that and demand went low again, they all started producing insecure systems again. So, if user demand is required and they don’t demand it, who is at fault again? The user. They always were and always will be.
On the bright side, those same users are the reason I can send photos to friends on a thin, beautiful smartphone. They also gave us short-lived 1TB hard disks whose low cost made the short-lived part tolerable. They are also probably why I have a full-featured, fast, cheap wireless router at home. So, at least some good comes from users’ choices. But they definitely don’t accept the tradeoffs of real security, they don’t demand it, it doesn’t pay to produce it, and that’s why it’s their fault.”
Steve Lipner, who worked on an A1-class system, illustrates the economic side of it in a fast moving industry
http://blogs.msdn.com/b/sdl/archive/2007/08/23/temp.aspx
The “all-browsers-are-equally-exposed” argument is a logical fallacy known as “argument from ignorance”.
Making an argument like “Oh, yeah, IE has a vulnerability, but the others might have one too” is a failed argument.
Any such argument can be ignored on its face.
Jeff G.
That’s not really a fair characterization of the claim. It was never “… one of the others might have one too”, it was “Here are the numbers for each browser; note that Firefox, Chrome, and Safari each also have an equal share according to this source…”, and then the source is named.
The points in Brian’s article were never an argument from ignorance. On the contrary, the numbers and citation were given.
As always, Microsoft-versus-the-rest generates the most lively conversation and the most diverse opinions. Most enjoyable.
I would like to add one additional risk statistic to the conversation. Doug Hubbard (author of How To Measure Anything) did some measurement around the risks of zero-day exploits and concluded the chance of getting infected by a zero day before the vendor fixes the flaw is approximately 0.3%. That’s a 3-in-1,000 chance. In all seriousness, am I really going to go through all the pain of switching my users to another browser (even temporarily) for this level of risk? I don’t think so.
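Hubbard’s per-incident figure compounds across repeated exposures, which is easy to check with a couple of lines of arithmetic (the 0.3% per-incident probability is the estimate quoted above; the ten zero-day windows per year is an invented number purely for illustration):

```python
# Probability of at least one infection across n independent zero-day
# windows, each carrying probability p of infection before the patch ships.
def cumulative_risk(p, n):
    return 1 - (1 - p) ** n

per_incident = 0.003      # Hubbard's ~3-in-1,000 estimate
windows_per_year = 10     # assumed count, for illustration only

yearly = cumulative_risk(per_incident, windows_per_year)
print(f"~{yearly:.1%} chance of at least one infection per year")
```

Even compounded over ten such windows, the annual exposure stays in the low single-digit percent range, which is the point the comment is making about proportionate response.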
Brian,
I agree with your post for a SOHO environment. However, for a business environment, switching to another browser is just not a serious mitigation option, for the following reasons:
1. Just because a vulnerability doesn’t have a publicly known exploit does not mean the exploit doesn’t exist. It probably does, with a price tag, you just don’t know it.
2. Users are very dependent on browsers, particularly for intranet applications; most of them will not take the ‘new temporary web browser the IT guys suggested’ lightly – that is just not how it works. You need to run compliance tests, you need to guarantee centralized administration, and you need to explain to upper management why you are suddenly installing a new, unsupported browser on 5,000+ computers. Will that happen on the next ‘zero day’ too? Then why are you using IE to start with?
3. All other browsers are as vulnerable as IE; however, IE is the main target for exploit development because it is deployed in highly valued environments.
4. Finally, suggesting that people change browsers because of an ‘active zero day’ is like suggesting they change OSes because of ‘active zero days’ there. The fact is that most breaches are not initiated by zero days, but by common misconfiguration practices and poor implementation of security mechanisms – and that, unfortunately, IS always ‘active’, unlike zero days.
But yeah, for home users, switching is an interesting thing to do, maybe to learn something new along the way too.
“for a business environment, switching to another browser is just not a serious mitigation option”
That depends entirely on the organization. Both IBM and the U.S. Department of State currently provide alternative web browsers for their employees. IBM expects all their end users across all platforms to use Mozilla Firefox as their default web browser:
http://www.sutor.com/c/2010/07/ibm-moving-to-firefox-as-default-browser/
Whereas the State Department is providing their end users with Google’s Chrome web browser as an *option* to use:
http://googleenterprise.blogspot.com/2012/03/secretary-clinton-announces-state.html?utm_source=entblog&utm_medium=blog&utm_campaign=Feed%3A+OfficialGoogleEnterpriseBlog+%28Official+Google+Enterprise+Blog%29
Organizations that provide alternative web browsers to their end users (for whatever reasons) have the flexibility to send out an email alert making users aware of a zero-day exploit in-the-wild for browser X and request that they use browser Y until browser X is either patched or mitigated. And then provide a follow-up email when they can safely use browser X again.
Since security is everyone’s business in an enterprise, making end users aware of a zero-day exploit in-the-wild for commonly-used enterprise software can be viewed as an educational opportunity for security awareness. Especially if there is a lag in patching or mitigating the software in question.
I understand your comment, however what you mention are planned migrations with a bunch of resources invested in order to allow users to have an option. It is not what this article suggested, of switching temporarily to another browser because of a new zero day. Any company can change to any browser with enough planning or at least offer it as an option, that’s for sure.
My point initially, was that switching technologies on the fly without previous planning because of a new zero day, was not a serious alternative in most business environments.
I wonder what effect IE’s policy sandbox has on Windows Vista and 7 (and now 8)…
The browser runs at mandatory low integrity level on the newer versions of Windows. Without some means of bypassing that, my understanding is that attempts to create a persistent infection will probably not get anywhere.
Sandboxes are not perfect. But I would hazard a guess that most infections through IE are still on Windows XP, or on computers on which UAC has been turned off.
Note however that turning off UAC is probably more common than one might think. I once had a job at a small branch office of a sizable corporation; their computer policy required UAC to be turned off entirely on their Windows Vista machines, in order to ensure compatibility with third-party webinar software and with their own in-house software.
(They also had every employee running in a full admin account on his or her workstation, again mostly for compatibility reasons. And yes, they were handling data of a decidedly confidential nature. One should not assume that “professionals” always know what they are doing.)
On Windows Vista and Win7, IE’s Protected Mode restricts the browser to Low integrity so it can’t interact with higher-integrity processes, and restricts its ability to write to the filesystem to specific locations. This all assumes Protected Mode is in use, of course… if they’ve disabled UAC or switched off Protected Mode for the Internet Zone, then that’s all out the window.
Escaping Protected Mode is possible; security researchers find various techniques to accomplish that. In real-world scenarios, I think browser add-ons remain the main threat, however. Survey some of the infection statistics panels from exploit kits. Java exploits lead the pack, and can be used to bypass browser mitigations (ASLR, DEP, sandbox, Protected Mode, integrity level). PDF and Flash typically take the runner-up honors. Exploiting the browser itself doesn’t appear to be very common. On that note, you might find Dino Dai Zovi’s “Attacker Math” presentation interesting. Here it is in PDF form:
http://www.trailofbits.com/resources/attacker_math_101_slides.pdf
On Windows 8, IE10 has two versions and they have different Protected Mode behaviors by default. If you’re interested in the convolutions of that subject, here’s Microsoft’s blog that tries to clarify it:
http://blogs.msdn.com/b/ieinternals/archive/2012/03/23/understanding-ie10-enhanced-protected-mode-network-security-addons-cookies-metro-desktop.aspx
It’ll be interesting to see how EPM fares in real life.
Thank you, that was enlightening.
And I have a bad feeling about EPM. Apparently it is not compatible with browser plugins, so its focus is exactly where the least focus is currently needed, i.e. direct exploits against the browser.
Good post and thanks for the link.
In my honeypot lab, I’d say 85% of the zero-day threats are stopped by the IE9 browser. After that, it is whatever security defense I’m using that attempts to stop the rest.
I’m rather impressed with that – in fact it is a little annoying when I’m trying to find out how effective new AV/AM solutions are!
Comparing the number of vulnerabilities between pieces of software and using that to judge their relative security is completely bogus and a logical fallacy.
For one, the severity of the bug matters. Is it merely a DoS, or does it allow complete system compromise? Does it require user interaction, or is it a drive-by? If Chrome has 45 unpatched minor-severity bugs in the wild and IE has only one known bug, yet that bug allows complete system compromise, is it fair to say IE is more secure because of the 45-to-1 ratio? Obviously not.
Second, Firefox and Chrome are open source; IE is not. (In a sense IE violates Kerckhoffs’s principle, as does all closed-source software.) Ever noticed how conveniently IE and Opera always report fewer vulnerabilities? Is it because they have better coders than the competition? Umm, no. It’s because they are both closed source. Firefox and Chrome do not have that luxury: if someone finds a bug, it is made public. It’s hard to hide vulnerabilities in open-source code. Microsoft, on the other hand, can secretly discover and fix bugs in its code and never have to report them. This skews the numbers.
When you have the source code, finding bugs is easier. This fact causes some idiotic tech bloggers to proclaim FOSS software is less secure. However, finding bugs is *always* preferable to not finding them and then hoping and praying some blackhat doesn’t find them first. I will bet my bottom dollar that if Microsoft opened IE’s code tomorrow and posted it on GitHub, the number of vulnerabilities reported would skyrocket and outnumber Chrome’s and Firefox’s *put together* – at least for a while, until the FOSS world fixed MS’s coding flaws for them.
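The skew being alleged here is easy to demonstrate with toy numbers (all figures below are invented for illustration): two codebases with the same number of real flaws can show very different public vulnerability counts purely because of disclosure policy.

```python
# Two hypothetical browsers with the SAME number of real flaws.
true_flaws = 100

# Open-source project: nearly every flaw found becomes a public report.
open_disclosure_rate = 0.95
# Closed-source vendor: many flaws are fixed silently, never reported.
closed_disclosure_rate = 0.40

open_reported = round(true_flaws * open_disclosure_rate)      # 95
closed_reported = round(true_flaws * closed_disclosure_rate)  # 40

# Naively counting public reports, the closed browser "looks" more than
# twice as secure, even though the underlying code is exactly as flawed.
assert open_reported > 2 * closed_reported
print(open_reported, closed_reported)
```

This is why a raw report count measures disclosure behavior at least as much as it measures code quality.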
Google offers rewards for bugs, so that means their browser code is probably the most well vetted of any browser code on the planet. On the other hand, only a small team of people at MS have the IE code. Which would you trust more? Code vetted by the entire world looking to make a dollar? Or code vetted by a few MS programmers? I’ll take the former, thanks.
And finally, all the bugs in the world don’t matter if there are other technologies in place to prevent their exploitation. Google and Microsoft sandbox their browsers, whereas Firefox and Opera do not. It took *years* and several Pwn2Owns before anyone was able to break out of Chrome’s sandbox on a fully patched Windows 7 machine. Someone finally did it by chaining three separate exploits together and won a nice reward from Google for demonstrating it. The winner said it was the hardest thing he had ever exploited.
One can also apply Brian’s multi-browser argument in the mobile device space. Android 2.3, released in early December, 2010, was found to have a serious security vulnerability involving information disclosure. Google quickly patched the vulnerability, but a NC State University researcher discovered that the patch can be bypassed.
In response, Android-related web sites warned Android 2.3 users in late January, 2010, of the situation including mitigation:
o remove the microSD card from the device, or
o disable JavaScript in the default web browser
For users that find either of these mitigations onerous, the recommendation was to switch to a third-party web browser.
http://www.engadget.com/2011/01/29/android-2-3-security-bug-shows-microsd-access-vulnerability/
http://www.androidauthority.com/android-is-vulnerable-8885/
Even today, Android 2.3 usage share is approximately 56%:
http://en.wikipedia.org/wiki/Android_%28operating_system%29#Usage_share
And remember that, depending on the device manufacturer and carrier, Android updates can be slow and, for some devices, non-existent. For those still running Android 2.3 on their devices, has your device been patched? And, if so, have you applied the patch?
Just curious, imagining yourself as a corporate mobile device administrator, what steps would you have taken (or take) in this situation?
P.S. Anyone still like the idea of BYOD in the enterprise?
Correction: “In response, Android-related web sites warned Android 2.3 users in late January, 2011, of the situation including mitigation”
This vulnerability was patched on April 28, 2011, in Android version 2.3.4. Hopefully, all current Android 2.3 users are at or above this version.
And here’s the original vulnerability report:
http://www.csc.ncsu.edu/faculty/jiang/nexuss.html
> P.S. Anyone still like the idea of BYOD in the enterprise?
While I agree with your post, you must admit that a network could follow BYOD policies AND be perfectly safe, depending on its security architecture and policies?! (Safety should be a never-ending effort in every network architecture, and from an awareness point of view, BYOD could be an advantage over mediocrity...!?)
Almost all leading OSes are North American…
As long as the North Americans are afraid… of almost everything… (especially the Republicans), their government wants to keep tabs on any information exchanged through these leading OSes… which will keep all these OSes vulnerable… Or are there no mathematicians in North America good enough to design flawless OSes?
To me, North Americans are chicken; why else would they spend so much money on ‘Defense’?
Since this article is mostly about alternative web browsers, you’re in luck. The Opera web browser is for you as it is developed in Norway. Opera is both a very fine company and web browser. [Microsoft (Internet Explorer), Google (Chrome/Chromium), Apple (Safari) and the Mozilla Foundation (Firefox/Sea Monkey) are all headquartered in the United States, North America. Why confine your suspicions to OSs?]
You can run Opera on any one of the following OSs:
o Canonical’s Ubuntu (UK)
o Mageia (France/Brazil, recently forked from Mandriva)
o Puppy Linux (Australia)
o Qubes OS (Poland)
o SliTaz (France)
o SUSE’s SLED or openSUSE (Germany)
o Trisquel (Spain)
o Ututo (Argentina)
This is a short-list of GNU/Linux distros developed (primarily) outside of North America. Note that Qubes OS is more of a Xen distro and version 1.0 uses modified Fedora, I believe without SELinux, for its AppVM’s. The best thing is that the source code is available for you to review (except for the Opera web browser, which is proprietary).
As for sandboxing Opera on your GNU/Linux distro of choice, I would recommend that you consider either the Tomoyo or AppArmor Linux Security Module (LSM). Tomoyo is developed (primarily) in Japan. AppArmor, while created and initially developed in the United States, North America, is now maintained by Canonical, Ltd., in the UK.
P.S. 1 Although Linus Torvalds, the creator and lead maintainer of the Linux kernel, is originally from Finland, he now resides in the United States, North America.
P.S. 2 The Free Software Foundation (the GNU in GNU/Linux) is located in the United States, North America.
P.S. 3 While Xen was created in the UK, Citrix Systems acquired XenSource and the Xen Advisory Board is a Who’s Who of North American tech companies.
P.S. 4 Although Theo de Raadt, the leader of the OpenBSD project, is originally from South Africa, he now resides in Canada, North America. Sadly, Opera is not supported on OpenBSD. On the other hand, the OpenBSD project has worked very hard since 1996 to audit its code with the objective of eliminating bugs.
P.S. 5 I assume that, because of TrustedBSD, you are probably not interested in FreeBSD. Pity, because Opera supports FreeBSD.
P.S. 6 You might be interested in China’s premier GNU/Linux distro, Red Flag Linux. I don’t know whether Opera supports Red Flag Linux or not.
P.S. 7 There are enough bugs in software, both OSs and web browsers, that backdoors aren’t really necessary, even when Security Development Lifecycles are followed.
P.S. 8 If all else fails, an abacus can easily be had.
Umm, Opera is closed-source software. No thanks. Both Firefox and Chrome are 100% open-source. Moreover, Chrome is far more secure, at least on Linux, where it utilizes the seccomp-bpf sandboxing technique, which is quite cool (google it). It also uses a chroot() sandbox, so it effectively has two sandboxes. And if you add in AppArmor, you can have three.
“Umm, Opera is closed-source software. No thanks. Both Firefox and Chrome are 100% open-source.”
Indeed, I stated that Opera was proprietary in my post.
As for Chrome being 100% open-source, here is a link showing the differences between the Chrome and Chromium web browsers:
http://code.google.com/p/chromium/wiki/ChromiumBrowserVsGoogleChrome
Google’s Chrome browser is NOT 100% open-source, although with its WebKit and Chromium underpinnings, it is mostly open-source. If you are making the “open-source code is more secure than proprietary code” argument (which I disagree with, btw), then you want to be using the Chromium browser, Gnash, and your open-source PDF reader of choice on Linux or any other OS. Ditto if you dislike proprietary software for other reasons. And, depending on which Linux distro you use, Chromium may or may not be sandboxed.
P.S. 1 As I understand it, Opera for OS X is sandboxed when downloaded from Apple’s Mac App Store as application sandboxing is now a requirement for the App Store.
P.S. 2 You obviously don’t have a problem with software developed by North Americans (neither do I) as did the poster I responded to.
Yes, I am aware Chrome itself is a proprietary binary; Chromium is the open-source version. But considering Chrome is built directly from Chromium, there aren’t many changes in the code (maybe small changes, but not many). I see very little difference between the two browsers just from using them.
Right now I am using the proprietary Chrome, since the Ubuntu team no longer updates Chromium for whatever reason (and there is no official PPA for it; a few individuals have made their own PPAs, but I prefer to use official builds or to build it myself).
You said:
“If you are using the open-source code is more secure than proprietary code argument (which I disagree with, btw), then you want to be using the Chromium browser, Gnash and your open-source PDF reader of choice on Linux or any other OS”
Being more secure is relative. All software, whether written by Microsoft, Apple, Adobe or by open-source projects, will have bugs. If you can show me any human being on earth who can write a complex application *without* exploitable bugs, I would like to meet him/her, as they would be the smartest human on earth.
The advantage FOSS does have is that there are no secrets. Every bug is made public and fixed promptly. If there is a security issue in the Linux kernel, for instance, it is almost always patched and pushed within a day or two (I’ve seen it done the same day). Contrast this with MS or Apple, who often take months, or in some cases years, to fix security bugs.
As for proprietary software, yes I am forced to use a tiny bit on Linux. I use the Nvidia drivers and Adobe Flash. Both are necessary evils. Nvidia refuses to open their driver code and Flash is necessary until HTML5 becomes the norm. I do not use Adobe’s PDF reader as there is a great fully-functional open-source reader already included in many distros — Evince. Google also includes their own PDF reader in Chrome.
I have an ideological problem with closed-source code. Not because I am some communist pinko, but because I feel open development results in better-quality software. I also feel it is a user’s right to know exactly what is going onto his machine. If he is technically inclined, he should be able to examine all the source code, from the kernel up. Not that anyone is actually going to do this, but I like the freedom to do so if one chooses.
YFTR: Red Flag Linux cooperates with Opera, and one could have found Opera’s press release from Nov. 24, 2004, which says “Red Flag Offers Opera to Chinese Users,” on any search engine. 😉
Uzzi, thanks for the note. But go easy, as I didn’t say that Opera did NOT support Red Flag Linux (as I did with OpenBSD).
Let’s be positive about the Opera web browser here, as it has been dinged in a couple of posts. First, Opera has had most of the functionality of the Firefox NoScript add-on built in for some time. Second, it has the functionality of the Firefox Flashblock add-on built in, and one can set all plug-ins to load only on demand. Third, ad (and content) blocking functionality is built into the browser. Fourth, on the OS X platform, per the requirement in Apple’s Mac App Store, Opera now runs in a sandbox. This is the last I heard on this topic:
http://my.opera.com/desktopteam/blog/2012/05/23/scrolling-performance-improvements
Still waiting for sandboxing in Windows …
> Still waiting for sandboxing in Windows …
Congratulations on your high expectations… (I’ve had the dubious pleasure of putting one question or another to their CEOs, experts and management over the years, and learned the hard way that most people there operate on a low intellectual level. Maybe for now we again have to wait for some regulations… at the latest when collateral damage outperforms their profits. ;-))
Remember, boys and girls: Risk = Threat x Vulnerability x Cost. If Vulnerability is a constant, then Threat AND Cost both matter if you care at all about risk. Risk, after all, is what should be weighed before deciding on a mitigation. This is also easier to explain to management types, provided you are in touch enough to put the figures in language they can understand and are honest about the unknowns. IT security: not just for the dungeon-dwelling, binary-reading geeks anymore. :p
That’s something that’s very easy to say, but it’s almost meaningless. This isn’t one of those risks where we have a bunch of actuaries with great numbers. Researchers and commercial groups are STILL debating the best way to approach risk determination & ROI in INFOSEC. There are numerous competing methodologies. The numbers in the areas of threat, vulnerability & cost are varied (read: unreliable).
The best approach is a mix of risk management & basic prevention. An Australian government security group showed that just four basic mitigations stop 75% of attacks on company networks. Krebs has his rules. We can also routinely choose among IT products/services where one carries enormous risk & another doesn’t, so picking the low-risk option can have advantages.
The other part is the risk management. Here, we look at what assets a business has & their operational goals. We look at how the IT systems fit into that. We can then derive some costs & other metrics that can guide risk management decisions. Further INFOSEC investment should be driven by these to make financial sense & have continued management support.
Note, though, that doing only one or the other leaves you open to problems. INFOSEC isn’t as simple as Risk = Threat x Vulnerability x Cost. It’s more like Hacked? = Opportunity x Popularity of Target Vectors x Number of Target Vectors x Number of People Skilled in Attacking Target Vectors. Plenty of companies doing it like an accounting checklist have fallen to “APTs” & red teams. That should tell you something.