Posts Tagged: apple


26
May 18

Why Is Your Location Data No Longer Private?

The past month has seen one blockbuster revelation after another about how our mobile phone and broadband providers have been leaking highly sensitive customer information, including real-time location data and customer account details. In the wake of these consumer privacy debacles, many are left wondering: Who’s responsible for policing these industries? How exactly did we get to this point? What prospects are there for changes to address this national privacy crisis at the legislative and regulatory levels? These are some of the questions we’ll explore in this article.

In 2015, the Federal Communications Commission under the Obama Administration reclassified broadband Internet companies as telecommunications providers, which gave the agency authority to regulate broadband providers the same way as telephone companies.

The FCC also came up with so-called “net neutrality” rules designed to prohibit Internet providers from blocking or slowing down traffic, or from offering “fast lane” access to companies willing to pay extra for certain content or for higher quality service.

In mid-2016, the FCC adopted new privacy rules for all Internet providers that would have required providers to seek opt-in permission from customers before collecting, storing, sharing and selling anything that might be considered sensitive — including Web browsing, application usage and location information, as well as financial and health data.

But the Obama administration’s new FCC privacy rules didn’t become final until December 2016, a month after Donald Trump was elected president alongside a Republican-controlled House and Senate.

Congress still had 90 legislative days (when lawmakers are physically in session) to pass a resolution killing the privacy regulations, and on March 23, 2017 the Senate voted 50-48 to repeal them. Approval of the repeal in the House passed quickly thereafter, and President Trump officially signed it on April 3, 2017.

In an op-ed published in The Washington Post, Ajit Pai — a former Verizon lawyer and President Trump’s pick to lead the FCC — said “despite hyperventilating headlines, Internet service providers have never planned to sell your individual browsing history to third parties.”

FCC Commissioner Ajit Pai.

“That’s simply not how online advertising works,” Pai wrote. “And doing so would violate ISPs’ privacy promises. Second, Congress’s decision last week didn’t remove existing privacy protections; it simply cleared the way for us to work together to reinstate a rational and effective system for protecting consumer privacy.”

Sen. Bill Nelson (D-Fla.) came to a different conclusion, predicting that the repeal of the FCC privacy rules would allow broadband providers to collect and sell a “gold mine of data” about customers.

“Your mobile broadband provider knows how you move about your day through information about your geolocation and internet activity through your mobile device,” Nelson said. The Senate resolution “will take consumers out of this driver’s seat and place the collection and use of their information behind a veil of secrecy.”

Meanwhile, pressure was building on the now Republican-controlled FCC to repeal the previous administration’s net neutrality rules. The major ISPs and mobile providers claimed the new regulations put them at a disadvantage relative to competitors that were not regulated by the FCC, such as Amazon, Apple, Facebook and Google.

On Dec. 14, 2017, FCC Chairman Pai joined two other Republican FCC commissioners in a 3-2 vote to dismantle the net neutrality regulations.

As The New York Times observed after the net neutrality repeal, “the commission’s chairman, Ajit Pai, vigorously defended the repeal before the vote. He said the rollback of the rules would eventually benefit consumers because broadband providers like AT&T and Comcast could offer them a wider variety of service options.”

“We are helping consumers and promoting competition,” Mr. Pai said. “Broadband providers will have more incentive to build networks, especially to underserved areas.”

MORE OR LESS CHOICE?

Some might argue we’ve seen reduced competition and more industry consolidation since the FCC repealed the rules. Major broadband and mobile provider AT&T and cable/entertainment giant Time Warner are now fighting the Justice Department in a bid to merge. Two of the four largest mobile telecom and broadband providers — T-Mobile and Sprint — have announced plans for a $26 billion merger.

The FCC privacy rules from 2016 that were overturned by Congress sought to give consumers more choice about how their data was to be used, stored and shared. But consumers now have less “choice” than ever about how their mobile provider shares their data and with whom. Worse, the mobile and broadband providers themselves are failing to secure their own customers’ data.

This month, it emerged that the major mobile providers have been giving commercial third parties the ability to instantly look up the precise location of any mobile subscriber in real time. KrebsOnSecurity broke the news that one of these third parties — LocationSmart — leaked this ability for years to anyone via a buggy component on its Web site.

LocationSmart’s demo page featured a buggy component which allowed anyone to look up anyone else’s mobile device location, in real time, and without consent.

We also learned that another California company — Securus Technologies — was selling real-time location lookups to a number of state and local law enforcement agencies, and that accounts belonging to dozens of those law enforcement officers were obtained by hackers. Securus, it turned out, was ultimately getting its data from LocationSmart.

This week, researchers discovered that a bug in T-Mobile’s Web site let anyone access the personal account details of any customer with just their cell phone number, including full name, address, account number and, in some cases, tax ID numbers.

Not to be outdone, Comcast was revealed to have exposed sensitive information on customers through a buggy component of its Web site that could be tricked into displaying the home address where the company’s wireless router is located, as well as the router’s Wi-Fi name and password.

It’s not clear how FCC Chairman Pai intends to “reinstate a rational and effective system for protecting consumer privacy,” as he pledged last year after Congress voted to overturn the FCC’s 2016 privacy rules. The FCC reportedly has taken at least tentative steps to open an inquiry into the LocationSmart debacle, although Sen. Ron Wyden (D-Ore.) has called on Chairman Pai to recuse himself from the inquiry because Pai once represented Securus as an attorney. (Wyden also had some choice words for the wireless companies).

The major wireless carriers all say they do not share customer location data without customer consent or in response to a court order or subpoena. Consent is the operative word here: all of these carriers pointed me to their privacy policies, and it may be that the carriers believe those policies clearly explain that simply by using their wireless devices, customers have opted in to having their real-time location data sold or given to third-party companies.

Michelle De Mooy, director of the privacy and data project at the Center for Democracy & Technology (CDT), said if the mobile giants are burying that disclosure in privacy policy legalese, that’s just not good enough.

“Even if they say, ‘Our privacy policy says we can do this,’ it violates peoples’ reasonable expectations of when and why their location data is being collected and how that’s going to be used. It’s not okay to simply point to your privacy policies and expect that to be enough.”

Continue reading →


5
Jan 18

Scary Chip Flaws Raise Spectre of Meltdown

Apple, Google, Microsoft and other tech giants have released updates for a pair of serious security flaws present in most modern computers, smartphones, tablets and mobile devices. Here’s a brief rundown on the threat and what you can do to protect your devices.

At issue are two different vulnerabilities, dubbed “Meltdown” and “Spectre,” that were independently discovered and reported by security researchers at Cyberus Technology, Google, and the Graz University of Technology. The details behind these bugs are extraordinarily technical, but a Web site established to help explain the vulnerabilities sums them up well enough:

“These hardware bugs allow programs to steal data which is currently processed on the computer. While programs are typically not permitted to read data from other programs, a malicious program can exploit Meltdown and Spectre to get hold of secrets stored in the memory of other running programs. This might include your passwords stored in a password manager or browser, your personal photos, emails, instant messages and even business-critical documents.”

“Meltdown and Spectre work on personal computers, mobile devices, and in the cloud. Depending on the cloud provider’s infrastructure, it might be possible to steal data from other customers.”

The Meltdown bug affects every Intel processor shipped since 1995 (with the exception of Intel Itanium and Intel Atom before 2013), although researchers said the flaw could impact other chip makers. Spectre is a far more wide-ranging and troublesome flaw, impacting desktops, laptops, cloud servers and smartphones from a variety of vendors. However, according to Google researchers, Spectre also is considerably more difficult to exploit.

In short, if it has a computer chip in it, it’s likely affected by one or both of the flaws. For now, there don’t appear to be any signs that attackers are exploiting either to steal data from users. But researchers warn that the weaknesses could be exploited via Javascript — meaning it might not be long before we see attacks that leverage the vulnerabilities being stitched into hacked or malicious Web sites.

Microsoft this week released emergency updates to address Meltdown and Spectre in its various Windows operating systems. But the software giant reports that the updates aren’t playing nice with many antivirus products; the fix apparently is causing the dreaded “blue screen of death” (BSOD) for some antivirus users. In response, Microsoft has asked antivirus vendors who have updated their products to avoid the BSOD crash issue to install a special key in the Windows registry. That way, Windows Update can tell whether it’s safe to download and install the patch. Continue reading →
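For readers curious what that registry requirement looks like in practice, here is a minimal sketch, in Python, of how an antivirus vendor or administrator might set the compatibility flag described just above. The key path and value name are the ones widely reported at the time; treat them as assumptions and confirm against Microsoft’s own guidance before relying on them.

```python
# Hedged sketch: set the antivirus-compatibility flag that Windows Update
# checked before offering the Meltdown/Spectre patches. The key path and
# value name are as reported at the time; verify against Microsoft's guidance.
# Windows only; must be run as Administrator.
import winreg

KEY_PATH = r"SOFTWARE\Microsoft\Windows\CurrentVersion\QualityCompat"
VALUE_NAME = "cadca5fe-87d3-4b96-b7fb-a231484277cc"  # reported compatibility value

with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, VALUE_NAME, 0, winreg.REG_DWORD, 0)

print("Compatibility flag set; Windows Update can now offer the patch.")
```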


2
Aug 17

Flash Player is Dead, Long Live Flash Player!

Adobe last week detailed plans to retire its Flash Player software, a cross-platform browser plugin so powerful and so packed with security holes that it has become the favorite target of malware developers. To help eradicate this ubiquitous liability, Adobe is enlisting the help of Apple, Facebook, Google, Microsoft and Mozilla. But don’t break out the bubbly just yet: Adobe says Flash won’t be put down officially until 2020.

In a blog post about the move, Adobe said more sites are turning away from proprietary code like Flash toward open standards like HTML5, WebGL and WebAssembly, and that these components now provide many of the capabilities and functionalities that plugins pioneered.

“Over time, we’ve seen helper apps evolve to become plugins, and more recently, have seen many of these plugin capabilities get incorporated into open web standards,” Adobe said. “Today, most browser vendors are integrating capabilities once provided by plugins directly into browsers and deprecating plugins.”

It’s remarkable how quickly Flash has seen a decline in both use and favor, particularly among the top browser makers. Just three years ago, at least 80 percent of desktop Chrome users visited a site with Flash each day, according to Google. Today, usage of Flash among Chrome users stands at just 17 percent and continues to decline (see Google graphic below).

For Mac users, the turning away from Flash began in 2010, when Apple co-founder Steve Jobs famously penned his “Thoughts on Flash” memo that outlined the reasons why the technology would not be allowed on the company’s iOS products. Apple stopped pre-installing the plugin that same year.

The percentage of Chrome users over time that have used Flash on a Web site. Image: Google.

“Today, if users install Flash, it remains off by default,” a post by Apple’s WebKit Team explains. “Safari requires explicit approval on each website before running the Flash plugin.”

Mozilla said that starting this month Firefox users will choose which websites are able to run the Flash plugin.

“Flash will be disabled by default for most users in 2019, and only users running the Firefox Extended Support Release will be able to continue using Flash through the final end-of-life at the end of 2020,” writes Benjamin Smedberg for Mozilla. “In order to preserve user security, once Flash is no longer supported by Adobe security patches, no version of Firefox will load the plugin.” Continue reading →


24
Feb 17

iPhone Robbers Try to iPhish Victims

In another strange tale from the kinetic-attack-meets-cyberattack department, earlier this week I heard from a loyal reader in Brazil whose wife was recently mugged by three robbers who nabbed her iPhone. Not long after the husband texted the stolen phone — offering to buy back the locked device — he began receiving text messages stating the phone had been found. All he had to do to begin the process of retrieving the device was click the texted link and log in to a phishing page mimicking Apple’s site.


Edu Rabin is a resident of Porto Alegre, the capital and largest city of the Brazilian state of Rio Grande do Sul in southern Brazil. Rabin said three thugs robbed his wife last Saturday in broad daylight. Thankfully, she was unharmed and all they wanted was her iPhone 5s.

Rabin said he then tried to locate the device using the “Find my iPhone” app.

“It was already in a nearby city, where the crime rates are even higher than mine,” Rabin said.

He said he then used his phone to send the robbers a message offering to buy back his wife’s phone.

“I’d sent a message with my phone number saying, ‘Dear mister robber, since you can’t really use the phone, I’m preparing to rebuy it from you. All my best!’ This happened on Saturday. On Sunday, I’d checked again the search app and the phone was still offline and at same place.”

But the following day he began receiving text messages stating that his phone had been recovered.

“On Monday, I’d started to receive SMS messages saying that my iphone had been found and a URL to reach it,” Rabin said. Here’s a screenshot of one of those texts:


The link led to a page that looks exactly like the Brazilian version of Apple’s sign-in page, but which is hosted on a site that allows free Web hosting.

Continue reading →


2
May 16

How the Pwnedlist Got Pwned

Last week, I learned about a vulnerability that exposed all 866 million account credentials harvested by pwnedlist.com, a service designed to help companies track public password breaches that may create security problems for their users. The vulnerability has since been fixed, but this simple security flaw may have inadvertently exacerbated countless breaches by preserving the data lost in them and then providing free access to one of the Internet’s largest collections of compromised credentials.

Pwnedlist is run by Scottsdale, Ariz.-based InfoArmor, and is marketed as a repository of usernames and passwords that have been publicly leaked online for any period of time at Pastebin, online chat channels and other free data dump sites.

Until quite recently the service was free to all comers, but it makes money by offering companies a live feed of usernames and passwords exposed in third-party breaches that might create security problems going forward for the subscribing organization and its employees.

This 2014 article from the Phoenix Business Journal describes one way InfoArmor markets the Pwnedlist to companies: “InfoArmor’s new Vendor Security Monitoring tool allows businesses to do due diligence and monitor its third-party vendors through real-time safety reports.”

The trouble is, the way Pwnedlist should work is very different from how it does. This became evident after I was contacted by Bob Hodges, a longtime reader and security researcher in Detroit who discovered something peculiar while he was using Pwnedlist: Hodges wanted to add to his watchlist the .edu and .com domains for which he is the administrator, but that feature wasn’t available.

In the first sign that something wasn’t quite right authentication-wise at Pwnedlist, the system didn’t even allow him to validate that he had control of an email address or domain by sending a verification message to that address or domain.

On the other hand, he found he could monitor any email address he wanted. Hodges said this gave him an idea about how to add his domains: It turns out that when any Pwnedlist user requested that a new Web site name be added to his “Watchlist,” the process for approving that request was fundamentally flawed.

That’s because the process of adding a new thing for Pwnedlist to look for — be it a domain, email address, or password hash — was a two-step procedure involving a submit button and a confirmation page, and the confirmation page didn’t bother to check whether the thing being added in the first step was the same as the thing approved on the confirmation page. [For the Geek Factor 5 crowd here, this vulnerability type is known as “parameter tampering,” and it involves the ability to modify hidden parameters in POST requests.]
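To make the flaw concrete, here is a minimal sketch in Python of the kind of two-step flow described above. The names and structure are hypothetical, not Pwnedlist’s actual code: the vulnerable confirmation step trusts whatever comes back in the hidden form field, while the fixed version only accepts the server-side value recorded (and verified) in step one.

```python
# Hypothetical sketch of a two-step "submit then confirm" watchlist flow and
# the parameter-tampering hole described above. Not Pwnedlist's real code.

verified_items = {"example@mydomain.edu"}   # items this user actually proved control of
pending = {}                                # server-side state, keyed by a confirmation token

def step_one_submit(user: str, item: str) -> str:
    """Step 1: record server-side exactly what the user asked to monitor."""
    token = f"confirm-{user}"
    pending[token] = item
    return token

def add_to_watchlist(user: str, item: str) -> None:
    print(f"{user} is now monitoring {item}")

def step_two_confirm_vulnerable(user: str, token: str, item_from_form: str) -> None:
    """Flawed: trusts the hidden form field and never compares it to step one."""
    add_to_watchlist(user, item_from_form)

def step_two_confirm_fixed(user: str, token: str, item_from_form: str) -> None:
    """Fixed: only accept the value recorded and verified in step one."""
    item = pending.get(token)
    if item is None or item != item_from_form or item not in verified_items:
        raise PermissionError("item was tampered with or never verified")
    add_to_watchlist(user, item)

# An attacker submits something harmless, then swaps in someone else's domain:
token = step_one_submit("attacker", "example@mydomain.edu")
step_two_confirm_vulnerable("attacker", token, "victim-company.com")  # succeeds anyway
```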

“Their system is supposed to compare the data that gets submitted in the second step with what you initially submitted in the first window, but there’s nothing to prevent you from changing that,” Hodges said. “They’re not even checking normal email addresses. For example, when you add an email to your watchlist, that email [account] doesn’t get a message saying they’ve been added. After you add an email you don’t own or control, it gives you the verified check box, but in reality it does no verification. You just typed it in. It’s almost like at some point they just disabled any verification systems they may have had at Pwnedlist.” Continue reading →


12
Apr 16

New Threat Can Auto-Brick Apple Devices

If you use an Apple iPhone, iPad or other iDevice, now would be an excellent time to ensure that the machine is running the latest version of Apple’s mobile operating system — version 9.3.1. Failing to do so could expose your devices to automated threats capable of rendering them unresponsive and perhaps forever useless.

Zach Straley demonstrating the fatal Jan. 1, 1970 bug. Don’t try this at home!

On Feb. 11, 2016, researcher Zach Straley posted a YouTube video exposing his startling and bizarrely simple discovery: Manually setting the date of your iPhone or iPad all the way back to Jan. 1, 1970 will permanently brick the device (don’t try this at home, or against frenemies!).

Now that Apple has patched the flaw that Straley exploited with his fingers, researchers say they’ve proven how easy it would be to automate the attack over a network, so that potential victims would need only to wander within range of a hostile wireless network to have their pricey Apple devices turned into useless bricks.

Not long after Straley’s video began pulling in millions of views, security researchers Patrick Kelley and Matt Harrigan wondered: Could they automate the exploitation of this oddly severe and destructive date bug? The researchers discovered that indeed they could, armed with only $120 of electronics (not counting the cost of the bricked iDevices), a basic understanding of networking, and a familiarity with the way Apple devices connect to wireless networks.

Apple products like the iPad (and virtually all mass-market wireless devices) are designed to automatically connect to wireless networks they have seen before. They do this with a relatively weak level of authentication: If you connect to a network named “Hotspot” once, going forward your device may automatically connect to any open network that also happens to be called “Hotspot.”

For example, to use Starbucks’ free Wi-Fi service, you’ll have to connect to a network called “attwifi”. But once you’ve done that, you won’t have to manually connect to a network called “attwifi” ever again. The next time you visit a Starbucks, just pull out your iPad and the device automagically connects.

From an attacker’s perspective, this is a golden opportunity. Why? He only needs to advertise a fake open network called “attwifi” at a spot where large numbers of computer users are known to congregate. Using specialized hardware to amplify his Wi-Fi signal, he can force many users to connect to his (evil) “attwifi” hotspot. From there, he can attempt to inspect, modify or redirect any network traffic for any iPads or other devices that unwittingly connect to his evil network.
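The weakness here is that a saved open network is matched on nothing more than its name. The sketch below is a deliberately simplified illustration of that matching logic, written in Python; it is not Apple’s actual implementation.

```python
# Simplified illustration (not Apple's real logic): a device deciding whether
# to auto-join a network it sees, based only on the SSID and security type
# of networks it has joined before.

saved_networks = {"attwifi": {"security": "open"}}   # joined at a coffee shop once

def should_auto_join(beacon_ssid: str, beacon_security: str) -> bool:
    known = saved_networks.get(beacon_ssid)
    # Nothing here proves this access point is the same network seen before;
    # any hotspot broadcasting the same name and security type will match.
    return known is not None and known["security"] == beacon_security

print(should_auto_join("attwifi", "open"))   # True, even if the hotspot is hostile
```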

TIME TO DIE

And this is exactly what Kelley and Harrigan say they have done in real-life tests. They realized that iPads and other iDevices constantly check various “network time protocol” (NTP) servers around the globe to sync their internal date and time clocks.

The researchers said they discovered they could build a hostile Wi-Fi network that would force Apple devices to download time and date updates from their own (evil) NTP time server, and to set their internal clocks to one infernal date and time in particular: Jan. 1, 1970.
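For readers unfamiliar with NTP, the snippet below is a rough Python sketch of the kind of unauthenticated time query a device makes, pointed here at a public pool server. Because a basic SNTP exchange carries no authentication, a client steered to a hostile server will accept whatever date it is handed.

```python
# Rough sketch of an unauthenticated SNTP query. A device on a hostile network
# that answers these requests will accept whatever date the "server" reports.
import socket
import struct
from datetime import datetime, timezone

NTP_TO_UNIX = 2208988800   # seconds between the NTP epoch (1900) and Unix epoch (1970)

def query_time(server: str = "pool.ntp.org", port: int = 123) -> datetime:
    packet = b"\x1b" + 47 * b"\x00"          # LI=0, VN=3, Mode=3 (client request)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(5)
        sock.sendto(packet, (server, port))
        data, _ = sock.recvfrom(512)
    seconds = struct.unpack("!I", data[40:44])[0]   # transmit timestamp, seconds field
    return datetime.fromtimestamp(seconds - NTP_TO_UNIX, tz=timezone.utc)

print("Time reported by the server:", query_time())
```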

Harrigan and Kelley named their destructive Wi-Fi test network “Phonebreaker.”

The result? The iPads that were brought within range of the test (evil) network rebooted, and began to slowly self-destruct. It’s not clear why they do this, but here’s one possible explanation: Most applications on an iPad are configured to use security certificates that encrypt data transmitted to and from the user’s device. Those encryption certificates stop working correctly if the system time and date on the user’s mobile is set to a year that predates the certificate’s issuance.
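That validity-window check is simple to picture. The sketch below uses a hypothetical certificate window to show the date comparison a client performs; wind the clock back to 1970 and every certificate issued since then fails it.

```python
# Illustrative only: the date comparison at the heart of certificate validation.
# The validity window below is hypothetical.
from datetime import datetime, timezone

def cert_valid_at(not_before: datetime, not_after: datetime, clock: datetime) -> bool:
    """A certificate is only trusted while the system clock falls inside its window."""
    return not_before <= clock <= not_after

not_before = datetime(2015, 6, 1, tzinfo=timezone.utc)   # hypothetical issuance date
not_after = datetime(2018, 6, 1, tzinfo=timezone.utc)    # hypothetical expiry date

print(cert_valid_at(not_before, not_after, datetime(2016, 4, 1, tzinfo=timezone.utc)))  # True
print(cert_valid_at(not_before, not_after, datetime(1970, 1, 1, tzinfo=timezone.utc)))  # False
```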

Harrigan and Kelley said this apparently creates havoc with most of the applications built into the iPad and iPhone, and that the ensuing bedlam as applications on the device compete for resources quickly overwhelms the iPad’s computer processing power. So much so that within minutes, they found their test iPad had reached 130 degrees Fahrenheit (54 Celsius), as the date and clock settings on the affected devices inexplicably and eerily began counting backwards.

 

Continue reading →


22
Feb 16

The Lowdown on the Apple-FBI Showdown

Many readers have asked for a primer summarizing the privacy and security issues at stake in the dispute between Apple and the U.S. Justice Department, which last week convinced a judge in California to order Apple to unlock an iPhone used by one of the assailants in the recent San Bernardino massacre. I don’t have much original reporting to contribute on this important debate, but I’m visiting it here because it’s a complex topic that deserves the broadest possible public scrutiny.

Image: Elin Korneliussen (@elincello)

A federal magistrate in California approved an order (PDF) granting the FBI permission to access the data on the iPhone 5c belonging to the late terror suspect Syed Rizwan Farook, one of two individuals responsible for a mass shooting in San Bernardino on Dec. 2, 2015 in which 14 people were killed and many others were injured.

Apple CEO Tim Cook released a letter to customers last week saying the company will appeal the order, citing customer privacy and security concerns.

Most experts seem to agree that Apple is technically capable of complying with the court order. Indeed, as National Public Radio notes in a segment this morning, Apple has agreed to unlock phones in approximately 70 other cases involving requests from the government. However, something unexpected emerged in one of those cases — an iPhone tied to a Brooklyn, NY drug dealer who pleaded guilty to selling methamphetamine last year. Continue reading →


23
Dec 15

Expect Phishers to Up Their Game in 2016

Expect phishers and other password thieves to up their game in 2016: Both Google and Yahoo! are taking steps to kill off the password as we know it.

New authentication methods now offered by Yahoo! and to a beta group of Google users let customers log in just by supplying their email address, and then responding to a notification sent to their mobile device.

According to TechCrunch, Google is giving select Gmail users a password-free means of signing in. It uses a “push” notification sent to your phone that then opens an app where you approve the log-in.

The article says the service Google is experimenting with will let users sign in without entering a password, but that people can continue to use their typed password if they choose. It also says Google may still ask for your password as an additional security measure if it notices anything unusual about a login attempt.

The new authentication feature being tested by some Gmail users comes on the heels of a similar service Yahoo! debuted in October 2015. That offering, called “on-demand passwords,” will text users a random four-character code (the ones I saw were all uppercase letters) that needs to be entered into a browser or mobile device.
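Yahoo! has not published how it generates those codes, but a short, random, single-use code of that shape is easy to picture. The sketch below is purely illustrative and is not Yahoo!’s actual scheme.

```python
# Purely illustrative: generating a short, single-use, uppercase code of the
# kind described above. This is not Yahoo!'s actual generation scheme.
import secrets
import string

def on_demand_code(length: int = 4) -> str:
    return "".join(secrets.choice(string.ascii_uppercase) for _ in range(length))

print(on_demand_code())   # e.g. "QZPL"
```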


This is not Yahoo!’s first stab at two-factor authentication. Another security feature it has offered for years — called “two-step verification” — sends a security code to your phone when you log in from new devices, but only after you supply your password. Yahoo! users who wish to take advantage of the password-free, on-demand password feature will need to disable two-step verification in order for on-demand passwords to work.

Continue reading →


17
Jun 15

Critical Flaws in Apple, Samsung Devices

Normally, I don’t cover vulnerabilities that users can do little or nothing to prevent, but two newly detailed flaws affecting hundreds of millions of Android and Apple devices probably deserve special exceptions.

The first is a zero-day bug in iOS and OS X that allows the theft of both Keychain (Apple’s password management system) and app passwords. The flaw, first revealed in an academic paper (PDF) released by researchers from Indiana University, Peking University and the Georgia Institute of Technology, involves a vulnerability in Apple’s latest operating system versions that enables an app approved for download in the App Store to gain unauthorized access to other apps’ sensitive data.

“More specifically, we found that the inter-app interaction services, including the keychain…can be exploited…to steal such confidential information as the passwords for iCloud, email and bank, and the secret token of Evernote,” the researchers wrote.

The team said they tested their findings by circumventing the restrictive security checks of the App Store, and that their attack apps were approved by the App Store in January 2015. According to the researchers, more than 88 percent of the apps they tested were “completely exposed” to the attack.

News of the research was first reported by The Register, which said that Apple was initially notified in October 2014 and that in February 2015 the company asked researchers to hold off disclosure for six months.

“The team was able to raid banking credentials from Google Chrome on the latest Mac OS X 10.10.3, using a sandboxed app to steal the system’s keychain and secret iCloud tokens, and passwords from password vaults,” The Register wrote. “Google’s Chromium security team was more responsive and removed Keychain integration for Chrome noting that it could likely not be solved at the application level. AgileBits, owner of popular software 1Password, said it could not find a way to ward off the attacks or make the malware ‘work harder’ some four months after disclosure.”

A story at 9to5mac.com suggests the malware the researchers created to run their experiments can’t directly access existing keychain entries, but instead does so indirectly by forcing users to log in manually and then capturing those credentials in a newly-created entry.

“For now, the best advice would appear to be cautious in downloading apps from unknown developers – even from the iOS and Mac App Stores – and to be alert to any occasion where you are asked to login manually when that login is usually done by Keychain,” 9to5’s Ben Lovejoy writes.

SAMSUNG KEYBOARD FLAW

Separately, researchers at mobile security firm NowSecure disclosed they’d found a serious vulnerability in a third-party keyboard app that is pre-installed on more than 600 million Samsung mobile devices — including the recently released Galaxy S6 — that allows attackers to remotely access resources like GPS, camera and microphone, secretly install malicious apps, eavesdrop on incoming/outgoing messages or voice calls, and access pictures and text messages on vulnerable devices. Continue reading →


22
Oct 14

Google Accounts Now Support Security Keys

People who use Gmail and other Google services now have an extra layer of security available when logging into Google accounts. The company today incorporated into these services the open Universal 2nd Factor (U2F) standard, a physical USB-based second factor sign-in component that only works after verifying the login site is truly a Google site.

A $17 U2F device made by Yubico.

The U2F standard (PDF) is a product of the FIDO (Fast IDentity Online) Alliance, an industry consortium that’s been working to come up with specifications that support a range of more robust authentication technologies, including biometric identifiers and USB security tokens.

The approach announced by Google today essentially offers a more secure way of using the company’s 2-step authentication process. For several years, Google has offered an approach that it calls “2-step verification,” which sends a one-time pass code to the user’s mobile or land line phone.

2-step verification makes it so that even if thieves manage to steal your password, they still need access to your mobile or land line phone if they’re trying to log in with your credentials from a device that Google has not previously seen associated with your account. As Google notes in a support document, security key “offers better protection against this kind of attack, because it uses cryptography instead of verification codes and automatically works only with the website it’s supposed to work with.”
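That “works only with the website it’s supposed to work with” part is the key difference from one-time codes. The sketch below is a heavily simplified stand-in for that origin binding (the real U2F protocol uses per-site asymmetric key pairs, not an HMAC): because the site’s origin is folded into what gets signed, a response captured on a look-alike phishing domain will not verify for the genuine one.

```python
# Heavily simplified stand-in for U2F origin binding; the real protocol uses
# per-site asymmetric key pairs and challenge/response, not an HMAC.
import hashlib
import hmac

DEVICE_SECRET = b"per-device-secret"   # stands in for the token's private key material

def token_sign(challenge: bytes, origin: str) -> bytes:
    return hmac.new(DEVICE_SECRET, challenge + origin.encode(), hashlib.sha256).digest()

def server_verify(challenge: bytes, expected_origin: str, signature: bytes) -> bool:
    expected = hmac.new(DEVICE_SECRET, challenge + expected_origin.encode(),
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

challenge = b"random-server-challenge"
good = token_sign(challenge, "https://accounts.google.com")
phished = token_sign(challenge, "https://accounts-google.example.com")  # look-alike domain

print(server_verify(challenge, "https://accounts.google.com", good))     # True
print(server_verify(challenge, "https://accounts.google.com", phished))  # False
```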

Unlike a one-time token approach, the security key does not rely on mobile phones (so no batteries needed), but the downside is that it doesn’t work for mobile-only users because it requires a USB port. Also, the security key doesn’t work for Google properties on anything other than Chrome. Continue reading →