President Obama on Monday outlined a proposal that would require companies to inform their customers of a data breach within 30 days of discovering their information has been hacked. But depending on what is put in and left out of any implementing legislation, the effort could well lead to more voluminous but less useful disclosure. Here are a few thoughts about how a federal breach law could produce fewer but more meaningful notices that may actually help prevent future breaches.
The plan is intended to unify nearly four dozen disparate state data breach disclosure laws into a single, federal standard. But as experts quoted in this story from The New York Times rightly note, much rides on whether or not any federal breach disclosure law is a baseline law that allows states to pass stronger standards.
For example, right now seven states already have so-called “shot-clock” disclosure laws, some of them more stringent than the proposed 30-day standard: Connecticut requires insurance firms to notify customers no more than five days after discovering a breach, and California has similar requirements for health providers. Also, at least 14 states and the District of Columbia have laws that permit affected consumers to sue a company for damages in the wake of a breach. What’s more, many states define “personal information” differently and hence have different triggers for what requires a company to disclose. For an excellent breakdown of the various data breach disclosure laws, see this analysis by BakerHostetler (PDF).
Leaving aside the weighty question of federal preemption, I’d like to see a discussion here and elsewhere about a requirement that companies disclose how they were breached. Naturally, we wouldn’t expect companies to disclose the specific technologies they’re using in a public breach document. Additionally, forensics firms called in to investigate aren’t always able to precisely pinpoint the cause or source of the breach.
But this information could be publicly shared in a timely way when it’s available, and appropriately anonymized. It’s unfortunate that while we’ve heard time and again about credit card breaches at retail establishments, we know very little about how those organizations were breached in the first place. A requirement to share the “how” of the hack when it’s known and anonymized by industry would be helpful.
I also want to address the issue of encryption. Many security experts insist that there ought to be a carve-out allowing companies to avoid disclosure requirements when a breach exposes properly encrypted sensitive data (i.e., the intruders did not also manage to steal the private key needed to decrypt the data). While broader adoption of encryption could help lessen the impact of breaches, this exception is already included in some form in nearly all four dozen state data breach disclosure laws, and it doesn’t seem to have lessened the frequency of breach alerts.
I suspect there are several reasons for this. The most obvious is that few organizations that suffer a breach are encrypting their sensitive data in the first place, or those that do are doing so sloppily (by exposing the encryption key, for example). Also, most states have provisions in their breach disclosure laws requiring a “risk of harm” analysis that forces the victim organization to determine whether the breach is reasonably likely to result in harm (such as identity theft) to the affected consumer.
This is important because many of these breaches are the result of thieves breaking into a Web site database and stealing passwords, and in far too many cases the stolen passwords are not encrypted but instead “hashed” using a relatively weak and easy-to-crack approach such as MD5 or SHA-1. For a good basic breakdown on the difference between encrypting data and hashing it, check out this post. Also, for a primer on far more secure alternatives to cryptographic hashes, see my 2012 interview with Thomas Ptacek, How Companies Can Beef Up Password Security.
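To make the distinction concrete, here is a minimal, hedged sketch in Python (assuming the third-party bcrypt package, one of the purpose-built password hashes the Ptacek interview discusses) contrasting a fast, unsalted hash like MD5 with a slow, salted password hash like bcrypt:

```python
# Sketch only: contrasts a fast, unsalted hash (MD5) with a slow,
# per-user-salted password hash (bcrypt). Assumes the third-party
# "bcrypt" package (pip install bcrypt) is available.
import hashlib
import bcrypt

password = b"correct horse battery staple"

# Weak approach: a fast, unsalted hash. Identical passwords always
# produce identical digests, and attackers can test billions of
# guesses per second on commodity hardware.
md5_digest = hashlib.md5(password).hexdigest()
print("MD5:   ", md5_digest)

# Stronger approach: bcrypt generates a random salt and applies a
# deliberately expensive key-derivation function, so each stored
# hash is unique and each guess is costly for an attacker.
bcrypt_hash = bcrypt.hashpw(password, bcrypt.gensalt(rounds=12))
print("bcrypt:", bcrypt_hash.decode())

# Verification re-derives the hash using the salt embedded in the
# stored value; nothing ever needs to be "decrypted".
assert bcrypt.checkpw(password, bcrypt_hash)
```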
As long as we’re dealing with laws to help companies shore up their security, I would very much like to see some kind of legislative approach that includes ways to incentivize more companies to deploy two-factor and two-step authentication — not just for their customers, but just as crucially (if not more so) for their employees.
PRIVACY PROMISES
President Obama also said he would propose the Student Data Privacy Act, which, according to The Times, would prohibit technology firms from profiting from information collected in schools as teachers adopt tablets, online services and Internet-connected software. The story also noted that the president was touting voluntary agreements by companies to safeguard energy data and to provide easy access to consumer credit scores. While Americans can by law get a free copy of their credit report from each of the three major credit bureaus once per year — at annualcreditreport.com — most consumers still have to pay to see their credit scores.
These changes would be welcome, but they fall far short of the sorts of revisions we need to the privacy laws in this country, some of which were written in the 1980s and predate even the advent of Web browsing technology. As I’ve discussed at length on this blog, Congress sorely needs to update the Electronic Communications Privacy Act (ECPA), the 1986 statute that was originally designed to protect Americans from Big Brother and from government overreach. Unfortunately, the law is now so outdated that it actually provides legal cover for the very sort of overreach it was designed to prevent. For more on the effort to change the status quo, see digitaldueprocess.org.
Also, I’d like to see a broader discussion of privacy proposals that cover what companies can and cannot do with all the biometric data they’re collecting from consumers. Companies are tripping over themselves to collect oodles of this potentially very sensitive data, and yet we still have no basic principles that say what companies can do with that information, how much they can collect, how they can collect or share it, or how they must protect it.
There are a handful of exceptions at the state level (read more here). But overall, we’re really lacking any sort of basic protections for that information, and consumers are giving it away every day without fully realizing that there are basically zero federal standards for what can or should be done with this information.
Coming back to the subject of encryption: Considering how few companies actually make customer data encryption the default approach, it’s discouraging to see elements of this administration criticizing companies for it. There is likely a big showdown coming between the major mobile players and federal investigators over encryption. Apple and Google’s recent decision to introduce default, irrevocable data encryption on all devices powered by their latest operating systems has prompted calls from the U.S. law enforcement community for legislation that would require mobile providers to allow law enforcement officials to bypass that security in criminal investigations.
In October, FBI Director James Comey called on the mobile giants to dump their new encryption policies. Last week, I spoke at a conference in New York where the session prior to my talk was an address by New York’s top prosecutor, who said he was working with unnamed lawmakers to craft new legal requirements. Also last week, Sen. Ron Wyden (D-Ore.) reintroduced a bill that would bar the government from requiring tech companies to build so-called “backdoor” access to their data for law enforcement.
This tension is being felt across the pond as well: British Prime Minister David Cameron also has pledged new anti-terror laws that give U.K. security services the ability to read encrypted communications on mobile devices.
I wonder if the Federal Government will exempt itself and its agencies from these proposed regulations?
“I wonder if the Federal Government will exempt itself and its agencies from these proposed regulations?”
…of course, they almost never hold cops or Federal government employees accountable for anything. They are the biggest hypocrites of all.
Probably. But the problem is that ANY back door, even for law enforcement “only”, can be exploited by the bad guys also.
They already are exempt from disclosure. That was put into effect in a previous ruling. They are not required to disclose.
Of course not. The government is the biggest offender. A few years ago, it came out that the state of California had put all the info about drivers on the web. When that became public, they just took it down. No one lost their job or anything. There is no telling how much damage was done.
About a year ago, when Obamacare started up, it got out that people convicted of fraud and ID theft were being hired to take people’s info. The feds just wouldn’t respond about it.
More political nonsense that won’t go anywhere because of our dysfunctional Congress. You can’t enforce state or federal breach laws when the majority of the cyber criminals are not in the United States.
Ok, I’ll bite – @Donald Trump, what do *breach* laws (i.e., what to do to own up to your customers when you’ve been hacked) have to do with where the attack came from?
What’s relevant is where the companies that actually have the duty of safeguarding the data are located — usually that’s in the US. And in general, the care companies take with the data they hold about their clients is really poor.
I really am struggling to understand your point regarding the location of the attacker. Maybe you’re OK with someone from the U.S. having (illegitimate) access to your personal data??
Personally I’d rather have:
* some assurance that companies collect data for a clear, given purpose, and only the minimum amount of data necessary, instead of “hoover up as much as you can, just because you can, in case you can use it for something” (yes, that’s likely outside the proposed regulation – but it’d be SO good!)
* some minimum level of protection (and for minimum I hope it’s “not abysmally low”) that companies have to apply to the data collected — and some teeth to ensure they actually do.
* regulations that mandate that when (I was going to put “if” but given the recent history that’s too optimistic) companies are breached, and my data is somewhere I didn’t plan on, I be notified in a *timely* and *actionable* way so that I might protect myself. I stress timely and actionable as there is a worrying trend of “if I can’t stop them, let’s accelerate” – faced with the duty of informing, companies opt for flooding everybody with notices, knowing that if the volume is large enough people will just ignore them.
* some “stick” to make this work: if customers are given an easy way to reclaim damages *AND* companies are fined even if customers don’t sue them, there is some glimmer of hope for improvement.
Yes, that’s likely going to make working with private data more expensive. And that’s the whole purpose of the exercise – when companies have to “pay” for doing something, they tend to be more careful and think about whether it’s really necessary.
I’m **not** a fan of overreaching regulations – but I think in this case self-regulation hasn’t worked.
Ok, I’ll step down from my soapbox now 🙂
Yet these same companies send off all the customer data for ‘support’ to call centers in India or the Philippines without a second thought.
@lone_wolf
You’re right, the way I wrote the short response is rather confusing.
What I meant is that you can have all the state or federal laws you like to protect users’ data, but the cyber criminals will carry on regardless of how strictly you enforce regulations against the companies that end up getting breached. The cyber criminals in Eastern Russia and China are laughing at the United States, thinking those American fools are protecting the internet user after the fact. You have to make every company responsible at the same level when it comes to securing customer data, otherwise the criminals will continue to look for weak points in the system. Either you increase the standard of protection, or you are wasting your time with these kinds of regulations. Making laws to protect people after the fact will not work in today’s technology-based society, because most people are not internet security literate enough to understand the full risk of giving up their data. The better solution is to secure the data properly in the first place, since the skills and knowledge of the criminals far exceed those of the person who unwittingly trusts the company and ends up being part of the breach. If people stopped putting full trust in companies and instead educated themselves about protecting themselves first and foremost, then maybe companies would raise their standard of protection.
I think this could be a step in the right direction, but it is more important to look into how these attacks are happening, because that is the only way to prevent them. People also need more general cyber security awareness so that they aren’t falling into traps that could easily be prevented.
I agree. I also feel that knowing they will have to tell customers how they got hacked, if it happens, may make IT admins and company owners more apt to work on security before breaches happen instead of after. But then again, there’s still that “if it happens” part.
As long as the federal government itself participates in massive personal data collection through the American Community Survey (not to mention the NSA), I question whether meaningful privacy legislation will ever be achieved. Persons receiving this Census Bureau survey can expect a highly intrusive 28-page questionnaire full of questions like “do you have trouble making decisions?,” “what was your income last year?,” “how many times have you been married?,” and of course the infamous query: “do you have a flush toilet?” (see http://www.census.gov/acs/www/Downloads/questionnaires/2015/Quest15.pdf), all linked to names, addresses, phone numbers and birthdates. Big business loves this survey because companies use it as the basis for free marketing information, and it’s not information they can easily get otherwise. Persons receiving the survey are informed that they are required, under penalty of fines and possible imprisonment, to respond to all questions.
The line between public and private data collection is murky. Business loves the big data generated by the Census Bureau. In turn, the Bureau is testing the cross-comparison of its survey data with “administrative records” (read: “personal data”) collected by those same private businesses and social media. Although the brief paper entitled “Big Data” by Assistant Director of the Census Ron Jarmin describing the process has been removed from the census.gov website, you can find it cached at http://webcache.googleusercontent.com/search?q=cache:JU1EAnFMauEJ:https://www.census.gov/cac/census_scientific_advisory_committee/docs/20140918_abstract_big_data.pdf+&cd=1&hl=en&ct=clnk&gl=us. Ye gods. All of this from a federal agency building a massive personal database so poorly protected that it was the subject of an RSA exploit attack in 2011 (see http://krebsonsecurity.com/2011/10/who-else-was-hit-by-the-rsa-attackers/#more-11975). What privacy and accountability standards are in place to protect data collected by the Census Bureau?
Does anyone really believe that Congress is going to throttle the collection of personal data or force companies to reveal how it is exploited, stolen, or sold? I’m not optimistic. Information is power.
State data breach laws are supposed to help protect consumers’ privacy by mandating reports of breaches of PII, as defined by each state. A federal baseline may help define PII and also make privacy professionals’ lives easier by setting a standard for time-based reporting. None of these will help protect us from the attacks that cause the breach in the first place.
We need to have opt-in as the default for information sharing, this gives control back to the consumer and gives the consumer the power to control what data the corporations can keep.
We need to have a place that we can report cyber crime while it is happening. Many of us have tried to use the FBI IC3 site only to find that we never hear anything back, and I suspect that nothing is actually looked at when it is reported. If I have an active attack running and I simply block it, the attacker moves on to the next target. No harm no foul? Actually no consequence to the attacker at all.
Unless you are a big corporation and have high visibility like SONY, no one seems to care that you are being attacked all day every day. We just keep running our perimeter defenses and go on with our duties.
Any conversation on improving things is welcome; I just fear that the ignorance of non-technical politicians will trump common sense.
Some reasons the encryption clause of breach laws doesn’t seem to slow breach alerts:
1) As you state, encryption (especially key management) is hard to do correctly while still allowing all (legitimate) users and processes to access the data in question.
2) If there was a breach but the data was encrypted, we wouldn’t hear about it (as no notification is required).
3) At-rest encryption doesn’t do anything if it is the application that was breached. The front end of ANY application that accesses the data can obviously decrypt the data somewhere along the chain (for example web server/app server/database server/file system, etc.). As it is usually something close to the storage doing the encryption (like the file system), the attacker only needs to find a flaw in any part of that chain (or the device hosting any piece of the chain), rendering the encryption useless.
I contend that encryption for at-rest server/network data storage only protects against some attack vectors, so it may cause the organization to have too much confidence in its security posture.
I always felt the “no notification required because it was encrypted” clause in the breach laws generally only helps with stolen equipment, so you want to encrypt that mobile device, laptop, desktop, thumb drive and backup media.
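For what it’s worth, here is a small illustrative sketch of the point above (Python, using the third-party cryptography package; the EncryptedStore class is hypothetical, not any real product’s API): a storage layer that encrypts data at rest still hands plaintext to whatever code sits above it, so an attacker who compromises the application layer never needs the key.

```python
# Sketch only: illustrates why at-rest encryption does not stop an
# attacker who compromises the application layer. Assumes the
# third-party "cryptography" package; class and method names here
# are hypothetical.
from cryptography.fernet import Fernet


class EncryptedStore:
    """Toy storage layer that encrypts records on write and
    transparently decrypts them on read."""

    def __init__(self):
        self._key = Fernet.generate_key()   # at-rest encryption key
        self._fernet = Fernet(self._key)
        self._rows = {}

    def write(self, record_id, plaintext: bytes):
        self._rows[record_id] = self._fernet.encrypt(plaintext)

    def read(self, record_id) -> bytes:
        # The application is an authorized caller, so decryption
        # happens automatically -- and it happens just as readily
        # for an attacker who has taken over the application.
        return self._fernet.decrypt(self._rows[record_id])


store = EncryptedStore()
store.write("cust-1", b"4111-1111-1111-1111")

# A SQL-injection or app-server compromise simply invokes the same
# code path the legitimate application uses; the ciphertext sitting
# on disk is irrelevant to this attack vector.
print(store.read("cust-1"))
```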
Agree w/ StMurray (#3). There is little value in encrypting data at rest unless it’s on a mobile device. Most of the breaches were a result of credentials being compromised (social engineering). As Mr. Krebs points out, multi-factor authentication would, however, address most of these breach situations.
What if a specific technology is why they got breached?
Also, maybe a related/unrelated topic: any news about TrueCrypt and whether it was ever proven to have holes in it? I tend to distrust encryption from any Fortune 500 company, as mandated backdoors seem very likely, and there don’t seem to be many alternatives to the container/portable features TrueCrypt had.
“While Americans can by law get a free copy of their credit report from each of the threat major credit bureaus…”
Yes, they are indeed a threat!
Same case as always: more bigshots creating laws about things they don’t understand in order to look like they are doing something. They do not know how to run a business let alone how to protect computers against hackers and malware.
In the end the laws are not followed anyways. Look at “gun control”, a great example. “Just make some laws over the already-existing ones so it looks like we are taking action.” Bunch of BS if you ask me.
If my network was ever breached I would just blame the North Koreans. Seems to be an easy way out and no one will hold you accountable and the FBI will cover it up for you.
Breach notification is an absurd concept that permits politicians to pretend they are doing something. What am I supposed to do with this info?
What government needs to do is secure the Internet so the breaches don’t happen in the 1st place. A good place to start is to reengineer our email systems to make spoofing impossible.
Brian, you seem to imply that encryption is better than hashing. That is not the case with passwords. One should never encrypt passwords (unless you are creating a password vault to store them). Hashing is better, because it is not reversible, and authentication systems don’t need the reversibility of encryption. The issue is the hashing approach needs to be strong. I say ‘approach’ rather than ‘algorithm’ because the most important aspect is to salt the hash. If a well salted and hashed password file is breached, even if an attacker can guess the original values, the salt ensures those original values are not the passwords, and would not work on a different system with a different salt.
Hrm. I’m going to assume you did not look at the story/interview I linked to there, which goes into great detail on the benefits of password hashes like Bcrypt and Scrypt over cryptographic hashes like MD5?
Doh. I read the first link, but not the interview. It is good, and he is absolutely right about the algorithm, although I think he misses some reasons why salting is of primary importance (and he’s right about multifactor too). However, my gripe is with the phrase ‘in far too many cases the stolen passwords are not encrypted but instead “hashed” using a weak algorithm…’ This implies you should encrypt the passwords, which is wrong. Passwords should be stored as hashes, not encrypted.
Now, the reasons why I believe salting is paramount, even more than the choice of algorithm are:
1. Salting is easy. If you have the tools to hash, you have the tools to salt. So say you are protecting some low value data, and you have reasons not to use a better algorithm- maybe your tool set doesn’t support it, maybe cost issues – whatever. You still can – and should – salt.
2. More important is the way hash attacks work. The problem with MD5 & SHA-1 is you can easily generate initial values that, when hashed, match the hash. However, due to collisions, the initial value may or may not be the actual password. For the system that was attacked, it doesn’t matter – whatever initial value you get will work. But it isn’t transferable – that is, that initial value will not work on a different system with different salting. (If the salt is the same, it will work on the other system – which is why appending or prepending the user id is not a sufficient salt – the salt needs to be unique to the user and to the system.) Given people’s propensity to use the same password on multiple sites, preventing transferability is paramount.
3. As Ptacek points out, even with a strong hashing algorithm, a good attack will still get some initial values. (Again, due to collisions, these may not be the actual passwords, but on that system, they are equivalent.) Again, preventing transferability is why salts are important, even with good hashing.
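A small sketch of the transferability point, using only Python’s standard library (PBKDF2 stands in here for whatever strong password hash is actually in use): the same password enrolled with two different random salts produces unrelated stored values, so a value recovered from one breach is useless elsewhere.

```python
# Sketch only (standard library): shows that a unique random salt per
# user/system makes identical passwords hash to unrelated values.
import hashlib
import os


def hash_password(password: str, salt: bytes) -> bytes:
    # PBKDF2 stands in for whatever strong password hash is in use;
    # the point illustrated here is the effect of the salt.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)


password = "hunter2"

# Two different systems (or two different users) each generate their
# own random salt at enrollment time.
salt_site_a = os.urandom(16)
salt_site_b = os.urandom(16)

hash_a = hash_password(password, salt_site_a)
hash_b = hash_password(password, salt_site_b)

# Same password, unrelated stored hashes: a value recovered from a
# breach of site A does not work against site B's password file.
assert hash_a != hash_b
```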
Brian, password hashing is still hashing, just using an algorithm that’s more suited to passwords, as Ptacek explains very well. I think MrPaul was commenting on your use of “encryption” when you meant hashing.
Encryption of data at rest doesn’t help. Why? Because most systems rely on access control. If a particular user has access privileges to some data, the system will decrypt the data for the user. Hacks usually exploit escalation of privileges and don’t need the decryption key to view data.
I was reading an article on the internet about identity theft protection to monitor my credit, medical information, bank account, credit cards, drivers license, address change, social security number, etc. I’d like to sign up but can’t afford the $20/month. What if the government were to provide this service for free?
Jim,
Given the number of breaches over the last year, there’s a good chance about every American with a credit card is covered for free. Have a look at some of the breaches covered in this blog and Google to see if you already have a code available.
http://krebsonsecurity.com/category/data-breaches/
More importantly, understand what credit monitoring services are and what they aren’t.
http://krebsonsecurity.com/2014/03/are-credit-monitoring-services-worth-it/
Before anyone gets too excited about the value of those “free” services, it might be a good idea to check out some of the reviews of AllClear ID, the company most often chosen to provide the service, i.e. by Home Depot, among others.
http://bestidtheftcompanys.com/company/all-clear-id
Not exactly confidence inspiring.
I agree, AllClear ID did not inspire me with much confidence either. And with all the personal info they want/need to do the job, well, I’m delaying signing up. When I went to register, the form was printing my information backwards. No kidding! So I contacted them about it, and also asked them what kind of security they use to protect my personal information. Only generic responses, and then I scanned their website, only to find out they are using GoDaddy on Apache servers, which may not have been patched at the time of my inquiries.
At any rate, I think people want more transparency from the government and industry. And for us consumers to be LESS transparent! That would be the right balance.
Jim,
Nothing supplied by the government is “free”, someone’s tax dollars paid for that.
Brian, This could be an opportunity for you to be an organizer for reasonable, appropriate and effective policies. I’m concerned that if the government peeps or industry do this it will only make laws and ways to charge people – permits and fines. But do very little at a tech level to be effective, in the end just being a pile of red tape that breaks small businesses and burdens consumers.
You have a better motivation than politicians and corporate heads.
The data ownership, distribution and profiteering issues of privacy are still on the table. The information model needs a full re-tooling, not just added encryption. The arguments for database encryption have been “hashed over” pretty intensely. What we really need to discuss is proper protection of information at the source of creation and the proper management of that information based upon an owner’s preference for privacy and security.
The custody of information has been broken from the start, as applications and devices distribute, leak and profit based upon a model of obscurity. This is the privacy side of the coin. The opposite side is database and cloud management of both corporate and personal information. That information can be under strict policy, protection and distribution rights based upon the owner’s definition of their security policy. We have the tools as an industry to develop an information security model that is flexible and secure. The question is, will we?
Why should we have to ever pay for credit monitoring?
How about a law that states: every time our credit information is sold or marketed, we get a percentage, plus permanent, persistent free credit reporting/monitoring with a credit score?
I’d love to see the “how” included, and I would also love to see some real privacy laws adopted.
Instead, I fear that the Sony hack and CC breaches that have made headlines over the past year or so are going to be turned into political ammunition to create more online legislation that we are told is there to “protect” us, but in effect will just erode more freedoms, and give the government more power to invade our lives…..