27 Oct 15

Cybersecurity Information (Over)Sharing Act?

The U.S. Senate is preparing to vote on cybersecurity legislation that proponents say is sorely needed to better help companies and the government share information about the latest Internet threats. Critics of the bill and its many proposed amendments charge that it will do little, if anything, to address the very real problem of flawed cybersecurity while creating conditions that are ripe for privacy abuses. What follows is a breakdown of the arguments on both sides, and a personal analysis that seeks to add some important context to the debate.

Up for consideration by the full Senate this week is the Cybersecurity Information Sharing Act (CISA), a bill designed to shield companies from private lawsuits and antitrust laws if they seek help or cooperate with one another to fight cybercrime. The Wall Street Journal and The Washington Post each recently published editorials in support of the bill.

Update, 6:57 p.m. ET: The Senate this afternoon passed CISA by a vote of 74-21.

Original story:

“The idea behind the legislation is simple: Let private businesses share information with each other, and with the government, to better fight an escalating and constantly evolving cyber threat,” the WSJ said in an editorial published today (paywall). “This shared data might be the footprint of hackers that the government has seen but private companies haven’t. Or it might include more advanced technology that private companies have developed as a defense.”

“Since hackers can strike fast, real-time cooperation is essential,” the WSJ continued. “A crucial provision would shield companies from private lawsuits and antitrust laws if they seek help or cooperate with one another. Democrats had long resisted this legal safe harbor at the behest of plaintiffs lawyers who view corporate victims of cyber attack as another source of plunder.”

The Post’s editorial dismisses “alarmist claims [that] have been made by privacy advocates who describe it as a ‘surveillance’ bill”:

“The notion that there is a binary choice between privacy and security is false. We need both privacy protection and cybersecurity, and the Senate legislation is one step toward breaking the logjam on security,” the Post concluded. “Sponsors have added privacy protections that would scrub out personal information before it is shared. They have made the legislation voluntary, so if companies are really concerned, they can stay away. A broad coalition of business groups, including the U.S. Chamber of Commerce, has backed the legislation, saying that cybertheft and disruption are ‘advancing in scope and complexity.’”

But critics of CISA say the devil is in the details, or rather in the raft of amendments that may be added to the bill before it’s passed. The Center for Democracy & Technology (CDT), a nonprofit technology policy group based in Washington, D.C., has published a comprehensive breakdown of the proposed amendments and their potential impacts.

CDT says despite some changes made to assuage privacy concerns, neither CISA as written nor any of its many proposed amendments address the fundamental weaknesses of the legislation. According to CDT, “the bill requires that any Internet user information volunteered by a company to the Department of Homeland Security for cybersecurity purposes be shared immediately with the National Security Agency (NSA), other elements of the Intelligence Community, with the FBI/DOJ, and many other Federal agencies – a requirement that will discourage company participation in the voluntary information sharing scheme envisioned in the bill.”

CDT warns that CISA risks turning the cybersecurity program it creates into a backdoor wiretap by authorizing sharing and use of CTIs (cyber threat indicators) for a broad array of law enforcement purposes that have nothing to do with cybersecurity. Moreover, CDT says, CISA will likely introduce unintended consequences:

“It trumps all law in authorizing companies to share user Internet communications and data that qualify as ‘cyber threat indicators,’ [and] does nothing to address conduct of the NSA that actually undermines cybersecurity, including the stockpiling of zero day vulnerabilities.”

ANALYSIS

On the surface, efforts to increase information sharing about the latest cyber threats seem like a no-brainer. We read constantly about breaches at major corporations in which the attackers were found to have been inside the victim’s network for months or years on end before the organization discovered the breach on its own (or, more likely, before it was notified by law enforcement officials or third-party security firms).

If only there were an easier way, we are told, for companies to share so-called “indicators of compromise” — Internet addresses or malicious software samples known to be favored by specific cybercriminal groups, for example — such breaches and the resulting leakage of consumer data and corporate secrets could be detected and stanched far more quickly.
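To make the idea of indicator sharing concrete, here is a minimal sketch of what a defender does with a shared list of known-bad Internet addresses: sweep existing connection logs for matches. Everything in this example — the indicator values, the log format, and the function name — is hypothetical, chosen for illustration rather than taken from any real threat feed or product.

```python
# Minimal sketch: sweep connection logs for shared indicators of
# compromise (IOCs). The IOC values and log lines below are
# illustrative stand-ins for a real threat feed and real logs.

KNOWN_BAD_IPS = {          # hypothetical indicators received from a sharing partner
    "203.0.113.7",
    "198.51.100.42",
}

def find_ioc_hits(log_lines, bad_ips):
    """Return (line_number, ip) pairs where a logged destination IP
    matches a shared indicator."""
    hits = []
    for lineno, line in enumerate(log_lines, start=1):
        # assume a simple space-separated format: timestamp src_ip dst_ip bytes
        fields = line.split()
        if len(fields) >= 3 and fields[2] in bad_ips:
            hits.append((lineno, fields[2]))
    return hits

sample_logs = [
    "2015-10-27T10:00:01 10.0.0.5 93.184.216.34 512",
    "2015-10-27T10:00:02 10.0.0.8 203.0.113.7 20480",   # matches an IOC
]

for lineno, ip in find_ioc_hits(sample_logs, KNOWN_BAD_IPS):
    print(f"possible compromise: log line {lineno} contacted {ip}")
```

The point of the sketch is that the matching itself is trivial; the hard parts, as the rest of this piece argues, are getting timely, trustworthy indicators in the first place and having people on staff to act on the hits.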

In practice, however, there are already plenty of efforts — some public, some subscription-based — to collect and disseminate this threat data. From where I sit, the biggest impediment to detecting and responding to breaches in a more timely manner comes from a fundamental lack of appreciation — from an organization’s leadership on down — for how much is riding on all the technology that drives virtually every aspect of the modern business enterprise. While many business leaders fail to appreciate the value and criticality of all their IT assets, I guarantee you today’s cybercrooks know all too well how much these assets are worth. And this yawning gap in awareness and understanding is evident in the sheer number of breaches announced each week.

Far too many organizations have trouble seeing the value of investing in cybersecurity until it is too late. Even then, breached entities will often seek out shiny new technologies or products that they perceive will help detect and prevent the next breach, while overlooking the value of investing in talented cybersecurity professionals to help them make sense of what all this technology is already trying to tell them about the integrity and health of their network and computing devices.

One of the more stunning examples of this comes from a depressingly static finding in the annual data breach reports published by Verizon Enterprise, a company that helps victims of cybercrime respond to and clean up after major data breaches. Every year, Verizon produces an in-depth report that tries to pull lessons out of dozens of incidents it has responded to in the previous year. It also polls dozens of law enforcement agencies worldwide for their takeaways from investigating cybercrime incidents.

The depressingly static stat is that in a great many of these breaches, the information that could have tipped companies off to a breach much sooner was already collected by the breached organization’s various cybersecurity tools; the trouble was, the organization lacked the human resources needed to make sense of all this information.

We all want the enormous benefits that technology and the Internet can bring, but all too often we are unwilling to face just how dependent we have become on technology. We embrace and extol these benefits, but we routinely fail to appreciate how these tools can be used against us. We want the benefits of it all, but we’re reluctant to put in the difficult and very often unsexy work required to make sure we can continue to make those benefits work for us.

The most frustrating aspect of a legislative approach to fixing this problem is that it may be virtually impossible to measure whether a bill like CISA will in fact lead to more information sharing that helps companies prevent or quash data breaches. Meanwhile, history is littered with examples of well-intentioned laws that produce unintended (if not unforeseen) consequences.

Having read through the proposed CISA bill and its myriad amendments, I’m left with an impression perhaps best voiced in a letter sent earlier this week to the bill’s sponsors by nearly two dozen academics. The coalition of professors charged that CISA is an example of the classic “let’s do something law” from a Congress that is under intense pressure to respond to a seemingly never-ending parade of breaches across the public and private sectors.

Rather than encouraging companies to increase their own cybersecurity standards, the professors wrote, “CISA ignores that goal and offloads responsibility to a generalized public-private secret information sharing network.”

“CISA creates new law in the wrong places,” the letter concluded. “For example, as the attached letter indicates, security threat information sharing is already quite robust. Instead, what are most needed are more robust and meaningful private efforts to prevent intrusions into networks and leaks out of them, and CISA does nothing to move us in that direction.”

Further reading: Independent national security journalist Marcy Wheeler’s take at EmptyWheel.net.


73 comments

  1. I see more than a few okay IOCs shared/traded every day. Real APTs have bypassed C&C and exfiltration traffic network address/protocol pattern IOC detection for a couple of years now. (Check your copy of the Snowden docs if you want to learn a couple of techniques.)

    The solution probably doesn’t involve handing out billion-dollar liability waivers like they are just a million-dollar tax exemption. In fact, solutions may actually involve liability for suppliers that supply critical infrastructure organizations with code.

    Why, every day, are there new:
    – buffer overflow bugs 27 years after the Morris worm
    – interpreted code (SQL, shell) injection bugs 19 years after phf
    – and cross-site scripting bugs nine years after Samy?

    This stuff could be explained when these classes of bug were relatively unknown, when there were hundreds of thousands of lines of legacy code to work with, and when IT was pretty close to the hobby computer projects people did in their garage…

    Since the Morris worm, systems have been completely replaced with new designs from the ground up at least two or three times (say “Unix,” Windows, “Linux”/Android, or whatever works for your field), and there is a little bit more at stake now.

    It would be interesting to know the bureaucratic excuses for why buffer overflows, code injection and XSS still show up in 1.0 versions of hundred-thousand-dollar projects from people with all the expected education. But with fewer and fewer paper withdrawal forms and manual-override toxic-sludge-lever backup options, we have passed the point where we can muse and brainstorm about the blame. We just have to assign the blame/responsibility somewhere, so that someone in the line of coming up with a cool idea and putting code in production will fear the fines more than reasonable project delays. Someone has to refuse all code without well-known practical structural solutions for bugs that have been in the top 10 for decades.

    Security bugs don’t cause visible problems at random times or during normal testing, but that doesn’t mean no one messed up until the bad guy came along. Bad guys should by now be expected; therefore security bugs should not.

    It is terrible that many backward-compatible IT systems have bugs, but whatever. For new cars, industrial control systems and hospital equipment developed in the past 5 years there is no excuse; lives were at risk, and someone has to be made to pay. That sucks for the engineers whose impressive product will be the butt of jokes over something that just never was in the product requirements. I guess it also sucks for the management types who have to explain a couple of quarters of missing forecasted bottom-line figures despite all their important work on synergy and stuff. But we can’t start to carefully assign the blame until we agree that there actually is blame.

    Now everyone finding bugs plays nice with everyone making them in the interest of getting things fixed. Now the billion-dollar DOD cyberWarWeaponIntelligenceRetaliationOperationsCommand contracts go to the exact same firms whose billion-dollar IT systems are full of the holes that caused the problem in the first place. If even the brand spanking new leading-brand IT systems are full of well-known bugs and everyone just shrugs and (hopefully) patches, this is not gonna be fixed.

    • Rabid Howler Monkey

      Liability, check. In fact, increased liability (I favour joint and several liability for data breaches in order to apportion the blame) along with the removal or nullification of liability exemptions in commercial software EULAs. If a company sells a software product to a customer, it is responsible for that product. Period.

      It’s also high time that software engineers (including web site developers) were licensed professionals as are other engineers (e.g., civil, electrical, mechanical, chemical, nuclear, mining, petroleum, etc.). The Principles & Practice exam for software engineers should stress both secure coding and testing practices.

      In addition, the Engineer-in-training exam currently given to engineers is probably not the best choice for software engineers and other IT professionals. Thus, a new EIT exam should be crafted specifically for the IT disciplines that includes all the basics including coding, data persistence, networking, testing, etc. After passing the EIT exam, software engineers will need to practice for a period of time under the direction of licensed software engineers (this will be rocky at first).
      Software engineering, like other engineering disciplines, is now squarely a matter of public interest.

  2. Couple of thoughts –

    1. The government does not share what it knows about ongoing / current threats. I do not believe anything in the bill addresses that.
    2. This already does not work. I’ve worked in critical infrastructure and I cannot tell you how many times I get notices weeks or months after something has occurred.
    3. I do not see how this is any dramatic change from what is currently happening.

    Maybe they will share information that they get from companies, but what else is dramatically different from right now? You can call the FBI, DHS, etc., and give them info. What is the real incentive to do that when they are not sharing what they know with us?

  3. Has anyone ever noticed how much BK looks like Rick Astley?

    BK’s no stranger to Fraud
    We know the rules and so does he
    A full commitment’s what I’m thinking of
    You wouldn’t get this from any other guy
    I just wanna tell you how I’m feeling
    Gotta make you understand

    BK’s Never gonna give us up
    BK’s Never gonna let us down
    BK’s Never gonna let this site go down and 404 you
    BK’s Never gonna make you cry…
    Unless you’re a bad guy…
    BK’s Never gonna tell a lie and hurt you

  4. Great article by Brian (as always).

    There are many reasons for security failures. They include:
    1. Developers who have no clue
    2. Sysadmins who have no clue
    3. Users who have no clue.
    4. Companies are cheap.
    5. Many dev & product managers want code deployed now!
    6. Lots of security tools, many a huge hassle or poorly designed.

    Let’s add outsourcing to the mix (see #4) and you get so-called security people from other companies who are incompetent or don’t care like real employees do.

    Mainly though from the top down (CEO), companies say all this stuff about security (they really care!), but it’s a bunch of baloney. Very few companies are willing to maintain a good security staff AND let them do their job (see #5).

  5. Hi Brian,

    Don’t know if you noticed, but the URDN (stophaus) got raided by the SBU in Ukraine…

    Check out their website…

  6. I have an unrelated question that I hope someone can help me with. I use Adobe Reader (11.0.10). This morning there was a note on my computer to upgrade to Adobe Acrobat Reader DC (V15.009.20069). Since it was from Adobe, I installed it. Since the name is different, maybe I should not have. I uninstalled it and re-installed Adobe Reader. What should I do?

    • Have you considered prayer?

      If you are installing executables without looking carefully at who exactly signed those, then it may be your best option at this point.

    • Adobe stopped selling XI when DC came out.
      This from an Adobe forum: “If you’re mainly using Acrobat for fill-able forms and e-signatures, Acrobat DC is by far the better choice. There are many more features for scanning, creating, and formatting fill-able forms. You can even ‘fill out’ a form that is not formatted for fill-able forms. You simply place a text box, check mark, or signature field anywhere on the page. However, it’s important to note that several of the features require you to purchase the program through a subscription pricing model (rather than purchase it outright). You can try out Acrobat DC and see which version you like best and which version works best for you. …”

  7. Call me cynical, but it sounds like just another way for the NSA (etc) to get their creepy fingers deeper into our personal data. I don’t see any benefit to CISA, but I see plenty of potential for abuse by a government increasingly obsessed with tracking and spying on its own citizens.