
Target Settles Out

DISCLAIMER: This post is based on preliminary press accounts; I have not reviewed the full and final settlement terms.

It’s no secret: I’d like to see a more “cyber secure” nation. If you’ve read some of my past posts, you’ve probably heard me struggle with the effects of giving incentives (right, wrong and outright “disincentives”) to both everyday folks and security pros to help improve our cybersecurity posture. Earlier this month I gave a (dry even by sauvignon blanc standards) synopsis of some legislative developments as legwork for a following discussion on the incentives government gives and industry responds (or doesn’t respond) to in order to improve our collective security. At least that was “the plan.” Skipping ahead a few steps, but definitely related, we have more developments in the Target data breach litigation.

As of this writing, Target appears poised to settle litigation over the massive 2013 data breach of their IT systems for a total of $10M (plus up to $6.75M in fees to settlement class counsel.) The breach affected 40 million accounts and up to 110 million individuals. From an “industry incentives” point of view, this news sends some mixed messages.

STANDING – Traditionally, cases like this are dismissed early in the process on the basis that cardholders and customers lack standing to sue because they suffered no actual, demonstrable harm as a result of the breach. The Target case (as one of a recent “four-pack” of cases indicating a coming change in this view of standing, see “CASES” below) is somewhat notable from the standpoint that it survived a standing challenge. Or as I heard defense counsel recently quip, “It’s getting to the point where I may actually have to defend one of these cases on the merits soon.” Talking incentives, this seems to be a clear indicator that courts are giving more weight to finding standing based on the likelihood of harm to customers.

ACTUAL HARM – Surviving a challenge to standing hasn’t exactly paved a road to compensation, however. Let’s pause for a moment to do some math. Assuming a maximum potential affected class of 40 million accounts and/or 110 million individuals, are Target and plaintiffs’ counsel contemplating 25¢ or 9¢ in full-class compensation, respectively (a quick sketch of that math follows the list below)? Not exactly. The explanation for the low compensation-to-class-member ratio probably lies in the difficulty class members will face in proving they’re entitled to compensation under the terms of the settlement. Individuals must:

  • Prove they used a debit or credit card at a US Target store (not the Target.com website) over the 19-day range of November 27, 2013 through December 15, 2013 [i]
  • Declare whether they actually received a breach notice or simply believe their information was compromised [ii]
  • Demonstrate they experienced at least one of the following:
    • Unauthorized, unreimbursed charges on their credit or debit card
    • Time spent addressing those charges
    • Fees to hire someone to correct their credit report
    • Higher interest rates or fees on the accounts
    • Credit-related costs
    • Costs to replace their identification, Social Security number or phone number
    • Loss of access or restricted access to funds
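
For the curious, the quarter-and-dime math above works out as follows. This is just a back-of-the-envelope sketch using the figures reported in press accounts, and it assumes (unrealistically) that every affected account or individual files a valid claim:

```python
# Back-of-the-envelope settlement math, using only the figures reported in press accounts.
SETTLEMENT_FUND = 10_000_000        # $10M consumer settlement fund
CLASS_COUNSEL_FEES = 6_750_000      # up to $6.75M in fees to settlement class counsel
AFFECTED_ACCOUNTS = 40_000_000      # compromised payment card accounts
AFFECTED_INDIVIDUALS = 110_000_000  # individuals with personal information exposed

print(f"Per account:    ${SETTLEMENT_FUND / AFFECTED_ACCOUNTS:.2f}")     # $0.25
print(f"Per individual: ${SETTLEMENT_FUND / AFFECTED_INDIVIDUALS:.2f}")  # $0.09
print(f"Fund plus counsel fees: ${SETTLEMENT_FUND + CLASS_COUNSEL_FEES:,}")  # $16,750,000
```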

Set the “way-back” machine to last December and my comments on Breach Fatigue (specifically, the difficulty of collecting and assessing information about your exposure as an “included individual” in a breach), and it will be interesting to see how many people actually receive compensation as a result of this settlement. But settlements are all about controlling risk and outcomes. While Federal District Judge Paul Magnuson got to decide that standing existed for litigation to proceed, Target gets to decide the parameters of what constitutes “actual harm” through these settlement compensation requirements.

DUTY OF CARE/SECURITY IMPROVEMENTS – It’s worth noting that this settlement resolves the consumer negligence claims that survived dismissal. Why’s that important? Because if the litigation proceeds to trial, it opens up the possibility that the court carves out a duty of care for the security and handling of customer data by data stewards like Target. Again on the notion of controlling risk, it’d be risky to roll the dice on that standard being developed at Target’s peril if an affordable alternative exists. It’s also somewhat unique that, as part of the settlement, Target has agreed to have certain security improvements imposed as part of the court order. Similar to controlling the terms for determining actual harm, Target gets to control the dialogue around what steps should be taken in light of the breach. Settlement steps include:

  • Appointing a Chief Information Security Officer
  • Maintaining a security program that identifies risks to shoppers’ personal information
  • Having a process for monitoring security risks
  • Giving security training to employees

To be fair, for a Level 1 PCI merchant with nearly $52B in market cap, you could probably argue that these are “Security 101” steps. And most, if not all, were probably already implemented between the time of the breach and the settlement. [iii]  But the inclusion of these security improvements in the final order gives us some market indicators of an evolving duty of care regarding security, even if this particular settlement provides no acknowledgement or precedent to that effect.

With new info from the Target consumer case regarding valuation (40M consumers and a payout likely far south of the maximum $10M), standing (broader inclusion), actual harm and a security duty of care (negligence claims allowed to proceed and the parties including security improvements as part of the order), are these more and better incentives to secure information, or is this just an exercise in post-breach risk management?


CASES

  • In re Adobe Sys. Privacy Litig., F. Supp. 2d, 2014 WL 4379916 (N.D. Cal. Sept. 4, 2014) – expansive treatment of “standing,” accepting the notion of “increased risk of future harm” and the cost of steps to mitigate fraud and malfeasance.
  • FTC v. Wyndham Worldwide Corp., 10 F. Supp. 3d (D.N.J. 2014) – refusal to dismiss claims and the FTC’s contention that “reasonable” steps were not taken to protect data.
  • In re LinkedIn User Privacy Litig., 2014 WL 1323713 (N.D. Cal. Mar. 28, 2014) – for paying premium users, payment of a fee coupled with misrepresentation of LinkedIn’s own privacy policy gives rise to standing.

 

[i]  Can’t help but wonder then (here we go again with the math…) for a breach scope of 40M compromised accounts, does citing this specific date range mean that 12.6% of the US population (2013) shopped at Target using credit/debit over those 19 days??

[ii]  As discussed in the opening disclaimer, I’m not intimately acquainted with the 97-page potential settlement doc, but I’m guessing not receiving a notification will be a barrier to receiving compensation.

[iii]  For example, Target appointed Brad Maiorino as CISO last summer.

Cybersecurity: Recent Legislation

I know, talking legislation is sexy stuff. But a short look back at some recent developments will be foundational to some important coming discussions.

The past several months have been packed with Cybersecurity legislation. Lawmaking is inherently an iterative process, and at the risk of sounding cynical, despite all the activity it’s fair to say we haven’t covered much new ground in 2015. But don’t interpret the following synopsis as cynicism. The legislation is absolutely indicative of substantive forward progress, but I feel there’s an opportunity at hand for larger leaps forward. A short recap of recent legislation and recurring themes to frame a later discussion:

Private Sector Information Sharing: 2015’s State of the Union address included a section focusing on Cybersecurity, specifically a call for better efforts to “integrate intelligence.” Less than a month later, the President would raise that concern again at a Cybersecurity and Consumer Protection Summit. The summit featured the introduction of an Executive Order (EO), “Promoting Private Sector Cybersecurity Information Sharing.” Information sharing is hardly virgin ground for Cyber legislation.[i] The call for Cybersecurity Information Sharing and Analysis Organizations (ISAOs) can be found in the Homeland Security Act of 2002, the 2013 State of the Union Address and the 2013 and 2015 versions of the proposed Cyber Intelligence Sharing and Protection Acts. The 2013 State of the Union Address also gave rise to its own EO (EO 13636) calling for the creation of a framework outlining processes for voluntary private sector information sharing. Two of the four Cybersecurity bills passed late in 2014, the Cybersecurity Enhancement Act and the National Cybersecurity Protection Act, called for similar collaboration. Last month’s EO adds further detail on ISAOs, calling for formalized changes and updates to sharing policy as well as the creation of a non-government ISAO Standards Organization. If recent legislation has a predominant theme, it’s “Please share…pretty please??”

Cyberspace as Critical Infrastructure: Legislation has also emphasized the protection of U.S. Cyberspace as key to economic, military and national security stability. Several sectors of U.S. Cyberspace were therefore defined as critical infrastructure in the Homeland Security Act of 2002. Provisions were made for critical infrastructure protection (CIP) in 2003’s Homeland Security Presidential Directive-7. Those CIP prescriptions were further refined in the Cybersecurity guidance of 2013’s EO 13636 (titled “Improving Critical Infrastructure Cybersecurity”) and the resiliency prescriptions of Presidential Policy Directive 21. And the Cybersecurity Enhancement Act of 2014 once again highlighted the importance of our cyber assets and infrastructure to American prosperity and well-being. Second theme: “This stuff is important.”

Privacy and Civil Liberties Are Essential: There have been two recent long-form attempts to describe privacy rights and civil liberties in Cyberspace, 2012’s Consumer Privacy Bill of Rights and last week’s 2015 version of the Consumer Privacy Bill of Rights. In the span between those two offerings, preserving privacy has also been a stated outcome of 2013’s EO 13636, this year’s EO Promoting Private Sector Cybersecurity Information Sharing and the National Cybersecurity Protection Act of 2014. At one point President Obama also threatened to veto the 2013 version of the Cyber Intelligence Sharing and Protection Act if it wasn’t amended to ensure privacy and civil liberty protections.

Now that the legal pedigree is behind us, that’s thirteen years of the same three-part harmony:
“Share information.”
“Cyberspace is critical.”
“Privacy is essential.”

And let’s be clear who the intended audience is – the private sector. Because as of 2009, the vast majority of public sector organizations had begun migrating to some subset of the same government security standards. So after thirteen years, how are we still in roughly the same position? I believe it’s because, on their face, those three statements don’t yet provide enough incentive to bring the right stakeholders to consensus. I believe we’re close. I’ve offered what I think it will take to get us there absent some positive change. But I also believe the public sector (and not necessarily just the Feds) has at least one more compelling incentive it can offer. I think evidence of that can be heard here in the reactions of Trustwave’s Phil Smith, RSA’s Mike Brown and Denim Group’s John Dickson.

To be continued…


[i]  Information sharing is one means of attempting to scale efforts to combat an “open source adversary.” Open source adversaries, like cyber criminals and advanced persistent threats, can cheaply and easily replicate attack methods and vectors using scale to incredible advantage. J. Michael Daniel, cyber-security coordinator at the White House, gave this explanation of the counter tactic benefits: “We have seen industries that have increased their information sharing—such as in the financial services industry—and that does make a meaningful difference in being able to cut out a lot of the low-level attacks and intrusions. When you do that, then you can focus your humans on the more sophisticated intruders. I see this as a sort of baseline for us just to stay in the game.”  For a brief treatment on open source cyberwar see John Robb’s blog or the excellent example in his book Brave New War.

The Facebook Novel (That No One Will Read)

A good lawyer will tell you, “Never sign anything that you haven’t read or don’t understand.”  The same goes for accepting online agreements.  But even in the face of so much sound advice, personal experience and anecdotal evidence suggest that reading the whole contract is batting well below the “Mendoza Line.”  You, however, are a diligent, meticulous go-getter who always sweats the contract details, devouring the whole document for the smallest of minutiae – right?  But what if that document includes links?  Do you read the links?  Are they incorporated into the parent agreement “by reference?”[1]  What if there are a lot of links?  What if the links contain links?  Enter Facebook’s newly updated terms, policies & “Privacy Basics”:

As you might’ve heard, on January 30th Facebook will be rolling out new language for their Statement of Rights and Responsibilities, Data Policy and Cookies Policy.  The opening paragraphs of Facebook’s initial announcement link to an explanatory privacy tool and 3 separate policies to be updated.  I won’t even attempt to form a legal opinion about what’s incorporated by reference in these updates and what isn’t.  Instead I wanted to figure out, as a techie lawyer, how much documentation and legalese one has to consume to wrap one’s head around one’s rights and responsibilities as one of Facebook’s 1.35 billion active users.  The initial announcement alone included 14 unique links – to do this correctly, clearly I was going to need some terms and guidelines of my own.

“Supersizing” Facebook’s New T’s & C’s

Morgan Spurlock’s “Super Size Me” kept coming to mind.  In his 2004 documentary [2] Spurlock eats nothing but McDonald’s, 3 full meals a day, for a month and documents the results.  He laid out a few ground rules for the experiment, but one seemed especially relevant:  If the person taking the order asked if he’d like to “super size” the meal, he’d say yes.  My analog was, “If the referenced document includes a link, I will click on that link as well.”  Unfortunately, the Facebook policy ecosystem is a wee bit more complex than the McDonald’s menu board, so this was going to require a few additional conditions to keep the results relevant & manageable.  My rules for a link to be included in the relevant Facebook T’s & C’s:

  • Only pages reached by following downstream links of the Facebook updated terms & policies announcement are in scope.
  • The linked content is related to privacy, security, rights and/or responsibilities.
  • The linked content is in the main context of the linked page (not the menu, template, banners, style sheets, peripheral web assets, etc.)
  • The content is applicable to users in the U.S., and not just users of a specific state.
  • Settings, configurations and purely technical information are not included unless they are related to managing or understanding some aspect of privacy, security, rights and/or responsibilities.
  • Links to external domains may be considered relevant and included, but no other links from those external domains can be clicked (“1 outside click” rule.)

Sounds like I’m considering a lot of content “out of bounds”, doesn’t it?  Again, I’d just like to restate the objective here:  If we assume incorporation by reference, starting with the Terms & Policy Update Announcement and using only clicks on relevant provided links, how much reading is involved in grasping everything Facebook wants you to know about terms, conditions, rights and responsibilities?
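
If you wanted to automate that harvest rather than clicking through by hand as I did, a rough sketch might look something like this. Everything notable here is an assumption on my part: the starting URL is a placeholder, the keyword filter is a crude stand-in for my subjective relevance calls, and the script makes no attempt to strip menus, templates or duplicated text the way my manual tally did.

```python
# Rough sketch of automating the manual link-tree harvest described above.
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

START_URL = "https://www.facebook.com/about/terms-updates"  # placeholder announcement URL
HOME_DOMAIN = urlparse(START_URL).netloc
KEYWORDS = ("privacy", "security", "rights", "responsibilit", "terms", "policy", "cookie")


def looks_relevant(href: str, text: str) -> bool:
    """Crude keyword check standing in for the subjective 'relevant content' rule."""
    blob = (href + " " + text).lower()
    return any(keyword in blob for keyword in KEYWORDS)


def crawl(start_url: str) -> tuple[int, int]:
    seen: set[str] = set()
    queue = [(start_url, False)]  # (url, reached via an external domain?)
    total_words = 0
    while queue:
        url, is_external = queue.pop()
        if url in seen:
            continue
        seen.add(url)
        try:
            html = requests.get(url, timeout=10).text
        except requests.RequestException:
            continue  # a broken link; tally these separately if you care
        soup = BeautifulSoup(html, "html.parser")
        total_words += len(soup.get_text(" ").split())
        if is_external:
            continue  # the "1 outside click" rule: don't follow an external page's links
        for anchor in soup.find_all("a", href=True):
            link = urljoin(url, anchor["href"])
            if looks_relevant(link, anchor.get_text()):
                queue.append((link, urlparse(link).netloc != HOME_DOMAIN))
    return len(seen), total_words


if __name__ == "__main__":
    pages, words = crawl(START_URL)
    print(f"{pages} unique pages, roughly {words} words")
```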

Any guesses?  In terms of links?  Unique documents?  Pages?  Words?  Last chance to formulate a guess…

Hint:  My link tree just for organizing this project was 26 pages and nearly 4,500 words long (with each URL being interpreted as only one word, mind you.)

STATISTICS: [3]

  • 358 total links
  • 118 “unique” links
  • 5 external links
  • 2 instances of the “Privacy Dino”
  • 2 broken links
  • 1 additional set of terms that you’d need to accept to use Custom Audiences features.
  • The total, nonduplicative text of these 118 unique pages comprises 67,401 words
  • Or a 164-page standard-format Word document
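
(For what it’s worth, the 164-page figure is just straight division at a typical single-spaced Word density. The words-per-page numbers in this quick check are my assumptions, not anything in Facebook’s materials.)

```python
# Mapping the 67,401-word total to page counts at a couple of common densities.
# The words-per-page figures below are rough assumptions on my part.
TOTAL_WORDS = 67_401
for label, words_per_page in [("single-spaced (~410 words/page)", 410),
                              ("double-spaced (~250 words/page)", 250)]:
    print(f"{label}: about {TOTAL_WORDS / words_per_page:.0f} pages")
```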

WHAT I LEARNED READING THE FULL TEXT OF THE FACEBOOK TERMS & POLICY UPDATES:

  • If it were a book, the text of the T’s & C’s would be longer than Treasure Island, The Color Purple, The Scarlet Letter, All Quiet on the Western Front or Lord of the Flies
  • One of the closest books in terms of word count is John Green’s “The Fault in Our Stars”, and while I have been neglecting the adolescent romance titles of late, I can’t say for certain which text I would rather have been reading this past weekend. [4]
  • A good portion of the privacy content is geared toward privacy with respect to other users. While that is important, and the Facebook internal mechanisms aren’t completely opaque, there are still a lot of questions about what goes on with our data under the covers and with Facebook’s partners.
  • This link allows you to opt out from online behavioral advertising campaigns with over 100 companies.
  • Of the 118 unique links visited, this is the only page that actually underlines its (potentially legally incorporated by reference) links rather than forcing you to squint to distinguish the nearly indiscernible dark blue link text from the regular black font.
  • The Facebook Principles are pretty good stuff.
  • Really want to trigger your “Privacy Spidey Sense?”  Check out your Facebook metadata.
  • These Facebook Companies links lead to 10 other Facebook subsidiaries with “privacy” in the URL.  The majority of these companies have their own privacy link tree and are not using these terms & conditions.
  • Facebook Payments, while not a separate company per se, also has its own privacy policy.
  • Facebook has a lot of products and services (e.g. Mobile, Messenger, Paper, etc.) in the Facebook ecosystem.  Regarding those services:  “in some cases, products and services that we offer have their own separate privacy policies and terms.”
  • So, 67,000 words (or one “Fault in Our Stars”) later, if something is designated as a “service” (like whether I’m using the mobile app or the messaging function) then none of this may apply???

But perhaps the most interesting link, in my opinion, is the page for “How can I report a legal violation of my rights other than copyright or trademark rights?”  This link will net you 64 total words asking you to write out the reason you’re writing, the right you believe was infringed, and your legal basis for claiming the right.  And that’s just it – we probably need 67,000-word Terms & Conditions frameworks because we live in a litigious, open society that has done little to define rights and expectations around data, identities, and identity attributes beyond the monetization of those elements under Intellectual Property law.  There is a presumptive expectation that because someone built these online services and you avail yourself of them, any data gathered, actions tracked and correlations realized are the sweat equity and work product of the service.  But after another look at your Facebook Metadata, is there a question about whether the value lies in the service or is inherently an inextricable part of the individuals using it?

Let’s be clear, this is not intended as an indictment or a defense of Facebook.  I appreciate the ecosystem of people and interactions Facebook provides.  Their policy update was worth exploring because of the number of users and the nature of the data involved.  But the core issue that forces a company to draft a 67,000-word terms framework in the first place is not unique to Facebook.  We have a critical mass of personal information, growing and expanding avenues of intended and unintended exposure of that information, and next to no substantive guidelines steering our expectations, responsibilities and duties in handling it.  We must set out to create those guidelines – or am I the only person who feels it’s just as unreasonable to maintain an environment where every online company drafts a 67,000-word policy structure as it is to expect every user to read it?


[1] Because every reputable explanation of a legal concept includes a Wikipedia link.

[2] And potential Jim Gaffigan dream sequence.

[3] Now might be the right time to point out that I made my best effort to catch every link, document meticulously, keep all included links relevant and have no duplicative material. I don’t believe I made any mistakes in capturing this material but I would not be surprised in the least if mistakes were made. Further, “relevant to privacy, security, rights and/or responsibilities” is a subjective concept (especially in regard to technical controls, interface settings, and configurations). Rational thinking people may have differing opinions as to whether I was too restrictive or lenient in my inclusions. Regardless, given my guidelines, I am very comfortable with these figures.

[4] Or as one friend informed me, “A maudlin nonsense piece about a girl with cancer whose love for her boyfriend doesn’t cure cancer.”

Cybersecurity: Battling the Bruce Hornsby Effect

Last week I was applying the “finishing touches” to a cybersecurity presentation:  one last look at Twitter, a glance at a feed aggregator & a skim of the day’s headlines.  Between breaking news on the Sony Pictures hack, progress on class action suits over the Home Depot compromise, and some continuing local coverage (in the Twin Cities) of the Target breach, there were plenty of updates.  I even worked in a Dennis Rodman slide.  This Sony thing is sort of his fault, after all (I kid.)  Regardless of the audience or the technical subject matter, there’s always something in current events that updates the content on threat vectors, victim trends, cyber liability, regulatory landscape, etc.  Every time, with every presentation, some “breach du jour” or something related leads to an update in my deck.  It’s a steady diet of ubiquitous bad cybersecurity news that somehow hasn’t already led to effective steps to stem the tide of compromises.  I find that surprising – but should I?

A few days ago I was trying to wrap my head around the notion of “breach fatigue” vis-à-vis the average American.  I’m not just concerned with it affecting my neighbors next door, but also the fellow security geeks working in offices down the hall.  Case in point:  When news of Target broke, I was certain it was going to be the tipping point that would lead us down a path to change on a large scale (Spoiler alert:  sometimes I’m wrong about stuff.)  That was my conclusion, one I’ve aired more than a handful of times, and in the year-plus since it happened, a non-trivial number of colleagues have let me know they disagree.  Some common sentiments from those doubting Thomases:

– “Breach fatigue – there are just so many headlines that people start to tune it out.”
– “As long as it’s not costing the [consumers/company/shareholders] money, they don’t care.”
– “Sure, another breach, but nothing ever changes.”

In order, what about breach fatigue?  Are peers and colleagues really telling me, with a straight face, “There’s so much hacking that we have to ignore it?”  I stated last week that I believe the root cause is that incident information is so inadequate, and alternative options so scarce, that it creates a “Bruce Hornsby Effect” (“That’s just the way it is, some things will never change”) in the average breach victim.  That may explain why there’s not an overwhelming groundswell of victims calling for substantive security changes, but what’s IT Operations’ excuse?  You’d sprain your brain trying to derive a clearer “canary in the coal mine” example of cyber threat trend analysis (or “Let’s say this Twinkie represents the normal amount of nefarious cyber activity…”)

Next comes the notion that breaches aren’t hitting pocketbooks and bottom lines.  I wrote last week about the difficulties the average consumer faces in calculating the costs of a breach.  But what about the hacked organizations themselves and their shareholders?  As a security guy, I know there are many ongoing efforts to track and quantify the cost of a breach – probably the most notable being the Ponemon Institute’s Annual Cost of a Data Breach Study (which places the 2014 average cost at $201 per record lost.)  Similarly, you can find staggering estimates around recent high-profile breaches:  Home Depot warns that costs will surpass its initial $70M report, Sony’s two breaches account for over a quarter billion in losses over three years, and one Target estimate eclipses $1B.  But then you may notice something interesting: for companies reportedly hemorrhaging money post-incident, none of their stocks appears to be in an all-out freefall.  They trend down in short-term response to the attacks, but all seem to mount comebacks that make you wonder if The Street really thinks these attacks are taking a toll.[i]  You would think loss figures with that many commas in them would constitute “bet your business” or existential threats, even for organizations of this size.  But who’s really bearing the cost of the breach?  We’re seeing that payment card breaches drop the heaviest costs on issuing banks, and many of the targeted retailers are drawing on cyber insurance policies to cover some, if not all, of the losses.  It becomes another area where the net effects and actual costs of an incident are hard to pinpoint.  While that abstraction has made it hard to tally losses and pinpoint accountability, at least the banks have come to the conclusion that they shouldn’t be bearing the full brunt of these breaches, and they’re litigating.  As of this writing, Target is facing over 100 breach-related lawsuits and Home Depot nearly 50.  Both companies have recently suffered preliminary rulings allowing cases against them to proceed.  It seems the previously murky gulf between an incident and who owns the financial fallout of that incident may be getting some clarity soon.[ii]

“Sure, another breach, but nothing ever changes.”  I find it disturbing that this particular symptom of the Bruce Hornsby Effect predominantly occurs in security professionals.  Maybe you’re fighting a lot of organizational inertia and have poor, abstract metrics at your disposal, but there is a flashing neon business case for revisiting and reevaluating security posture, standards and readiness here.  We could continue diving into the reasons things haven’t changed, or we can ask the much more important follow up question:  Is this course sustainable?  That surprise I feel every time I scan the headlines prepping for another presentation is disbelief that we haven’t already had our hand forced by some compelling event.

So what will it take to see real change?

Litigation – Litigating a breach is not a new approach.  But from talking to attorney colleagues about their current case load, it seems this round of litigation involves a lot more thought about what a negligence standard and duty of care for data stewards might look like.  The technical understanding of the courts has matured from previous waves of cases as well.  And even factoring in inertia and breach fatigue, when the “class” in a class action suit numbers in the hundreds of millions, awards are likely to scale to levels that cause even the biggest industry players and sectors to wince.  A successful suit that helps define security and negligence standards would likely bring substantive security improvements forward.

Underwriting – In addition to banks, insurers are shouldering a large portion of the burden from this last wave of high profile breaches.  I’ve reviewed examples of underwriting qualifications and policy exclusions based on specific security capabilities.  One policy I’ve seen specifically states that unencrypted PII is out of scope for cyber liability coverage (Here’s an example of a similar exclusion related to portable devices/removable media.)  A broad-based industry initiative or underwriting shift by major insurers to require specific security controls (encryption, SIEM tools, segregation of duties, etc.) or compliance with a strategic security framework as a basis for coverage would also force substantive improvements to security postures at large.

Catastrophe – These are severely damaging events that carry tremendous impact.  Some catastrophes may involve events we’ve caught bite-size glimpses of and perhaps even prepared for on a smaller scale:  natural disasters (Hurricane Katrina), cyber warfare (Georgia, Ukraine), sabotage (Stuxnet), and general concerns about terrorist or nation-state attacks on critical infrastructure (power, water, transportation) and targeted systems disruption (financial market collapse.)  But it would take an event of unprecedented magnitude to produce wholesale change in our approach to security and readiness.  There are plenty of nightmare scenarios you can paste into this space, and numerous indicators that we are trending toward such an event.  For our purposes though, my contention is that it would take a scenario with measurably greater impact than what we’ve seen to date in order to induce real change (To wit, nearly a decade after Hurricane Katrina, how many organizations still have their disaster recovery site in the same general geographic region as their production data center?)  It’s entirely plausible (I’d argue predictable) to imagine that in the wake of a truly massive, crippling cyber event, many critical sectors of U.S. infrastructure would engage in a massive correction (and likely overcorrection) in security practices and standards, as they did with physical security protocols following the September 11th attacks.[iii]

The previous section once read, “What would it take to see real change?”  “Would” now reads “will” because I feel all three scenarios are extremely likely, if not foregone conclusions.  Litigation and underwriting will continue to evolve in ways that redistribute the financial fallout of breaches and place the onus of protection back with IT as they carve out more appropriate and prescriptive security terms.  While that iterative and reactive process takes place, the baddies will continue to outpace security improvements and drive us ever closer to a tipping-point cyber event.  The smart money is on proactively preparing for these eventualities and getting ahead of the threats and risks, right?  It’s common sense, it’s intuitive, and traditionally it just doesn’t happen because of things like inertia, fatigue, and Bruce Hornsby.

But stick to your guns, folks, because “that’s just the way it is…ah, but don’t you believe them.”


[i] Time will tell with Sony.

[ii] There’s probably an entire series of discussions on cyber liability that could fork off of this thread. For the purposes of this discussion though, the most important development is that between issuing banks, breached companies, shareholders, insurers, and affected individuals, actions are proceeding to correct what some of these players perceive to be an unfair or imbalanced distribution of the costs associated with a breach.

[iii] While I’m advocating for change and improvement to security practices, I’m concerned that undertaking those changes as part of a hasty, irrational, knee-jerk response to a cyber event might actually exacerbate problems. The preferred path is to develop an approach under normal operating circumstances, before such a catastrophe occurs, with an eye toward minimizing the impact of such an event and getting the organization back to normal business operations.

[Image: Breach Letter Excerpt]

How You Learned To Ignore Over a Half Billion Data Breaches

Ever read one of these addressed to you? If not, congratulations. But based on statistics, headlines, and personal experience, I’m going to guess you have (or you’re really bad at getting to your snail mail.) My most recent came from a financial institution that handles a retirement account for me. I first heard about the breach from the usual online sources long before receiving a letter. That news was accompanied by a disproportionate increase in stomach acid, a search for more information, a check on my account status, and then eventually a return to what I was doing. En masse, people have credit cards canceled, accounts drained and identities stolen, yet somehow less than an hour after being alerted to a potential risk to what I’ve put away for my retirement, I’m back to working on someone else’s security architecture. How exactly did we, as individuals, end up in a position where we’ve basically learned to ignore over a half billion breaches of our data?[i]

Let me qualify what I mean by “our data.” I mean large cross sections of individual and consumer data. Personally I’ve received a relatively small number of these notification letters over the years, I’ve only caught one fraudulent credit card charge, and any shortfalls in my retirement planning are still “unforced errors.” But in following the never-ending flow of new breaches affecting millions, apparently I’m also nearing the point of shrugging my way back to our regularly scheduled programming. It’s called “breach fatigue,” and I’m trying to put my finger on exactly how, even with headlines like “CYBERATTACKS NOW COST OVER $1.5 TRILLION A YEAR,” it has become a very real thing (even for someone who works in security.)

Here’s a cursory list of what I figure to be the most important (I’m sure there are more ways to slice and dice this issue) variables at work underneath breach fatigue:

  • Actual cost
  • Comprehension
  • Market alternatives
  • Ability to affect level of risk
  • Emotional/intangible impact

Actual Cost – There is a delta here that probably helps explain a large chunk of the breach fatigue phenomenon. That is, the actual cost to you will be wildly different if you are merely an included individual in a large breach than if your data is actually leveraged to commit fraud or other malfeasance. If there’s an “upside” to a large breach (spoiler alert: there’s not), it’s that in a huge compromise a very small percentage (not number) of included individuals will likely see a high actual cost, assuming the breach isn’t mishandled. For those whose data is leveraged, the cost, even beyond actual dollars in terms of time and productivity, is often crippling. The very real “downside” is that those who receive large-scale, repeated notifications but whose data isn’t actually leveraged begin to interpret each successive notification as a “zero actual cost” event, as opposed to a dire warning about how their bank, retailer, service, etc. almost completely jacked up their financial and reputational future. This stream of notifications should reflect increasing risk to our sensitive data; instead it is taken by many as an increased frequency of “zero actual cost” breaches. It’s a completely inverse read on the actual risk being presented and a recipe for breach fatigue.

Comprehension – Enterprise security can be pretty dense subject matter. It’s not clear, even with sufficient after-action reports and technical details (both rarities), that an average person will read about a breach and conclude better risk mitigation steps were available and should have been taken. I recall a number of reports from major institutions that led me to say things like, “How have they been passing their PCI assessments all this time??”, “Aren’t these guys HIPAA/HITECH regulated??”, “Isn’t that a ‘Security 101’ mistake??” etc. While this raises major red flags in my mind about doing business with an organization, I realize these are not the questions or the concerns of the average customer. Little public concern follows even major security gaffes, and there is seldom substantive change beyond a couple of terminations, resignations, and general lip service. It’s difficult to make informed decisions about a breach when you’re not getting much detail to begin with and IT jargon reads like a David Lynch dream scene to you.

Market Alternatives – Let’s build off of the notion of a comprehension factor and address two scenarios:

  1. You are completely oblivious to all things IT and a thorough reading of your compromised bank’s breach report could trigger an infringement action by the holder of the Ambien patent. All you want to know is if they have your money and it’s safe to keep it with them.
  2. You’re the sort of geek who reads blogs on technology, security, and related policy in your free time and you’re not happy with what you just read from your business partner.

If you’re the person in the first scenario and the notion of a breach at your bank of choice upsets you, will switching banks help? How would you know? As the headlines detail other, also trusted and reputable, banks being compromised, how does the average person interpret their market alternatives for a “secure banking (or retailer, service provider, etc.) option?” If the biggest and most trusted names in a sector are making the headlines, how does the average person discern the merits of their security offerings? Again, the avalanche of breach news adds more noise to the signal.

If you’re in the security-savvy set, you may determine that a particular vendor has been playing fast and loose with your data. You’re really disappointed and it’s time for you to part ways. How can you ensure that your next vendor is any better? Of course, there are offerings from banks, retailers, social media, etc. that allow for enhanced security measures. Things like multifactor, out-of-band, and hardened authentication may speak to an organization’s commitment to security, but they’re hardly a “complete picture.” The details of enterprise security plans and safeguards are not something companies are hot to share or publish. Assuming you can find an organization that hasn’t fallen prey to a similar compromise, can you really get enough information to determine that Company Two will be an improvement on Company One’s security practices? How do you ensure that it isn’t just a matter of time before your replacement vendor joins your original vendor on datalossdb.org?

Whenever I get wind that I might be in the affected class of a breach, my first inclination is to do something. I’ve called out “market alternatives” as a variable because often I find, even after thorough research, little to no evidence that a change of vendor or provider will definitively enhance the security of my information going forward.

Ability to Affect Level of Risk – Again, finding I’m included in a breach makes me want to act to protect myself, and I just addressed what limited options we have for market alternatives. Like many failing relationships, you may find yourself wondering, “What if the problem is me?” It’s probably a logical stretch to think that your inclusion in an eight- or nine-figure table of compromised records is somehow based on your individual behavior, but it’s reasonable to wonder, “Could I have done something to prevent this?” There are some compromises close to the end user (ATM skimmers, account hijacks, etc.) where hardened authentication methods and enhanced paranoia might decrease your odds of being leveraged. When it comes to large-scale breaches, however, the enterprise nature of the compromise really takes the compromised subjects’ behavior out of the equation. As with “Market Alternatives,” there’s nothing substantive I can do, in this case with my own behavior, to proactively or reactively change the level of risk I face.

Emotional and “Intangible” Impact – I work largely with public sector clients. In general, public vs. private sector security discussions raise a fair amount of “apples to oranges” objections. Over a year later, however, my clients still haven’t unclenched their teeth over the Target breach. In a field largely apathetic to private sector, bottom-line-focused, PCI-regulated concerns, why Target? Why does that particular breach register on the radar when I can probably count on one hand the number of clients who have even mentioned larger breaches like Sony PSN, Heartland, or J.P. Morgan/Chase? I believe it comes down to the impact affecting them as individuals, as opposed to their role as a public sector CIO, CISO or Security Architect.

Like every other factor mentioned, “emotional and intangible impact” is hard to quantify and measure, but it’s more easily absorbed and internalized. Think of canceled credit cards during the holiday shopping season, sitting on hold with an issuing bank as 70,000,000 cards are replaced in parallel, and what about those Target gift cards I just gave out? Pile all that disorder onto the manic anxiety of the holiday shopping season and apparently it hits harder than a notice about a threat to your retirement account or a “DECLINE” code when you go to purchase that DQ Blizzard (maybe I’m showing my bias for small cash transactions here, but who was using a credit card at Dairy Queen in the first place??) While the variable may be somewhat “intangible” in nature, I believe Target shows us that this is the element people most understand and react to. The intangible aspect is the only part of the equation solely derived from the affected individuals. It also shows us, when compared to other breaches, that emotional impact may not have a strictly linear relationship to things like actual cost or the total number of records breached. My informal observation is that, even without actual cost, individuals can still be affected by the emotional impact of a breach more than a year after the fact.

Looking back, the average person may find breach information is hard to comprehend, has a low signal-to-noise ratio, and doesn’t present them with many alternate courses of action. That’s not to say that breach fatigue equals breach apathy. It seems Americans worry more about online security than about everything except walking alone at night. Could it just be that the amount and format of breach information leaves the average breach victim largely in the dark? How else can you ignore more than a half billion breaches?

 


[i] That’s the total number of records breached, as taken from the Identity Theft Resource Center’s statistics. I realize that’s not 673,293,959 individual actions resulting in breaches, but it is 673M records breached. However you slice it, it’s a LOT to ignore.