If your medical data, credit card number, Social Security number, personal email, or other information were stolen, would you even know about it? After ten years handling incident response and forensics, I’ve been repeatedly shocked at the number of times that organizations sweep data breaches under the rug.
When upper management is notified of a data breach, they have to choose between:
a) Announcing publicly and in a timely manner, which would result in major reputational damage, financial drain, loss of business, and potentially huge lawsuits.
b) Keeping quiet and hoping that no one ever finds out (in which case, nothing happens).
Of course, usually upper management never finds out at all. There is little incentive for IT staff to report compromises all the way up the chain, since doing so just makes them look bad. System administrators fear that if they detect a compromise on their own servers, managers will accuse them of doing a bad job. And the breaches have to be detected in the first place; security staff are often overworked, with limited resources for tuning IDS or following up on alerts.
The bottom line is that no one is motivated to do a good job detecting and publishing breaches: not corporations, not upper management, not IT staff, and in many cases not even the security teams themselves. Ethics can hardly compete with real financial incentives and fears about job security.
Don’t Companies Have to Report Breaches?
Many states have data breach notification laws, but these tend to have major loopholes. Most importantly, they don't provide clear guidelines for deciding whether a "security breach" happened. As a result, if an attacker destroys important evidence, or if the company does not retain records that would explicitly prove inappropriate access, the company will probably decide it is not required to report. Affected customers never even hear that a breach was suspected in the first place.
The assumption is that the data is secure unless there is explicit evidence which proves otherwise. This is backwards! When log retention creates a liability, companies have reduced incentive to collect or retain detailed records. If we assume the data is secure unless there is proof otherwise, then there is no reason for companies to work to retain evidence.
The irony is that companies with the worst security practices, who do not keep logs or configure IDS systems effectively, are the ones who get off scot-free because they do not collect or retain the evidence of a breach.
What about the proposed federal Data Accountability and Trust Act?
The Data Accountability and Trust Act which passed the US House of Representatives last month does nothing to address this loophole. It requires that “Any person engaged in interstate commerce that owns or possesses data in electronic form containing personal information shall, following the discovery of a breach of security of the system maintained by such person that contains such data…notify each individual…”
OK, so what is a “breach of security”?
"(1) BREACH OF SECURITY- The term 'breach of security' means unauthorized access to or acquisition of data in electronic form containing personal information."
How do you decide if there has been “unauthorized access to or acquisition of data”? The bill does not provide any guidance. As long as the organization does not keep records which would *prove* that confidential data was accessed or exported, their legal counsel may advise them that they do not have to report. I am not a lawyer, but I have seen this happen repeatedly with respect to existing data breach regulations.
How Can We Fix This Loophole?
Here are some ideas:
- Assume insecurity. Companies should be required to produce access logs and records which confirm that the data has been kept safe, rather than being presumed secure absent proof of a breach. This would motivate companies to collect and retain access logs in much greater detail than they do now.
- Proactively audit large organizations that retain lots of personal data.
- Publish yearly certificates based on audit results, the same way health inspectors publish certificates for restaurants. That way the public can decide which companies to trust with their information, based on how well each one secures it.
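To make the first idea concrete, here is a minimal sketch (my own illustration, not anything prescribed by existing or proposed regulation) of the kind of record-keeping "assume insecurity" would encourage: a hash-chained, append-only access log. Each entry commits to the hash of the entry before it, so a company can later demonstrate that its access history has not been silently edited or truncated. All names and fields here are hypothetical.

```python
import hashlib
import json
from datetime import datetime, timezone

GENESIS = "0" * 64  # placeholder "previous hash" for the first entry


def append_entry(log, user, record_id, action):
    """Append a hash-chained entry recording who touched which record."""
    prev_hash = log[-1]["hash"] if log else GENESIS
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "record": record_id,
        "action": action,
        "prev": prev_hash,
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    log.append(entry)
    return entry


def verify_chain(log):
    """Recompute every hash; any edit, insertion, or deletion breaks the chain."""
    prev = GENESIS
    for entry in log:
        if entry["prev"] != prev:
            return False
        body = {k: v for k, v in entry.items() if k != "hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True
```

A real deployment would also need to anchor the chain head somewhere outside the operator's control (a third party, or periodic signed publication), since whoever holds the whole log can rewrite it wholesale; the sketch only shows why detailed, tamper-evident retention is technically cheap to do.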
Today, the vast majority of security breaches are never reported. When you examine the incentives and the myriad holes in reporting regulations, it's easy to understand why. Detailed logging and monitoring practices result in greater liability. Reporting incidents to the public can lead to financial ruin. There's little incentive for organizations to do a genuinely good job tracking access to confidential data.
In this backward system, it’s a wonder we hear about any breaches at all. The fact that we do hear about data breaches frequently should make you stop and think about the number that are *really* occurring, but are never detected, let alone reported. Speaking from experience, I can tell you that the data breaches you hear about are just the tip of the iceberg.
PGP-signed text: 2010-01-02