Health, Safety and Security

Forget the bond of doctor-patient confidentiality, cyber-attacks pose a much bigger threat to your sensitive medical data, finds Wendy M. Grossman as she assesses the recent spate of healthcare breaches

In 2014, the UK government announced a plan to improve medical research using patient data generated by NHS England’s 55 million registered users. The program, ‘care.data’, was a PR fiasco. There was near universal support for the stated goal, but near universal condemnation for the finer details, which included selling personal data to commercial companies. The program is being rethought, but the lesson is clear: healthcare data is precious.

Consequently, you might expect maximally secure data practices around healthcare data. Yet reports of data breaches in healthcare organizations are frequent and growing. The statistics from the Ponemon Institute’s Fifth Annual Benchmark Study on Privacy and Security of Healthcare Data indicate inadequate safeguards: 91% of healthcare organizations have had at least one data breach in the last two years, and 40% have had more than five. The average cost of a data breach to a healthcare organization is more than $2.1m, which adds up to an estimated $6bn a year across the US industry.

“Health data is the most valuable data about you, bar none,” says Deborah Peel, the founder of DC-based advocacy group Patient Privacy Rights. “Finance went through this cycle 10 years ago. Healthcare doesn’t bother to learn.”

A case in point: in May 2015, the insurer Columbia Casualty Company filed suit demanding that Cottage Health System repay $4.1m paid out after a breach of 32,500 patient records, claiming that Cottage had failed at basic information security, including up-to-date patching and regular audits.

Breaches are growing in number and size. In February, a data breach at medical insurer Anthem exposed 78 million records, including names, addresses, medical IDs, birth dates, employment information, and income data. In March, Premera Blue Cross announced a breach had exposed 11 million records. Both of these breaches were due to cyber-attacks, which Ponemon found became the number one cause of such breaches in 2014, surpassing lost unencrypted laptops or USB sticks.

Recently, eight people were indicted after a clerk sold information taken from 12,000 patient records at Montefiore Hospital; the stolen data was used to buy luxury goods at retail stores.

Security Failures

Privacy consultant Bob Gellman explains: “The healthcare industry is woefully underinvested in IT, which is why the Obama administration has been pushing electronic medical records.” Even so, problems remain: insurance fraud enabled by copying and pasting between medical records; a lack of interoperability that serves vendor lock-in; and the prevailing view of security as a bottom-line cost.

Peel argues that the risks stem fundamentally from system weaknesses and the sheer number of people who can access data. A complicating factor is opacity to patients, who typically don’t discover that their medical identity has been stolen for two to three years. In the US, at least, such records can’t be repaired the way credit records can: by law, nothing can be deleted.

A particular problem in the US, Peel says, is the entanglement of motives and data type; healthcare companies use medical data “not for curing but for figuring out new ways to charge us for things,” while stolen medical records can enable large-scale insurance fraud as well as individual attacks.

Not included in Ponemon’s cost-of-breach figures quoted above is the price to consumers, though the institute estimates that the average cost of recovery for each individual victim is $13,500. Unlike banks, healthcare organizations do not offer protective services, so once a medical record is copied and dispersed, nothing can restore the victim’s medical privacy. But that’s only one piece of the problem.

Indeed, Pam Dixon, founder of the World Privacy Forum, notes that healthcare breaches are a driver of sophisticated phishing and other attacks. She also sees a trend of attackers digging deeply into systems, lurking and exploring over time. This yields sensitive dataset combinations which can include clinical, bank account, email, and other financial data.

“This kind of data is extraordinarily helpful in creating synthetic identities or in conducting total ID theft, where new bank accounts are opened, new IDs created, and so forth,” she says. “It is a very difficult attack to recover from, and these kinds of identity takeovers can be used by criminals to commit crimes.”

Law and Compliance

In EU countries, medical data is covered by data protection laws; in the US the relevant law is the Health Insurance Portability and Accountability Act. HIPAA, says Dixon, can be both distracting and unhelpful: distracting because organizations focus on compliance rather than security, unhelpful because it has gaps.

“HIPAA does not specifically require the encryption of a back-end database,” she argues. So it’s possible that a breach like Anthem’s could expose 78 million records without ever violating HIPAA. 

Gellman asserts that HIPAA was deliberately written to give some discretion because of the varying nature of healthcare organizations: “You can’t impose the same requirements on the Mayo Clinic as you do on a solo practitioner.” Some of the rules are requirements; others, like encryption, are merely ‘addressable’, and he argues that in 1996, when HIPAA was drafted, this may have made sense. Even now, he says, “Doing encryption is hard, and it is doubly hard in a healthcare system with millions of user accesses per day. Still, there is no excuse for not encrypting laptops and the like. BYOD makes this all harder.”
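To illustrate the kind of ‘addressable’ safeguard at issue, here is a minimal sketch of encrypting a record field before it reaches a back-end store, using Python’s widely used third-party `cryptography` package. The record fields and key handling are purely illustrative, not HIPAA guidance or any organization’s actual practice:

```python
# Minimal sketch: encrypting a patient record at rest.
# Assumes the third-party `cryptography` package is installed;
# field names and key handling here are hypothetical examples.
import json
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, load from a key-management service
cipher = Fernet(key)

record = {"medical_id": "A-12345", "diagnosis": "hypertension"}

# Encrypt the serialized record before it touches the back-end database.
token = cipher.encrypt(json.dumps(record).encode("utf-8"))

# Decrypt only on an authorized read.
restored = json.loads(cipher.decrypt(token).decode("utf-8"))
assert restored == record
```

The hard part, as Gellman notes, is not the encrypt/decrypt call itself but doing this at the scale of millions of accesses per day while keeping keys out of the same systems that hold the data.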

Dixon has another complaint: the act is widely misinterpreted in ways that add risks for consumers. “One of the most dangerous trends in the last few years has been healthcare providers requiring scans of government identification to prevent identity theft.” The goal, of course, is to ensure that patients are who they say they are so no one gains access to medical care they’re not entitled to.

But HIPAA has no requirement for identification, Dixon explains, and “the scanning and saving of government ID with a clinical file increases the problems of medical ID theft and increases risk of damage in a data breach. The same goes for palm scans, iris scans, and so on. The healthcare sector does not have adequate security protocols to store this data securely.”

Criminals who successfully breach these systems gain much better templates for fake IDs – a vastly increased security risk for patients. Security personnel, she says, should conduct a risk analysis.

Data Flow

Looking at calls to action, Dixon advocates that healthcare organizations should adopt tiered access practices that ensure clinical data is kept separate from identity and financial information. Peel believes a deeper change is needed: putting patients in control of their own data. “Data shouldn’t flow without you knowing,” she says. “That alone would limit data breaches.”
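Dixon’s tiered-access recommendation can be sketched in a few lines of Python. The tiers, roles, and field names below are all hypothetical, chosen only to show the principle of keeping clinical data separate from identity and financial information:

```python
# Minimal sketch of tiered access: each role can read only the data
# tiers it needs. Tiers, roles, and field names are hypothetical.
RECORD_TIERS = {
    "clinical": {"diagnosis", "lab_results"},
    "identity": {"name", "address", "government_id"},
    "financial": {"bank_account", "insurance_id"},
}

ROLE_TIERS = {
    "clinician": {"clinical"},
    "billing": {"financial"},
    "registration": {"identity"},
}

def visible_fields(role: str, record: dict) -> dict:
    """Return only the fields of `record` that `role` may see."""
    allowed = set()
    for tier in ROLE_TIERS.get(role, set()):
        allowed |= RECORD_TIERS[tier]
    return {k: v for k, v in record.items() if k in allowed}

patient = {
    "name": "Jane Doe",
    "diagnosis": "hypertension",
    "bank_account": "12-3456-7890",
}

# A clinician sees clinical data, but no identity or financial fields.
print(visible_fields("clinician", patient))  # {'diagnosis': 'hypertension'}
```

The point of the design is that a breach of any one tier, or of any one role’s credentials, no longer exposes the combined clinical-plus-financial datasets that Dixon describes attackers assembling.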

Phil Booth, the founder of medConfidential, which campaigns for medical privacy, says that it’s clear from the breach stories that not enough care is being taken: “Every medical establishment should have someone who is responsible for information governance of the medical records.” Booth, like most people, favors the idea of sharing medical data to aid research and improve healthcare, but says that, “because the definition of direct care has become badly blurred, people are becoming risk-averse to sharing when they should, but carp at looking after data when they shouldn’t be doing certain things.”

Similar problems confront Sarah Lawson, head of IT and information security for the National Perinatal Epidemiology Unit (NPEU) attached to Oxford University. NPEU doesn’t typically have patients, just their data. Because of the fallout from care.data, everyone who takes data from the Health and Social Care Information Centre (HSCIC) is now required to sign a new, overarching contract. Unfortunately, she says, the contract is “basically very ill-thought-out”, and although it has stopped the flow of data until everyone signs, it is “not helping confidentiality.”

NPEU’s data arrives in several different ways: in addition to the government flow, there is data originating from direct contracts with consenting patients. The information provided is often very full, and follow-up may continue for decades. A separate government department provides the Information Governance Toolkit, which she describes as “not as strong as HIPAA” but a “relatively sensible piece of process information we use to ensure we’re doing the right thing.” In the present situation, Lawson may find that data is blocked even when it pertains to patients who have given their express consent to its use at NPEU.

Internally, the unit’s lack of interaction with actual patients makes it easy for the data to become abstract, Lawson suggests. “Those dealing with the data start to disconnect from that person at the end. People just stop thinking, because they’re busy with research and have all this information and no name, and they forget that in one lump, if it’s lost, it would beautifully identify everyone walking around them.”

There is a final issue that will have an undeniable but imponderable impact: the trend toward health monitoring devices, including those that will become part of the internet of things. Today’s Fitbits and wearable glucose meters will soon be joined by cameras, microphones, and sensors. All these devices collect health data, but it’s stored by organizations that are not considered healthcare providers and are not subject to HIPAA (though the EU’s data protection rules would likely apply). Today’s problems are only likely to escalate.


This feature was originally published in the Q3 2015 issue of Infosecurity – available free in print and digital formats to registered users

