Are pundits discussing the personal information allegedly accessed by a campaign staffer for Bernie Sanders? No, not really, and that is the point.

Scheduled to debate tonight at St. Anselm College in Manchester, New Hampshire, Democratic presidential candidates Bernie Sanders and Hillary Clinton are almost certain to joust over an alleged intrusion into Clinton’s voter data by a Sanders campaign staffer. According to reports, the staffer accessed confidential voter data maintained by a vendor, NGP VAN, while the firewall protecting that data had been removed. (hmmm…a third-party vendor) In response, the Democratic National Committee (DNC) terminated the Sanders campaign’s access to all voter data, including the campaign’s own data. Litigation followed, a deal was reached, but reverberations continue. Turn to your favorite cable news channel.

One hears “data breach” and immediately Social Security numbers, credit card data, or medical information come to mind. In this case, the personal information reported to be involved included names, addresses, ethnicity, and voting history, data elements hardly considered sensitive personal information in the United States. In fact, none of the state data breach notification laws would require notification based solely on these data elements. (But see, e.g., the FTC settlement involving email addresses.) But some of the information, particularly analytical data concerning voter preferences, can be tremendously helpful to a campaign. So it is easy to see why the incident is causing such a stir, particularly for the Sanders campaign.

Why is this important beyond presidential politics?

Organizations are beginning to recognize the need for data breach preparedness. This is good – we are seeing more internal teams being assembled, made up of key stakeholders within organizations. These teams are meeting, learning, and developing data breach response plans that include sample investigation checklists and policies, template notification letters, and vendor relationships, and they are engaging in tabletop exercises.

Their initial focus, however, is often exclusively on breaches involving personal information that would trigger notification obligations under federal (e.g., HIPAA) and state laws. The Sanders breach and others before it should make clear that these teams need to look beyond Social Security numbers and payment cards and account for data breaches that could initiate an entirely different set of concerns, exposures, considerations and mitigation steps.

If breached, an organization’s proprietary data, internal email communications among executives and management, customer or client data, sales information, and, as we are seeing, even voter data can have catastrophic consequences for an organization. A breach exposing insensitive email correspondence in the c-suite about customers, or suggesting systemic discriminatory employment practices, or outlining detailed labor management strategies can have significant implications for a company’s market position and workforce management. It can also trigger unwanted litigation and adversely affect the organization’s reputation. Putting data belonging to others at risk also could result in the loss of access to critical business information held by others, as in the Sanders breach. These are only a handful of examples. One need only think about some of the sensitive business information maintained or accessed by one’s own organization that is not personal information to understand the effects of a breach of that information.

Organizations cannot prevent every unflattering email sent or received by members of their workforce, they cannot avoid collecting or accessing sensitive business information entirely, and they cannot prevent all data breaches from occurring. But they can take steps to be prepared in the event of a breach, and in doing so they should consider the full range of breaches they could encounter – those affecting personal information and those affecting other sensitive and critical business information.

On December 17, 2015, following four years of sometimes acrimonious debate, the EU Parliament and Council of the European Union informally agreed on the final draft of the General Data Protection Regulation (“GDPR”). The GDPR will replace what privacy experts refer to simply as “95/46,” the 1995 law known as the EU Data Protection Directive, once officially adopted by the Parliament and Council of the EU. It will go into effect two years from passage.

Multinational companies should use the next two years to begin aligning privacy policies and practices with the principles in the new regulation. Key elements of the GDPR include:

  • One Law/One Rule: Unlike 95/46, which had to be implemented by individual EU member states, the GDPR applies directly to all EU member nations and is intended to create more consistency across the EU regarding data protection. A business that operates in more than one member state will now deal only with the Data Protection Authority (“DPA”) in the country where the business has its main establishment. This lead DPA will handle cross-border data transfers.
  • Broader Brush: The GDPR is expressly extra-territorial. It applies on its face to data controllers and processors outside the EU where their data processing activities affect EU residents. Also, the definition of “personal data” has been expanded to include information related to a data subject’s physical, physiological, genetic, mental, economic, cultural or social identity.
  • Consent Rules: Consent remains a valid basis upon which to process data, though likely not in the employment context. Under the GDPR, consent must be freely given, specific, informed and constitute an unambiguous indication of the data subject’s agreement to the processing of the data subject’s personal data.
  • Data Breach Notification: The GDPR establishes a uniform data breach notification requirement applicable to all data controllers. In the event of a data breach leading to the loss, access or disclosure of personal data, controllers must notify the appropriate DPA “without undue delay,” and, where feasible, within 72 hours. Like many US data breach notification laws, the GDPR contains a notice exception where the data is encrypted or where it is unlikely the data subject will be harmed. (A simplified decision sketch follows this list.)
  • Required Data Protection Officers: The GDPR requires data controllers and processors to appoint a data protection officer (“DPO”) if the business’s “core activities” consist of regular and systematic data subject monitoring or the processing of sensitive personal data (relating to, e.g., health, ethnicity, trade union membership) or data relating to criminal convictions and offenses.
  • Rules on Data Transfer: Binding Corporate Rules are recognized as the “gold standard” for data transfer. Also, data transfer out of the EU will be allowed where the European Commission has issued a decision affirming the adequacy of the level of data protection in the country where the data is being transferred. DPAs will not have to approve EU Model Contract Clauses, which remain valid under the GDPR.
  • Sanctions: GDPR gives data subjects a private right of action in EU courts. Data subjects will have a right to money damages from either controllers or processors for harm caused by processing personal data. DPAs will have enforcement authority similar to US regulators. A European Data Protection Board will issue opinions, adopt binding decisions and otherwise oversee data protection processes.
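To make the Data Breach Notification bullet concrete, below is a minimal Python sketch of that decision logic, assuming a simple two-factor analysis (encryption status and likelihood of harm to data subjects). The function and variable names are our own illustrations, not terms from the regulation, and any real incident calls for a far more nuanced legal analysis.

```python
from datetime import datetime, timedelta

# Illustrative only: a toy triage helper reflecting the GDPR notification
# bullet above. Encryption or an unlikely risk of harm can excuse notice.

DPA_NOTIFICATION_WINDOW = timedelta(hours=72)  # "where feasible, within 72 hours"

def dpa_notification_required(data_encrypted: bool,
                              harm_to_subjects_likely: bool) -> bool:
    """Return True if the controller likely must notify the appropriate DPA."""
    if data_encrypted or not harm_to_subjects_likely:
        return False  # the notice exception described above may apply
    return True

def notification_target(discovered_at: datetime) -> datetime:
    """Target time for notifying the DPA 'without undue delay'."""
    return discovered_at + DPA_NOTIFICATION_WINDOW

if __name__ == "__main__":
    discovered = datetime(2016, 1, 15, 9, 0)
    if dpa_notification_required(data_encrypted=False, harm_to_subjects_likely=True):
        print("Notify DPA by:", notification_target(discovered))
```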

When people think about data breaches, they tend to think more about illegal hacking into computer networks by individuals, criminal enterprises or even nation states than they do about simple employee error. This makes some sense, as hacking incidents seem to be more interesting and draw more media attention. Holding this belief, however, can cause many to underestimate the risk of a breach, on the assumption that they are not likely to be the target of a hack, and to miss altogether the risk of employee error. A recent report by the Wall Street Journal about a survey by the Association for Corporate Counsel may change this.

According to the survey, “employee error” turns out to be the most common reason for a data breach. An example of the kind of employee error mentioned in the survey – “accidentally sending an email with sensitive information to someone outside the company” – is something just about all of us have heard about or experienced in our own organizations.

So what does this mean? Well, organizations that want to minimize the chance of a data breach may have to rethink their current strategies. This is particularly true in industries in which more employees are likely to have access to greater amounts of personal information – healthcare, insurance, retail, professional services, etc.

Addressing the risk of “employee error” is difficult…mistakes happen. But there are steps organizations can take to minimize the risk. Here are a few:

  • Understand the risks your workforce presents. Addressing data security in an organization often means focusing on its IT infrastructure, with less attention to how employees do their jobs, what information they have access to and why, and whether employees are sufficiently aware of best practices for safeguarding information, among other things. Firewalls, software updates and encryption all are important to a comprehensive information security program, but to address employee error organizations first must understand the roles employees play and the functions they carry out that involve personal information. It is not uncommon for employee mistakes to bypass the IT safeguards, resulting in a data breach.
  • Reevaluate the role of IT. In many organizations, data breach prevention is thought of solely as an IT function. In most cases, this is simply the wrong approach. Data security is an enterprise-wide concern, requiring other stakeholders to have a seat at the table when trying to understand and minimize these risks. Assessing the risks healthcare workers pose to patient data, for example, requires more than an understanding of the level of encryption on the network. Does the worker know when he or she is able to disclose PHI to a family member or other person? Are workers aware of, and do they follow, the minimum necessary rule? How are workers using their personal devices, working from home, etc.? The IT department is a necessary component for developing a data security plan, but its participation alone is not sufficient.
  • Training, training, training. Organizations and their employees are increasingly challenged by an expanding regulatory and compliance environment – data security is a part of that environment. The absence of adequate training not only can cause the organization to fall short of certain compliance mandates, but also is a missed opportunity to reduce data breach risk. Training ought to reach beyond how to set a good password and the policy on using flash drives. These are important, but training also should remind employees about basic steps they take in the course of their particular jobs that could trigger a significant breach if they are not careful – e.g., be careful when forwarding email with sensitive attachments, avoid clicking on links in emails, don’t leave boxes with sensitive data lying around, etc. (A simple illustration of one such guardrail follows this list.)
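As a toy illustration of the misdirected-email error discussed above, the Python sketch below flags an outbound message that has a sensitive-looking attachment and at least one recipient outside the company domain. The domain, keyword list, and warning workflow are assumptions made up for this example; real data loss prevention tools are far more sophisticated.

```python
# Illustrative only: a minimal guardrail against accidentally emailing
# sensitive material outside the organization. All names are hypothetical.

INTERNAL_DOMAIN = "example.com"
SENSITIVE_KEYWORDS = ("ssn", "payroll", "patient", "confidential")

def external_recipients(recipients):
    """Return the recipients whose addresses fall outside the company domain."""
    return [r for r in recipients if not r.lower().endswith("@" + INTERNAL_DOMAIN)]

def looks_sensitive(attachment_names):
    """Crude check: does any attachment name contain a sensitive keyword?"""
    return any(k in name.lower()
               for name in attachment_names
               for k in SENSITIVE_KEYWORDS)

def should_warn(recipients, attachment_names):
    """Prompt the sender to confirm before the message leaves the organization."""
    return bool(external_recipients(recipients)) and looks_sensitive(attachment_names)

# One internal and one external recipient, plus a payroll spreadsheet: warn.
print(should_warn(["pat@example.com", "vendor@other.net"], ["2015_payroll.xlsx"]))  # True
```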

Obviously, more can be done to minimize the risk to personal data caused by employee error and those steps depend on a range of factors specific to each organization. However, organizations first have to recognize that employee error is a significant risk, and this requires thinking beyond IT-related risks.

The Internet of Things (IoT), as defined by Wikipedia, is the network of physical objects or “things” embedded with electronics, software, sensors, and network connectivity, which enables these objects to collect and exchange data. The IoT allows objects to be sensed and controlled remotely across existing network infrastructure, creating opportunities for more direct integration between the physical world and computer-based systems, and resulting in improved efficiency, accuracy and economic benefit.  Each thing is uniquely identifiable through its embedded computing system but is able to interoperate within the existing Internet infrastructure.

In short, if we look at the objects we use in everyday life – from our phones, to our laptops, to even our copy machines or printers at work – each is able to collect and potentially exchange vast amounts of data.  While the capabilities of these devices and objects to collect and exchange data will likely improve our daily lives, it is also important to examine how to protect the privacy and security of the information that is collected and shared.
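To make the point concrete, here is a minimal Python sketch of a hypothetical “thing”: a connected thermostat that samples a sensor and exchanges the reading with a remote service. The endpoint URL, device ID, and payload fields are placeholders invented for illustration; note that every one of those fields is data whose privacy and security deserve attention.

```python
import json
import random
import time
import urllib.error
import urllib.request

# Illustrative only: a hypothetical IoT device collecting and exchanging data.
ENDPOINT = "https://example.com/api/telemetry"  # placeholder URL
DEVICE_ID = "thermostat-42"                     # unique identifier for this "thing"

def read_temperature_celsius() -> float:
    # Stand-in for a real sensor read.
    return round(random.uniform(18.0, 24.0), 1)

def send_reading() -> None:
    payload = {
        "device_id": DEVICE_ID,
        "timestamp": int(time.time()),
        "temperature_c": read_temperature_celsius(),
    }
    req = urllib.request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    try:
        # The reading leaves the device here and crosses the network.
        with urllib.request.urlopen(req, timeout=5) as resp:
            print("Server responded:", resp.status)
    except urllib.error.URLError as err:
        print("Exchange failed (placeholder endpoint):", err)

if __name__ == "__main__":
    send_reading()
```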

As we have previously discussed, the Fixing America’s Surface Transportation Act (FAST Act) includes a number of provisions related to privacy, including an amendment to the Gramm-Leach-Bliley Act (GLBA) as well as the enactment of the Driver Privacy Act of 2015.  Interestingly, the FAST Act also requires a report on the potential of the IoT to improve transportation services in rural, suburban, and urban areas.

Specifically, Section 3024 of Title III requires the Secretary of Transportation to submit a report to Congress not later than 180 days after December 4, 2015 (the enactment date of the FAST Act).  The report, presumably to address the issues discussed above, is to include (1) a survey of the communities, cities, and States that are using innovative transportation systems to meet the needs of aging populations; (2) best practices to protect privacy and security, as determined as a result of such survey; and (3) recommendations with respect to the potential of the IoT to assist local, State, and Federal planners to develop more efficient and accurate projections of transportation needs.

While it is unclear exactly what information will be captured in the report, it’s clear the drafters of Section 3024 have recognized the importance of data privacy and security while utilizing the IoT to improve transportation.  On a more personal note, I have to believe I am not alone in hoping that the report will finally address (and correct!) the traffic patterns related to my daily commute!

An increasing number of companies have been installing or otherwise using some of the latest monitoring technologies in vehicles driven by employees – whether those vehicles are owned by the company or the employee – usually for safety and/or logistics management. These technologies include “event data recorders” or EDRs that capture a range of information just prior to or during a crash event. Seeking to address privacy concerns for data collected on EDRs, the Driver Privacy Act of 2015 (“Act”) was enacted as part of the Fixing America’s Surface Transportation Act (H.R. 22), signed by President Obama on Friday, December 4, 2015. Companies that have vehicle monitoring programs should review this new law.

To what data does the law apply?

The law applies to any data retained by an EDR installed in a vehicle, and makes clear that the data belongs to the owner of the vehicle or, in the case of a leased vehicle, the lessee of the vehicle in which the event data recorder is installed. It does not matter when the vehicle was made. For purposes of this law, an EDR is defined in 49 CFR section 563.5 and generally means a device or function in a vehicle that records the vehicle’s dynamic time-series data during the time period just prior to or during a crash event, but does not include audio and video data. Installed in nearly all new cars, EDRs capture data elements such as speed, braking, use of a seat belt, and other information.
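As a rough illustration of what the “dynamic time-series data” described above might look like, the Python sketch below models an EDR crash record as a simple data structure. The field names are our own assumptions, not the element names defined in 49 CFR section 563, and the VIN shown is a placeholder.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative only: a toy model of EDR data as described above.
# Audio and video are expressly excluded from the EDR definition.

@dataclass
class EdrSample:
    seconds_before_impact: float  # e.g., -5.0 through 0.0
    speed_mph: float
    brake_applied: bool
    seat_belt_buckled: bool

@dataclass
class EdrCrashRecord:
    vin: str  # ties the record to one specific vehicle
    samples: List[EdrSample] = field(default_factory=list)

record = EdrCrashRecord(vin="1HGCM82633A004352")  # placeholder VIN
record.samples.append(EdrSample(-5.0, 62.0, False, True))
record.samples.append(EdrSample(-1.0, 34.0, True, True))
print(f"{len(record.samples)} pre-crash samples recorded for VIN {record.vin}")
```

Under the Act, everything in that record belongs to the vehicle’s owner or lessee.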

How does the law safeguard privacy?

The Act provides that data recorded or transmitted by an EDR may not be accessed by a person other than the vehicle’s owner or lessee. There are some exceptions (a simplified sketch follows the list):

  • as authorized by a court or judicial or administrative authority, subject to the standards for admission into evidence required by that court or other administrative authority;
  • if pursuant to written, electronic, or recorded audio consent of the vehicle owner or lessee;
  • to carry out certain investigations or inspections authorized by federal law, subject to limitations on the disclosure of personally identifiable information and the vehicle identification number;
  • to determine the need for, or facilitate, emergency medical response following a car accident;
  • for traffic safety research, so long as the personally identifiable information of the owner or lessee and the vehicle identification number are not disclosed.
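The access rule and its exceptions lend themselves to a simple decision sketch. The Python below is illustrative only, with hypothetical shorthand labels for each exception; it is not legal advice, and the statutory conditions attached to each exception still apply.

```python
from typing import Optional

# Illustrative only: the Driver Privacy Act access rule summarized above.
# Each label is a hypothetical shorthand for one enumerated exception.
EXCEPTIONS = {
    "court_or_admin_order",        # authorized by a court or administrative authority
    "owner_or_lessee_consent",     # written, electronic, or recorded audio consent
    "federal_investigation",       # certain federally authorized investigations/inspections
    "emergency_medical_response",  # determining the need for, or facilitating, response
    "traffic_safety_research",     # with PII and the VIN withheld
}

def edr_access_permitted(requester_is_owner_or_lessee: bool,
                         exception: Optional[str] = None) -> bool:
    """EDR data may be accessed by the owner/lessee or under an exception."""
    if requester_is_owner_or_lessee:
        return True
    return exception in EXCEPTIONS

# An employer accessing data from an employee-owned vehicle needs consent.
print(edr_access_permitted(False, "owner_or_lessee_consent"))  # True
print(edr_access_permitted(False))                             # False
```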

Are there state laws that apply here as well?

Yes, a number of states already have laws addressing privacy concerns related to information collected on EDRs. The exceptions to the collection of this data vary state to state, but many of those laws require the consent of the owner of the vehicle.

What effects will the Act have on employers?

Most monitoring programs apply to employees operating company-owned vehicles. In those cases, the employer owns or leases the vehicle and, as owner or lessee, can consent to accessing the data captured by the EDR. Of course, employers may nonetheless want to inform employees of the monitoring activity, and they also may have special considerations concerning certain groups in their workforce, including those represented by a union and those operating in other countries.

For those employers whose employees use vehicles that the employees own or lease, accessing EDR data will require the employees’ written, electronic, or recorded audio consent. Many employers are already doing this, particularly in states where this has been required for some time. However, the Act mandates this nationwide.

It seems the White House and Congress can agree on at least one thing—financial institutions are over-burdened by current privacy notice rules. In a move that is hoped to save financial institutions significant costs on postage, printing and administrative resources, on Friday, December 4, 2015, President Obama signed the Fixing America’s Surface Transportation Act (the ‘‘FAST Act’’) (H.R. 22) into law. Somewhat oddly, the FAST Act, which applies to infrastructure like highways and bridges, also amends the Gramm-Leach-Bliley Act (“GLBA”) provisions pertaining to customer annual privacy notices.

Currently, the GLBA requires financial institutions to mail customers annual privacy notices regarding the collection, use and disclosure of those customers’ nonpublic personal information (“NPI”). The new GLBA exemption states that a financial institution is not required to provide an annual privacy notice if (1) it only shares NPI with nonaffiliated third parties in a manner that does not require the financial institution to provide an opt-out and (2) it has not changed its policies and practices with respect to disclosing NPI since it last provided the customer a notice.

The GLBA privacy notice exemption only applies so long as the financial institution’s privacy practices do not change. If a financial institution decides to disclose NPI in a manner that requires it to offer an opt-out to its customers, the financial institution would be required to send an updated privacy notice to its customers.
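The two-condition exemption reduces to a simple test, sketched below in Python for illustration. The function and parameter names are our own, and actual applicability depends on an institution’s specific sharing practices under the GLBA.

```python
# Illustrative only: the FAST Act's GLBA annual-notice exemption, as described
# above, turns on two conditions that must both hold.

def annual_privacy_notice_required(shares_npi_requiring_opt_out: bool,
                                   practices_changed_since_last_notice: bool) -> bool:
    """Return True if the institution still must send the annual notice."""
    exempt = (not shares_npi_requiring_opt_out
              and not practices_changed_since_last_notice)
    return not exempt

# New opt-out-triggering disclosure: an updated notice must go out.
print(annual_privacy_notice_required(True, True))    # True
# No opt-out-triggering sharing and unchanged practices: exemption applies.
print(annual_privacy_notice_required(False, False))  # False
```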

In the last two weeks, the Office for Civil Rights (OCR) announced two substantial settlements under HIPAA that together totaled $4.35 million. These large amounts seem to be driven not by actual harm to individuals, but in significant part by alleged HIPAA compliance failures identified by OCR following investigations commenced in response to data breach reports. It is a mistake to believe that timely and otherwise compliant reporting of supposed “no harm, no foul” data breaches will result in minor, if any, enforcement activity, at least if the agency believes you have not satisfactorily complied with the privacy and security standards.

Depending on the circumstances of the breach, an OCR investigation will look at why the breach occurred, but it likely will go beyond that to examine compliance with basic HIPAA privacy and security standards, even if indirectly related to the breach at hand.

Let’s see how this could play out. In the case of the $3.5 million settlement with Triple-S Management Corporation, there were a number of breaches reported to OCR:

  • Former Triple-S employees, while employed by a Triple-S competitor, improperly accessed restricted areas of a Triple-S subsidiary’s database. According to OCR’s announcement, the individuals’ access rights were not terminated upon leaving Triple-S employment. This allowed the former employees to access names, contract numbers, home addresses, diagnostic codes and treatment codes of covered individuals.
  • As we reported, a Triple-S subsidiary reported to OCR that in September 2013 a vendor disclosed Medicare Advantage beneficiaries’ protected health information (PHI) on the outside of a pamphlet mailed to approximately 13,000 beneficiaries.
  • In another breach, a Triple-S subsidiary reported that a former employee of a business associate copied beneficiary ePHI onto a CD, took it home for an unknown period of time, and then downloaded it onto a computer at his new employer. The ePHI included beneficiaries’ enrollment information, including names, dates of birth, contract numbers, HICNs, home addresses and Social Security numbers.
  • Another breach involved enrollment staff who placed the incorrect member ID cards in mailing envelopes, resulting in beneficiaries receiving the member ID card of another individual. The PHI included members’ names, identification numbers, benefit packages, effective dates, contract numbers, co-payments and deductibles.

Note – these are not sophisticated systems attacks carried out by unnamed international identity theft rings or by nation states. They are essentially mistakes in the handling of PHI that can happen at any covered entity or business associate.

Each of the incidents above affected more than 500 individuals, and there were a handful of other breaches summarized in the resolution agreement affecting fewer than 500 individuals. But there was no discussion of harm to any affected individuals in support of the settlement amount. Instead, OCR itemized a number of alleged compliance failures, not all of which directly led to the breaches, such as:

  • Not implementing appropriate administrative, physical, and technical safeguards to protect PHI
  • Disclosing PHI to an outside vendor without a business associate agreement
  • Using and disclosing more than the minimum necessary PHI
  • Not conducting an accurate and thorough risk analysis that incorporates all IT equipment, applications, and data systems
  • Not implementing sufficient security measures to reduce risk to ePHI to a reasonable and appropriate level.

In addition to paying $3.5 million, Triple-S will need to establish a comprehensive compliance program satisfactory to OCR that includes a risk analysis and a risk management plan, policies and procedures for compliance with HIPAA requirements, training and other measures.

Of course, OCR’s approach makes sense in that its purpose generally is not to remedy harm to individuals affected by data breaches, but to enforce compliance with the HIPAA privacy and security standards. Covered entities and business associates should avoid, therefore, underestimating potential regulatory exposure because of a “no harm, no foul” view of reported data breaches. Compliance and steps to prevent breaches are the agency’s focus, not whether the breach actually harms affected persons, although significant harm to affected individuals would strengthen the agency’s enforcement position.

Preparedness is key!

One of your employees discloses your organization’s patient information to a soon-to-be new employer for use in generating business at the new employer’s competing business, and your company has to settle with the New York State Attorney General for HIPAA violations. Make sense?

This is what happened according to a published settlement agreement (pdf) that was reached between the University of Rochester Medical Center (URMC) and New York Attorney General Eric Schneiderman, whose office announced the settlement on December 2. As part of the settlement, and in addition to agreeing to pay $15,000, URMC submitted to an extensive review of its policies and procedures by the Office of Attorney General (OAG), and agreed to report certain breaches of PHI to the OAG for the next three years, among other things.

In this case, a URMC nurse practitioner, who was planning on leaving URMC to work for another provider, asked URMC for a list of all of the patients she treated while at URMC; URMC provided a list of 3,403 patients to her. Without getting patient authorization, the nurse practitioner provided that list to her new employer. The new employer then sent a mailing to those patients letting them know of the nurse practitioner’s move and that they could choose to be treated at the new company.

Some health care professionals may take the position that the patients are their patients, that they have the treatment relationship with the patients, and therefore there is no HIPAA issue in situations like these. Not so fast. The practice may own the data, not the providers it employs. And, patients may look to the practice, and not the particular provider, as the party responsible for safeguarding their protected health information. This appears to be the case here as URMC learned about the breach when some of its patients called to complain that they had received letters from the other provider.

Electronic medical records and related systems are essential to a functioning healthcare organization, and health care providers often have broad access to patient files to do their jobs. So, stopping these types of incidents seems virtually impossible. Minimizing the risk, however, is possible through straightforward policies and training, as well as systems that can limit access to data to the extent appropriate for the business and applicable law. Non-compete and other agreements with workers also may be useful in addressing these and related risks involving patient data when healthcare workers move on.

This development is an important reminder for covered entities and business associates about HIPAA compliance and the practical realities of business that also have data security implications. Covered entities and business associates also should remember that state attorneys general have enforcement authority under HIPAA, and they are using it.

As most readers are aware, the Court of Justice of the European Union (CJEU) ruled in Schrems v. Data Protection Commissioner (Case C-362/14) on October 6, 2015, that the voluntary Safe Harbor Program did not provide adequate protection to the personal data of EU citizens. Post-Schrems, U.S. companies have been unclear about how to transfer data out of the EU in a compliant manner. There may soon be a clearer answer, says a top EU official.

Vera Jourová, the European Commissioner for Justice, Consumers and Gender Equality, made comments through a spokesperson that the U.S. and the E.U. were close to an agreement on new data transfer requirements. Jourová’s spokesperson, Christian Wigand, told various press outlets that the EU and U.S. have “agreed on concrete next steps in order to come to a conclusion before the end of January 2016.” These next steps will materially affect how U.S. companies develop international data transfer plans for data being exported from the EU.

EU data protection authorities have said that they will start to enforce the Schrems decision by the end of January 2016, which could suspend Safe Harbor transatlantic data transfers unless a replacement procedure is created.

If the EU and U.S. can agree on terms that provide adequate protection to EU citizens’ data before the end of January, U.S. companies will have a clearer path to data transfer compliance.

The Georgia Secretary of State acknowledged that last month his office improperly disclosed Social Security numbers and other private information for more than 6,000,000 registered Georgia voters due to a “clerical error.” Anyone in Georgia who is registered to vote (approximately 6.2M citizens) may be affected. The Secretary acknowledged that his office shares voter registration data on a monthly basis with various news media and political parties upon request, as required by Georgia law. He indicated that due to a clerical error, twelve recipients of this data received a disk that contained personal information, including Social Security numbers and driver’s license information, that should not have been provided. Two class-action lawsuits have been filed alleging significant damages as a result of the breach. Information regarding the breach became public upon the filing of the lawsuits.

Georgia’s identity theft law, enacted in 2005, requires certain private businesses and state and local government agencies to notify affected consumers after a breach is discovered. On November 19, the state of Georgia provided notice to affected persons, describing among other things that the Secretary of State’s office took immediate corrective action, including contacting the recipients of the personal information and requesting that they return it. This breach is somewhat similar to a massive data breach reported in South Carolina in 2012 that exposed 3.8M Social Security numbers possessed by the South Carolina Department of Revenue. The state of South Carolina paid a credit monitoring company approximately $12M to provide credit monitoring for victims of the breach, a service apparently not being made available to affected Georgia voters. South Carolina lawmakers also earmarked an additional $25M in the budget for an extra year of credit protection and to upgrade computer security for the state.

According to the Identity Theft Resource Center (ITRC), there have been a total of 669 data breaches to date in 2015, exposing nearly 182M records. The annual total includes 21.5M records exposed in the attack on the U.S. Office of Personnel Management in June and 78.8M healthcare customer records exposed at Anthem in February. Of the data breaches to date in 2015, approximately 38.6% occurred in the business sector, 36% in the medical/healthcare sector, 9.1% in the banking/credit/financial sector, 8.5% in the government/military sector, and 7.8% in the education sector. By comparison, the ITRC tracked the total number of 2014 breaches at 783, which was up approximately 28% compared with 2013.
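For readers who like to see the arithmetic, the short Python sketch below converts the ITRC sector percentages into approximate breach counts and sanity-checks the year-over-year figure. The inputs come straight from the paragraph above; the rounding is ours.

```python
# Back-of-the-envelope arithmetic on the ITRC figures cited above.
TOTAL_BREACHES_2015 = 669
SECTOR_SHARES = {
    "business": 38.6,
    "medical/healthcare": 36.0,
    "banking/credit/financial": 9.1,
    "government/military": 8.5,
    "education": 7.8,
}

for sector, pct in SECTOR_SHARES.items():
    print(f"{sector}: ~{round(TOTAL_BREACHES_2015 * pct / 100)} breaches")

# Year-over-year check: 783 breaches in 2014, up ~28% from 2013,
# implies roughly 783 / 1.28 ≈ 612 breaches in 2013.
print("implied 2013 total:", round(783 / 1.28))
```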

Data breaches that require notification under federal and state mandates, which may include even some inadvertent disclosures, continue to happen. It is true that not all such breaches can be prevented, but in addition to taking steps to prevent these incidents, businesses need to be prepared to respond quickly and thoroughly should such an unfortunate incident occur.