It seems the White House and Congress can agree on at least one thing: financial institutions are over-burdened by current privacy notice rules. In a move expected to save financial institutions significant postage, printing, and administrative costs, on Friday, December 4, 2015, President Obama signed the Fixing America’s Surface Transportation Act (the “FAST Act”) (H.R. 22) into law. Somewhat oddly, the FAST Act, which principally addresses infrastructure like highways and bridges, also amends the Gramm-Leach-Bliley Act (“GLBA”) provisions pertaining to customer annual privacy notices.

Currently, the GLBA requires financial institutions to mail customers annual privacy notices regarding the collection, use and disclosure of those customers’ nonpublic personal information (“NPI”). The new GLBA exemption states that a financial institution is not required to provide an annual privacy notice if it (1) only shares NPI with nonaffiliated third parties in a manner that does not require the financial institution to provide an opt-out and (2) has not changed its policies and practices with respect to disclosing NPI since it last provided the customer a notice.

The GLBA privacy notice exemption only applies so long as the financial institution’s privacy practices do not change. If a financial institution decides to disclose NPI in a manner that requires it to offer an opt-out to its customers, the financial institution would be required to send an updated privacy notice to its customers.

In the last two weeks, the Office for Civil Rights (OCR) announced two substantial settlements under HIPAA that together totaled $4.35 million. These large amounts seem to be driven not by actual harm to individuals, but in significant part by alleged HIPAA compliance failures identified by OCR following investigations commenced in response to receipt of data breach reports. It is a mistake to believe that timely and otherwise compliant reporting of supposed “no harm, no foul” data breaches will result in minor, if any, enforcement activity, at least where the agency believes you have not satisfactorily complied with the privacy and security standards.

Depending on the circumstances of the breach, an OCR investigation will look at why the breach occurred, but it likely will go beyond that to examine compliance with basic HIPAA privacy and security standards, even if indirectly related to the breach at hand.

Let’s see how this could play out. In the case of the $3.5 million settlement with Triple-S Management Corporation, there were a number of breaches reported to OCR:

  • Former Triple-S employees, while employed by a Triple-S competitor, improperly accessed restricted areas of a Triple-S subsidiary’s database. According to OCR’s announcement, the former employees’ access rights were not terminated when they left Triple-S. This allowed them to access names, contract numbers, home addresses, diagnostic codes and treatment codes of covered individuals.
  • As we reported, a Triple-S subsidiary reported to OCR that in September 2013 a vendor disclosed Medicare Advantage beneficiaries’ protected health information (PHI) on the outside of a pamphlet mailed to approximately 13,000 beneficiaries.
  • In another breach, a Triple-S subsidiary reported that a former employee of a business associate copied beneficiary ePHI onto a CD, took it home for an unknown period of time, and then downloaded it onto a computer at his new employer. The ePHI included beneficiaries’ enrollment information, including names, dates of birth, contract numbers, HICNs, home addresses and Social Security numbers.
  • Another breach involved enrollment staff who placed the incorrect member ID cards in mailing envelopes, resulting in beneficiaries receiving the member ID card of another individual. The PHI included members’ names, identification numbers, benefit packages, effective dates, contract numbers, co-payments and deductibles.

Note – these are not sophisticated systems attacks carried out by unnamed international identity theft rings or by nation states. They are essentially mistakes in the handling of PHI that can happen at any covered entity or business associate.

Each of the incidents above affected more than 500 individuals, and there were a handful of other breaches summarized in the resolution agreement affecting fewer than 500 individuals. But there was no discussion of harm to any affected individuals in support of the settlement amount. Instead, OCR itemized a number of alleged compliance failures, not all of which directly led to the breaches, such as:

  • Not implementing appropriate administrative, physical, and technical safeguards to protect PHI
  • Disclosing PHI to an outside vendor without a business associate agreement
  • Using and disclosing more than the minimum necessary PHI
  • Not conducting an accurate and thorough risk analysis that incorporates all IT equipment, applications, and data systems
  • Not implementing sufficient security measures to reduce risk to ePHI to a reasonable and appropriate level

In addition to paying $3.5 million, Triple-S will need to establish a comprehensive compliance program satisfactory to OCR that includes a risk analysis and a risk management plan, policies and procedures for compliance with HIPAA requirements, training and other measures.

Of course, OCR’s approach makes sense in that its purpose generally is not to remedy harm to individuals affected by data breaches, but to enforce compliance with the HIPAA privacy and security standards. Covered entities and business associates should avoid, therefore, underestimating potential regulatory exposure because of a “no harm, no foul” view of reported data breaches. Compliance and steps to prevent breaches are the agency’s focus, not whether the breach actually harms affected persons, although significant harm to affected individuals would strengthen the agency’s enforcement position.

Preparedness is key!

One of your employees discloses your organization’s patient information to a soon-to-be new employer for use in generating business at the new employer’s competing business, and your company has to settle with the New York State Attorney General for HIPAA violations. Make sense?

This is what happened according to a published settlement agreement (pdf) that was reached between the University of Rochester Medical Center (URMC) and New York Attorney General Eric Schneiderman, whose office announced the settlement on December 2. As part of the settlement, and in addition to agreeing to pay $15,000, URMC submitted to an extensive review of its policies and procedures by the Office of the Attorney General (OAG), and agreed to report certain breaches of PHI to the OAG for the next three years, among other things.

In this case, a URMC nurse practitioner, who was planning on leaving URMC to work for another provider, asked URMC for a list of all of the patients she treated while at URMC; URMC provided a list of 3,403 patients to her. Without getting patient authorization, the nurse practitioner provided that list to her new employer. The new employer then sent a mailing to those patients letting them know of the nurse practitioner’s move and that they could choose to be treated at the new company.

Some health care professionals may take the position that the patients are their patients, that they have the treatment relationship with the patients, and therefore there is no HIPAA issue in situations like these. Not so fast. The practice may own the data, not the providers it employs. And, patients may look to the practice, and not the particular provider, as the party responsible for safeguarding their protected health information. This appears to be the case here as URMC learned about the breach when some of its patients called to complain that they had received letters from the other provider.

Electronic medical records and related systems are essential to a functioning healthcare organization, and health care providers often have broad access to patient files to do their jobs. So, stopping these types of incidents entirely seems virtually impossible. Minimizing the risk, however, is possible through straightforward policies and training, as well as systems that can limit access to data to the extent appropriate for the business and applicable law. Non-compete and other agreements with workers also may be useful in addressing these and related risks involving patient data when healthcare workers move on.

This development is an important reminder for covered entities and business associates about HIPAA compliance and the practical realities of business that also have data security implications. Covered entities and business associates also should remember that state attorneys general have enforcement authority under HIPAA, and they are using it.

As most readers are aware, the Court of Justice of the European Union (CJEU) ruled in Schrems v. Data Protection Commissioner (Case C-362/14) on October 6, 2015, that the voluntary Safe Harbor Program did not provide adequate protection to the personal data of EU citizens. Post-Schrems, U.S. companies have been unsure how to transfer data out of the EU in a compliant manner. There may soon be a clearer answer, says a top EU official.

Vera Jourová, the European Commissioner for Justice, Consumers and Gender Equality, made comments through a spokesperson that the U.S. and the E.U. were close to an agreement on new data transfer requirements. Jourová’s spokesperson, Christian Wigand, told various press outlets that the EU and U.S. have “agreed on concrete next steps in order to come to a conclusion before the end of January 2016.” These next steps will materially affect how U.S. companies develop international data transfer plans for data being exported from the EU.

EU data protection authorities have said that they will start to enforce the Schrems decision by the end of January 2016, which could suspend Safe Harbor transatlantic data transfers unless a replacement procedure is created.

If the EU and U.S. can agree on terms that provide adequate protection to EU citizens’ data before the end of January, U.S. companies will have a clearer path to data transfer compliance.

The Georgia Secretary of State acknowledged that last month his office improperly disclosed social security numbers and other private information for Georgia’s more than 6,000,000 registered voters due to a “clerical error.” Anyone in Georgia who is registered to vote (approximately 6.2M citizens) may be affected. The Secretary acknowledged that his office shares voter registration data on a monthly basis with various news media and political parties as required by Georgia law upon request. He indicated that due to a clerical error, twelve recipients of this data received a disk that contained personal information including social security numbers and driver’s license information that should not have been provided. Two class-action lawsuits have been filed alleging significant damages as a result of the breach. Information regarding the breach became public upon the filing of the lawsuits.

Georgia’s identity theft law, enacted in 2005, requires certain private businesses and state and local government agencies to notify affected consumers after a breach is discovered. On November 19, the state of Georgia provided notice to affected persons, describing among other things that the Secretary of State’s office took immediate corrective action, including contacting the recipients receiving the personal information and requesting them to return it. This breach is somewhat similar to a massive data breach reported in South Carolina in 2012 that exposed 3.8M social security numbers possessed by the South Carolina Department of Revenue. The state of South Carolina paid a credit monitoring company approximately $12M to provide credit monitoring for victims of the breach, a service apparently not being made available to affected Georgia voters. South Carolina lawmakers also earmarked an additional $25M into the budget for an extra year of credit protection and to upgrade computer security for the state.

According to the Identity Theft Resource Center (ITRC) there have been a total of 669 data breaches to date in 2015 exposing nearly 182M records. The annual total includes 21.5M records exposed in the attack on the U.S. Office of Personnel Management in June and 78.8M healthcare customer records exposed at Anthem in February. Of the data breaches to date in 2015, approximately 38.6% represents the business sector, 36% represents the medical/healthcare sector, 9.1% represents the banking/credit/financial sector, 8.5% represents the government/military sector and 7.8% represents the education sector. By comparison, the ITRC tracked the total number of 2014 breaches at 783, which was up approximately 28% compared with 2013.

Data breaches that require notification under federal and state mandates, which may include even some inadvertent disclosures, continue to happen. It is true that not all such breaches can be prevented, but in addition to taking steps to prevent these incidents, businesses need to be prepared to respond quickly and thoroughly should such an unfortunate incident occur.

As we have previously reported, a growing list of jurisdictions have enacted social media privacy laws applicable to employers.  The most recent state to join the list is Maine, which brings the total to 22 states having enacted similar measures.

Under Maine’s law, an employer may not:

  • Require or coerce an employee or applicant to disclose the password to a personal social media account;
  • Require or coerce an employee or applicant to access a personal social media account in the presence of the employer or its agent;
  • Require or coerce an employee or applicant to disclose any personal social media account information;
  • Require or cause an employee or applicant to add anyone to the list of contacts associated with a personal social media account;
  • Require or cause an employee or applicant to alter or change settings to allow a third party to view the content of a personal social media account;
  • Discharge, discipline, or otherwise penalize (including the threat of same) an employee for refusing to disclose or provide access to a personal social media account as prohibited above;
  • Fail or refuse to hire an applicant for refusing to disclose or provide access to a personal social media account as prohibited above.

Importantly, Maine’s law, like many of the other similar laws which have been enacted, does not prohibit or restrict an employer from requiring an employee to disclose personal social media account information the employer believes to be relevant to an investigation of employee misconduct or a workplace-related violation of laws, rules, or regulations — so long as the information accessed is used solely for purposes of the investigation or a related proceeding.

The prohibition on employer access to personal social media accounts began in 2012 and in the past 3 years has expanded to 21 additional states.  We expect this trend to continue elsewhere in 2016.

Demonstrating its continued commitment to data security enforcement, the Federal Communications Commission (FCC) recently announced Cox Communications Inc., the nation’s third largest cable operator, agreed to pay $595,000 to resolve an investigation into whether the company failed to properly protect its customers’ personal information.  The agreement ends the first data security enforcement action brought by the FCC against a cable operator.
The investigation by the FCC Enforcement Bureau determined that Cox’s electronic data systems were breached in 2014 by a hacker who pretended to be from Cox’s information technology department and convinced both a Cox customer service representative and a Cox contractor to enter their account IDs and passwords into a fake, or “phishing,” website.  The user access information was then utilized to obtain customers’ personally identifiable information, which included names, addresses, email addresses, secret questions/answers, PINs, and in some cases partial Social Security and driver’s license numbers, as well as Customer Proprietary Network Information (CPNI) of the company’s telephone customers.
Under the Communications Act, a cable operator shall not disclose personally identifiable information concerning any subscriber without the prior consent of the subscriber and shall take steps necessary to prevent unauthorized access to such information by a person other than the subscriber or cable operator.   Importantly, during its investigation, the FCC found Cox’s data security systems did not include readily available measures that might have prevented the use of the compromised credentials. Additionally, the company never reported the breach to the FCC’s data breach portal, as required by law.
According to Travis LeBlanc, Chief, Enforcement Bureau: “Cable companies have a wealth of sensitive information about us, from our credit card numbers to our pay-per-view selections….This investigation shows the real harm that can be done by a digital identity thief with enough information to change your passwords, lock you out of your own accounts, post your personal data on the web, and harass you through social media.”
In addition to identifying (and notifying) all affected individuals, the order and consent decree also requires the company to provide free credit monitoring services for one year.  Further, Cox must improve its privacy and data security practices, by: (i) designating a senior corporate manager who is a certified privacy professional; (ii) conducting privacy risk assessments; (iii) implementing a written information security program; (iv) maintaining reasonable oversight of third party vendors; (v) implementing a more robust data breach response plan; and (vi) providing privacy and security awareness training to employees and third-party vendors.
In the past year, the FCC has taken three enforcement actions for violations of the Communications Act and Commission rules related to protection of customer personal information resulting in over $28 million in penalties.

This resolution, and the facts underlying the data incident, demonstrate not only the lengths that hackers will go in order to obtain personal information, but also how easily the hacker was able to obtain IDs and passwords.  As we have discussed, implementation of a written information security program, including prohibitions on sharing user access credentials (IDs and passwords) and employee training on data security, may very well have prevented this incident.

The Cybersecurity Information Sharing Act, or CISA, passed the Senate this week by a vote of 74-21, but not without controversy. CISA would not establish a generally applicable federal standard for safeguarding personal information, nor would it enact a federal breach notification requirement. Rather, if signed into law, CISA would among other things create a framework for governmental entities and private entities to share cyber threat information for cybersecurity purposes, in order to help protect against the massive data breaches that have hit the federal government and major U.S. companies. A companion bill has been passed in the House and, if successfully reconciled, the law will be sent to President Obama, who has indicated support for the bill.

The controversy surrounding CISA relates primarily to the belief that the bill’s measures seeking greater data security come at too great a cost – privacy. Advocates of CISA believe, in general, that the sharing of cyber threat indicators or defensive measures for cybersecurity purposes and the monitoring of information systems are needed to address attacks on these systems. Privacy advocates and others reject that view, arguing in essence that this move toward greater data security jettisons privacy protections. That is, under the guise of security, companies would be free to monitor and share personal data with the federal government, enabling more expansive data collection and surveillance and without regard to privacy protections under other laws.

Section 104 of CISA would permit a private entity to monitor that entity’s information system or the information system of another entity, including a Federal entity (with that other entity’s written consent), or the information stored on those systems – for a cybersecurity purpose. A cybersecurity purpose means “the purpose of protecting an information system or information that is stored on, processed by, or transiting an information system from a cybersecurity threat or security vulnerability.”

Subsection (c) of 104 also would permit an entity to share with, or receive from, any other entity or the Federal Government a “cyber threat indicator” or “defensive measure.” Under the bill, cyber threat indicator is defined to mean an “action on or through an information system that may result in an unauthorized effort to adversely impact the security, availability, confidentiality, or integrity of an information system or information that is stored on, processed by, or transiting an information system.” However, the entity receiving the cyber threat indicator or defensive measure would be required to comply with otherwise lawful restrictions placed on the sharing or use of such cyber threat indicator or defensive measure by the sharing entity. In addition, CISA would require that:

An entity monitoring an information system, operating a defensive measure, or providing or receiving a cyber threat indicator or defensive measure … [must] implement and utilize a security control to protect against unauthorized access to or acquisition of such cyber threat indicator or defensive measure.

The bill goes on to provide that when sharing a cyber threat indicator, entities must either:

review such cyber threat indicator to assess whether such cyber threat indicator contains any information that the entity knows at the time of sharing to be personal information or information that identifies a specific person not directly related to a cybersecurity threat and remove such information; or

implement and utilize a technical capability configured to remove any information contained within such indicator that the entity knows at the time of sharing to be personal information or information that identifies a specific person not directly related to a cybersecurity threat.
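The second of these options, a “technical capability” that strips known personal information before sharing, could be sketched as follows. CISA does not prescribe any particular technique, so everything here is an illustrative assumption: the patterns, the `scrub_indicator` function name, and the redaction token are all hypothetical.

```python
import re

# Illustrative patterns only; a real scrubber would cover many more
# identifier types (names, phone numbers, account numbers, etc.).
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),         # U.S. Social Security numbers
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),  # email addresses
]

def scrub_indicator(indicator_text: str) -> str:
    """Remove known personal information from a cyber threat indicator
    before sharing it (a sketch of the bill's second option, not a
    compliance tool)."""
    for pattern in PII_PATTERNS:
        indicator_text = pattern.sub("[REDACTED]", indicator_text)
    return indicator_text

raw = "Phishing email from mallory@example.com targeting SSN 123-45-6789"
print(scrub_indicator(raw))
# Phishing email from [REDACTED] targeting SSN [REDACTED]
```

Pattern matching of this kind can only remove personal information it recognizes, which is precisely the gap critics of the “knows at the time of sharing” language point to.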

Privacy advocates argue that this language leaves open the possibility that entities sharing cyber threat indicators might not “know” if the indicator contains personal information, thus weakening the privacy protection for personal information under this provision. However, Senate Select Committee on Intelligence (SSCI) Chairman Richard Burr (R-NC), sponsor of CISA, points to these provisions and others to refute the privacy concerns raised by opponents of the bill. In a press release, “Debunking Myths about Cybersecurity Information Sharing Act,” Sen. Burr argues, among other things, that under CISA:

The cyber threat information sharing is completely voluntary. Companies have the choice as to whether they want to participate in CISA’s cyber threat information sharing process, but all privacy protections are mandatory.

The debate surely will continue as the House and Senate reconcile their versions of the law. For now, businesses should continue to focus on their own efforts to safeguard personal and other confidential data, and be prepared in the event they experience a data breach.

On October 6, 2015, California Governor Jerry Brown signed three new laws which substantially alter and expand the state’s security breach notification requirements. The new changes to California Civil Code sections 1798.29 and 1798.82, the Golden State’s laws that require notifications by state agencies and private sector entities of certain breaches of security (i) provide a definition for encryption, (ii) establish new requirements for the content and form of breach notifications, and (iii) add license plate information gathered through automated license plate recognition (ALPR) systems to the definition of personal information subject to the state’s notification requirements. These changes become effective January 1, 2016.

When is Personal Information Considered “Encrypted”?

Under California’s current law, if personal information is “encrypted,” the notification requirements will not apply. Until now, the law had not defined when personal information would be considered to be “encrypted.” Assembly Bill 964 amends California Civil Code sections 1798.29 and 1798.82 to provide a definition for this previously undefined term. With passage of the amendment, the term “encrypted” is now defined as “rendered unusable, unreadable or indecipherable to an unauthorized person through a security technology or methodology generally accepted in the field of information technology.” This language seems to allow for flexibility in the types of encryption that can be applied, as well as for future changes in encryption technology.
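Read functionally, the definition turns on whether the data is unusable without the key. The toy Python sketch below illustrates that property only; a one-time XOR pad would not qualify as “generally accepted in the field of information technology,” and a real system would use a vetted algorithm such as AES-GCM through an established library.

```python
import secrets

def encrypt_otp(plaintext: bytes) -> tuple[bytes, bytes]:
    """Toy one-time-pad encryption: XOR the data with a random key of
    equal length. Illustrative only -- production systems should use a
    vetted, generally accepted algorithm (e.g., AES-GCM) to meet the
    statute's standard."""
    key = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return ciphertext, key

def decrypt_otp(ciphertext: bytes, key: bytes) -> bytes:
    """Reverse the XOR with the same key to recover the plaintext."""
    return bytes(c ^ k for c, k in zip(ciphertext, key))

record = b"SSN: 123-45-6789"
ciphertext, key = encrypt_otp(record)
# Without the key, the ciphertext is "unusable, unreadable or
# indecipherable"; with the key, the record is fully recoverable.
assert decrypt_otp(ciphertext, key) == record
```

The practical takeaway is that whoever ends up holding only the ciphertext, without the key, holds nothing useful, which is why breaches of properly encrypted data fall outside the notification requirements.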

Updates to Content and Form of Breach Notification

Senate Bill 570 amends California Civil Code sections 1798.29 and 1798.82 to clarify the required content of security breach notifications from government agencies and businesses, and provides a model security breach notification. All security breach notifications must now be titled “Notice of Data Breach” and present required information under the following headlines: “What Happened,” “What Information Was Involved,” “What We Are Doing,” “What You Can Do,” and “For More Information.” The notice must be designed to call attention to the nature and significance of the matter, must be clear and conspicuous, and must be in text no smaller than 10-point type.
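For organizations templating their notices, the title and headline requirements lend themselves to a simple programmatic check. A minimal Python sketch follows; the `build_notice` helper and its validation logic are illustrative assumptions, and formatting requirements such as the 10-point minimum type size would be enforced at the presentation layer, not here.

```python
# The five headlines SB 570 requires, in the order they are listed.
REQUIRED_HEADLINES = [
    "What Happened",
    "What Information Was Involved",
    "What We Are Doing",
    "What You Can Do",
    "For More Information",
]

def build_notice(sections: dict[str, str]) -> str:
    """Assemble a breach notice with the required title and headlines,
    refusing to produce a notice that omits any required section."""
    missing = [h for h in REQUIRED_HEADLINES if h not in sections]
    if missing:
        raise ValueError(f"Notice is missing required headlines: {missing}")
    body = "\n\n".join(f"{h}\n{sections[h]}" for h in REQUIRED_HEADLINES)
    return f"Notice of Data Breach\n\n{body}"

notice = build_notice({h: "Details to be drafted by counsel." for h in REQUIRED_HEADLINES})
assert notice.startswith("Notice of Data Breach")
```

A check like this catches a missing headline before the notice is mailed, though it obviously cannot judge whether the content under each headline is adequate.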

Use of the model notification form will be deemed compliant with California’s notification requirements, and thus is helpful for agencies and businesses trying to understand what the notice needs to say. However, in the case of breaches affecting individuals in multiple states, when simplifying the notification process is critical, use of California’s model notice across multiple states may be problematic. For example, the “What Happened” section should not be included in notices to Massachusetts residents, as that state’s law prohibits including a description of the nature of the breach or unauthorized acquisition or use.

Information Obtained from Automated License Plate Recognition Systems is Personal Information

Popular among local law enforcement, automated license plate recognition (ALPR) systems allow license plate information to be captured from videos and stored. Senate Bill 34 added new sections to California’s Civil Code, beginning with section 1798.90.5, to require that certain users of those systems – called ALPR operators – safeguard ALPR information, including a requirement to implement a usage and privacy policy in order to ensure that the collection, use, maintenance, sharing and dissemination of ALPR information is consistent with respect for individuals’ privacy and civil liberties. The Senate Bill also amends California Civil Code sections 1798.29 and 1798.82 to include information obtained from ALPR systems in the definition of “personal information” when used along with an individual’s name. Thus, if this information is involved in a breach of security, it will trigger a notification requirement. Also, individuals harmed by unauthorized access or use of ALPR information or a breach of security of an ALPR system may bring a private right of action.

These amendments represent significant changes to the security breach notifications provisions of Civil Code sections 1798.29 and 1798.82, as well as additional protections for information obtained from ALPR systems. In particular, they impact how to respond to security breaches, how to protect personal information and the scope of what information is protected. Businesses are encouraged to review their encryption policies, adopt compliant security breach notification forms and, if using an ALPR system, adopt compliant policies with respect to ALPR information and the employees who control those systems.

Responding to a Department of Health and Human Services Office of Inspector General (OIG) report recommending stronger oversight of covered entities’ compliance with the HIPAA Privacy Rule, the Office for Civil Rights (OCR) stated that in early 2016 it will launch Phase 2 of its audit program measuring compliance with HIPAA’s privacy, security and breach notification requirements by covered entities and business associates.

After conducting a study to assess OCR’s oversight of covered entities’ compliance with the HIPAA Privacy Rule, OIG issued a report finding that OCR should strengthen its oversight of covered entities and making several recommendations. Specifically, OIG recommended that OCR:

  1. fully implement a permanent audit program;
  2. maintain complete documentation of corrective action;
  3. develop an efficient method in its case-tracking system to search for and track covered entities;
  4. develop a policy requiring OCR staff to check whether covered entities have been previously investigated; and
  5. continue to expand outreach and education efforts to covered entities.

OCR concurred with each of OIG’s recommendations. In its response to the report, OCR stated it is moving forward with a permanent audit program and will launch Phase 2 of that program in early 2016. The program will target common areas of noncompliance and will include business associates as well as covered entities. Phase 2 “will test the efficacy of the combination of desk reviews of policies as well as on-site reviews.” Accordingly, both covered entities and business associates should be reviewing their HIPAA policies and practices and developing a plan for working with OCR in on-site reviews.

OCR also indicated it is working on improving its ability to document and track corrective actions taken by covered entities and business associates in response to an OCR investigation. In addition, OCR revealed that it now has the ability to search for and track covered entities’ compliance history. OCR will now require investigators to check for prior investigations at the outset of new investigations of covered entities and business associates. This may mean a greater likelihood of on-site visits if a covered entity’s history indicates a potential for systemic compliance issues.

Finally, OCR agreed with OIG’s recommendation that it should continue to expand its outreach and education efforts. Information about those efforts can be found in Appendix C to OIG’s report.

As we previously reported, having the right documents in place can go a long way toward helping an organization survive an OCR HIPAA audit. Now that it is clear that these audits are coming early next year, it is important that covered entities and business associates invest the time in identifying and closing any HIPAA compliance gaps before an OCR investigator does this for them.