The GDPR is wrapping up its first year and moving full steam ahead. This principles-based regulation has had a global impact on organizations as well as individuals. While there continue to be many questions about its application and scope, anticipated European Data Protection Board guidance and Data Protection Authority enforcement activity should provide further clarity in the upcoming year. In the meantime, here are a few frequently asked questions – some reminders of key principles under the GDPR and others addressing challenges for implementation and what lies ahead.

Can US organizations be subject to the jurisdiction of the GDPR?

Whether a US organization is subject to the GDPR is a fact-based determination. Jurisdiction may apply where the US organization has human or technical resources located in the EU and processes EU personal data in the context of activities performed by those resources. In cases where the US organization does not have human or technical resources located in the EU, it may be subject to the GDPR’s jurisdiction in two instances: if the organization targets individuals in the EU (not businesses) by offering goods or services to them, regardless of whether payment is required, or if it monitors the behavior of individuals in the EU and uses that personal data for purposes such as profiling (e.g. website cookies, wearable devices). The GDPR may also apply indirectly to a US organization through a data processing agreement.

If we execute a data processing agreement, does that make our US organization subject to the GDPR?

When an organization subject to the GDPR engages a third party to process its EU data, the GDPR requires that the organization impose contractual obligations on the third party to implement certain GDPR-based safeguards. If you are not otherwise subject to the GDPR, executing a data processing agreement will not directly subject you to the GDPR. Instead, it will contractually obligate you to follow a limited, specific set of GDPR-based provisions. Your GDPR-based obligations will be indirect in that they are contractual in nature.

Does the GDPR apply only to the data of EU citizens?

No, the GDPR applies to the processing of the personal data of data subjects who are in the EU regardless of their nationality or residence.

Is our organization subject to the GDPR if EU individuals access our website and make purchases?

If your organization does not have human or technical resources in the EU, the mere accessibility of your website to EU visitors, alone, will not subject you to the GDPR. However, if your website is designed to target EU individuals (e.g. through features such as translation to local language, currency converters, local contact information, references to EU purchasers, or other accommodations for EU individuals) your activities may be viewed as targeting individuals in the EU and subject you to the GDPR.

Are we required to delete an individual’s personal data if they request it?

If your organization is subject to the GDPR, an individual may request that you delete their personal data. However, this is not an absolute right. Your organization is not required to delete the individual’s personal data if it is necessary

  • for compliance with a legal obligation or the establishment, exercise or defense of a legal claim
  • for reasons of public interest (e.g. public health, scientific, statistical or historical research purposes)
  • to exercise the right of freedom of expression or information
  • where there is a legal obligation to keep the data
  • or where you have anonymized the data.

Additional consideration should be given to any response when the individual’s data is also contained in your back-ups.

GDPR principles have started to influence law in the U.S. In fact, many have been watching developments regarding the California Consumer Privacy Act (CCPA), which includes a right to delete the personal information of a California resident. As under the GDPR, this is not an absolute right, and in certain cases an exception may apply. For instance, both laws contain an exception from the right to have personal information deleted when the information is needed to comply with certain laws.

Does the GDPR apply to an EU citizen who works in the US?

If your organization is not subject to the GDPR and you hire an EU citizen to work in the US, the GDPR may not apply to the processing of their personal data in the US. However, depending on the circumstances, the answer may be different if the EU citizen is in the US on temporary assignment from an EU parent. In that scenario, their data may be subject to the GDPR if the US entity’s relationship with the parent creates an establishment in the EU and it processes this data in the context of the activities of that establishment. To the extent the EU parent transfers the EU employee’s personal data from the EU to the US entity, that transfer may require a lawful transfer mechanism such as EU-US Privacy Shield certification, binding corporate rules, or standard contractual clauses. These mechanisms are designed to ensure data is protected when it is transferred to a country, such as the US, that is not deemed to provide adequate safeguards.

Do we need to obtain an EU individual’s consent every time we collect their personal data?

If your organization is subject to the GDPR and processes an EU individual’s information, you must have a “legal basis” to do so. Consent is just one legal basis. In addition to consent, two of the most commonly used legal bases are the “legitimate interests” of your organization and the performance of a contract with the individual. A legitimate interest is a business or operational need that is not outweighed by the individual’s rights (e.g. processing personal data for website security, conducting background checks, or coordinating travel arrangements). Processing necessary for the performance of a contract is activity that enables you to perform a contract entered into with the individual (e.g. processing employee data for payroll pursuant to the employment contract or processing consumer data for shipping goods under a purchase order).

Should we obtain an employee’s consent to process their personal data?


As we noted last month, Washington’s efforts to follow California’s lead in passing its own GDPR-like law have stalled after the bill failed to make its way through the state’s House of Representatives—despite overwhelming approval in the Senate (where it passed 46-1).  That bill’s sponsor has promised to revisit the issue during the 2020 legislative session.

Despite this roadblock on the consumer privacy front, Washington governor Jay Inslee signed a bill on May 7 (HB 1071) significantly expanding the state’s data breach notification law, RCW 19.255.01, et seq.  There was little doubt that Governor Inslee would sign the bill into law, as it passed unanimously in both state legislative bodies.

Below is a summary of major changes to the state’s data breach notification law, and key takeaways for employers subject to Washington law.  For a detailed explanation of the law’s new provisions—which will become effective March 1, 2020—please refer to this post.

Deadline to provide notice of breach shortened to thirty (30) days following discovery.

Under the current law (and until HB 1071’s amendments become effective on March 1, 2020), notice of a breach must be provided within 45 days of discovery. With the amendments, notice must be provided no more than thirty days after the organization discovers the breach. This applies to notices sent to affected consumers as well as to the state’s Attorney General. The threshold requirement for notice to the Attorney General remains the same—it is only required if 500 or more Washington residents were affected by the breach.

Thirty days may still sound like plenty of time, but it can often take several days, or even weeks, for an entity to determine the scope of a breach and compile a list of potentially affected consumers. And if the breach affected residents of more than one state, each state’s laws must be examined to ensure that the notices sent to each individual comport with the breach notification laws of that individual’s state of residence.

Definition of “personal information” significantly expanded.

The previous definition tracked the language used by the majority of states, and only covered breaches that included an individual’s first name (or initial) and last name, plus any one or more of the three “bare minimum” data elements— Social Security number, driver’s license or state ID number, and/or financial account or card number (with an access code or password that would permit access thereto).

With the amendment, Washington adds the following six additional data elements that will be considered “personal information” if combined with an individual’s first name or initial and last name:

  • Full date of birth;
  • Unique private key used to authenticate or sign an electronic record;
  • Passport, military, or student ID number;
  • Health insurance policy or identification number;
  • Information about a consumer’s medical history, physical or mental health condition, or diagnosis or treatment by a health care professional; and,
  • Biometric data (such as fingerprint or retina scans, voiceprints, or other unique biological patterns used to identify an individual).

Significantly, Washington law now considers an individual’s username (or email address) and password (or security questions sufficient to permit access to an account) to be “personal information” regardless of whether the individual’s name is included. Notice to affected consumers of a breach of this type may be provided electronically or by email (unless the affected account was the individual’s email account).

In addition, the new law provides that even without an individual’s first name or initial and last name, any one or more of the other data elements will be considered “personal information” if the element, or combination of elements, would permit a person to commit identity theft against the individual, and the data element(s) were not rendered unusable though encryption, redaction or other methods.
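For readers who prefer to see the amended definition as explicit logic, the combination rules above can be sketched as a simple check. This is an illustrative simplification only, not legal advice; the field names are hypothetical, and whether a given element is "usable for identity theft" or properly "rendered unusable" is ultimately a legal judgment.

```python
# Simplified sketch of the amended Washington "personal information" test.
# Field names are hypothetical; this is illustrative, not legal advice.

SENSITIVE_ELEMENTS = {
    "ssn", "drivers_license", "financial_account",       # original elements
    "date_of_birth", "private_key", "passport_id",       # added by HB 1071
    "health_insurance_id", "medical_info", "biometric_data",
}

def is_personal_information(record: dict) -> bool:
    """Rough check of whether a breached record may contain
    'personal information' under the amended law."""
    has_name = record.get("first_name_or_initial") and record.get("last_name")
    elements = {e for e in SENSITIVE_ELEMENTS if record.get(e)}

    # Name plus any one sensitive data element qualifies.
    if has_name and elements:
        return True

    # Username/email plus password (or security Q&A) qualifies
    # even without the individual's name.
    if record.get("username_or_email") and (
        record.get("password") or record.get("security_qa")
    ):
        return True

    # Elements alone qualify if they would permit identity theft and
    # were not encrypted or redacted (a legal judgment, flagged here).
    if elements and record.get("usable_for_identity_theft") \
            and not record.get("encrypted_or_redacted"):
        return True

    return False
```

For example, a record containing only a name and date of birth would now qualify, where under the previous definition it would not have.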

Finally, as discussed more thoroughly in this post, HB 1071 also added notice requirements for affected consumers and the Attorney General—though notice to the Attorney General is still not required unless 500 or more Washington residents were affected by the breach.

There are several takeaways for employers here:

  • First, employers must be aware of the types of data elements the organization maintains on its employees (or other individuals, such as customers or clients), how that data is maintained, and what happens to that data when it is no longer needed.
  • Employers should also examine the necessity of maintaining certain types of data, and consider narrowing the scope of data elements that the organization maintains by ceasing to collect and maintain unnecessary data—even if not currently listed in the state’s definition of “personal information.”
  • Until now, Washington employers may not have been overly concerned with securing certain types of data, such as an employee’s date of birth or health insurance policy number. But once HB 1071’s amendments take effect, that information could trigger breach notification duties if subject to unauthorized access or disclosure.
  • Finally, employers should ensure the organization has sound policies in place specifically to deal with sensitive data (i.e., “personal information”) deemed necessary to maintain.

Many health care providers, including small and medium-sized physician practices, rely on a number of third party service providers to serve their patients and run their businesses. Perhaps the most important of these is a practice’s electronic medical record (EMR) provider, which manages and stores patient protected health information. EMR providers generally are business associates under HIPAA, subjecting them to many of the same requirements under the HIPAA privacy and security rules applicable to covered healthcare providers. HIPAA-covered healthcare providers should not assume their EMR providers comply with HIPAA and HITECH.

According to a federal Office for Civil Rights (OCR) press release, Medical Informatics Engineering, Inc. (MIE) has paid $100,000 to OCR and has agreed to a detailed corrective action plan to settle potential violations of the HIPAA privacy and security rules. MIE provides software and EMR services to healthcare providers.

According to reporting by the Chicago Tribune,

about 82 percent of hospital information security leaders surveyed reported having a “significant security incident” in the last 12 months, according to the 2019 Healthcare Information and Management Systems Society Cybersecurity Survey.

Yet, according to the same report, spending on information security accounts for only about 5% of healthcare providers’ overall IT budgets, which is well below the average in other industries. Additionally, some have estimated that in 2018, 20% of the breaches suffered by healthcare providers were caused by their third-party service providers. An excellent article by HIPAAJournal outlines a number of statistics illustrating the growing data security risk in healthcare.

In 2015, MIE reported to OCR that it discovered a breach compromising user IDs and passwords that enabled access to the electronic protected health information (ePHI) of approximately 3.5 million people. According to OCR’s investigation, MIE did not conduct a comprehensive risk analysis prior to the breach. The HIPAA rules require entities to perform an accurate and thorough assessment of the potential risks and vulnerabilities to the confidentiality, integrity, and availability of an entity’s ePHI. This is a basic requirement of the HIPAA security rules that all covered entities and business associates must satisfy.

OCR Director Roger Severino noted,

The failure to identify potential risks and vulnerabilities to ePHI opens the door to breaches and violates HIPAA.

So, what is a healthcare provider to do?

A required element of HIPAA compliance is having business associate agreements in place with business associates, including EMR providers. Under these agreements, business associates agree that they have satisfied the risk assessment requirement under HIPAA. However, in addition to making sure that compliant agreements are in place, covered healthcare providers may want to go a step further. That is, they may want to better assess the compliance efforts of their vendors as represented in the business associate agreement, particularly for those vendors that process and maintain so much of their patients’ ePHI. Providers might, for example, require such vendors to complete a detailed questionnaire about their data security practices, visit the vendor’s facilities, and/or request to review a copy of the vendor’s risk assessment. Similar practices can be applied to all vendors, not just EMR providers or business associates, based on the risk they pose.

Of course, healthcare providers should make sure they themselves are in compliance with the HIPAA privacy and security rules. This includes, among other things, conducting and documenting their own risk assessment. Simply having a set of policies and procedures is not sufficient.

A district court in Tennessee recently concluded in Wachter Inc. v. Cabling Innovations LLC that two former employees who allegedly shared confidential company information found on the company’s computer system with a competitor did not violate the Computer Fraud and Abuse Act (CFAA). The CFAA expressly prohibits “intentionally accessing a computer without authorization or exceeding authorized access, and thereby obtaining… information from any protected computer”.

The two former employees in question worked for Wachter Inc., a Kansas-based communications equipment provider, during which time they allegedly sent confidential company information to their personal email accounts and to email accounts of Wachter’s competitor, Cabling Innovations. In addition, the former employees allegedly used Wachter’s resources and confidential information to obtain and perform work for Cabling Innovations.

In its reasoning, the Court emphasized that the CFAA does not define the term “without authorization” and some courts have found that “an employee may access an employer’s computer ‘without authorization’ where it utilizes the computer to access confidential or proprietary information that he has permission to access, but then uses that information in a manner that is inconsistent with the employer’s interest”. Moreover, the Court highlighted that “the CFAA was not meant to cover the disloyal employee who walks off with confidential information. Rather, the statutory purpose is to punish trespassers and hackers”.

The Court went on to state that the CFAA is primarily a criminal statute, and although it also permits “any person who suffers damage or loss by reason of a violation … [to] maintain a civil action against the violator to obtain compensatory damages and injunctive relief or other equitable relief,” the rule of “lenity” directs the Court to construe the CFAA coverage narrowly. The Court reasoned, “the rule of lenity limits the conduct that falls within the criminal prohibitions, it likewise limits the conduct that will support a civil claim”.

The CFAA has generated much debate among the courts regarding the scope of its application. Some forms of “unauthorized access” are obvious – e.g. a hacker breaking into a protected computer system resulting in data theft is clearly a CFAA violation and is the type of event the CFAA was originally designed to protect against. However, other circumstances, particularly in the employment context, can blur the lines of what is considered “unauthorized access” under the CFAA.

The court in Wachter sits within the Sixth Circuit, which has not addressed whether an employee who has permission to access company information violates the CFAA by misusing or misappropriating that information. That said, most district courts in the Sixth Circuit have concluded that there cannot be a CFAA violation where an employee had permissible access to the computer system. Similarly, the Fourth Circuit held in WEC Carolina Energy Solutions LLC v. Miller that an employee who allegedly downloaded proprietary information from an employer’s computer system for the benefit of his subsequent employer did not violate the CFAA.

Other circuits, however, have taken a much more expansive approach to what employee activity is considered “without authorization” under the CFAA. For example, in U.S. v. John, the Fifth Circuit held that an employee violated the CFAA when she retrieved confidential customer account information she was authorized to access and transferred it to her half-brother for the purpose of committing a fraud. The First, Seventh and Eleventh Circuits have all taken a similarly expansive view that an employee violates the CFAA when he/she accesses the computer system in violation of the employer’s data use policies.

The U.S. Supreme Court has avoided addressing issues of CFAA vagueness. Most recently, the Supreme Court denied certiorari in Nosal v. United States, 16-1344, declining to weigh in on the scope of unauthorized access under the CFAA. The Ninth Circuit held in Nosal that David Nosal violated the CFAA by using his former assistant’s password to access his former employer’s computer system after his own access credentials had been expressly revoked.

Given the conflicting jurisdictional interpretations of the CFAA, companies should review their policies and procedures to ensure access rights and limitations to their information and information systems are clearly defined and effectively communicated to their employees. Taking these steps will help protect company data and may be useful in preserving a potential CFAA claim.


On May 10, Governor Phil Murphy signed into law P.L.2019, c.95, an amendment enhancing New Jersey’s data breach notification law by expanding the definition of personal information and updating notification requirements. As we previously reported, the amendment was unanimously approved by the New Jersey General Assembly and Senate in late February.

New Jersey’s data breach notification law requires businesses to notify consumers of a breach of their personal information. Previously, the law defined personal information as an individual’s first name or first initial and last name linked with any one or more of the following data elements:

  • Social Security number;
  • driver’s license number or State identification card number;
  • account number or credit or debit card number, in combination with any required security code, access code, or password that would permit access to an individual’s financial account.

The new law adds to the above list of data elements:

  • user name, email address, or any other account holder identifying information, in combination with any password or security question and answer that would permit access to an online account.

In addition, the notification requirements differ for these added data elements. Under the amendment, a business or public entity experiencing a breach involving a user name or email address, in combination with any password or security question and answer that would permit access to an online account, and no other personal information, may notify affected customers electronically or in another form that directs them to promptly change any password and security question or answer, as applicable, or to take other appropriate steps to protect the affected online account and any other online account for which the customer uses the same user name. Further, for breaches involving an email account, a business or public entity may not provide notice of the breach via the compromised email account. Instead, notice must be provided by one of the other methods described in the law, or by clear and conspicuous notice delivered to the customer online when the customer is connected to the online account from an IP address or online location from which the business or public entity knows the customer customarily accesses the account.

New Jersey has now become at least the 10th state to update its data breach notification law to specifically address online breaches. The new law will take effect September 1, 2019.

California keeps making privacy headlines for its trailblazing California Consumer Privacy Act (“CCPA”), set to take effect January 1, 2020, but there is another set of privacy bills making its way through the California state legislature that, if passed, will provide consumers with further privacy protections.

The “Your Data, Your Way” initiative, made up of four legislative bills and a non-binding resolution, is a privacy plan introduced by several Republican Assembly members in January on National Privacy Day. It has passed California’s Committee on Privacy and Consumer Protection (“the Committee”) and is now headed to the Assembly for a potential vote, followed by the Senate. Each bill has already been revised to some extent during the Committee process and will likely see further revisions along the way.

Below are some highlights from the four “Your Data, Your Way” initiative bills:

  • AB 1035: This amendment to California’s data breach notification law, sponsored by Assemblyman Chad Mayes, would require covered businesses to notify affected individuals of a data breach within 45 days of discovery of the breach. The bill originally included a 72-hour notification requirement, similar to the GDPR, but this was revised during the Committee process.
  • AB 1395: Virtual assistants, such as Alexa, have raised concerns about unintended recording of users’ conversations and utterances. Sponsored by Assemblyman Jordan Cunningham, this bill would limit data collection conducted by tech companies via these devices. Specifically, it would prohibit the storage and marketing of recorded voice commands without prior consumer consent.
  • AB 288: Another Cunningham bill, addressing social media privacy, would give “social networking service” users who close their account the option to have their personally identifiable information (PII) permanently removed from the company’s database, and would prohibit the company from selling the PII to, or exchanging it with, a third party, subject to a few exceptions.
  • AB 1138: Assemblyman James Gallagher’s bill would require social media websites and apps to obtain parental consent before creating an account for a child under the age of 13. This builds on California’s Parent’s Accountability and Child Protection Act (AB 2511), which becomes effective on January 1, 2020. AB 2511 requires a person or business conducting business in California that seeks to sell certain products or services to take reasonable steps, as specified, to ensure that the purchaser is of legal age at the time of purchase or delivery, including, but not limited to, verifying the age of the purchaser.

In addition, as part of the “Your Data, Your Way” initiative, a non-binding resolution entitled “21st Century Monopolies” was introduced, calling on the Federal Trade Commission (FTC) and Congress to update the federal anti-trust laws in order to more effectively protect consumers.

As Charlie Warzel recently observed in “We Are Drowning in Data,” an opinion piece in the newly established New York Times Privacy Project, we still cannot fathom the extent to which technology will expand the already staggering number of ways our personal data is collected. Innovation continues to outpace regulation, but California is certainly trying to keep up!

A security lapse has exposed the data of at least 13.7 million user records of the high-end job recruitment site, Ladders. The company left a cloud-hosted search database exposed without a password. Ladders took the database offline less than an hour after the news website TechCrunch alerted the company after learning about the potential breach from a security researcher, Sanyam Jain.

Each record included a name, email address, physical address, phone number, and employment history, and even exact geolocation based on the individual’s IP address. The user profiles also contained information about the industry in which the user was seeking a job and their current compensation in U.S. dollars.

A data leak of information such as social security numbers, phone numbers, credit history or other more sensitive information “would be a gold mine for cyber criminals who would have everything they need to steal identities, file false tax returns, get loans or credit cards,” according to Bob Diachenko, online publisher for TechCrunch. In contrast, most of the information affected in the Ladders’ data leak, while personal and sensitive, does not amount to personally identifiable information which could be used for identity theft.

Additionally, an important distinction should be made between data leaks and data breaches: data leaks are usually incidents in which data was unintentionally made public as the result of an accident or misapplication of a system’s features, but the data has not actively been accessed or exfiltrated; data breaches, by contrast, are incidents involving active threats that compromise a database.

The recent abundance of high-profile data leaks emphasizes the need for organizations to be proactive rather than reactive. The legal landscape of the data privacy world reflects this approach. For example, the General Data Protection Regulation (GDPR) mandates “Privacy by Design,” which requires that any action a company undertakes that involves processing personal data be done with data protection and privacy in mind at every step. Similarly, the much-anticipated California Consumer Privacy Act requires a business to “implement and maintain reasonable security procedures and practices appropriate to the nature of the information to protect the personal information,” and similar frameworks are mandated in other states such as Colorado, Massachusetts and Oregon.

This wave of data leaks and breaches, combined with growing public awareness of data privacy rights and concerns and legislative activity in the area, makes the development of a meaningful data protection program an essential component of business operations.

Texans like the adage “Everything is Bigger in Texas”. So, as the Lone Star State follows its counterparts and the federal government in discussing broad sweeping privacy protections, legislators introduced two (competing) privacy bills this session: the Texas Consumer Privacy Act and the Texas Privacy Protection Act.

Readers should note that the 2019 Texas Legislative Session is set to end on May 27, 2019, although a special session may be called to address items not resolved during the regular session. If privacy legislation is not passed, state lawmakers would not consider it again until 2021, as the legislature only meets every other year, for 140 days. If either of the bills were to pass this session, the effective date could be as early as September 2020.

Even if neither bill passes this session, which is likely given the legislative hurdles that must be cleared within the limited timeframe, privacy as an issue is not going away in Texas (or anywhere else, for that matter). And, given that Texas is the second largest economy in the U.S., any privacy legislation it passes will have a big impact. The current prediction is that Texas will take a back seat to watch how California implements the CCPA, and (hopefully) learn from some of its pain points in order to adopt legislation in 2021.

Nevertheless, below is an overview of the two pending bills in their current form.

Texas Consumer Privacy Act (“TXCPA”)

The TXCPA is similar to the California Consumer Privacy Act (“CCPA”). It provides Texas consumers with rights to:

  • Know what information is being collected, distributed and sold about them;
  • Opt-out of sales of their information, including a requirement that businesses include a “Do Not Sell My Information” link on their website; and
  • Request that their information be deleted.

The TXCPA would also require businesses subject to the act to:

  • Provide notification of categories of personal information collected and how each category would be used;
  • Provide an online privacy policy or notice; and
  • Provide methods for consumers to submit data requests and disclose certain information in response to such requests.

It also borrows concepts from the EU GDPR around transparency and notice.

Similar to the CCPA, there are questions about how the bill would define a consumer and whether it would apply to employees. Like the CCPA, the TXCPA also provides rights to households, but this term is currently not well defined. The TXCPA does not establish a business duty to implement and maintain security procedures, nor does it allow a private cause of action for consumers in the event of a breach. The Texas Attorney General would enforce violations, with penalties of up to $2,500 per violation (and up to $7,500 for intentional violations).

In its current form, the TXCPA would only apply to certain businesses, including those that collect consumer personal information. These types of businesses would also have to meet certain thresholds.

Texas Privacy Protection Act (HB 4390)

The TXPPA distinguishes itself from the TXCPA in its applicability and its level of detail. It also does not provide the same consumer rights as the TXCPA. For the TXPPA to apply, a business must:

  • Do business in Texas;
  • Have more than 50 employees;
  • Collect personally identifiable information (“PII”) of more than 5,000 individuals, households or devices (or have this information collected on its behalf); this applies only to PII collected over the Internet or a digital network, or through a computing device associated with a specific end user. The requirement is not limited to “Texas residents,” meaning an Internet business with only a handful of customers in Texas (but numerous customers elsewhere) may be subject to the law;
  • And either:
    • Have an annual gross revenue of more than $25 million; or
    • Derive 50% or more of its annual revenue from the processing of PII.

The traditional PII categories, like social security number, driver’s license number, credit card or financial account information, etc. are expanded under the TXPPA to include biometric information, religious affiliation, racial or ethnic origin information, unique genetic information, physical or mental health information, precise geolocation data and the private communications or other user-created content of an individual that is not publicly available.

The TXPPA requires explicit permission from the individual to whom the information pertains, unless processing is required by law. A business may only process PII if it is relevant to accomplishing the purpose for which it is to be processed, and that purpose must be specified by notice prior to collection. Processing also may not violate state or federal law or infringe on an individual’s constitutional rights or privileges. The TXPPA also gives individuals the right to access their PII and the right to be forgotten.

The TXPPA requires impacted businesses to establish and maintain a comprehensive security program containing safeguards for PII, although the current bill offers little guidance on what that program must include. Like the TXCPA, there is no private cause of action for a breach of the duty to protect PII. Businesses would also be liable when a service provider mishandles their data.

Also like the TXCPA, the Texas Attorney General may bring an action and recover civil penalties, but they are higher under the TXPPA – up to $10,000 per violation, not to exceed a total of $1 million.

Either bill, if passed into law, would keep Texas in line with other states currently enhancing their privacy and security laws to keep up with the California Consumer Privacy Act set to take effect January 1, 2020.  Organizations across the United States should be assessing and reviewing their data collection activities, building robust data protection programs, and investing in written information security programs (WISPs).

Wrongful use of retirement plan participant data was among the claims made by a class of 40,000 participants against the plan sponsor and others in Cassell et al. v. Vanderbilt University et al. Specifically, the plan participants claimed that the University inter alia breached its duty of “loyalty and prudence” by failing to protect confidential employee retirement plan participant information, allowing the plan’s recordkeeper to obtain access to participants’ personal information and to profit from that access.

The parties reached a settlement agreement which included a payment of $14.5 million along with promises to make certain changes in plan administration. Retirement plan sponsors have faced litigation concerning plan administration in a number of areas including investment selection and prudence over plan fees, but the Vanderbilt settlement includes a uniquely heightened focus on protection of data, signaling a trend in this direction.

Recordkeeping, investment of contributions, and other tasks associated with retirement plan administration require access to large amounts of personal information, usually in electronic format. The risks associated with that access are not limited to data breaches. As the Vanderbilt settlement indicates, plan participants have become increasingly aware of the vulnerabilities associated with handling their data, as well as how their data is being used by plan vendors. In addition to monetary compensation, the Vanderbilt settlement stipulates that vendors such as recordkeepers cannot use employee participant data to market or sell products unrelated to the retirement plan to the participants, unless the participants initiate the contact.

The Employee Retirement Income Security Act (“ERISA”) is the primary federal statute regulating employee benefit plans, including retirement plans. Currently, there are no express provisions in ERISA that prohibit the use of plan participant data for any particular purpose. However, the plaintiffs in this case relied on ERISA’s longstanding fiduciary duty provisions to support their claims concerning plan data:

  • ERISA’s fiduciary duty provisions require plan fiduciaries to discharge their duties with respect to a plan solely in the interest of the participants and beneficiaries and for the exclusive purpose of providing benefits to participants and their beneficiaries. 29 U.S. Code § 1104.
  • ERISA also prohibits plan fiduciaries from engaging in certain prohibited transactions, including transactions between the plan and a party in interest which the fiduciary knows constitutes a direct or indirect transfer to, or use by or for the benefit of a party in interest, of any assets of the plan. 29 U.S.C. §1106(a)(1).

It will be interesting to see whether these kinds of claims take hold; after all, this is only a settlement and not a decision in federal court. One of the issues courts will have to wrestle with is whether plan data constitutes a plan asset.

But for now, plan sponsors should be thinking about their relationships with third-party plan service providers. According to the DOL, ERISA requires plan fiduciaries to “obtain and carefully consider” the services to be provided by plan service providers before engaging the provider. Whether that duty extends to assessing the provider’s data privacy and security practices is not clear. But, in light of this settlement, plan sponsors should be asking themselves some basic questions, including: Who has access to participants’ data? How much (and what) data does the provider have access to, and what is it doing with that data? Is the service provider sharing data with other third parties?

Of course, depending on the bargaining power of the sponsor, it may not be able to convince a vendor to agree to use participant data solely for plan administration purposes. At a minimum, sponsors should be sure their selection process takes these and other factors into account.

Ever since the California Consumer Privacy Act (CCPA) was enacted in June 2018, it has been in a constant state of revision. First, in September 2018, Governor Jerry Brown signed into law Senate Bill 1121, which helped clarify and strengthen the original version of the law. Then, in February 2019, California Attorney General Xavier Becerra and Senator Hannah-Beth Jackson introduced Senate Bill 561, similarly intended to clarify and strengthen the CCPA by expanding the consumer’s right to bring a private cause of action and removing certain ambiguous language. During this period, the California Attorney General’s Office also conducted a CCPA rulemaking process with a six-part series of public forums, allowing all interested persons the opportunity to comment on the new law. And finally, earlier this week, the California Assembly Privacy and Consumer Protection Committee (“Committee”) introduced several bills intended to clarify some of the remaining ambiguities in the CCPA.

We’ve already reported on one of the bills, AB 25, introduced by Committee Chairman Ed Chau and unanimously approved by the Committee. AB 25 modifies the definition of consumer to exclude employees and contractors (where a written contract is in place). In addition to AB 25, several other bills were approved by the Committee and will now advance to the Senate Judiciary Committee, chaired by Senator Jackson, a major proponent of consumer rights protection. It is likely that some of these bills will not survive the legislative process, and others will be revised along the way. Below is a list of the Committee-approved CCPA amendment bills:

  • AB 846 – A bill that updates the clause prohibiting businesses from discriminating against consumers who exercise “opt-out” rights by clarifying that loyalty, rewards, and similar programs are exempt.
  • AB 873 – A bill that helps clarify ambiguities in the definitions of personal and deidentified information.
  • AB 874 – A bill that updates the public record exemption under the definition of personal information.
  • AB 1146 – A bill clarifying a consumer’s right to request that a business delete or not sell the consumer’s personal information, in the context of motor vehicle warranty or recall information.
  • AB 1355 – An additional bill introduced by Chairman Chau that makes technical changes to correct CCPA drafting flaws.
  • AB 1564 – A bill providing alternatives to the current requirement that a business make available to consumers a toll-free number to submit requests for information regarding the use of their personal information. Alternatives include an email address and a physical address for submitting requests.

The Committee also approved AB 981, which would make significant changes affecting the insurance industry, including changes intended to harmonize California’s Insurance Information and Privacy Protection Act (“IIPPA”) with the CCPA to avoid overlap, and to exempt insurance institutions, agents, and support organizations (insurers) from certain CCPA provisions. Other changes include:

  1. Providing that insurers or insurance transactions subject to the IIPPA shall be exempt from the CCPA. This exemption would not apply to the CCPA’s limited private right of action for data breaches or business activity not subject to the IIPPA.
  2. Defining various terms for the purposes of the IIPPA to mirror the definitions provided in the CCPA, including “consumer” to reflect the definition proposed in the March 25, 2019 version of AB 25, and “personal information” to reflect the definition of that term provided in the CCPA, with the exception of “household,” which is absent from the definition, similar to AB 873.
  3. Requiring insurers to provide certain notices concerning their information practices and privacy policies and procedures, including communications to individuals regarding the right to opt-out of disclosures.
  4. Requiring an insurance institution, agent, or insurance-support organization to implement a comprehensive written information security program that includes administrative, technical, and physical safeguards for the protection of policyholder information, as specified, and authorizing the commissioner to audit an insurance institution, agent, or support organization’s compliance.
  5. Prohibiting an insurer from “unfairly discriminating” against an applicant or policyholder because that applicant or policyholder has opted out of the disclosure of nonpublic personal information, or did not grant authorization for the disclosure of nonpublic personal medical record information.

In addition, the Senate Judiciary Committee was scheduled to review SB 753 at an April 23 hearing. That bill would have revised the definition of “sell” to exempt situations where a business “pursuant to a written contract, shares, discloses, or otherwise communicates to another business or third party a unique identifier only to the extent necessary to serve or audit a specific advertisement to the consumer.” The bill would require such a “contract to prohibit the other business or third party from sharing, selling, or otherwise communicating the information except as necessary to serve or audit advertisement from the business.” Review of SB 753 was cancelled at the request of Senator Henry Stern, the bill’s author, who faced criticism that the bill would undermine the CCPA’s purpose.

We will continue to track and update on the fate of these bills. While it remains unclear which bills will ultimately stick, the CCPA is certain to see additional changes in the upcoming months.