Will the Public Health Emergency Privacy Act Make it into the Next Stimulus Package?

Despite several attempts, Congress has struggled to push forward a federal consumer privacy law over the past few years. But the COVID-19 pandemic, which has raised concerns regarding location monitoring, GPS tracking and use of health data, has heightened the urgency for federal consumer privacy legislation. In May, a group of Democrats from the U.S. Senate and House of Representatives introduced the Public Health Emergency Privacy Act (“the Act”), aimed at protecting health information during the pandemic and regulating the use of that data in connection with contact tracing technologies.

In late July, the Senate Committee on Appropriations introduced an Emergency Coronavirus Stimulus Package (“the Stimulus Package”), which would allocate $53 million of the $306 million package to the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency for the protection of Coronavirus research data and related data. In addition, a group of 13 senators, including Kamala Harris, D-California, Elizabeth Warren, D-Massachusetts, and Mark Warner, D-Virginia, sent a letter to Senate and House leadership asking for the Act to be included in the Stimulus Package.

“Health data is among the most sensitive data imaginable and even before this public health emergency, there has been increasing bipartisan concern with gaps in our nation’s health privacy laws,” the Senators stated in their letter.

“While a comprehensive update of health privacy protections is unrealistic at this time, targeted reforms to protect health data – particularly with clear evidence that a lack of privacy protections has inhibited public participation in screening activities – is both appropriate and necessary,” they added.

Under the Act, a “Covered Organization” is defined as “any person that collects, uses, or discloses emergency health data electronically or through communication by wire or radio; or that develops or operates a website, web application, mobile application, mobile operating system feature, or smart device application for the purpose of tracking, screening, monitoring, contact tracing, or mitigation, or otherwise responding to the COVID–19 public health emergency.” NOTE: Covered Organizations do not include: a health care provider; a person engaged in a de minimis collection or processing of emergency health data; a service provider; a person acting in their individual or household capacity; or a public health authority.

The Act would protect “emergency health data” which means “data linked or reasonably linkable to an individual or device, including data inferred or derived about the individual or device from other collected data provided such data is still linked or reasonably linkable to the individual or device, that concerns the public COVID–19 health emergency.” Examples of such data include:

  • information that reveals the past, present, or future physical or behavioral health or condition of, or provision of healthcare to, an individual, including data derived from testing an individual. This likely would include COVID-19 viral or serological test results, along with genetic data, biological samples, and biometrics;
  • other data collected in conjunction with other emergency health data or for the purpose of tracking, screening, monitoring, contact tracing, or mitigation, or otherwise responding to the COVID–19 public health emergency, such as: (i) geolocation and similar information for determining the past or present precise physical location of an individual at a specific point in time; (ii) proximity data that identifies or estimates the past or present physical proximity of one individual or device to another, including information derived from Bluetooth, audio signatures, nearby wireless networks, and near-field communications; and (iii) any other data collected from a personal device.

Below are key requirements of the Act for Covered Organizations:

  • Only collect, use or disclose data that is necessary, proportionate and limited for a good-faith health purpose;
  • Take reasonable measures, where possible, to ensure the accuracy of data and provide a mechanism for individuals to correct inaccuracies;
  • Adopt reasonable safeguards to prevent unlawful discrimination on the basis of emergency health data;
  • Only disclose data to a government entity if it is to a public health authority and is solely for good faith public health purposes;
  • Establish and implement reasonable data security policies, practices and procedures;
  • Obtain affirmative express consent before collecting, using or disclosing emergency health data, and provide individuals with an effective mechanism to revoke that consent. NOTE: There are limited exceptions where consent is not required including to protect from fraud/malicious activity, to prevent a security incident, or if otherwise required by law;
  • Provide notice in the form of a privacy policy prior to collection that describes how and for what purposes the data will be used (including categories of recipients), the organization’s data security policies and practices, and how individuals may exercise their rights.

If enacted, the Federal Trade Commission (FTC) would be required to promulgate rules regarding data collection, use and disclosure under the Act. In addition, both the FTC and state attorneys general would have enforcement authority over the Act.

The Act, if passed, would be a temporary measure that would terminate once COVID-19 is no longer deemed a public health emergency. Covered organizations would be required to cease using or maintaining emergency health data within 60 days after the termination of the public health emergency, and to destroy such data or render it not linkable.

With no comprehensive federal privacy framework in place, the Senators are urging Congressional leadership to adopt a measure providing “Americans with assurance that their sensitive health data will not be misused,” which they argue “will give Americans more confidence to participate in COVID screening efforts, strengthening our common mission in containing and eradicating COVID-19.”

We will continue to provide updates on the status of the Act and other related developments.

Transferring Employee Data after EU-U.S. Privacy Shield Invalidated

Businesses are now prohibited from transferring employee personal data from the European Economic Area (EEA) to the U.S. under the EU-U.S. Privacy Shield program. The Court of Justice of the European Union (CJEU) declared the EU-U.S. Privacy Shield invalid in Data Protection Commissioner v. Facebook Ireland and Schrems (C-311/18) (Schrems II), effective immediately. Businesses that relied on the EU-U.S. Privacy Shield as an adequate transfer mechanism can no longer perform routine activities such as sending employee data from the EEA to U.S. headquarters for HR administration, accessing a global HR database from the U.S., remotely accessing EEA user accounts from the U.S. for IT services, providing EEA data to third party vendors for processing in the U.S., or relying on certain cloud-based services.

The EU-U.S. Privacy Shield program was designed to provide EEA data with a level of protection comparable to EU law upon transfer to the U.S. The CJEU invalidated the program, stating that U.S. companies could not provide an essentially equivalent level of protection based on the breadth of U.S. national security surveillance laws, FISA 702, E.O. 12333, and PPD 28.

U.S. businesses must now identify an alternative mechanism to transfer employee data from the EEA to the U.S. Many businesses rely on transfer mechanisms such as binding corporate rules (BCRs) for intragroup transfers, or standard contractual clauses (SCCs) for intercompany transfers as well as transfers to third parties. SCCs are clauses approved by the EU as providing reasonable safeguards for data transferred from the EEA. The CJEU did not invalidate either of these transfer mechanisms in Schrems II, but it placed SCCs under heightened scrutiny. The Court emphasized the data exporter’s obligation to verify the data importer’s ability to provide EEA data with an adequate level of protection. The data exporter must review each transfer to determine, on a case-by-case basis, whether the SCCs provide sufficient safeguards, particularly in light of the recipient country’s surveillance laws. As a result, data exporters must review applicable local legislation for each transfer to identify whether the SCCs are adequate, whether supplemental protective measures are required, or whether the transfer cannot occur. A comparable analysis will apply to BCRs.

Businesses seeking an alternative to the EU-U.S. Privacy Shield, BCRs, or SCCs should review whether a transfer may fall under one of several exceptions to the GDPR’s requirement of an adequate transfer mechanism. Many of these exceptions, however, apply only when the transfer is necessary, occasional, and affects a limited number of data subjects.

Under the GDPR, an impermissible transfer can result in assessment of fines up to €20,000,000, or, in the case of an undertaking, up to four percent of the total worldwide annual turnover of the preceding financial year, whichever is higher. In addition, EEA data subjects may bring a private cause of action against the data exporter for an illegal transfer, either individually or as part of a class action.

The CJEU’s decision creates great uncertainty about the future of transatlantic data transfers. As the EU and U.S. negotiate the path forward, U.S. businesses should review their employee data flows, identify whether they or their sub-contractors are subject to U.S. national security laws, and determine the feasibility of additional contractual or technical measures to supplement the reasonable safeguards.

Please see our full article (here) and FAQs (here) for additional information.

OCR Warns HIPAA Covered Entities: When You Learn About HIPAA Violations, Fix Them

Roger Severino, Director of the Office for Civil Rights (OCR) at the U.S. Department of Health and Human Services (HHS), provides advice for HIPAA covered health care providers:

When informed of potential HIPAA violations, providers owe it to their patients to quickly address problem areas to safeguard individuals’ health information.

According to OCR allegations, a small health care provider in North Carolina, Metropolitan Community Health Services, reported a data breach on June 9, 2011. The breach involved the impermissible disclosure of protected health information to an unknown email account affecting 1,263 patients. It is not clear when OCR’s investigation commenced, but it “revealed longstanding, systemic noncompliance with the HIPAA Security Rule…Metro failed to conduct any risk analyses, failed to implement any HIPAA Security Rule policies and procedures, and neglected to provide workforce members with security awareness training until 2016.” Under the Resolution Agreement reached with OCR, Metro agreed to a two-year corrective action plan (CAP) and to pay $25,000.

The OCR considered that Metro is a Federally Qualified Health Center that provides a variety of discounted medical services to the underserved population in rural North Carolina, but that did not stop it from taking enforcement action against a relatively small covered entity. OCR has taken similar enforcement actions against other small health care providers as well.

HIPAA compliance is no doubt a significant challenge for large and small covered healthcare providers, and other covered entities and business associates. In addition, data breaches can be nearly impossible to prevent in all cases. However, these and other OCR enforcement actions suggest that with some relatively basic compliance measures, small providers can be more successful during OCR investigations. Here are some examples:

  • Conduct a risk assessment that considers the threats and vulnerabilities to protected health information.
  • Maintain written policies and procedures that address required administrative, physical, and technical safeguards required under the Security Rule.
  • Provide training and improve security awareness for workforce members when they begin working for the organization and periodically thereafter.
  • Maintain business associate agreements with all business associates.
  • Document compliance efforts.

And, of course, evaluate compliance following a reported data breach and make the necessary improvements.

EU-U.S. Privacy Shield Program for Transfer of Personal Data to U.S. Found Invalid

On July 16, 2020, the Court of Justice of the European Union (CJEU) published its decision in the matter of Data Protection Commissioner v. Facebook Ireland and Maximillian Schrems (“Schrems II”). The matter, arising from the transfer of Schrems’ personal data by Facebook Ireland to Facebook Inc. in the United States, presented questions concerning the transfer of personal data from the EEA to a third country without an adequacy determination. The decision declares the EU-U.S. Privacy Shield program invalid and affirms the validity of standard contractual clauses (SCCs) as an adequate mechanism for transferring personal data from the EEA, subject to heightened scrutiny.

The CJEU invalidated the Privacy Shield program on grounds that it fails to provide an adequate level of protection to personal data transferred from the EEA to the U.S. In support, it points specifically to three U.S. national security laws: FISA 702, E.O. 12333, and PPD 28. The CJEU found the breadth of these bulk surveillance and monitoring laws violates the basic minimum safeguards required by the GDPR for proportionality: the U.S. government’s processing of EEA personal data is not limited to what is strictly necessary. The CJEU further noted these surveillance programs fail to provide EEA data subjects with enforceable rights and effective legal review comparable to applicable EU law. As of the date of the decision, data exporters and U.S. data importers can no longer rely on EU-U.S. Privacy Shield certification as an adequate mechanism to transfer personal data from the EEA to the U.S. There is currently no grace period. However, since a grace period was enacted shortly after the EU-U.S. Safe Harbor was invalidated, it is conceivable one will be announced as the EU and U.S. assess the implications of this decision.

The CJEU affirmed the validity of controller-processor standard contractual clauses (SCCs) as an adequate mechanism for transferring personal data from the EEA to a third country lacking an EU adequacy decision. In affirming the validity of SCCs, the CJEU highlighted three stakeholder obligations:

  1. the data exporter’s responsibility to verify the importer’s ability to provide an essentially equivalent level of protection in the third country;
  2. the data importer’s responsibility to notify the exporter immediately if it cannot comply with the SCCs, including situations where it is compelled to produce EEA data at the request of law enforcement; and
  3. the data exporter’s responsibility to immediately suspend or terminate the transfer upon notice from the importer that it cannot comply with the SCCs.

Based on these requirements, the SCCs may not be an adequate transfer mechanism in every case, or may require the negotiation of additional provisions to satisfy these obligations.

The CJEU further highlighted the affirmative obligation of supervisory authorities to identify and suspend or terminate transfers based on SCCs where the importer cannot provide EEA data with an adequate level of protection.

Under the GDPR, an impermissible transfer can result in fines up to €20,000,000, or, in the case of an undertaking, up to 4% of the total worldwide annual turnover of the preceding financial year, whichever is higher. In addition, EEA data subjects may bring a private cause of action for an illegal transfer, either individually or as part of a class action.

Please see our full article (here) and FAQs (here) for additional information.

Supreme Court Will Take on The TCPA Again

Back in October of 2019, the U.S. Supreme Court was petitioned to review a Ninth Circuit ruling regarding the Telephone Consumer Protection Act (“TCPA”) on the following issues: 1) whether the TCPA’s prohibition on calls made by an automatic telephone dialing system (“ATDS”) is an unconstitutional restriction of speech, and if so whether the proper remedy is to broaden the prohibition to abridge more speech, and 2) whether the definition of ATDS in the TCPA encompasses any device that can “store” and “automatically dial” telephone numbers, even if the device does not “us[e] a random or sequential number generator.” Now, the Court has finally granted a writ of certiorari, limited to review of Question 2, described above.

ATDS Circuit Split

When the TCPA was enacted in 1991, most American consumers were using landline phones, and Congress could not begin to contemplate the evolution of the mobile phone. The TCPA defines “Automatic Telephone Dialing System” (ATDS) as “equipment which has the capacity—(A) to store or produce telephone numbers to be called, using a random or sequential number generator; and (B) to dial such numbers.” 47 U.S.C § 227(a)(1). In 2015, the Federal Communications Commission (FCC) issued its 2015 Declaratory Ruling & Order (2015 Order), concerning clarifications on the TCPA for the mobile era, including the definition of ATDS and what devices qualify. The 2015 Order only complicated matters further, providing an expansive interpretation for what constitutes an ATDS, and sparking a surge of TCPA lawsuits in recent years.

Consequently, several FCC-regulated entities appealed the 2015 FCC Order to the D.C. Circuit Court of Appeals, in ACA International v. FCC, No. 15-1211, Doc. No. 1722606 (D.C. Cir. Mar. 16, 2018). The D.C. Circuit concluded that the FCC’s position that any equipment with the potential capacity for autodialing is subject to the TCPA was too broad. Although the FCC did say in its 2015 Order that “there must be more than a theoretical potential that the equipment could be modified to satisfy the ‘autodialer’ definition,” the Court held that this “ostensible limitation affords no ground for distinguishing between a smartphone and a Firefox browser.” The Court determined that the FCC’s interpretation of ATDS was “an unreasonably expansive interpretation of the statute.”

Since the decision in ACA Int’l, courts have weighed in on the D.C. Circuit ruling and the status of the 2015 Order, sparking a circuit split over what constitutes an ATDS. The Second and Ninth Circuits have both broadly interpreted the definition of an ATDS, while the Third, Seventh, and Eleventh Circuits have taken a much narrower reading. For example, earlier this year the Eleventh and Seventh Circuits reached similar conclusions, back-to-back, narrowly holding that the TCPA’s definition of ATDS only includes equipment that is capable of storing or producing numbers using a “random or sequential” number generator, excluding most “smartphone age” dialers. By contrast, the Ninth Circuit has concluded that “an ATDS need not be able to use a random or sequential generator to store numbers[.]” The court explained that “it suffices to merely have the capacity to ‘store numbers to be called’ and ‘to dial such numbers automatically.’”

Supreme Court Petition

The Supreme Court has accepted the petition for review of the Ninth Circuit ruling on the issue of whether the definition of “ATDS” in the TCPA encompasses any device that can “store” and “automatically dial” telephone numbers, even if the device does not “us[e] a random or sequential number generator.” The Supreme Court’s decision should help resolve the circuit split and provide greater clarity and certainty for parties facing TCPA class action litigation. The Court is expected to hear oral arguments on this dispute at the start of next term, in the fall, and to issue a decision by the summer of 2021.

Takeaway

2020 is shaping up to be an important year for the TCPA. We recently reported on a much-anticipated Supreme Court decision, Barr v. American Association of Political Consultants, in which the Court weighed in on the constitutionality of the TCPA, holding that the government-debt collection exception to the TCPA violated the First Amendment and must be invalidated and severed from the remainder of the statute. While courts generally appear to be narrowing the TCPA in many respects, organizations are still advised to err on the side of caution during this period of uncertainty when implementing and updating telemarketing and/or automatic dialing practices.

Supreme Court Weighs in on TCPA Constitutionality

In a much-anticipated Supreme Court decision, Barr v. American Association of Political Consultants, sure to impact the future of the Telephone Consumer Protection Act (“TCPA”), the Court addressed the issue of whether the government-debt exception to the TCPA’s automated-call restriction violates the First Amendment, and whether the proper remedy for any constitutional violation is to sever the exception from the remainder of the statute.

The Supreme Court concluded that Congress impermissibly favored government-debt collection speech over political and other speech, in violation of the First Amendment, and therefore invalidated the government-debt collection exception and severed it from the remainder of the statute. Despite concerns that the Court would address the constitutionality of the TCPA in its entirety, it left untouched the TCPA’s general restriction on calls made with an “automatic telephone dialing system” (“ATDS”).

Applying traditional severability principles, seven Members of the Court conclude that the entire 1991 robocall restriction should not be invalidated, but rather that the 2015 government-debt exception must be invalidated and severed from the remainder of the statute. . . . As a result, plaintiffs still may not make political robocalls to cell phones, but their speech is now treated equally with debt-collection speech.

Addressing the decision to leave the remainder of the TCPA intact, the Court highlighted the “normal rule” articulated in Free Enterprise Fund v. Public Company Accounting Oversight Bd.: “Generally speaking, when confronting a constitutional flaw in a statute, we try to limit the solution to the problem, severing any problematic portions while leaving the remainder intact.”

This is not the first time, of late, that the Supreme Court has been petitioned to address the constitutionality of the TCPA. Back in October of 2019, the Court was petitioned to review the following issues: 1) whether the TCPA’s prohibition on calls made by ATDS is an unconstitutional restriction of speech, and if so whether the proper remedy is to broaden the prohibition to abridge more speech, and 2) whether the definition of “ATDS” in the TCPA encompasses any device that can “store” and “automatically dial” telephone numbers, even if the device does not “us[e] a random or sequential number generator.” The Court has still not announced whether it will accept this petition.

While the impact of the Supreme Court’s decision on the TCPA is limited, given that only the government-debt exception was severed, it still provides greater certainty and clarity for organizations facing TCPA litigation. Organizations are advised to review and update their telemarketing and/or automatic dialing practices to ensure TCPA compliance.

California Attorney General Issues CCPA FAQs

With the California Consumer Privacy Act (CCPA) now in effect (since January 1, 2020) and enforceable by California’s Attorney General (“AG”) (since July 1, 2020), the AG has published Frequently Asked Questions (FAQs). Designed to aid consumers in exercising their rights under the CCPA, the FAQs also contain helpful reminders for businesses and service providers regarding their obligations under the law.

The FAQs cover several main topics for consumers: general information, “Do Not Sell” requests, “Right to Know” requests, required notices, “Right to Delete” requests, right to nondiscrimination, and information about data brokers. As noted, FAQ responses include information businesses and service providers may want to review.

For example, businesses still not sure if they are covered by the CCPA can review Question 5 under General Information, “What businesses does the CCPA apply to?”:

The CCPA applies to for-profit businesses that do business in California and meet any of the following:

    • Have a gross annual revenue of over $25 million;
    • Buy, receive, or sell the personal information of 50,000 or more California residents, households, or devices; or
    • Derive 50% or more of their annual revenue from selling California residents’ personal information.

There is more to this analysis, but the response provides a good starting point. One question many businesses have is whether the $25 million gross annual revenue threshold refers only to revenue generated in California. The AG did not answer this question in the regulations or these FAQs, and the statute itself is silent. However, the AG’s responses to comments submitted concerning the regulations can be instructive:

Civil Code § 1798.140(c)(1)(A) does not limit the revenue threshold to revenue generated in California or from California residents. Any proposed change to limit the threshold to revenue generated only in California or from California residents would be inconsistent with the CCPA.

The FAQs help to confirm the role of service providers and explain to consumers why a business might refuse to act on a consumer’s request, such as a request to exercise the right to delete. In that case, under Question 6 of Requests to Delete Personal Information, the AG explains that “service providers” do not have the same obligations under the CCPA that “businesses” do. Requests must be submitted to the business, not its service providers. Of course, a business may require service providers to act on approved and verified requests the business receives from its consumers, such as requests to delete consumer personal information.

The FAQs also inform consumers about what to do if they think a business violated the CCPA. Notably, Question 7 of the General Information section makes clear that consumers “cannot sue businesses for most CCPA violations.” In most cases, only the Attorney General can file an action against a business. The FAQ goes on to explain:

Consumers can only sue a business under the CCPA if there is a data breach, and even then, only under limited circumstances. You can sue a business if your nonencrypted and nonredacted personal information was stolen in a data breach as a result of the business’s failure to maintain reasonable security procedures and practices to protect it. If this happens, you can sue for the amount of monetary damages you actually suffered from the breach or “statutory damages” of up to $750 per incident. If you want to sue for statutory damages, you must give the business written notice of which CCPA sections it violated and give it 30 days to give you a written statement that it has cured the violations in your notice and that no further violations will occur. You cannot sue for statutory damages for a CCPA violation if the business is able to cure the violation and gives you its written statement that it has done so, unless the business continues to violate the CCPA contrary to its statement.

In addition to maintaining “reasonable safeguards,” businesses need to be prepared, following a breach of nonencrypted and nonredacted personal information, to promptly respond to written notices from consumers concerning alleged violations.

Consumers, businesses, and service providers are encouraged to review the FAQs. As the AG notes, the FAQs “are not legal advice, regulatory guidance, or an opinion of the Attorney General.” So, while the FAQs can provide helpful general explanations of certain CCPA requirements, businesses and service providers, in particular, will want to obtain a more complete understanding of the statute and regulations with experienced counsel.

Is Personal Information of Retirement Plan Participants an ERISA Plan Asset?

A little more than one year ago, we reported on a settlement (Cassell et al. v. Vanderbilt University, et al.) involving the alleged wrongful use of personal information belonging to retirement plan participants, claimed to be “plan assets.” This year, similar claims have been made against Shell Oil Company in connection with its 401(k) plan. Retirement plan sponsors may begin seeing more of these claims and they might consider some strategies to head them off.

The essence of the allegations is that employers breach their fiduciary duties of loyalty and prudence when they permit plan service providers to profit from the use of plan assets – sensitive personal information of plan participants – for non-plan purposes. Citing several “cross-selling” activities of plan advisors and other service providers, the Shell plaintiffs claim downstream sales opportunities working with retirement plans are more plentiful through better access to plan participant data, and without the need to engage in “cold-calling.”

The Employee Retirement Income Security Act (“ERISA”) is the primary federal statute regulating employee benefit plans, including retirement plans. Currently, there are no express provisions in ERISA that prohibit the use of plan participant data for any particular purpose. However, as in the Vanderbilt case, the Shell plaintiffs rely on ERISA’s long-standing fiduciary duty provisions to support their claims concerning plan data:

  • ERISA’s fiduciary duty provisions require plan fiduciaries to discharge their duties with respect to a plan solely in the interest of the participants and beneficiaries and for the exclusive purpose of providing benefits to participants and their beneficiaries. 29 U.S. Code § 1104.
  • ERISA also prohibits plan fiduciaries from engaging in certain prohibited transactions, including transactions between the plan and a party in interest which the fiduciary knows constitutes a direct or indirect transfer to, or use by or for the benefit of a party in interest, of any assets of the plan. 29 U.S.C. §1106(a)(1).

For example, in Count IV of the complaint, the Shell plaintiffs alleged that fiduciary duties under § 1104(a)(1) include:

restricting its use of Confidential Plan Participant Data solely to carrying out its Plan recordkeeping role, not using the data for nonplan purposes

Recordkeeping, investment of contributions, and other tasks associated with retirement plan administration require access to large amounts of personal information, usually in electronic format. The risks involving such information are not limited to data breaches. As the Vanderbilt and Shell cases indicate, plan participants have become increasingly aware of the vulnerabilities associated with handling their data, as well as how their data are being used by plan vendors. The California Consumer Privacy Act (CCPA) and similar laws emerging in other states may increase this awareness. At least for the time being, employees of CCPA covered entities are entitled to a “notice at collection” that must outline the categories of personal information collected and the purpose(s) for which that information is used. Regardless of whether ERISA preempts the CCPA, increased communication about privacy of personal information may cause participants to be more sensitive to the collection and use of their information.

There are some measures plan sponsors can take to minimize the risk of these kinds of claims.

  • Consider relationships with plan service providers more carefully and earlier in the process. ERISA requires plan fiduciaries to “obtain and carefully consider” the services to be provided by plan service providers before engaging the provider. Whether that duty extends to assessing the provider’s data privacy and security practices is not clear. Nonetheless, during the procurement process, consider basic questions such as: Who has access to participants’ data? How much (and what) data does the provider have access to, and what are they doing with that data? Is the service provider sharing data with other third parties?
  • Limit by contract the ability of plan service providers to use plan participant data to market or sell to participants products unrelated to the retirement plan, unless the participants initiate or consent.

Of course, depending on the sponsor’s bargaining power, it may not be able to convince a vendor to agree to use participant data solely for plan administration purposes. However, sponsors should be sure their process includes these and other factors when selecting service providers and when evaluating their performance.

North Dakota Implements A New Student Privacy Law

North Dakota’s State Board of Higher Education recently implemented the Student Data Privacy and Security Bill of Rights (the “Policy”). The Policy, which went into effect on May 29, 2020, was created by the North Dakota Student Association to facilitate students’ access to their Personally Identifiable Information (“PII”), and to regulate the North Dakota University System and its institutions’ collection and use of PII.

Key Provisions Under The Policy

The Policy outlines students’ right to know the types of PII collected by the North Dakota University System and its institutions (“NDUS”), including how the data is used and stored. Under the Policy, NDUS must, to the extent possible, make information available concerning the types of PII provided to NDUS vendors and contractors.

Use of PII

NDUS is prohibited from selling, releasing, or disclosing “non-directory” information for commercial or advertising purposes; directory information constitutes a public record. While NDUS may use student PII for assessments and research related to accreditation, accountability, and policy implementation, it may not subject students to punitive consequences as a result of the findings from such use.

Third-Party Providers and Vendors

NDUS must responsibly engage with third-party providers of educational services and vendors to ensure that student PII disclosed to these third parties is protected by applicable industry standards. Generally, NDUS may not require students to disclose their PII to third-party service providers as a course requirement.

Record Review and Student Remedies

Students have the right to inspect, review, and challenge the accuracy and completeness of their academic records through a written request submitted under the NDUS institution’s request process. NDUS may limit the means of access to the educational record to ensure its proper security. These rights are also afforded to students under the Family Educational Rights and Privacy Act (“FERPA”). NDUS is also required to comply with FERPA, which includes honoring student requests to prevent disclosure of certain PII as “directory information.”

Students have the right to file complaints about violations under the Policy or other possible breaches of student data through an institutional grievance process.

Trends In State Student Privacy Laws

North Dakota follows the growing nationwide trend toward stronger state privacy laws related to student information. Since 2013, 40 states and Washington, D.C. have enacted legislation specific to student privacy. Most states, including New York and Vermont, have regulated student privacy only for K-12 education. North Dakota joins the few states that regulate the use of student PII in higher education. As K-12 and higher education institutions continue to expand their use of educational technology services to facilitate classroom instruction, the need to strengthen student privacy laws, particularly for higher education, will also continue to grow. In light of recent large-scale data breaches, educational institutions should continue to assess and enhance their data breach prevention and response procedures.

CCPA Litigation is on the Rise: Is Your Organization Prepared?

On January 1, 2020, the California Consumer Privacy Act (CCPA) took effect. Largely considered the most expansive U.S. privacy law to date, the CCPA has generated much anticipation over the impact it will have on the privacy litigation landscape. Although the California Attorney General’s (“AG”) enforcement authority does not begin until July 1, that has not stopped plaintiffs from pursuing CCPA litigation based on the January 1 effective date.

The CCPA authorizes a private cause of action against a covered business if a failure to implement reasonable security safeguards results in a data breach. The definition of personal information for this purpose is much narrower than the general definition of personal information under the CCPA. If successful, a plaintiff can recover statutory damages in an amount not less than $100 and not greater than $750 per consumer per incident or actual damages, whichever is greater, as well as injunctive or declaratory relief and any other relief the court deems proper. This means that plaintiffs in these lawsuits likely do not have to show actual harm or injury to recover.

As of today, there have been approximately 25 CCPA-related claims filed in state and federal court. Thus far, there are three common types of CCPA-related litigation:

  • Reasonable Security Safeguards. Unsurprisingly, given the limited nature of the CCPA’s private cause of action, most claims to date have been based on an alleged failure to implement reasonable security safeguards resulting in a data breach. For example, in February a putative class action was filed in the Northern District of California, San Francisco Division, against a supermarket and its e-commerce platform provider, alleging negligence and a failure to maintain reasonable safeguards, among other things, leading to a data breach. The complaint specifically seeks recovery under the CCPA – Civ. Code § 1798.100, et seq. It is worth noting that several complaints alleging a failure to implement reasonable security safeguards were filed in light of the increase in videoconferencing platform usage in response to COVID-19. In addition, at least one complaint is based on a data breach that occurred before the CCPA’s January 1 effective date. And yet another claim (the first CCPA case filed in federal court) was brought by a non-California resident. While many of these cases may face viability issues moving forward, they indicate the eagerness of plaintiffs and their counsel to pursue relief under the CCPA, and the likely uptick in CCPA litigation in the coming years.
  • Consumer Rights. The CCPA does not provide consumers with a private cause of action if their rights (e.g., right to notice, right to delete, right to opt out) under the statute are violated. This, however, has not stopped plaintiffs from filing claims on the basis that their rights under the CCPA have been violated. For example, in one case, the plaintiff alleged that the defendant violated the CCPA by failing to provide consumers notice of their right to opt out of the sale of their personal information to third parties, and by failing to provide notice of its personal information collection and use practices.
  • CCPA References. In several cases, although the plaintiff is not seeking relief on the basis of a CCPA violation, the CCPA is still mentioned in connection with a different claim. For example, in a case against a videoconferencing provider, the CCPA is mentioned in a claim under California’s Unfair Competition Law (Cal. Bus. & Prof. Code), highlighting that the defendant failed to provide accurate disclosures to users on its data sharing practices and failed to implement reasonable security measures, but the complaint never explicitly alleges that the defendant violated the CCPA.

CCPA litigation is only ramping up, and organizations need to be prepared. As data breaches continue to plague businesses across the country, including those subject to the CCPA, ensuring reasonable safeguards are in place may be the best defense. To learn more about the CCPA’s obligations and how to implement policies and procedures to ensure compliance, check out Jackson Lewis’s CCPA FAQS for Covered Businesses. For more information on what businesses can be doing to ensure they have reasonable safeguards to protect personal information, review our post on that topic.
