Cities are stepping up their efforts to combat the COVID-19 Delta variant. New York City, New Orleans, and San Francisco have all announced requirements for certain persons to produce evidence of COVID vaccination status in order to patronize or work indoors at certain establishments. Adding to an already complex patchwork of COVID-related regulation – screening, social distancing, contact tracing, paid time off, recordkeeping, etc. – certain businesses will need to absorb another layer. But while doing so, they should avoid creating new data privacy and security risks.

In general, each of the cities requires businesses in certain industries – such as food services (restaurants, bars), fitness, and entertainment (hotels, casinos, music halls) – to require employees, patrons, customers, contractors, and others to provide proof of vaccination before going indoors at these establishments. In some cases, proof is required even for certain outdoor activities. For example, in New Orleans, the requirement applies to outdoor events of more than 500 people if total attendance is more than 50% of the outdoor venue’s capacity.

There are several exceptions to these requirements. For example:

  • Persons under 12 do not have to provide proof of vaccination.
  • In New Orleans, a negative PCR test taken within 72 hours of access can be provided in lieu of vaccination proof. This is not permitted in San Francisco, which requires proof of full vaccination. See FAQs for COVID-19 Health Order C19-07y. NYC requires proof of at least one dose of a COVID-19 vaccine.
  • San Francisco businesses may allow patrons wearing a well-fitted mask to use a restroom indoors without vaccination verification. There is a similar exception in NYC.
  • If an individual in NYC is unable to show proof of vaccination due to a disability, the business must engage in a cooperative dialogue to see if a reasonable accommodation is possible. Reasonable accommodation is not required if the individual would create a direct threat to other customers or employees, or impose an undue hardship on the business. A similar approach is required for employees.

A significant issue for covered businesses, however, is whether they must collect any additional information in order to comply, and how that information should be safeguarded, retained, and disclosed, as necessary. Businesses will want to have sufficient proof that they have complied to avoid an enforcement action. In New York City, when enforcement begins on September 13, 2021, noncompliant establishments may be subject to a fine of $1,000, or more for repeated violations. But this does not mean they need to collect sensitive personal information.

The cities provide several ways for individuals to communicate proof of COVID vaccination.

  • In New Orleans, individuals can use the LA Wallet app; an original, digital photograph, or photocopy of their CDC vaccination card (both sides); or an official vaccine record issued by another state, a foreign nation, or the World Health Organization.
  • In San Francisco, one can show their CDC Vaccination Record Card (CRC), an image of the card saved to one’s smartphone, a digital COVID-19 vaccine record issued by the State of California, or an approved private app.
  • In NYC, any of the following can serve as a Key to NYC: one’s CDC vaccination card, the NYC COVID Safe App, the New York State Excelsior Pass app, an official vaccine record, or a photo or hard copy of an official vaccination record of a vaccine administered outside the U.S.

In NYC, businesses also must check the ID of each person required to show proof of vaccination who appears to be 18 or older to confirm the individual is the same person as listed on the proof of vaccination. The ID must contain either the person’s name and picture, or name and date of birth. However, ID checks are not required for individuals who can be matched against information the business already maintains, such as employees.

Do I need to check other identification besides proof of vaccination?

Yes. Identification bearing the same identifying information as the proof of vaccination must also be displayed. (underline added)

See NYC’s Key to NYC FAQs. San Francisco has a similar requirement. See San Francisco FAQs (“Businesses subject to this new requirement must cross-check proof of vaccination against each patron’s photo identification.”)

Some of these methods raise privacy and data security issues for individuals, especially for those choosing to use apps. Pennsylvania is just one state reeling from a data breach involving a COVID app that exposed the medical information of thousands of its citizens. But there are significant questions for businesses: what information, if any, do they have to collect, and what steps should they take to process and safeguard that information?

NYC’s Key to NYC FAQs provides:

Who must display proof of vaccination?

Employees, patrons, interns, contractors, and volunteers at Key to NYC establishments must display proof of vaccination. Businesses may keep a record of people who have previously provided proof of vaccination, rather than require the proof be displayed every time the person enters the establishment. (underline added)…

What documents do I need to maintain?

You must have a written record that describes how you will verify proof of vaccination for staff and patrons. The record must be on site and available for inspection.

Based on the above, covered NYC businesses are not required to collect information from individuals about their vaccination status. They only need to document how they will verify proof. (NYC provides a sample written protocol.) The guidance suggests, however, that businesses could maintain a record of persons who already confirmed vaccination status for ease of administration. But doing so arguably would create a record of confidential personal information that the business must then safeguard.

New Orleans and San Francisco also do not require businesses to collect proof of vaccination information, although businesses in San Francisco should assess whether the California Consumer Privacy Act (CCPA), as amended, applies and whether additional compliance measures should be implemented.

So, the good news is that while there are some additional COVID-related compliance requirements in these cities, covered businesses should not have to collect personal information from customers or employees in most cases to meet them. When implementing these measures, businesses should consider instructing the employees who check proof of vaccination to avoid collecting or recording personal information. Of course, in cases where an employee or patron seeks a reasonable accommodation, the business may need additional information to process that request. In that case, there should be procedures in place to minimize the information needed, to safeguard what is collected, and to limit disclosure of what is retained.

Following a series of major ransomware attacks, including against Colonial Pipeline, which provides the East Coast with 45 percent of its gasoline, jet fuel, and diesel, President Biden last week issued a National Security Memorandum (“the Memorandum”) intended to improve cybersecurity for critical infrastructure systems. The Memorandum follows up on the Executive Order the Biden Administration issued in May, immediately after the Colonial Pipeline cyberattack, entitled “Improving the Nation’s Cybersecurity” (EO). The EO made a clear statement of the Administration’s cybersecurity policy:

“It is the policy of my Administration that the prevention, detection, assessment, and remediation of cyber incidents is a top priority and essential to national and economic security.  The Federal Government must lead by example.  All Federal Information Systems should meet or exceed the standards and requirements for cybersecurity set forth in and issued pursuant to this order.”

In the latest Memorandum, the Administration posited that the country’s critical infrastructure is a responsibility of both the government and private owners/operators of that infrastructure.  Any threat to that infrastructure is deemed a threat to the country’s national and economic security.  Critical infrastructure includes dams, energy, critical manufacturing, food and agriculture, and water and wastewater systems.

As a result, the Administration established an Industrial Control Systems Cybersecurity Initiative (the “Initiative”), a voluntary, collaborative effort between the federal government and members of the critical infrastructure community aimed at improving cybersecurity standards for companies that provide critical services.

The primary objective of the Initiative is to encourage, develop, and enable deployment of a baseline of security practices, technologies and systems that can provide threat visibility, indications, detection, and warnings that facilitate response capabilities in the event of a cybersecurity threat.  According to the President’s Memo, “we cannot address threats we cannot see.”

The Initiative had already been undertaken with the electricity subsector and will now be extended to natural gas pipelines, followed by the water and wastewater and chemical sectors later this year.  According to news reports, more than 150 power industry utilities have enrolled in the voluntary program.

The Initiative will be coordinated by the Department of Homeland Security, which is directed to issue preliminary performance goals for control systems in all sectors no later than September 22, 2021, followed by sector-specific system goals within one year.  These performance goals aim to serve as clear guidance to owners and operators about cybersecurity practices and postures that the American people can trust and should expect for such essential services, to “protect national and economic security, as well as public and health safety.”

The U.S. government continues to ramp up efforts to strengthen its cybersecurity, and we can expect states to continue to legislate and regulate in this area. Businesses across all sectors will likely experience pressure to evaluate their data privacy and security threats and vulnerabilities and adopt measures to address their risk and improve compliance.

The complete Memorandum can be viewed by clicking here.

Facial recognition technology has become increasingly popular in recent years in the employment and consumer space (e.g., employee access, passport check-in systems, payments on smartphones), and in particular during the COVID-19 pandemic. As the need arose to screen persons entering a facility for symptoms of the virus, including elevated temperature, thermal cameras, kiosks, and other devices embedded with facial recognition capabilities were put into use. However, many have objected to the use of this technology in its current form, citing problems with its accuracy, and now, more alarmingly, there is growing concern that “Faces Are the Next Target for Fraudsters,” as summarized by a recent article in the Wall Street Journal (“WSJ”).

In the last year, there has been an uptick in hackers trying to “trick” facial recognition technology in a myriad of settings, such as fraudulently claiming unemployment benefits from state workforce agencies. The majority of states now use facial recognition technology to verify eligible citizens, ironically enough, in order to prevent other types of fraud. As discussed in the WSJ article, the firm ID.me Inc., which provides facial recognition software to 26 states to help verify individuals eligible for unemployment benefits, saw more than 80,000 attempts to fool government identification facial recognition systems between June 2020 and January 2021.  Hackers of facial recognition systems use a myriad of techniques, including deepfakes (AI-generated images), special masks, or even holding up images or videos of the individual the hacker is looking to impersonate.

Fraud is not the only concern with facial recognition technology.  Despite its appeal for employers and organizations, there are concerns regarding the accuracy of the technology, as well as significant legal implications to consider.  First, there are growing concerns regarding the accuracy and biases of the technology.  A recent report by the National Institute of Standards and Technology studied 189 facial recognition algorithms, considered a “majority of the industry.”  The report found that most of the algorithms exhibit bias, falsely identifying Asian and Black faces 10 to more than 100 times more often than White faces.  Moreover, false positives are significantly more common in women than in men, and more elevated in the elderly and in children than in middle-aged adults.

In addition, several U.S. localities have already banned the use of facial recognition for law enforcement, other government agencies, and/or private and commercial use.  The City of Baltimore, for example, recently banned the use of facial recognition technologies by city residents, businesses, and most of the city government (excluding the city police department) until December 2022.  Council Bill 21-0001 prohibits persons from “obtaining, retaining, accessing, or using certain face surveillance technology or any information obtained from certain face surveillance technology.” Likewise, in September 2020, the City of Portland, Oregon, became the first city in the United States to ban the use of facial recognition technologies in the private sector, citing, among other things, a lack of standards for the technology and wide ranges in accuracy and error rates that differ by race and gender. Failure to comply can be painful: the Ordinance provides persons injured by a material violation a cause of action for damages or $1,000 per day for each day of violation, whichever is greater.

And finally, companies looking to implement facial recognition technologies must consider their obligations under laws such as the Illinois Biometric Information Privacy Act (BIPA) and the California Consumer Privacy Act (CCPA). The BIPA addresses a business’s collection of biometric data from both customers and employees, including, for example, facial recognition, fingerprints, and voiceprints.  The BIPA requires informed consent prior to collection of biometric data, mandates protection obligations and retention guidelines, and creates a private right of action for individuals aggrieved by BIPA violations, which has resulted in a flood of BIPA class action litigation in recent years.  Texas, Washington, and California have similar requirements; New York is considering a BIPA-like privacy bill; and NYC recently created BIPA-like requirements for retail and hospitality businesses concerning biometric collection from customers. Additionally, states are increasingly amending their breach notification laws to add biometric information to the categories of personal information that require notification, including 2020 amendments in California, D.C., and Vermont. Moreover, there are a myriad of data destruction, reasonable safeguards, and vendor requirements to consider, depending on the state, when collecting biometric data.

Takeaway

Facial recognition and other biometric data technologies are booming, and they continue to reach into facets of life that were hard even to contemplate. The technology brings innumerable potential benefits as well as significant data privacy and cybersecurity risks. Organizations that collect, use, and store biometric data increasingly face compliance obligations as the law attempts to keep pace with technology, cybersecurity crimes, and public awareness of data privacy and security. Creating a robust privacy and data protection program, or regularly reviewing an existing one, is a critical risk management and legal compliance step.

Patient record requests can be a significant administrative burden for health care providers. An OCR enforcement initiative and a new federal law give providers more reason to get this process right.  We summarize these rules here.

Since the Health Insurance Portability and Accountability Act of 1996 (HIPAA) Privacy Rule became effective in 2003, it has generally required covered entities to provide patients timely access to their medical records. However, continued concerns over the level of patient access to records are driving increased emphasis, heightened enforcement activity, and new laws, including the 21st Century Cures Act, to ensure individuals have easy access to their health information.  A critical goal of these efforts is to empower patients to be more in control of decisions regarding their health and well-being. By helping individuals have ready access to their health records, according to OCR, they are better positioned:

to monitor chronic conditions, adhere to treatment plans, find and fix errors in their health records, track progress in wellness or disease management programs, and directly contribute their information to research.

The “Right to Access” under HIPAA established a floor for patients to access their health records, which could be exceeded by more stringent state laws. In 2019, the OCR commenced its Right of Access Initiative, an enforcement priority to support individuals’ right to timely access to their health records at a reasonable cost. At least one study found providers are struggling to fully comply. Nonetheless, the OCR has announced nearly 20 enforcement actions under its Right of Access Initiative – a full list of enforcement actions is available on the OCR website. Monetary settlements to date have ranged from $3,500 to $200,000. In addition, the OCR resolution agreements require the covered entities to develop corrective action plans to prevent further violations.

The Cures Act significantly heightens the obligations under the HIPAA right of access. Its Interoperability, Information Blocking, and ONC Health IT Certification Program provisions seek to minimize interference with the ability of authorized persons or entities to access, exchange, or use electronic health information (EHI) – that is, to eliminate impermissible “information blocking.” More specifically, the Cures Act defines information blocking as business, technical, and organizational practices that prevent or materially discourage the access, exchange, or use of EHI when an actor knows, or (for some actors, such as electronic health record vendors) should know, that these practices are likely to interfere with access, exchange, or use of EHI.  The law empowers the HHS Office of Inspector General (OIG) to investigate claims of information blocking and provides referral processes to facilitate coordination with the OCR. The goal of these provisions is to support seamless, secure access, exchange, and use of EHI.

In the nearly 20 years since the HIPAA Privacy Rule became effective, technological changes have made even greater access rights possible, including access in real time and on demand. Providers, even certain providers not subject to HIPAA, will need to ensure they have compliant policies and procedures for giving patients access to their records, and for avoiding enforcement actions, headaches, and penalties.

Effective October 1, 2021, Connecticut becomes the third state with a data breach litigation “safe harbor” law (Public Act No. 21-119), joining Utah and Ohio. In short, the Connecticut law prohibits courts in the state from assessing punitive damages in data breach litigation against a covered defendant that created, maintained, and complied with a cybersecurity program that meets certain requirements. Cyberattacks are on the rise – think Colonial Pipeline, Kaseya, JBS, and others – with ransomware attacks up 158 percent from 2019-2020 in North America.

The hope is this law will provide covered entities of all sizes an incentive to implement stronger controls over their information systems. According to Homeland Security Secretary Alejandro Mayorkas:

As a matter of fact, small businesses comprise approximately one-half to three-quarters of the victims of ransomware.

So, what can “covered entities” in Connecticut do to at least try to protect themselves from punitive damages if sued following a data breach?

First, it is important to note that the law applies to “covered entities” – defined to include a business that “accesses, maintains, communicates or processes personal information or restricted information in or through one or more systems, networks or services located in or outside this state.”

The definition of “personal information” tracks the definition of the same term in Connecticut’s recently updated data breach notification law. But, the law adds the term “restricted information” to the mix, defined to include:

any information about an individual, other than personal information or publicly available information, that, alone or in combination with other information, including personal information, can be used to distinguish or trace the individual’s identity or that is reasonably linked or linkable to an individual, if the information is not encrypted, redacted or altered by any method or technology in such a manner that the information is unreadable, and the breach of which is likely to result in a material risk of identity theft or other fraud to a person or property.

PA 21-119 prohibits superior courts from assessing punitive damages against a covered entity defendant in any tort action brought under Connecticut law or in Connecticut courts alleging a failure to implement reasonable cybersecurity controls that resulted in a data breach involving personal information or restricted information, provided that:

[the covered entity] created, maintained and complied with a written cybersecurity program that contains administrative, technical and physical safeguards for the protection of personal or restricted information and that conforms to an industry recognized cybersecurity framework.

Examples of the frameworks listed in the statute include NIST SP 800-171, NIST SP 800-53, and the Center for Internet Security’s “Critical Security Controls for Effective Cyber Defense.” Covered entities regulated under federal or state laws, such as the Security Rule under the Health Insurance Portability and Accountability Act of 1996 (HIPAA), can rely on compliance with the current version of those regulatory frameworks. Should these frameworks change, covered entities have six months to conform to the changes.

Additionally, the cybersecurity program must be designed to:

  • protect the security and confidentiality of personal and restricted information;
  • protect against any threats or hazards to the security or integrity of such information; and
  • protect against unauthorized access to and acquisition of such information that would result in a material risk of identity theft or other fraud to the individual to whom the information relates.

Importantly, covered entities should consider how the framework they use covers the personal and restricted information they maintain. For example, a HIPAA covered entity or business associate relying solely on the HIPAA security rule could mean that its cybersecurity program reaches only “protected health information” as defined by HIPAA, but not personal and restricted information as defined in PA 21-119.

The Connecticut law, however, permits the program to be shaped by several factors including (i) the size and complexity of the covered entity; (ii) the nature and scope of the activities of the covered entity; (iii) the sensitivity of the information to be protected; and (iv) the cost and availability of tools to improve information security and reduce vulnerabilities.

This law, similar to the measures in Utah and Ohio, incentivizes heightened protection of personal data while providing a safe harbor from certain claims for organizations facing data breach litigation.  Creating, maintaining, and complying with a robust data protection program is a critical risk management and legal compliance step, and one that might provide protection from litigation following a data breach.

Individuals who serve as fiduciaries to their company’s retirement plan often feel they may not be sufficiently informed or qualified to make prudent decisions for the plan. They might ask themselves: “How do I know which are prudent investments?” or “What amount of plan fees is ‘reasonable’?” Now, the DOL is requiring plan fiduciaries to prudently assess cybersecurity, possibly taking many plan fiduciaries further outside their comfort zones.

We started to see this developing in Episode 1 of our Musings series, when a new member of the Retirement Plan Committee expressed concerns about being qualified to help make decisions about the new DOL cybersecurity guidance. Knowing the Retirement Plan Committee maintains a robust training program, the Committee Chair reassured the New Committee Member that some upcoming training might help…

Retirement Plan Committee Chair: So, what did you think of the training?

New Committee Member: It was long! And, I have to admit, when I saw the agenda showing that our ERISA attorney was going to be presenting for 90 minutes, I immediately went for a second cup of coffee! But I was wrong. The presenter was quite good at putting complex and unfamiliar concepts into easy-to-understand, bite-sized pieces. She certainly calmed some of the concerns I expressed to you last week, while helping me to see how the problem of cybersecurity has become interwoven with our fiduciary duties.

Committee Member A: I agree 100%. Until today I did not fully understand the scope of our duty as fiduciaries. I thought protecting assets in the plan meant simply making good investments and controlling fees.

Committee Member B: Yes, but did you hear what the attorney said? It is not a matter of “if” but “when” we have a breach. So, why spend all this time if we are just going to have a breach anyway?

Retirement Plan Committee Chair: Maybe, but the message was not that we have to be perfect, but prudent. We have to do our due diligence when making decisions, but we can’t guarantee a result.

Committee Member B: The attorney explained we have to make sure that nobody steals money from participant accounts.  This is like playing cops and robbers, but now the robbers can be thousands of miles away, stealing with a few keystrokes. How are we to cope with this?

New Committee Member: That is not exactly what I heard. I heard that we need to be proactive, not reactive. We need to think more critically about the risk to the plan’s data and its assets. We have to consider the kinds of safeguards that are in place at the company and with any vendor that provides services to the plan. We need to learn more about what those safeguards should be, and maybe even bring in some expertise to help us figure that out. We can’t just wing it! And our own IT team may not have this expertise and be on top of the latest types of attacks.

But, she cautioned, even that may not be enough, because no set of safeguards is perfect. It’s like building a moat around the plan’s assets, but also realizing the attackers are sophisticated and can find their way around the drawbridge and the moat.  So, we need to be prepared to respond to the inevitable data breach.

I feel better knowing that meeting our fiduciary duty does not require us to be perfect, but we also have some work to do, including to document our process.

Committee Member A: Exactly. You are right. Before the meeting I was totally confused and had visions of cyberattacks from Mars. Counsel explained the situation and provided concrete examples. It was helpful knowing we could develop a road map to follow.  I feel better that the situation can be addressed if we take the time and effort to understand it. She laid it out step by step, identifying some common shortfalls and strategies for mitigation.

Retirement Plan Committee Chair: There certainly is a learning curve here, but it sounds like we are on our way. Tonight was the first step in prudently addressing this new issue, and we will build on it. There is a lot to unpack here. For example, based on the presentation, it is not just about passwords, firewalls, and encryption; we also have to consider identity verification.

We all have approved distributions and withdrawals requested by participants. Is our process good enough to tell a real request from a fraudulent one? How much time does each of us actually take to review requests, question the frequency of requests, or consider where they are coming from?

New Committee Member: The attorney said she was going to be at our next meeting, is that right?

Retirement Plan Committee Chair: Yes, that’s right. She may bring an IT firm in to help us further and to begin shaping a plan to address this issue.

Committee Member B: That is good because I spoke with one of my friends who serves on his retirement plan committee, and the DOL has already started auditing plans on these issues. I volunteered to serve on this committee but am concerned about liability. I want to do more to protect myself and the plan.

The Committee appears to be moving in the right direction. They realize now that they cannot be experts in all aspects of plan administration, and that some basic training can go a long way to help them make better, more prudent decisions.  But they also realize that they need a plan to tackle the process of assessing cybersecurity risks for plan assets and plan data.

In April, we posted about the U.S. Department of Labor’s (DOL) Employee Benefits Security Administration (EBSA) issuing cybersecurity guidance for employee retirement plans. That was April 14, 2021. Shortly thereafter, the DOL updated its audit inquiries to include probing questions for plan fiduciaries about their compliance with the “hot off the press” agency guidelines.

So, what do those inquiries look like?

In short, the DOL is asking plan sponsors to produce:

all documents relating to any cybersecurity or information security programs that apply to the data of the Plan, whether those programs are applied by the sponsor of the Plan or by any service provider of the Plan

For plan fiduciaries that are new to cybersecurity and have not received a DOL audit in the last few months, it may not be clear what documents or materials the DOL is expecting. The DOL fleshes out its general inquiry with a laundry list of items. Here are some examples of those more specific requests:

  • All policies, procedures, or guidelines relating to such things as:
    • The implementation of access controls and identity management, including any use of multi-factor authentication.
    • The processes for business continuity, disaster recovery, and incident response.
    • Management of vendors and third-party service providers, including notification protocols for cybersecurity events and the use of data for any purpose other than the direct performance of their duties.
    • Cybersecurity awareness training.
    • Encryption to protect all sensitive information, whether stored or in transit.

The list above is not complete, but it makes clear the DOL is looking for information about what plan fiduciaries are doing to safeguard their own information and systems to address privacy and security, not just that of their service providers. Some plan fiduciaries might be wondering what policies, procedures, or guidelines designed to protect plan data should look like. There are many frameworks to consider when adopting reasonable safeguards. Examples include guidance published by the National Institute of Standards and Technology, the New York SHIELD Act, the Massachusetts data security regulations, the privacy and security standards under HIPAA, etc.

In addition to policies, procedures, and guidelines summarized above, the DOL also seeks in its audit request copies of other materials, some of which are listed below.

  • “All documents and communications relating to any past cybersecurity incidents.”

So, evidently, the DOL would like to discover whether the plan had a prior cybersecurity incident. It is unclear whether this request refers only to “breaches of security” or similar terms as defined under state breach notification laws which require notification, or mere “incidents” that do not rise to the level of a reportable breach.

  • “All documents and communications describing security reviews and independent security assessments of the assets or data of the Plan stored in a cloud or managed by service providers.”

Here the DOL makes a distinction between plan “assets” and plan “data,” seeking security reviews and assessments relating to both. Recent litigation called into question whether plan data could be considered a “plan asset.” In one of the most recent cases, Harmon v. Shell Oil Co., 2021 WL 1232694 (S.D. Tex. Mar. 30, 2021), the U.S. District Court for the Southern District of Texas rejected the argument that plan assets include plan data.

  • “All documents describing security technical controls, including firewalls, antivirus software, and data backup.”

An important note here is that it may not be enough to say, “we are doing this,” or “we have implemented antivirus and firewalls to protect our information systems.” The DOL is looking for documents that describe those safeguards and controls.

  • “All documents and communications from service providers relating to their cybersecurity capabilities and procedures.”
  • “All documents and communications from service providers regarding policies and procedures for collecting, storing, archiving, deleting, anonymizing, warehousing, and sharing data.”
  • “All documents and communications describing the permitted uses of data by the sponsor of the Plan or by any service providers of the Plan, including, but not limited to, all uses of data for the direct or indirect purpose of cross-selling or marketing products and services.”

The DOL would like to see how plan fiduciaries are communicating with their service providers to assess service provider cybersecurity risk, as well as the documents and other materials from service providers concerning the processing of plan data. Importantly, the DOL is not just looking for cybersecurity related information. The agency apparently wants to know how service providers are permitted to use plan data. Plan fiduciaries will want to think carefully about their current practices, including their communications, when selecting and working with service providers.

No plan fiduciary wants to experience a DOL audit of their retirement plans, or any other audit for that matter. But cybersecurity clearly is a new and important area of interest for the DOL and plan fiduciaries need to be prepared to respond. Feel free to contact us if you would like to discuss audit readiness concerning cybersecurity for your plans.

Colorado is officially the third U.S. state to enact comprehensive privacy legislation, following California and Virginia. The Colorado General Assembly passed the Colorado Privacy Act (CPA), Senate Bill 21-109, on June 8, 2021, and Governor Jared Polis signed it into law on July 7, 2021.

The Colorado Privacy Act takes effect July 1, 2023, six months after the Virginia Consumer Data Protection Act (VCDPA) and California Privacy Rights Act (CPRA).

Applicability

The CPA provides new obligations on Controllers—that is, any entity that (i) determines the purposes and means of processing personal data, (ii) conducts business in Colorado or produces or delivers commercial products or services intentionally targeted to residents of the state, and (iii) either:  (a) controls or processes the personal data of more than 100,000 Colorado residents per year or (b) derives revenue from selling the personal data of more than 25,000 Colorado residents.

It also provides new rights to Consumers—or, any individual who is a Colorado resident acting in an individual or household context.

The CPA does not apply to data that is subject to other federal privacy laws such as the Health Insurance Portability and Accountability Act (HIPAA), the Children’s Online Privacy Protection Act (COPPA), the Gramm-Leach-Bliley Act (GLBA), the Family Educational Rights and Privacy Act (FERPA), and the Securities Exchange Act of 1934. The CPA also exempts employment data, higher education institutions, nonprofits, state and local governments, and public utility customer records (so long as they are not sold).

Consumer Rights under the Colorado Privacy Act

The rights the CPA affords to Consumers are similar to those in the VCDPA and CCPA/CPRA.

In broad strokes, the CPA regulates the use of and disclosures surrounding “personal data,” which includes information that is linked, or reasonably linkable, to an identifiable person, and “sensitive data,” which includes data revealing racial or ethnic origin, religious beliefs, a mental or physical health condition, sexual orientation, citizenship, genetic or biometric data, or personal data from a known child.

The CPA empowers Consumers with new controls over their data, including the right to:

  1. opt out of the processing of certain personal data;
  2. access personal data (up to twice per calendar year);
  3. correct inaccurate data;
  4. delete personal data; and
  5. data portability.

Controller Duties under the Colorado Privacy Act

Similarly, the CPA creates duties for Controllers, including the:

  • Duty of transparency;
  • Duty of purpose specification;
  • Duty of data minimization;
  • Duty to avoid secondary use;
  • Duty to avoid unlawful discrimination; and
  • Duty regarding sensitive data.

In addition, while Consumers may request access to their personal data, Controllers may not require that a Consumer create a new account in order to exercise this right (or retaliate with increased cost or decreased availability of a product or service ).  When responding to Consumer data requests, Controllers must:

  • Take action on the Consumer’s request without undue delay and within 45 days of receiving the request—with few exceptions.
  • Develop an internal process for Consumers to appeal refusals of data requests.
  • Notify the Consumer that it may contact the Colorado Attorney General if the Consumer has concerns about the result of the response and outcome of appeal.

Controllers must also conduct data protection assessments for each processing activity involving a heightened risk of harm to Consumers, including:

  • The sale of personal data;
  • Processing of sensitive data; or
  • Processing personal data for targeted advertising if it could lead to unfair or deceptive treatment or have a disparate impact on Consumers, financial or physical injury, physical or other intrusion upon seclusion, or other substantial injury

Controllers must present these data protection assessments to the CO Attorney General upon request.

Enforcement

One key difference between the CPA and California and Virginia privacy laws is that the CPA is enforceable by both the district attorney and office of the attorney general. This broadened enforcement mechanism could lead to greater scrutiny of affected businesses.

Unlike the CCPA, the CPA does not include a private right of action. The attorney general or district attorney may, however, institute a civil action or pursue injunctive relief. Failure to comply with the CPA may be considered a deceptive trade practice. Financial penalties are left to the discretion of the courts.

Key Takeaways

Colorado may be only the third state to enact comprehensive privacy legislation, but other states will likely be soon to follow. Differences between the CPA, VCDPA, and CPRA are subtle, and there are plenty of technical details to sift through. While this may ease the burden of compliance, companies still need to ensure their data collection activities fully comply with the provisions of each privacy act.

And with more states likely to follow suit, data privacy compliance will only get more complicated.

Please contact a Jackson Lewis attorney with any questions.

* Jackson Biesecker is a law clerk in our Privacy, Data & Cybersecurity Practice Group that contributed substantially to this article.

 

 

Globalization, compliance, and the growth in outsourcing have created a myriad of cross-border data transfer scenarios. These scenarios include marketing to and servicing customers, assessing global compliance with diversity and including goals, and outsourcing back office business functions. However, the emergence of far reaching data privacy regulation, such as the EU General Data Protection Regulation (“GDPR”), has erected roadblocks to the free flow of personal data, particularly from the European Economic Area (“EEA”) to countries without an EU adequacy decision, including the United States. Standard Contractual Clauses (“SCCs”) are one way to navigate the roadblocks, but the SCCs are not as simple as circulating a form agreement.

The recent Schrems II decision further complicated the flow of information when it invalidated the EU-U.S. Privacy Shield, and the original SCCs were unable to adequately address the EU Commission’s concerns about the protection of personal data. However, SCCs have played an increased role as an appropriate safeguard for transferring personal data. For U.S. companies sending or receiving personal data from the EEA, these new clauses will help accommodate an expanded set of transfer arrangements including processor to processor and processor to controller. Among other changes, the new SCCs address the data importer’s duties in situations where applicable laws affect its ability to comply with the SCCs, an issue raised in the Schrems II decision.

In short, the new SCCs are contractual terms adopted in part by the EU Commission to facilitate the transfer of personal data post-Schrems II. The SCCs are designed to ensure a non-GDPR importer has implemented appropriate safeguards to protect the data, and that data subjects have enforceable rights and effective legal remedies. The FAQs below summarize the new SCCs.

  1. What are the “new” SCCs?

On June 4, 2021, the EU Commission adopted “new” modernized SCCs to replace the 2001, 2004 and 2010 SCCs currently in use.

  1. How are the new SCCs different?

The EU Commission updated the SCCs to address more complex processing activities, the requirements of the GDPR, and the Schrems II decision. These clauses are modular so they can be tailored to the type of transfer.

  1. What types of data transfers are subject to the new SCCs?

The original SCCs apply to controller-controller and controller-processor transfers of personal data from the EU to countries without a Commission adequacy decision. The updated clauses are expanded to also include processor-processor and processor-controller transfers.

  1. Can multiple parties execute the SCCs?

Yes. While the existing SCCs were designed for two parties, the new clauses can be executed by multiple parties. The clauses also include a “docking clause” so that new parties can be added to the SCCs throughout the life of the contract.

  1. What obligations does a data importer have?

The obligations of the data importer are numerous and include, without limitation:

  • documenting the processing activities it performs on the transferred data,
  • notifying the data exporter if it is unable to comply with the SCCs,
  • returning or securely destroying the transferred data at the end of the contract,
  • applying additional safeguards to “sensitive data,”
  • adhering to purpose limitation, accuracy, minimization, retention, and destruction requirements,
  • notifying the exporter and data subject if it receives a legally binding request from a public authority to access the transferred data, if permitted, and
  • challenging a public authority access request if it reasonably believes the request is unlawful.
  1. Do the new SCCs require a risk assessment?

Yes. The SCCs require the data exporter to warrant there is no reason to believe local laws will prevent the importer from complying with its obligations under the SCCs. In order to make this representation, both parties must conduct and document a risk assessment of the proposed transfer.

  1. What does the risk assessment require?

The parties should review the facts and circumstances of the transfer (e.g., the nature of the data, duration of transfer, purpose for processing, storage location of the data, intended onward transfers), the relevant laws and practices of the importer’s jurisdiction, the existence or absence of public authority requests for access to the data in the importer’s jurisdiction, and any reasonable safeguards designed to supplement the protections of the SCCs. This documented assessment must be completed before fully executing the SCCs and it must be made available to the Supervisory Authority on request.

  1. Are the new SCCs negotiable?

No. The new SCCs cannot be negotiated, amended, or edited. However, additional terms can be included as long as they do not contradict or conflict with the underlying SCCs or the data subject’s privacy rights. Of course, those additional terms may be negotiated. It will also be important to consider what effect the new SCCs have on existing service agreement terms and conditions.

  1. What are the SCCs Annexes?

The SCCs include an Appendix with three Annexes for the parties to complete: Description of Transfer, Security Measures, and Sub-processors. These Annexes require detailed information about the transfer, particularly with respect to technical and organizational measures the importer will use to safeguard the data.

  1. Do the new SCCs apply to U.S. organizations that are not subject to the GDPR?

Yes, if a data exporter transfers data from the EU to a U.S. organization, the U.S. organization must execute the new SCCs unless the parties rely on an alternate transfer mechanism or an exception exists. This applies regardless of whether the U.S. company receives or accesses the data as a data controller or processor.

  1. When would a U.S. organization use the new SCCs to transfer or receive personal data from the EU?

A U.S. organization that is subject to the GDPR based on an “establishment” in the EU may transfer data from the EU to a data importer in the U.S. (or other country without an EU adequacy decision) in reliance on the SCCs unless the importer is also subject to the GDPR, the parties rely on an alternate transfer mechanism, or an exception applies. For example, assume the U.S. organization’s EU office transfers customer personal data to a third-party billing vendor located in the U.S. or transfers employee data to a compensation consultant in the U.S. In this case, if the vendor is not subject to the GDPR, the U.S. organization can enter into SCCs with that vendor to meet its obligations under the GDPR with regard to that transfer.

Perhaps a U.S. organization is not established in the EU but is subject to the GDPR because it offers goods or services to data subjects located in the EU or monitors their behavior in the EU. This organization may need to transfer the personal data of its EU customers to a third-party shipping vendor located in the U.S. It may transfer such data in reliance on the SCCs, unless the importer (the shipping vendor) is subject to the GDPR, the parties rely on an alternate transfer mechanism, or an exception applies.

Even in cases where a U.S. organization is not subject to the GDPR, but receives personal data in the U.S. from the EU or accesses personal data stored in the EU from the U.S., it must execute SCCs with the data exporter unless the parties rely on an alternate transfer mechanism or an exception exists. This applies regardless of whether the U.S. company is receiving or accessing the data as a data controller or data processor. For example, where a U.S. organization receives personal data as a controller for its own processing purposes (e.g., a U.S. ), the parties can execute controller – controller SCCs. Alternatively, if the U.S. organization receives personal data as a processor for the data exporter’s processing purposes (e.g., a U.S. marketing company receives customer personal data from an EU retailer), the parties can execute controller – processor SCCs.

In circumstances where a U.S. organization is not subject to the GDPR, but receives personal data from the EU as a processor and transfers that data to a sub-contractor or sub-processor in the U.S. (i.e., an onward transfer), the parties can execute processor – processor SCCs. For example, this may apply where a U.S. company provides fulfillment services to the data exporter and subcontracts shipping services to a third-party.

  1. Do the new SCCs give rights to individuals whose personal data is being transferred?

Yes. Individuals whose personal data is being transferred from the EU (i.e., data subjects) are third party beneficiaries of the SCCs and can invoke and enforce the SCCs against both the data exporter and importer.

  1. Does executing the new SCCs subject a U.S. company to EU jurisdiction?

With the exception of processor-controller transfers, the SCCs will be governed by an EU member state law that recognizes third party beneficiary rights and disputes arising from the clauses will be resolved in the courts of that member state. In addition, the importer must submit to the jurisdiction of the applicable Supervisory Authority and EU member state courts; commit to abide by any binding decision under the member state law; agree to respond to inquiries and submit to audits; and comply with remedial and compensatory measures adopted by the Supervisory Authority. In the case of a processor-controller transfer, the parties shall select the law of the country that will govern; however, that law must allow for third party beneficiary rights.

  1. What is the operative date of the new SCCs?

The 2001, 2004 and 2010 SCCs are repealed, effective September 27, 2021. New transfers made after September 27, 2021 must use the new SSCs.

  1. Should an organization replace the SSCs its currently using for ongoing transfers of personal data from the EU?

Yes, but there is a grace period. Organizations currently using the original SCCs for ongoing transfers must replace them with the new clauses by December 27, 2022. During the grace period, the parties must ensure the ongoing transfer is subject to appropriate safeguards.

  1. Should organizations replace SSCs that were used for a completed, one-time transfer of personal data from the EU?

Maybe. If the transfer of data from the EU to the U.S. has been completed, but the data importer continues to process the personal data, the parties must replace the original SCCs with the new clauses by December 27, 2022.

  1. Do the new SCCs impact GDPR data processing agreements?

Yes. The new SCCs may be used in lieu of a GDPR data processing agreement between a controller and processor or processor and processor during a transfer, thus eliminating the need for both a data processing agreement and SCCs. The new SCCs include the Article 28 provisions typically included in a GDPR data processing agreement.

  1. Do the new SSCs apply to transfers of personal data from the U.K. to the U.S.?

No. The original SCCs will continue to apply to U.K. – U.S. transfers of personal data until the U.K. recognizes the EU Commission’s new SCCs or adopts its own version.

  1. What steps should U.S. organizations take to prepare for the new SCCs?

Preparing for the new SCCs will require a commitment of time and resources. U.S. organizations that plan to transfer, receive, or access personal data from or in the EU after September 27, 2021 should consider the following steps well in advance of the SCC’s operative date:

  • Identifying ongoing transfers that will need to be updated and reviewing completed transfers to determine whether processing on the data is ongoing.
  • Implementing a process to conduct documented risk assessments prior to a transfer that includes
  • Reviewing transfer facts.
  • Identifying applicable national and local laws and practices.
  • Assessing the potential for public authority access to, or requests to access, transferred data.
  • Determining whether the organization previously received public authority access, or requests to access.
  • Identifying additional available reasonable safeguards for the transfer.
  • Developing internal policies for handling data transferred from the EU to ensure compliance with purpose limitations, storage and retention requirements, data minimization, data destruction and confidentiality obligations.
  • Training employees to identify cross border transfers of EU data that may be subject to the GDPR and SCCs including client, consumer, and HR data.
  • Reviewing the organization’s technical and organizational safeguards to ensure adequate protection of EU data during transmission and storage.
  • Determining whether data transferred or received from the EU will be transferred onward to a third party or vendor and reviewing vendor and third-party contracts to ensure the recipient will be contractually obligated to implement reasonable safeguards.
  • Reviewing and updating the organization’s data breach response plan to address the data transferred or received from the EU.
  • Reviewing and updating the organization’s business continuity plan to ensure the availability of data transferred or received from the EU.
  • Reviewing existing transfers to ensure adequate safeguards are in place.

September 27, 2021 is not far away. Most U.S. organizations will need to move quickly to identify new cross border data transfers commencing after that date and be prepared to implement the new procedures and documents for the SCCs. This is, of course, if they are not relying on an alternate transfer mechanism or an exception exists. Organizations will also need to review any ongoing transfers made in reliance on the old SCCs and take steps to comply. As with new transfers, this will require a documented risk assessment and a comprehensive understanding of the organization’s process for accessing and transferring personal data protected under GDPR.