As Facial Recognition Technology Surges, Organizations Face Privacy, Cybersecurity, and Fraud Concerns

Facial recognition technology has become increasingly popular in recent years in the employment and consumer spaces (e.g., employee access, passport check-in systems, payments on smartphones), and in particular during the COVID-19 pandemic. As the need arose to screen persons entering a facility for symptoms of the virus, including elevated temperature, thermal cameras, kiosks, and other devices embedded with facial recognition capabilities were put into use. However, many have objected to the use of this technology in its current form, citing problems with its accuracy, and now, more alarmingly, there is growing concern that “Faces are the Next Target for Fraudsters,” as a recent article in the Wall Street Journal (“WSJ”) put it.

In the last year, there has been an uptick in hackers trying to “trick” facial recognition technology in a myriad of settings, such as fraudulently claiming unemployment benefits from state workforce agencies. The majority of states now use facial recognition technology to verify the identity of citizens eligible for benefits, ironically enough, in order to prevent other types of fraud. As discussed in the WSJ article, ID.me, Inc., a firm that provides facial recognition software to 26 states to help verify individuals eligible for unemployment benefits, saw more than 80,000 attempts to fool government identification facial recognition systems between June 2020 and January 2021. Hackers of facial recognition systems use a myriad of techniques, including deepfakes (AI-generated images), special masks, and even holding up images or videos of the individual the hacker is looking to impersonate.

Fraud is not the only concern with facial recognition technology. Despite its appeal for employers and organizations, there are concerns regarding the accuracy of the technology, as well as significant legal implications to consider. First, there are growing concerns regarding the accuracy and biases of the technology. A recent report by the National Institute of Standards and Technology studied 189 facial recognition algorithms, considered a “majority of the industry.” The report found that most of the algorithms exhibit bias, falsely identifying Asian and Black faces at rates 10 to more than 100 times higher than White faces. Moreover, false positives are significantly more common in women than in men, and more elevated for the elderly and children than for middle-aged adults.

In addition, several U.S. localities have already banned the use of facial recognition by law enforcement, other government agencies, and/or private and commercial users. The City of Baltimore, for example, recently banned the use of facial recognition technologies by city residents, businesses, and most of the city government (excluding the city police department) until December 2022. Council Bill 21-0001 prohibits persons from “obtaining, retaining, accessing, or using certain face surveillance technology or any information obtained from certain face surveillance technology.” Likewise, in September 2020, the City of Portland, Oregon, became the first city in the United States to ban the use of facial recognition technologies in the private sector, citing, among other things, a lack of standards for the technology and wide ranges in accuracy and error rates that differ by race and gender. Failure to comply can be painful. The ordinance provides persons injured by a material violation a cause of action for damages or $1,000 per day for each day of violation, whichever is greater.

And finally, companies looking to implement facial recognition technologies must consider their obligations under laws such as the Illinois Biometric Information Privacy Act (BIPA) and the California Consumer Privacy Act (CCPA). The BIPA addresses a business’s collection of biometric data from both customers and employees, including, for example, face geometry scans, fingerprints, and voiceprints. The BIPA requires informed consent prior to collection of biometric data, mandates protection obligations and retention guidelines, and creates a private right of action for individuals aggrieved by BIPA violations, which has resulted in a flood of BIPA class action litigation in recent years. Texas, Washington, and California have similar requirements; New York is considering a BIPA-like privacy bill; and New York City recently created BIPA-like requirements for retail and hospitality businesses concerning biometric collection from customers. Additionally, states are increasingly amending their breach notification laws to add biometric information to the categories of personal information that require notification, including 2020 amendments in California, D.C., and Vermont. Moreover, there are a myriad of data destruction, reasonable safeguard, and vendor requirements to consider, depending on the state, when collecting biometric data.

Takeaway

Facial recognition and other biometric data related technology is booming and continues to reach facets of life that were hard to contemplate only a few years ago. The technology brings innumerable potential benefits as well as significant data privacy and cybersecurity risks. Organizations that collect, use, and store biometric data increasingly face compliance obligations as the law attempts to keep pace with technology, cybersecurity crimes, and public awareness of data privacy and security. Creating a robust privacy and data protection program, or regularly reviewing an existing one, is a critical risk management and legal compliance step.

Information Blocking and HIPAA’s Right to Access – Is Your Practice Compliant?

Patient record requests can be a significant administrative burden for health care providers. An OCR enforcement initiative and a new federal law give providers more reason to get this process right.  We summarize these rules here.

Since the Health Insurance Portability and Accountability Act of 1996 (HIPAA) Privacy Rule became effective in 2003, it has generally required covered entities to provide patients timely access to their medical records. However, continued concerns over the level of patient access to records are driving increased emphasis, heightened enforcement activity, and new laws, including the 21st Century Cures Act, to ensure individuals have easy access to their health information. A critical goal of these efforts is to empower patients to be more in control of decisions regarding their health and well-being. According to OCR, individuals who have ready access to their health records are better positioned:

to monitor chronic conditions, adhere to treatment plans, find and fix errors in their health records, track progress in wellness or disease management programs, and directly contribute their information to research.

The “Right of Access” under HIPAA established a floor for patients to access their health records, which could be exceeded by more stringent state laws. In 2019, the OCR commenced its Right of Access Initiative, an enforcement priority to support individuals’ right to timely access to their health records at a reasonable cost. At least one study found providers are struggling to fully comply. Nonetheless, the OCR has announced nearly 20 enforcement actions under its Right of Access Initiative – a full list of enforcement actions is available on the OCR website. Monetary settlements to date have ranged from $3,500 to $200,000. In addition, the OCR resolution agreements require the covered entities to develop corrective action plans to prevent further violations.

The Cures Act significantly heightens the obligations under HIPAA’s right of access. Its Interoperability, Information Blocking, and the ONC Health IT Certification Program rule seeks to minimize interference with the ability of authorized persons or entities to access, exchange, or use electronic health information (EHI) – that is, it seeks to eliminate impermissible “information blocking.” More specifically, the Cures Act defines information blocking as business, technical, and organizational practices that prevent or materially discourage the access, exchange, or use of EHI when an actor knows, or (for some actors like electronic health record vendors) should know, that these practices are likely to interfere with access, exchange, or use of EHI. The law empowers the HHS Office of Inspector General (OIG) to investigate claims of information blocking and provides referral processes to facilitate coordination with the OCR. The goal of these provisions is to support seamless, secure access, exchange, and use of EHI.

In the nearly 20 years since the HIPAA Privacy Rule became effective, technological changes have come to support even greater access rights, including access in real time and on demand. Providers, even certain providers not subject to HIPAA, will need to ensure they have compliant policies and procedures for giving patients access to their records, and for avoiding enforcement actions, headaches, and penalties.

Connecticut Enacts Safe Harbor from Punitive Damages in Data Breach Cases

Effective October 1, 2021, Connecticut becomes the third state with a data breach litigation “safe harbor” law (Public Act No. 21-119), joining Utah and Ohio. In short, the Connecticut law prohibits courts in the state from assessing punitive damages in data breach litigation against a covered defendant that created, maintained, and complied with a cybersecurity program meeting certain requirements. Cyberattacks are on the rise – think Colonial Pipeline, Kaseya, JBS, and others – with ransomware attacks in North America up 158 percent from 2019 to 2020.

The hope is this law will provide covered entities of all sizes an incentive to implement stronger controls over their information systems. According to Homeland Security Secretary Alejandro Mayorkas:

As a matter of fact, small businesses comprise approximately one-half to three-quarters of the victims of ransomware 

So, what can “covered entities” in Connecticut do to at least try to protect themselves from punitive damages if sued following a data breach?

First, it is important to note that the law applies to “covered entities” – defined to include a business that “accesses, maintains, communicates or processes personal information or restricted information in or through one or more systems, networks or services located in or outside this state.”

The definition of “personal information” tracks the definition of the same term in Connecticut’s recently updated data breach notification law. But, the law adds the term “restricted information” to the mix, defined to include:

any information about an individual, other than personal information or publicly available information, that, alone or in combination with other information, including personal information, can be used to distinguish or trace the individual’s identity or that is reasonably linked or linkable to an individual, if the information is not encrypted, redacted or altered by any method or technology in such a manner that the information is unreadable, and the breach of which is likely to result in a material risk of identity theft or other fraud to a person or property.

PA 21-119 prohibits superior courts from assessing punitive damages against a covered entity defendant in any tort action brought under Connecticut law or in Connecticut courts alleging a failure to implement reasonable cybersecurity controls that resulted in a data breach involving personal information or restricted information, provided that:

[the covered entity] created, maintained and complied with a written cybersecurity program that contains administrative, technical and physical safeguards for the protection of personal or restricted information and that conforms to an industry recognized cybersecurity framework.

Examples of the frameworks listed in the statute include: NIST SP 800-171, NIST SP 800-53, and the Center for Internet Security’s “Critical Security Controls for Effective Cyber Defense.” Covered entities regulated under federal or state laws, such as the Security Rule under the Health Insurance Portability and Accountability Act of 1996 (HIPAA), can rely on compliance with the current version of those regulatory frameworks. Should these frameworks change, covered entities have six months to conform to the changes.

Additionally, the cybersecurity program must be designed to:

  • protect the security and confidentiality of personal and restricted information;
  • protect against any threats or hazards to the security or integrity of such information; and
  • protect against unauthorized access to and acquisition of such information that would result in a material risk of identity theft or other fraud to the individual to whom the information relates.

Importantly, covered entities should consider how the framework they use covers the personal and restricted information they maintain. For example, a HIPAA covered entity or business associate relying solely on the HIPAA security rule could mean that its cybersecurity program reaches only “protected health information” as defined by HIPAA, but not personal and restricted information as defined in PA 21-119.

The Connecticut law, however, permits the program to be shaped by several factors including (i) the size and complexity of the covered entity; (ii) the nature and scope of the activities of the covered entity; (iii) the sensitivity of the information to be protected; and (iv) the cost and availability of tools to improve information security and reduce vulnerabilities.

This law, similar to the measures in Utah and Ohio, incentivizes heightened protection of personal data while providing a safe harbor from certain claims for organizations facing data breach litigation. Creating, maintaining, and complying with a robust data protection program is a critical risk management and legal compliance step, and one that might provide protection from litigation following a data breach.

Musings of Retirement Plan Fiduciaries on Cybersecurity: Episode Two

Individuals who serve as fiduciaries to their company’s retirement plan often feel they may not be sufficiently informed or qualified to make prudent decisions for the plan. They might ask themselves: “How do I know which are prudent investments?” or “What amount of plan fees is ‘reasonable’?” Now, the DOL is requiring plan fiduciaries to prudently assess cybersecurity, possibly taking many plan fiduciaries further outside their comfort zones.

We started to see this developing in Episode 1 of our Musings series, when a new member of the Retirement Plan Committee expressed concerns about being qualified to help make decisions about the new DOL cybersecurity guidance. Knowing the Retirement Plan Committee maintains a robust training program, the Committee Chair reassured the New Committee Member that some upcoming training might help…

Retirement Plan Committee Chair: So, what did you think of the training?

New Committee Member: It was long! And, I have to admit, when I saw the agenda showing that our ERISA attorney was going to be presenting for 90 minutes, I immediately went for a second cup of coffee! But I was wrong. The presenter was quite good at putting complex and unfamiliar concepts into easy-to-understand, bite-sized pieces. She certainly calmed some of the concerns I expressed to you last week, while helping me to see how the problem of cybersecurity has become interwoven with our fiduciary duties.

Committee Member A: I agree 100%. Until today I did not fully understand the scope of our duty as fiduciaries. I thought protecting assets in the plan meant simply making good investments and controlling fees.

Committee Member B: Yes, but did you hear what the attorney said? It is not a matter of “if” but “when” we have a breach. So, why spend all this time if we are just going to have a breach anyway?

Retirement Plan Committee Chair: Maybe, but the message was not that we have to be perfect, but prudent. We have to do our due diligence when making decisions, but we can’t guarantee a result.

Committee Member B: The attorney explained we have to make sure that nobody steals money from participant accounts. This is like playing cops and robbers, but now the robbers can be thousands of miles away, stealing with a few keystrokes. How are we to cope with this?

New Committee Member: That is not exactly what I heard. I heard that we need to be proactive, not reactive. We need to think more critically about the risk to the plan’s data and its assets. We have to consider the kinds of safeguards that are in place at the company and with any vendor that provides services to the plan. We need to learn more about what those safeguards should be, and maybe even bring in some expertise to help us figure that out. We can’t just wing it! And our own IT team may not have this expertise or be on top of the latest types of attacks.

But, she cautioned, even that may not be enough, because no set of safeguards is perfect. It’s like building a moat around the plan’s assets, but also realizing the attackers are sophisticated and can find their way around the drawbridge and the moat.  So, we need to be prepared to respond to the inevitable data breach.

I feel better knowing that meeting our fiduciary duty does not require us to be perfect, but we also have some work to do, including to document our process.

Committee Member A: Exactly. You are right. Before the meeting I was totally confused and had visions of cyberattacks from Mars. Counsel explained the situation and provided concrete examples. It was helpful knowing we could develop a road map to follow. I feel better that the situation can be addressed if we take the time and effort to understand it. She laid it out step by step, identifying some common shortfalls and strategies for mitigation.

Retirement Plan Committee Chair: There certainly is a learning curve here, but it sounds like we are on our way. Tonight was a first step toward prudently addressing this new issue, and we will build on it. There is a lot to unpack here. For example, based on the presentation, it is not just about passwords, firewalls, and encryption; we also have to consider identity verification.

We have all approved distributions and withdrawals requested by participants. Is our process good enough to tell a real request from a fraudulent one? How much time does each of us actually take to review requests, question the frequency of requests, or consider where they are coming from?

New Committee Member: The attorney said she was going to be at our next meeting, is that right?

Retirement Plan Committee Chair: Yes, that’s right. She may bring an IT firm in to help us further and to begin shaping a plan to address this issue.

Committee Member B: That is good because I spoke with one of my friends who serves on his retirement plan committee, and the DOL has already started auditing plans on these issues. I volunteered to serve on this committee but am concerned about liability. I want to do more to protect myself and the plan.

The Committee appears to be moving in the right direction. They realize now they cannot be experts in all aspects of plan administration, and that some basic training can go a long way to help them make better, more prudent decisions. But they also realize that they need a plan to tackle the process of assessing cybersecurity risks for plan assets and plan data.

DOL Has Started to Audit Compliance with Its Cybersecurity Guidelines

On April 14, 2021, we posted about the U.S. Department of Labor’s (DOL) Employee Benefits Security Administration (EBSA) issuing cybersecurity guidance for employee retirement plans. Shortly thereafter, the DOL updated its audit inquiries to include probing questions for plan fiduciaries about their compliance with the “hot off the press” agency guidelines.

So, what do those inquiries look like?

In short, the DOL is asking plan sponsors to produce:

all documents relating to any cybersecurity or information security programs that apply to the data of the Plan, whether those programs are applied by the sponsor of the Plan or by any service provider of the Plan

For plan fiduciaries that are new to cybersecurity and have not received a DOL audit in the last few months, it may not be clear what documents or materials the DOL is expecting. The DOL fleshes out its general inquiry with a laundry list of items. Here are some examples of those more specific requests:

  • All policies, procedures, or guidelines relating to such things as:
    • The implementation of access controls and identity management, including any use of multi-factor authentication
    • The processes for business continuity, disaster recovery, and incident response.
    • Management of vendors and third party service providers, including notification protocols for cybersecurity events and the use of data for any purpose other than the direct performance of their duties.
    • Cybersecurity awareness training.
    • Encryption to protect all sensitive information transmitted, stored, or in transit.

The list above is not complete, but it makes clear the DOL is looking for information about what plan fiduciaries are doing to safeguard their own information and systems to address privacy and security, not just that of their service providers. Some plan fiduciaries might be wondering what policies, procedures, or guidelines to protect plan data should look like. There are many frameworks to consider when adopting reasonable safeguards. Examples include guidance published by the National Institute of Standards and Technology, the New York SHIELD Act, the Massachusetts data security regulations, and the privacy and security standards under HIPAA, among others.
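
For illustration only, here is a minimal sketch of what one documented safeguard, encrypting sensitive plan data at rest, might look like in practice. It is written in Python using the open-source cryptography library; the file names and the choice of library are our own assumptions, not anything the DOL or the frameworks above prescribe.

# Minimal sketch: symmetric encryption of plan data at rest using the
# Python "cryptography" library. Library choice and file names are
# illustrative assumptions, not a DOL-prescribed control.
from cryptography.fernet import Fernet

# In a real program the key would live in a managed secret store,
# never alongside the data it protects.
key = Fernet.generate_key()
fernet = Fernet(key)

with open("participant_data.csv", "rb") as f:
    plaintext = f.read()

# Write only the encrypted form to storage.
with open("participant_data.csv.enc", "wb") as f:
    f.write(fernet.encrypt(plaintext))

# Authorized use reverses the process.
with open("participant_data.csv.enc", "rb") as f:
    assert fernet.decrypt(f.read()) == plaintext

For audit readiness, the point is less the particular tool than the documentation around it: which data is encrypted, with what algorithm, and how keys are managed.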

In addition to policies, procedures, and guidelines summarized above, the DOL also seeks in its audit request copies of other materials, some of which are listed below.

  • “All documents and communications relating to any past cybersecurity incidents.”

So, evidently, the DOL would like to discover whether the plan had a prior cybersecurity incident. It is unclear whether this request refers only to “breaches of security” or similar terms as defined under state breach notification laws, which require notification, or also to mere “incidents” that do not rise to the level of a reportable breach.

  • “All documents and communications describing security reviews and independent security assessments of the assets or data of the Plan stored in a cloud or managed by service providers.”

Here the DOL makes a distinction between plan “assets” and plan “data,” seeking security reviews and assessments relating to both. Recent litigation called into question whether plan data could be considered a “plan asset.” In one of the most recent cases, Harmon v. Shell Oil Co., 2021 WL 1232694 (S.D. Tex. Mar. 30, 2021), the U.S. District Court for the Southern District of Texas rejected the argument that plan assets include plan data.

  • “All documents describing security technical controls, including firewalls, antivirus software, and data backup.”

An important note here is that it may not be enough to say, “we are doing this,” or “we have implemented antivirus and firewalls to protect our information systems.” The DOL is looking for documents that describe those safeguards and controls.

  • “All documents and communications from service providers relating to their cybersecurity capabilities and procedures.”
  • “All documents and communications from service providers regarding policies and procedures for collecting, storing, archiving, deleting, anonymizing, warehousing, and sharing data.”
  • “All documents and communications describing the permitted uses of data by the sponsor of the Plan or by any service providers of the Plan, including, but not limited to, all uses of data for the direct or indirect purpose of cross-selling or marketing products and services.”

The DOL would like to see how plan fiduciaries are communicating with their service providers to assess service provider cybersecurity risk, as well as the documents and other materials from service providers concerning the processing of plan data. Importantly, the DOL is not just looking for cybersecurity related information. The agency apparently wants to know how service providers are permitted to use plan data. Plan fiduciaries will want to think carefully about their current practices, including their communications, when selecting and working with service providers.

No plan fiduciary wants to experience a DOL audit of their retirement plans, or any other audit for that matter. But cybersecurity clearly is a new and important area of interest for the DOL and plan fiduciaries need to be prepared to respond. Feel free to contact us if you would like to discuss audit readiness concerning cybersecurity for your plans.

Colorado Becomes Third State To Enact a Comprehensive Privacy Law

Colorado is officially the third U.S. state to enact comprehensive privacy legislation, following California and Virginia. The Colorado General Assembly passed the Colorado Privacy Act (CPA), Senate Bill 21-190, on June 8, 2021, and Governor Jared Polis signed it into law on July 7, 2021.

The Colorado Privacy Act takes effect July 1, 2023, six months after the Virginia Consumer Data Protection Act (VCDPA) and California Privacy Rights Act (CPRA).

Applicability

The CPA imposes new obligations on Controllers—that is, any entity that (i) determines the purposes and means of processing personal data, (ii) conducts business in Colorado or produces or delivers commercial products or services intentionally targeted to residents of the state, and (iii) either: (a) controls or processes the personal data of more than 100,000 Colorado residents per year or (b) derives revenue, or receives a discount on goods or services, from the sale of personal data and controls or processes the personal data of at least 25,000 Colorado residents.

It also provides new rights to Consumers—that is, any individual who is a Colorado resident acting in an individual or household context.

The CPA does not apply to data that is subject to other federal privacy laws such as the Health Insurance Portability and Accountability Act (HIPAA), the Children’s Online Privacy Protection Act (COPPA), the Gramm-Leach-Bliley Act (GLBA), the Family Educational Rights and Privacy Act (FERPA), and the Securities Exchange Act of 1934. The CPA also exempts employment data, higher education institutions, nonprofits, state and local governments, and public utility customer records (so long as they are not sold).

Consumer Rights under the Colorado Privacy Act

The rights the CPA affords to Consumers are similar to those in the VCDPA and CCPA/CPRA.

In broad strokes, the CPA regulates the use of and disclosures surrounding “personal data,” which includes information that is linked, or reasonably linkable, to an identifiable person, and “sensitive data,” which includes data revealing racial or ethnic origin, religious beliefs, a mental or physical health condition, sexual orientation, citizenship, genetic or biometric data, or personal data from a known child.

The CPA empowers Consumers with new controls over their data, including the right to:

  1. opt out of the processing of certain personal data;
  2. access personal data (up to twice per calendar year);
  3. correct inaccurate data;
  4. delete personal data; and
  5. obtain a portable copy of their personal data.

Controller Duties under the Colorado Privacy Act

Similarly, the CPA creates duties for Controllers, including the:

  • Duty of transparency;
  • Duty of purpose specification;
  • Duty of data minimization;
  • Duty to avoid secondary use;
  • Duty to avoid unlawful discrimination; and
  • Duty regarding sensitive data.

In addition, while Consumers may request access to their personal data, Controllers may not require that a Consumer create a new account in order to exercise this right (or retaliate with increased cost or decreased availability of a product or service). When responding to Consumer data requests, Controllers must:

  • Take action on the Consumer’s request without undue delay and within 45 days of receiving the request—with few exceptions.
  • Develop an internal process for Consumers to appeal refusals of data requests.
  • Notify the Consumer that it may contact the Colorado Attorney General if the Consumer has concerns about the result of the request or the outcome of an appeal.

Controllers must also conduct data protection assessments for each processing activity involving a heightened risk of harm to Consumers, including:

  • The sale of personal data;
  • Processing of sensitive data; or
  • Processing personal data for targeted advertising if it could lead to unfair or deceptive treatment of, or a disparate impact on, Consumers; financial or physical injury; physical or other intrusion upon seclusion; or other substantial injury.

Controllers must present these data protection assessments to the Colorado Attorney General upon request.

Enforcement

One key difference between the CPA and the California and Virginia privacy laws is that the CPA is enforceable by both district attorneys and the office of the Attorney General. This broadened enforcement mechanism could lead to greater scrutiny of affected businesses.

Unlike the CCPA, the CPA does not include a private right of action. The attorney general or district attorney may, however, institute a civil action or pursue injunctive relief. Failure to comply with the CPA may be considered a deceptive trade practice. Financial penalties are left to the discretion of the courts.

Key Takeaways

Colorado may be only the third state to enact comprehensive privacy legislation, but other states will likely soon follow. Differences among the CPA, VCDPA, and CPRA are subtle, and while that overlap may ease the burden of compliance, there are plenty of technical details to sift through, and companies still need to ensure their data collection activities fully comply with the provisions of each privacy act.

And with more states likely to follow suit, data privacy compliance will only get more complicated.

Please contact a Jackson Lewis attorney with any questions.

* Jackson Biesecker is a law clerk in our Privacy, Data & Cybersecurity Practice Group who contributed substantially to this article.

The “New” EU Standard Contractual Clauses: FAQs for U.S. Organizations

Globalization, compliance, and the growth in outsourcing have created a myriad of cross-border data transfer scenarios. These scenarios include marketing to and servicing customers, assessing global compliance with diversity and inclusion goals, and outsourcing back office business functions. However, the emergence of far-reaching data privacy regulation, such as the EU General Data Protection Regulation (“GDPR”), has erected roadblocks to the free flow of personal data, particularly from the European Economic Area (“EEA”) to countries without an EU adequacy decision, including the United States. Standard Contractual Clauses (“SCCs”) are one way to navigate the roadblocks, but the SCCs are not as simple as circulating a form agreement.

The recent Schrems II decision further complicated the flow of information when it invalidated the EU-U.S. Privacy Shield and made clear that the original SCCs did not adequately address the EU Commission’s concerns about the protection of personal data. Since then, SCCs have played an increased role as an appropriate safeguard for transferring personal data. For U.S. companies sending or receiving personal data from the EEA, the new clauses will help accommodate an expanded set of transfer arrangements, including processor to processor and processor to controller. Among other changes, the new SCCs address the data importer’s duties in situations where applicable laws affect its ability to comply with the SCCs, an issue raised in the Schrems II decision.

In short, the new SCCs are contractual terms adopted by the EU Commission to facilitate the transfer of personal data post-Schrems II. The SCCs are designed to ensure a non-GDPR importer has implemented appropriate safeguards to protect the data, and that data subjects have enforceable rights and effective legal remedies. The FAQs below summarize the new SCCs.

  1. What are the “new” SCCs?

On June 4, 2021, the EU Commission adopted “new” modernized SCCs to replace the 2001, 2004 and 2010 SCCs currently in use.

  2. How are the new SCCs different?

The EU Commission updated the SCCs to address more complex processing activities, the requirements of the GDPR, and the Schrems II decision. These clauses are modular so they can be tailored to the type of transfer.

  3. What types of data transfers are subject to the new SCCs?

The original SCCs apply to controller-controller and controller-processor transfers of personal data from the EU to countries without a Commission adequacy decision. The updated clauses are expanded to also include processor-processor and processor-controller transfers.

  4. Can multiple parties execute the SCCs?

Yes. While the existing SCCs were designed for two parties, the new clauses can be executed by multiple parties. The clauses also include a “docking clause” so that new parties can be added to the SCCs throughout the life of the contract.

  5. What obligations does a data importer have?

The obligations of the data importer are numerous and include, without limitation:

  • documenting the processing activities it performs on the transferred data,
  • notifying the data exporter if it is unable to comply with the SCCs,
  • returning or securely destroying the transferred data at the end of the contract,
  • applying additional safeguards to “sensitive data,”
  • adhering to purpose limitation, accuracy, minimization, retention, and destruction requirements,
  • notifying the exporter and data subject if it receives a legally binding request from a public authority to access the transferred data, if permitted, and
  • challenging a public authority access request if it reasonably believes the request is unlawful.
  6. Do the new SCCs require a risk assessment?

Yes. The SCCs require the data exporter to warrant there is no reason to believe local laws will prevent the importer from complying with its obligations under the SCCs. In order to make this representation, both parties must conduct and document a risk assessment of the proposed transfer.

  7. What does the risk assessment require?

The parties should review the facts and circumstances of the transfer (e.g., the nature of the data, duration of transfer, purpose for processing, storage location of the data, intended onward transfers), the relevant laws and practices of the importer’s jurisdiction, the existence or absence of public authority requests for access to the data in the importer’s jurisdiction, and any reasonable safeguards designed to supplement the protections of the SCCs. This documented assessment must be completed before fully executing the SCCs and it must be made available to the Supervisory Authority on request.

  8. Are the new SCCs negotiable?

No. The new SCCs cannot be negotiated, amended, or edited. However, additional terms can be included as long as they do not contradict or conflict with the underlying SCCs or the data subject’s privacy rights. Of course, those additional terms may be negotiated. It will also be important to consider what effect the new SCCs have on existing service agreement terms and conditions.

  9. What are the SCC Annexes?

The SCCs include an Appendix with three Annexes for the parties to complete: Description of Transfer, Security Measures, and Sub-processors. These Annexes require detailed information about the transfer, particularly with respect to technical and organizational measures the importer will use to safeguard the data.

  10. Do the new SCCs apply to U.S. organizations that are not subject to the GDPR?

Yes, if a data exporter transfers data from the EU to a U.S. organization, the U.S. organization must execute the new SCCs unless the parties rely on an alternate transfer mechanism or an exception exists. This applies regardless of whether the U.S. company receives or accesses the data as a data controller or processor.

  11. When would a U.S. organization use the new SCCs to transfer or receive personal data from the EU?

A U.S. organization that is subject to the GDPR based on an “establishment” in the EU may transfer data from the EU to a data importer in the U.S. (or other country without an EU adequacy decision) in reliance on the SCCs unless the importer is also subject to the GDPR, the parties rely on an alternate transfer mechanism, or an exception applies. For example, assume the U.S. organization’s EU office transfers customer personal data to a third-party billing vendor located in the U.S. or transfers employee data to a compensation consultant in the U.S. In this case, if the vendor is not subject to the GDPR, the U.S. organization can enter into SCCs with that vendor to meet its obligations under the GDPR with regard to that transfer.

Perhaps a U.S. organization is not established in the EU but is subject to the GDPR because it offers goods or services to data subjects located in the EU or monitors their behavior in the EU. This organization may need to transfer the personal data of its EU customers to a third-party shipping vendor located in the U.S. It may transfer such data in reliance on the SCCs, unless the importer (the shipping vendor) is subject to the GDPR, the parties rely on an alternate transfer mechanism, or an exception applies.

Even in cases where a U.S. organization is not subject to the GDPR, but receives personal data in the U.S. from the EU or accesses personal data stored in the EU from the U.S., it must execute SCCs with the data exporter unless the parties rely on an alternate transfer mechanism or an exception exists. This applies regardless of whether the U.S. company is receiving or accessing the data as a data controller or data processor. For example, where a U.S. organization receives personal data as a controller for its own processing purposes, the parties can execute controller – controller SCCs. Alternatively, if the U.S. organization receives personal data as a processor for the data exporter’s processing purposes (e.g., a U.S. marketing company receives customer personal data from an EU retailer), the parties can execute controller – processor SCCs.

In circumstances where a U.S. organization is not subject to the GDPR, but receives personal data from the EU as a processor and transfers that data to a sub-contractor or sub-processor in the U.S. (i.e., an onward transfer), the parties can execute processor – processor SCCs. For example, this may apply where a U.S. company provides fulfillment services to the data exporter and subcontracts shipping services to a third-party.

  12. Do the new SCCs give rights to individuals whose personal data is being transferred?

Yes. Individuals whose personal data is being transferred from the EU (i.e., data subjects) are third party beneficiaries of the SCCs and can invoke and enforce the SCCs against both the data exporter and importer.

  13. Does executing the new SCCs subject a U.S. company to EU jurisdiction?

With the exception of processor-controller transfers, the SCCs will be governed by an EU member state law that recognizes third party beneficiary rights and disputes arising from the clauses will be resolved in the courts of that member state. In addition, the importer must submit to the jurisdiction of the applicable Supervisory Authority and EU member state courts; commit to abide by any binding decision under the member state law; agree to respond to inquiries and submit to audits; and comply with remedial and compensatory measures adopted by the Supervisory Authority. In the case of a processor-controller transfer, the parties shall select the law of the country that will govern; however, that law must allow for third party beneficiary rights.

  14. What is the operative date of the new SCCs?

The 2001, 2004 and 2010 SCCs are repealed, effective September 27, 2021. New transfers made after September 27, 2021 must use the new SCCs.

  15. Should an organization replace the SCCs it is currently using for ongoing transfers of personal data from the EU?

Yes, but there is a grace period. Organizations currently using the original SCCs for ongoing transfers must replace them with the new clauses by December 27, 2022. During the grace period, the parties must ensure the ongoing transfer is subject to appropriate safeguards.

  16. Should organizations replace SCCs that were used for a completed, one-time transfer of personal data from the EU?

Maybe. If the transfer of data from the EU to the U.S. has been completed, but the data importer continues to process the personal data, the parties must replace the original SCCs with the new clauses by December 27, 2022.

  17. Do the new SCCs impact GDPR data processing agreements?

Yes. The new SCCs may be used in lieu of a GDPR data processing agreement between a controller and processor or processor and processor during a transfer, thus eliminating the need for both a data processing agreement and SCCs. The new SCCs include the Article 28 provisions typically included in a GDPR data processing agreement.

  18. Do the new SCCs apply to transfers of personal data from the U.K. to the U.S.?

No. The original SCCs will continue to apply to U.K. – U.S. transfers of personal data until the U.K. recognizes the EU Commission’s new SCCs or adopts its own version.

  19. What steps should U.S. organizations take to prepare for the new SCCs?

Preparing for the new SCCs will require a commitment of time and resources. U.S. organizations that plan to transfer, receive, or access personal data from or in the EU after September 27, 2021 should consider the following steps well in advance of the SCCs’ operative date:

  • Identifying ongoing transfers that will need to be updated and reviewing completed transfers to determine whether processing on the data is ongoing.
  • Implementing a process to conduct documented risk assessments prior to a transfer that includes:
    • Reviewing transfer facts.
    • Identifying applicable national and local laws and practices.
    • Assessing the potential for public authority access to, or requests to access, transferred data.
    • Determining whether the organization previously received public authority requests for access to transferred data.
    • Identifying additional available reasonable safeguards for the transfer.
  • Developing internal policies for handling data transferred from the EU to ensure compliance with purpose limitations, storage and retention requirements, data minimization, data destruction and confidentiality obligations.
  • Training employees to identify cross border transfers of EU data that may be subject to the GDPR and SCCs including client, consumer, and HR data.
  • Reviewing the organization’s technical and organizational safeguards to ensure adequate protection of EU data during transmission and storage.
  • Determining whether data transferred or received from the EU will be transferred onward to a third party or vendor and reviewing vendor and third-party contracts to ensure the recipient will be contractually obligated to implement reasonable safeguards.
  • Reviewing and updating the organization’s data breach response plan to address the data transferred or received from the EU.
  • Reviewing and updating the organization’s business continuity plan to ensure the availability of data transferred or received from the EU.
  • Reviewing existing transfers to ensure adequate safeguards are in place.

September 27, 2021 is not far away. Most U.S. organizations will need to move quickly to identify new cross border data transfers commencing after that date and be prepared to implement the new procedures and documents for the SCCs, unless they are relying on an alternate transfer mechanism or an exception applies. Organizations will also need to review any ongoing transfers made in reliance on the old SCCs and take steps to comply. As with new transfers, this will require a documented risk assessment and a comprehensive understanding of the organization’s process for accessing and transferring personal data protected under the GDPR.

NIST Preliminary Draft Cybersecurity Framework Profile for Ransomware Risk Management Provides Risk Management Strategies

The National Institute of Standards and Technology (NIST) recently released a preliminary draft of its Cybersecurity Framework Profile for Ransomware Risk Management. The public comment period for this draft runs through July 9, 2021. NIST says, “The profile can be used as a guide to managing the risk of ransomware events. That includes helping to gauge an organization’s level of readiness to counter ransomware threats and to deal with the potential consequences of events.” NIST is taking an iterative approach to this framework, and there will be at least one additional public comment period on it.

Protecting Against Ransomware Attacks

The NIST framework recommends the following steps to protect against the ransomware threat:

  • Use antivirus software at all times. Set your software to automatically scan emails and flash drives.
  • Keep computers fully patched. Run scheduled checks to keep everything up-to-date.
  • Block access to ransomware sites. Use security products or services that block access to known ransomware sites.
  • Allow only authorized apps. Configure operating systems or use third-party software to allow only authorized applications on computers (a minimal sketch of this allowlisting check appears after this list).
  • Restrict personally owned devices on work networks.
  • Use standard user accounts versus accounts with administrative privileges whenever possible.
  • Avoid using personal apps—like email, chat, and social media—from work computers.
  • Beware of unknown sources. Don’t open files or click on links from unknown sources unless you first run an antivirus scan or look at links carefully.
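
As a concrete, hedged illustration of the “allow only authorized apps” item above, the Python sketch below shows the core idea behind application allowlisting: compute a cryptographic digest of an executable and run it only if the digest appears on a pre-approved list. Real deployments use operating system features or dedicated security products; the hash value and paths here are hypothetical placeholders.

# Minimal allowlisting sketch: permit an executable only if its SHA-256
# digest is on a pre-approved list. Hash and paths are placeholders.
import hashlib
from pathlib import Path

# Hypothetical allowlist; this placeholder is the digest of an empty file.
APPROVED_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def is_authorized(executable: Path) -> bool:
    # An executable is authorized only if its digest is pre-approved.
    digest = hashlib.sha256(executable.read_bytes()).hexdigest()
    return digest in APPROVED_HASHES

candidate = Path("downloads/installer.exe")
if candidate.exists() and not is_authorized(candidate):
    print(f"Blocked: {candidate} is not on the allowlist")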

Recovering From Ransomware Attacks

In addition, NIST recommends the following steps organizations can take now to help recover from a future ransomware event:

  • Make an incident recovery plan. Develop and implement an incident recovery plan with defined roles and strategies for decision making. This can be part of a continuity of operations plan.
  • Backup and restore. Carefully plan, implement, and test a data backup and restoration strategy—and secure and isolate backups of important data (see the verification sketch after this list).
  • Keep your contacts. Maintain an up-to-date list of internal and external contacts for ransomware attacks, including law enforcement.
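
To make the “backup and restore” recommendation concrete, the sketch below records a SHA-256 manifest when a backup is taken and re-checks it before the backup is relied on for restoration. This is a hedged illustration only: the paths are hypothetical, NIST does not prescribe this particular approach, and a real plan should also test actual restores.

# Minimal sketch: write a SHA-256 manifest for a backup directory, then
# verify it later to detect missing or tampered files. Paths are examples.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def write_manifest(backup_dir: Path, manifest: Path) -> None:
    # Map each file in the backup to its digest at backup time.
    digests = {str(p): sha256_of(p)
               for p in backup_dir.rglob("*") if p.is_file()}
    manifest.write_text(json.dumps(digests, indent=2))

def verify_manifest(manifest: Path) -> bool:
    # True only if every recorded file still exists with the same digest.
    digests = json.loads(manifest.read_text())
    return all(Path(name).is_file() and sha256_of(Path(name)) == digest
               for name, digest in digests.items())

backup_dir = Path("/mnt/offsite/backup-2021-07")              # hypothetical
manifest = Path("/mnt/offsite/backup-2021-07.manifest.json")  # hypothetical
write_manifest(backup_dir, manifest)
print("backup intact:", verify_manifest(manifest))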

Determining Your Organization’s State of Readiness to Prevent And Mitigate Ransomware Attacks

Organizations can use the NIST framework to profile their state of readiness for ransomware attacks, identifying and prioritizing opportunities for improving their ransomware resistance. NIST identifies the following functions as a further means to address ransomware risks:

  • Identify – Develop an organizational understanding to manage cybersecurity risk to systems, people, assets, data, and capabilities. The activities in the Identify Function are foundational for effective use of the Framework. Understanding the business context, the resources that support critical functions, and the related cybersecurity risks enables an organization to focus and prioritize its efforts, consistent with its risk management strategy and business needs.
  • Protect – Develop and implement appropriate safeguards to ensure delivery of critical services. The Protect Function supports the ability to limit or contain the impact of a potential cybersecurity event.
  • Detect – Develop and implement appropriate activities to identify the occurrence of a cybersecurity event. The Detect Function enables timely discovery of cybersecurity events.
  • Respond – Develop and implement appropriate activities to take action regarding a detected cybersecurity incident. The Respond Function supports the ability to contain the impact of a potential cybersecurity incident.
  • Recover – Develop and implement appropriate activities to maintain plans for resilience and to restore any capabilities or services that were impaired due to a cybersecurity incident. The Recover Function supports timely recovery to normal operations to reduce the impact from a cybersecurity incident.

Ransomware continues to present a significant threat to organizations.  The NIST framework presents an opportunity to assess and improve prevention and mitigation measures. Organizations may not be able to prevent all attacks, but it is important to remain vigilant and be aware of emerging trends.


City of Baltimore May Criminalize the Use of Facial Recognition Technologies by Businesses

The Baltimore City Council recently passed an ordinance, in a vote of 13-2, barring the use of facial recognition technology by city residents, businesses, and most of the city government (excluding the city police department) until December 2022. Council Bill 21-0001 prohibits persons from “obtaining, retaining, accessing, or using certain face surveillance technology or any information obtained from certain face surveillance technology.”

Facial recognition technology has become more popular in recent years, including during the COVID-19 pandemic. As the need arose to screen persons entering a facility for symptoms of the virus, including elevated temperature, thermal cameras, kiosks, and other devices embedded with facial recognition capabilities were put into use, often inadvertently. However, many have objected to the use of this technology in its current form, citing problems with the accuracy of the technology, as summarized in a June 9, 2020 New York Times article, “A Case for Banning Facial Recognition.”

While many localities across the nation, such as San Francisco and Oakland, have barred the use of facial recognition systems by city police and other government agencies, Baltimore is only the second city (following Portland, Oregon) to ban biometric technology use by private residents and businesses. Effective January 1, 2021, the City of Portland banned the use of facial recognition by private entities in any “places of public accommodation” within the boundaries of the city. “Places of public accommodation” was broadly defined to include any “place or service offering to the public accommodations, advantages, facilities, or privileges whether in the nature of goods, services, lodgings, amusements, transportation or otherwise.”

Specifically, the Baltimore ordinance prohibits an individual or entity from obtaining, retaining, or using a facial surveillance system, or any information obtained from a facial surveillance system, within the boundaries of Baltimore City. “Facial surveillance system” is defined as any computer software or application that performs face surveillance. Notably, the Baltimore ordinance explicitly excludes from the definition of “facial surveillance system” a biometric security system designed specifically to protect against unauthorized access to a particular location or an electronic device, meaning employers using a biometric security system for employee/visitor access to their facilities would appear to still be permissible under the bill. The ordinance also excludes from its definition of “facial surveillance system” the Maryland Image Repository System (MIRS) used by the Baltimore City Police in criminal investigations.

A person in violation of the law is subject to a fine of not more than $1,000, imprisonment of not more than 12 months, or both. Each day that a violation continues is considered a separate offense. This criminalization of the use of facial recognition is the first of its kind in the United States.

The Baltimore bill also includes a separate section applicable only to the Mayor and City Council of Baltimore City, requiring an annual surveillance report by the Director of Baltimore City Information and Technology (or any successor entity), in consultation with the Department of Finance, to be submitted to the Mayor of Baltimore detailing: (1) each purchase of surveillance technology during the prior fiscal year, disaggregated by the purchasing agency, and (2) an explanation of the use of the surveillance technology. In addition, the report must be posted to the Baltimore City Information and Technology website. Examples of surveillance technology that must be included in the report include automatic license plate readers, x-ray vans, mobile DNA capture technology, and software designed to forecast criminal activity or criminality.

It is important to note that the bill’s provisions are set to automatically expire December 31, 2022, unless the City Council, after appropriate study, including public hearings and testimonial evidence, concludes that such prohibitions and requirements are in the public interest, in which case the law will be extended for an additional five years.

The Baltimore ordinance has been met with significant opposition by industry experts, particularly as it would be the first in the U.S. to criminalize private use of biometric technologies. In a joint letter, the Security Industry Association (SIA), the Consumer Technology Association (CTA), the Information Technology and Innovation Foundation (ITIF), and the XR Association urged the City Council to reject the enactment of the Baltimore ordinance on grounds that it is overly broad and prohibits commercial applications of facial recognition technology that already have widespread public acceptance and provide “beneficial and noncontroversial” services, including, for example, increased and customized accessibility for disabled persons, verification of patient identities at healthcare facilities while reducing the need for close-proximity interpersonal interactions, enhanced consumer security at banks for verifying purchases and ATM access, and many more. A similar concern was voiced by Councilmember Isaac Schleifer, who cast one of the two votes opposing the ordinance.

The ordinance now awaits signature by Baltimore Mayor Brandon Scott and, if signed, will become effective 30 days after enactment. In anticipation of the ordinance’s potential enactment, businesses in the City of Baltimore should begin evaluating whether they are using facial recognition technologies, whether they fall into one of the exceptions in the ordinance, and, if not, what alternatives they have for verification, security, and other purposes for which the technology was implemented.
