Globalization, compliance, and the growth in outsourcing have created a myriad of cross-border data transfer scenarios. These scenarios include marketing to and servicing customers, assessing global compliance with diversity and inclusion goals, and outsourcing back office business functions. However, the emergence of far-reaching data privacy regulation, such as the EU General Data Protection Regulation (“GDPR”), has erected roadblocks to the free flow of personal data, particularly from the European Economic Area (“EEA”) to countries without an EU adequacy decision, including the United States. Standard Contractual Clauses (“SCCs”) are one way to navigate the roadblocks, but the SCCs are not as simple as circulating a form agreement.

The recent Schrems II decision further complicated the flow of information by invalidating the EU-U.S. Privacy Shield, and the original SCCs were unable to adequately address the EU Commission’s concerns about the protection of personal data. Since then, SCCs have played an increased role as an appropriate safeguard for transferring personal data. For U.S. companies sending or receiving personal data from the EEA, these new clauses will help accommodate an expanded set of transfer arrangements, including processor-to-processor and processor-to-controller transfers. Among other changes, the new SCCs address the data importer’s duties in situations where applicable laws affect its ability to comply with the SCCs, an issue raised in the Schrems II decision.

In short, the new SCCs are contractual terms adopted by the EU Commission, in part to facilitate the transfer of personal data post-Schrems II. The SCCs are designed to ensure a non-GDPR importer has implemented appropriate safeguards to protect the data, and that data subjects have enforceable rights and effective legal remedies. The FAQs below summarize the new SCCs.

  1. What are the “new” SCCs?

On June 4, 2021, the EU Commission adopted “new” modernized SCCs to replace the 2001, 2004 and 2010 SCCs currently in use.

  2. How are the new SCCs different?

The EU Commission updated the SCCs to address more complex processing activities, the requirements of the GDPR, and the Schrems II decision. These clauses are modular so they can be tailored to the type of transfer.

  3. What types of data transfers are subject to the new SCCs?

The original SCCs apply to controller-controller and controller-processor transfers of personal data from the EU to countries without a Commission adequacy decision. The updated clauses are expanded to also include processor-processor and processor-controller transfers.

  4. Can multiple parties execute the SCCs?

Yes. While the existing SCCs were designed for two parties, the new clauses can be executed by multiple parties. The clauses also include a “docking clause” so that new parties can be added to the SCCs throughout the life of the contract.

  5. What obligations does a data importer have?

The obligations of the data importer are numerous and include, without limitation:

  • documenting the processing activities it performs on the transferred data,
  • notifying the data exporter if it is unable to comply with the SCCs,
  • returning or securely destroying the transferred data at the end of the contract,
  • applying additional safeguards to “sensitive data,”
  • adhering to purpose limitation, accuracy, minimization, retention, and destruction requirements,
  • notifying the exporter and data subject if it receives a legally binding request from a public authority to access the transferred data, if permitted, and
  • challenging a public authority access request if it reasonably believes the request is unlawful.

  6. Do the new SCCs require a risk assessment?

Yes. The SCCs require the data exporter to warrant there is no reason to believe local laws will prevent the importer from complying with its obligations under the SCCs. In order to make this representation, both parties must conduct and document a risk assessment of the proposed transfer.

  7. What does the risk assessment require?

The parties should review the facts and circumstances of the transfer (e.g., the nature of the data, duration of transfer, purpose for processing, storage location of the data, intended onward transfers), the relevant laws and practices of the importer’s jurisdiction, the existence or absence of public authority requests for access to the data in the importer’s jurisdiction, and any reasonable safeguards designed to supplement the protections of the SCCs. This documented assessment must be completed before fully executing the SCCs and it must be made available to the Supervisory Authority on request.

  8. Are the new SCCs negotiable?

No. The new SCCs cannot be negotiated, amended, or edited. However, additional terms can be included as long as they do not contradict or conflict with the underlying SCCs or the data subject’s privacy rights. Of course, those additional terms may be negotiated. It will also be important to consider what effect the new SCCs have on existing service agreement terms and conditions.

  9. What are the SCCs Annexes?

The SCCs include an Appendix with three Annexes for the parties to complete: Description of Transfer, Security Measures, and Sub-processors. These Annexes require detailed information about the transfer, particularly with respect to technical and organizational measures the importer will use to safeguard the data.

  10. Do the new SCCs apply to U.S. organizations that are not subject to the GDPR?

Yes, if a data exporter transfers data from the EU to a U.S. organization, the U.S. organization must execute the new SCCs unless the parties rely on an alternate transfer mechanism or an exception exists. This applies regardless of whether the U.S. company receives or accesses the data as a data controller or processor.

  11. When would a U.S. organization use the new SCCs to transfer or receive personal data from the EU?

A U.S. organization that is subject to the GDPR based on an “establishment” in the EU may transfer data from the EU to a data importer in the U.S. (or other country without an EU adequacy decision) in reliance on the SCCs unless the importer is also subject to the GDPR, the parties rely on an alternate transfer mechanism, or an exception applies. For example, assume the U.S. organization’s EU office transfers customer personal data to a third-party billing vendor located in the U.S. or transfers employee data to a compensation consultant in the U.S. In this case, if the vendor is not subject to the GDPR, the U.S. organization can enter into SCCs with that vendor to meet its obligations under the GDPR with regard to that transfer.

Perhaps a U.S. organization is not established in the EU but is subject to the GDPR because it offers goods or services to data subjects located in the EU or monitors their behavior in the EU. This organization may need to transfer the personal data of its EU customers to a third-party shipping vendor located in the U.S. It may transfer such data in reliance on the SCCs, unless the importer (the shipping vendor) is subject to the GDPR, the parties rely on an alternate transfer mechanism, or an exception applies.

Even in cases where a U.S. organization is not subject to the GDPR, but receives personal data in the U.S. from the EU or accesses personal data stored in the EU from the U.S., it must execute SCCs with the data exporter unless the parties rely on an alternate transfer mechanism or an exception exists. This applies regardless of whether the U.S. company is receiving or accessing the data as a data controller or data processor. For example, where a U.S. organization receives personal data as a controller for its own processing purposes, the parties can execute controller-controller SCCs. Alternatively, if the U.S. organization receives personal data as a processor for the data exporter’s processing purposes (e.g., a U.S. marketing company receives customer personal data from an EU retailer), the parties can execute controller-processor SCCs.

In circumstances where a U.S. organization is not subject to the GDPR, but receives personal data from the EU as a processor and transfers that data to a sub-contractor or sub-processor in the U.S. (i.e., an onward transfer), the parties can execute processor-processor SCCs. For example, this may apply where a U.S. company provides fulfillment services to the data exporter and subcontracts shipping services to a third party.

  12. Do the new SCCs give rights to individuals whose personal data is being transferred?

Yes. Individuals whose personal data is being transferred from the EU (i.e., data subjects) are third party beneficiaries of the SCCs and can invoke and enforce the SCCs against both the data exporter and importer.

  13. Does executing the new SCCs subject a U.S. company to EU jurisdiction?

With the exception of processor-controller transfers, the SCCs will be governed by an EU member state law that recognizes third party beneficiary rights, and disputes arising from the clauses will be resolved in the courts of that member state. In addition, the importer must submit to the jurisdiction of the applicable Supervisory Authority and EU member state courts; commit to abide by any binding decision under the member state law; agree to respond to inquiries and submit to audits; and comply with remedial and compensatory measures adopted by the Supervisory Authority. In the case of a processor-controller transfer, the parties shall select the law of the country that will govern; however, that law must allow for third party beneficiary rights.

  14. What is the operative date of the new SCCs?

The 2001, 2004 and 2010 SCCs are repealed, effective September 27, 2021. New transfers made after September 27, 2021 must use the new SCCs.

  15. Should an organization replace the SCCs it is currently using for ongoing transfers of personal data from the EU?

Yes, but there is a grace period. Organizations currently using the original SCCs for ongoing transfers must replace them with the new clauses by December 27, 2022. During the grace period, the parties must ensure the ongoing transfer is subject to appropriate safeguards.

  16. Should organizations replace SCCs that were used for a completed, one-time transfer of personal data from the EU?

Maybe. If the transfer of data from the EU to the U.S. has been completed, but the data importer continues to process the personal data, the parties must replace the original SCCs with the new clauses by December 27, 2022.

  17. Do the new SCCs impact GDPR data processing agreements?

Yes. The new SCCs may be used in lieu of a GDPR data processing agreement between a controller and processor or processor and processor during a transfer, thus eliminating the need for both a data processing agreement and SCCs. The new SCCs include the Article 28 provisions typically included in a GDPR data processing agreement.

  18. Do the new SCCs apply to transfers of personal data from the U.K. to the U.S.?

No. The original SCCs will continue to apply to U.K. – U.S. transfers of personal data until the U.K. recognizes the EU Commission’s new SCCs or adopts its own version.

  19. What steps should U.S. organizations take to prepare for the new SCCs?

Preparing for the new SCCs will require a commitment of time and resources. U.S. organizations that plan to transfer, receive, or access personal data from or in the EU after September 27, 2021 should consider the following steps well in advance of the SCCs’ operative date:

  • Identifying ongoing transfers that will need to be updated and reviewing completed transfers to determine whether processing of the data is ongoing.
  • Implementing a process to conduct documented risk assessments prior to a transfer that includes:
    • Reviewing transfer facts.
    • Identifying applicable national and local laws and practices.
    • Assessing the potential for public authority access to, or requests to access, transferred data.
    • Determining whether the organization has previously received public authority requests for access to the data.
    • Identifying additional available reasonable safeguards for the transfer.
  • Developing internal policies for handling data transferred from the EU to ensure compliance with purpose limitations, storage and retention requirements, data minimization, data destruction and confidentiality obligations.
  • Training employees to identify cross-border transfers of EU data that may be subject to the GDPR and SCCs, including client, consumer, and HR data.
  • Reviewing the organization’s technical and organizational safeguards to ensure adequate protection of EU data during transmission and storage.
  • Determining whether data transferred or received from the EU will be transferred onward to a third party or vendor and reviewing vendor and third-party contracts to ensure the recipient will be contractually obligated to implement reasonable safeguards.
  • Reviewing and updating the organization’s data breach response plan to address the data transferred or received from the EU.
  • Reviewing and updating the organization’s business continuity plan to ensure the availability of data transferred or received from the EU.
  • Reviewing existing transfers to ensure adequate safeguards are in place.

September 27, 2021 is not far away. Most U.S. organizations will need to move quickly to identify new cross-border data transfers commencing after that date and be prepared to implement the new procedures and documents for the SCCs, unless they are relying on an alternate transfer mechanism or an exception applies. Organizations will also need to review any ongoing transfers made in reliance on the old SCCs and take steps to comply. As with new transfers, this will require a documented risk assessment and a comprehensive understanding of the organization’s process for accessing and transferring personal data protected under the GDPR.

The National Institute of Standards and Technology (NIST) recently released a preliminary draft of its Cybersecurity Framework Profile for Ransomware Risk Management. The public comment period for this draft runs through July 9, 2021. NIST says “The profile can be used as a guide to managing the risk of ransomware events. That includes helping to gauge an organization’s level of readiness to counter ransomware threats and to deal with the potential consequences of events.” NIST is taking an iterative approach to this framework and there will be at least one additional public comment period on it.

Protecting Against Ransomware Attacks

The NIST framework recommends the following steps to protect against the ransomware threat:

  • Use antivirus software at all times. Set your software to automatically scan emails and flash drives.
  • Keep computers fully patched. Run scheduled checks to keep everything up-to-date.
  • Block access to ransomware sites. Use security products or services that block access to known ransomware sites.
  • Allow only authorized apps. Configure operating systems or use third-party software to allow only authorized applications on computers.
  • Restrict personally owned devices on work networks.
  • Use standard user accounts versus accounts with administrative privileges whenever possible.
  • Avoid using personal apps—like email, chat, and social media—from work computers.
  • Beware of unknown sources. Don’t open files or click on links from unknown sources unless you first run an antivirus scan or look at links carefully.

Recovering From Ransomware Attacks

In addition, NIST recommends the following steps organizations can take now to help recover from a future ransomware event:

  • Make an incident recovery plan. Develop and implement an incident recovery plan with defined roles and strategies for decision making. This can be part of a continuity of operations plan.
  • Backup and restore. Carefully plan, implement, and test a data backup and restoration strategy—and secure and isolate backups of important data.
  • Keep your contacts. Maintain an up-to-date list of internal and external contacts for ransomware attacks, including law enforcement.

Determining Your Organization’s State of Readiness to Prevent And Mitigate Ransomware Attacks

Organizations can use the NIST framework to profile their state of readiness for ransomware attacks, identifying and prioritizing opportunities for improving their ransomware resistance. NIST identifies the following functions as a further means to address ransomware risks:

  • Identify – Develop an organizational understanding to manage cybersecurity risk to systems, people, assets, data, and capabilities. The activities in the Identify Function are foundational for effective use of the Framework. Understanding the business context, the resources that support critical functions, and the related cybersecurity risks enables an organization to focus and prioritize its efforts, consistent with its risk management strategy and business needs.
  • Protect – Develop and implement appropriate safeguards to ensure delivery of critical services. The Protect Function supports the ability to limit or contain the impact of a potential cybersecurity event.
  • Detect – Develop and implement appropriate activities to identify the occurrence of a cybersecurity event. The Detect Function enables timely discovery of cybersecurity events.
  • Respond – Develop and implement appropriate activities to take action regarding a detected cybersecurity incident. The Respond Function supports the ability to contain the impact of a potential cybersecurity incident.
  • Recover – Develop and implement appropriate activities to maintain plans for resilience and to restore any capabilities or services that were impaired due to a cybersecurity incident. The Recover Function supports timely recovery to normal operations to reduce the impact from a cybersecurity incident.

Ransomware continues to present a significant threat to organizations.  The NIST framework presents an opportunity to assess and improve prevention and mitigation measures. Organizations may not be able to prevent all attacks, but it is important to remain vigilant and be aware of emerging trends.

The Baltimore City Council recently passed an ordinance, in a vote of 13-2, barring the use of facial recognition technology by city residents, businesses, and most of the city government (excluding the city police department) until December 2022.  Council Bill 21-0001  prohibits persons from “obtaining, retaining, accessing, or using certain face surveillance technology or any information obtained from certain face surveillance technology.”

Facial recognition technology has become more popular in recent years, including during the COVID-19 pandemic. As the need arose to screen persons entering a facility for symptoms of the virus, such as elevated temperature, thermal cameras, kiosks, and other devices embedded with facial recognition capabilities were put into use, often inadvertently. However, many have objected to the use of this technology in its current form, citing problems with the accuracy of the technology, as summarized in a June 9, 2020 New York Times article, “A Case for Banning Facial Recognition.”

While many localities across the nation, such as San Francisco and Oakland, have barred the use of facial recognition systems by city police and other government agencies, Baltimore is only the second city (following Portland, Oregon) to ban biometric technology use by private residents and businesses. Effective January 1, 2021, the City of Portland banned the use of facial recognition by private entities in any “places of public accommodation” within the boundaries of the city. “Places of public accommodation” was broadly defined to include any “place or service offering to the public accommodations, advantages, facilities, or privileges whether in the nature of goods, services, lodgings, amusements, transportation or otherwise.”

Specifically, the Baltimore ordinance prohibits an individual or entity from obtaining, retaining, or using a facial surveillance system or any information obtained from a facial surveillance system within the boundaries of Baltimore city. “Facial surveillance system” is defined as any computer software or application that performs face surveillance. Notably, the Baltimore ordinance explicitly excludes from the definition of “facial surveillance system” a biometric security system designed specifically to protect against unauthorized access to a particular location or an electronic device, meaning employers using a biometric security system for employee/visitor access to their facilities would appear to still be permissible under the bill. The ordinance also excludes from its definition of “facial surveillance system” the Maryland Image Repository System (MIRS) used by the Baltimore City Police in criminal investigations.

A person in violation of the law is subject to a fine of not more than $1,000, imprisonment of not more than 12 months, or both fine and imprisonment. Each day that a violation continues is considered a separate offense. The criminalization of the use of facial recognition is the first of its kind in the United States.

The Baltimore bill also includes a separate section applicable only to the Mayor and City Council of Baltimore City, requiring the Director of Baltimore City Information and Technology (or any successor entity), in consultation with the Department of Finance, to submit an annual surveillance report to the Mayor of Baltimore detailing: 1) each purchase of surveillance technology during the prior fiscal year, disaggregated by the purchasing agency, and 2) an explanation of the use of the surveillance technology. In addition, the report must be posted to the Baltimore City Information and Technology website. Examples of surveillance technology that must be included in the report are automatic license plate readers, x-ray vans, mobile DNA capture technology, and software designed to forecast criminal activity or criminality.

It is important to note that the bill’s provisions are set to automatically expire December 31, 2022 unless the City Council, after appropriate study, including public hearings and testimonial evidence, concludes that such prohibitions and requirements are in the public interest, in which case the law will be extended for an additional 5 years.

The Baltimore ordinance has been met with significant opposition by industry experts, particularly as the ordinance would be the first in the U.S. to criminalize private use of biometric technologies. In a joint letter, the Security Industry Association (SIA), the Consumer Technology Association (CTA), the Information Technology and Innovation Foundation (ITIF), and the XR Association urged rejection of the Baltimore ordinance on grounds that it is overly broad and prohibits commercial applications of facial recognition technology that already have widespread public acceptance and provide “beneficial and noncontroversial” services, such as increased and customized accessibility for disabled persons, verification of patient identities at healthcare facilities while reducing the need for close-proximity interpersonal interactions, enhanced consumer security at banks to verify purchases and ATM access, and many more. A similar concern was voiced by Councilmember Isaac Schleifer, who cast one of the two votes opposing the ordinance.

The ordinance now awaits signature by Baltimore Mayor Brandon Scott and, if signed, will become effective 30 days after enactment. In anticipation of the ordinance’s potential enactment, businesses in the City of Baltimore should begin evaluating whether they are using facial recognition technologies, whether they fall into one of the exceptions in the ordinance, and, if not, what alternatives they have for the verification, security, and other purposes for which the technology was implemented.

By now, plan fiduciaries and their service providers likely have heard about the DOL’s cybersecurity guidance. The Department of Labor’s stepping into cybersecurity in this way – a posting of best practices on the agency’s website – has left plan fiduciaries with some questions. Here are a few:

  • “When is this effective?”
  • “Does this apply to me?”
  • “Could I be liable if a service provider has a data breach?”
  • “We are halfway through the term of our services agreement with our recordkeeper, do we need to do something now?”
  • “This is IT’s problem, right?”
  • “What exactly do we have to do to be ‘prudent’?”
  • “Do we have to communicate anything to plan participants?”
  • “If our service provider had a data breach, do we have to terminate the relationship?” “What factors should we consider in making that decision?”

So, what are plan fiduciaries actually thinking? Fortunately, we’ve been able to obtain snippets of conversations between plan fiduciaries that may provide some insight into that question. Here is our first installment, and, of course, we redacted the text to protect the privacy of the individuals.

Retirement Plan Committee Chair: So, what did you think of your first retirement plan committee meeting?

New Committee Member: Well, it sounds like it will be really interesting…though I’m a little bit nervous about the personal responsibility part, and I’m not much of a technology person. I keep hearing about these breaches in the news, ransomware, you know, and I was one of the people on the gas line due to the Colonial Pipeline incident.

Retirement Plan Committee Chair: I know what you mean. During the time we were out of the office for COVID, if it weren’t for my 13-year-old, I don’t think I would have been able to get onto any conference calls! But I think we have a good team and good procedures. There is a fiduciary training coming up and I believe they will cover this.

New Committee Member: Yea, that will be good. I am not sure I know all the service providers we have for the plans. We spoke a lot about the 401(k) plan’s recordkeeper tonight, are there others?

Retirement Plan Committee Chair: That is a good question. We definitely will need to identify all of our service providers, particularly those handling plan data. I know we have an auditor, and then there is our investment advisory firm…

New Committee Member (interrupting): …and what about the financial wellness vendor?

Retirement Plan Committee Chair: Yes, them too. Well, we should probably regroup after the training and come up with a plan. I have to run, see you next week.

New Committee Member:  OK, bye.

It looks like this organization takes its retirement plan administration seriously and has some thoughtful people on the team. Retirement committees generally are not required under ERISA but they can be a valuable tool for organizing the administrative responsibilities of an employee benefit plan.

Getting more educated on “cybersecurity” is a good initial step for a committee or plan fiduciaries generally. Done right, training will help fiduciaries better understand the threats and vulnerabilities to data generally (not just from criminal hackers) and gain more insight into the DOL’s best practices. Such training also can help plan fiduciaries (and personnel on virtually all levels of plan administration) appreciate more of the ways data may be accessed or transmitted in the course of operating a plan. Looking at plan operations from that perspective, where data lives and how it moves, can help plan fiduciaries identify the service providers they need to be thinking about.

Perhaps the most important nugget from the exchange above for addressing the DOL’s guidance is from the Retirement Plan Committee Chair – come up with a plan!

The Texas Legislature, which meets every other year, pushed a change to its data breach notification law at the end of the session in late May, and yesterday Governor Greg Abbott signed the bill into law.  It follows a growing trend of changes to privacy and cybersecurity laws at the state level.

Texas House Bill 3746 will amend Texas Business and Commerce Code § 521.053, which requires notifications to individuals and the Texas Attorney General following certain data breaches.  The amendment adds a requirement for the Texas Attorney General to post on its website a listing of data breach notifications received, when a breach involves 250 or more Texas residents. California has a similar requirement, although it is for breaches affecting 500 or more residents.

Specifically, the Texas amendment would require the Texas Attorney General to:

  • Post on the Attorney General’s public website a listing of notifications received, excluding any sensitive personal information, any information that may compromise a data system’s security, and any other information reported to the Attorney General that is made confidential by law;
  • Maintain an updated listing on the website, and update the list no later than every 30 days; and
  • Remove a notification from the list no later than one year following the date it was added, unless the entity has notified the Attorney General of additional breaches.

The amendment also now requires that entities reporting a breach to the Texas Attorney General provide the number of Texas residents receiving notification of the breach, in addition to the current requirements of:

  • A detailed description of the nature and circumstances of the breach or the use of sensitive personal information acquired as a result of the breach;
  • The number of residents affected by the breach;
  • The measures taken by the person regarding the breach and any measures the person intends to take regarding the breach after notification; and
  • Information regarding whether law enforcement is engaged in investigating the breach.

The Texas amendment may indicate a growing trend towards increased information sharing in an effort to prevent future data breaches. On the federal level, the U.S. Cybersecurity and Infrastructure Security Agency (CISA) has implemented several programs in the past year to promote information sharing and awareness.  “Information sharing is essential to the protection of critical infrastructure and to furthering cybersecurity for the nation. As the lead federal department for the protection of critical infrastructure and the furthering of cybersecurity, the CISA has developed and implemented numerous information sharing programs. Through these programs, CISA develops partnerships and shares substantive information with the private sector, which owns and operates the majority of the nation’s critical infrastructure. CISA also shares information with state, local, tribal, and territorial governments and with international partners, as cybersecurity threat actors are not constrained by geographic boundaries”, CISA states. More information on CISA information sharing and awareness programs is available here.

The updated Texas law will take effect September 1, 2021.  With no shortage of large-scale breaches and heightened public awareness across the nation, organizations regardless of jurisdiction are advised to evaluate and enhance their data breach prevention and response capabilities.

 

UPDATE: On June 16, Gov. Ned Lamont signed HB 5310 into law which becomes effective October 1, 2021.

State legislatures across the nation are prioritizing privacy and security matters, and Connecticut is no exception. This week, Connecticut Attorney General William Tong announced the passage of An Act Concerning Data Privacy Breaches, a measure that will enhance and strengthen Connecticut’s data breach notification law. The Connecticut House of Representatives unanimously approved the bill on May 27th, and the Senate followed with unanimous approval shortly after. The bill now heads to Governor Ned Lamont for signature.

Connecticut has led the nation in data privacy for over a decade, and this legislation ensures that we will continue to do so. Since we passed one of our nation’s first laws protecting consumers from online data breaches, technology and risks have evolved. This legislation ensures that our laws reflect those evolving risks and continue to offer strong, comprehensive protection for Connecticut residents,

Attorney General Tong observed in his announcement of the data breach notification bill.

Key aspects of Connecticut’s enhanced data breach notification law include:

  • Expansion of the definition of “personal information.”

Originally, Connecticut defined “personal information” as an individual’s first name or first initial and last name in combination with any one, or more, of the following data:

    • Social security number
    • Driver’s license number
    • State identification card number
    • Credit or debit card number
    • Financial account number in combination with any required security code, access code, or password that would permit access to such financial account.

The new law, if enacted, will look more like similar laws in California and Florida by including additional data categories:

    • Individual taxpayer identification number
    • Identity protection personal identification number issued by the IRS
    • Passport number, military identification number or other identification number issued by the government that is used to verify identity
    • Medical information regarding an individual’s medical history, mental or physical condition or medical treatment or diagnosis by a healthcare professional
    • Health insurance policy number or subscriber identification number, or any unique identifier used by a health insurer to identify the individual
    • Biometric information consisting of data generated by electronic measurements of an individual’s unique physical characteristics and used to authenticate or ascertain the individual’s identity, such as a fingerprint, voice print, retina or iris image; and
    • User name or electronic mail address, in combination with a password or security question and answer that would permit access to an online account.

  • Notification Time and Content.

The new law would shorten the time a business has to notify affected Connecticut residents and the Office of the Attorney General of a data breach from 90 days to 60 days. Remember, as with most other breach notification mandates, the timing requirement in this case is “without unreasonable delay but not later than 60 days.” In addition, if identification of a resident of the state whose personal information was breached or reasonably believed to have been breached will not be completed within 60 days, the business must provide preliminary substitute notice as outlined by the law, and proceed in good faith to identify affected residents and provide direct notice as expediently as possible. Incident response plans would need to be reviewed to ensure this requirement is incorporated.

  • Breach of Login Credential. 

The new law would add a section addressing unique notification requirements in the case of a breach of login credentials. In such a case, notice to an affected resident may be provided in electronic or other form that directs the resident to promptly change any password or security questions and answers, or to take other appropriate steps to protect the affected online account, or any account with the same login credentials.

  • HIPAA and HITECH Act Exception.

Any person subject to and in compliance with HIPAA and/or the HITECH Act privacy and security obligations is deemed in compliance with the new law, with a couple of critical exceptions. First, as under New York’s SHIELD Act, a person subject to HIPAA or HITECH that is required to notify Connecticut residents of a data breach under HITECH still must notify Connecticut’s Attorney General at the same time residents are notified. Second, if the person would have been required to provide identity theft prevention and/or mitigation services under Connecticut law, which is for a period of 24 months, that requirement remains.

  • Investigation Materials.

Under the new law, documents, material and information connected to the investigation of a breach of security would be exempt from public disclosure, unless required to be made available to third parties by the Attorney General in furtherance of the investigation.

This new law, if signed, keeps Connecticut in line with other states across the nation currently enhancing their data breach notification laws in light of recent large-scale data breaches and heightened public awareness. Organizations across the United States should be evaluating and enhancing their data breach prevention and response capabilities.

In a landmark decision, the U.S. Supreme Court has ruled that the Computer Fraud and Abuse Act (CFAA), 18 U.S.C. § 1030 et seq., does not prohibit improper use of computer information to which an individual has authorized access. Rather, the law prohibits obtaining information from areas of a computer, such as files, folders, or databases, that are outside the limits of the individual’s authorized access. Van Buren v. United States, No. 19-783 (June 3, 2021).

Before the Court took up the case, a sharp split existed among circuit courts, with serious ramifications for employers. The First, Fifth, Seventh, and Eleventh Circuits had adopted a broad construction of the CFAA, allowing claims to go forward when an individual misused information they were otherwise permitted to access. The Second, Fourth, and Ninth Circuits took a narrower approach, concluding that CFAA claims were limited to situations in which an individual accessed information off-limits to them, and mere misuse of information to which they had authorized access could not constitute a violation.  The Supreme Court resolved this split in favor of the narrower reading.

Employers should assess whether they have sufficient safeguards in place to protect against the conduct in Van Buren. While improper use of information through authorized access may no longer violate the CFAA, it can still wreak havoc on a business. Jackson Lewis’s Privacy, Data and Cybersecurity practice group, in conjunction with the Non-Competes and Protection Against Unfair Competition practice group, published an article on the Jackson Lewis website, explaining the Van Buren case in depth and its potential impact.

In late May, New York Attorney General Letitia James announced a $200,000 settlement agreement with Filters Fast, an online water filtration retailer, stemming from a 2019 data breach compromising the personal information of over 300,000 consumers across the U.S., including nearly 17,000 in New York state.  The settlement also requires the online retailer to strengthen its cybersecurity policies and procedures.

The settlement was the result of an enforcement action brought by the State AG under New York’s Stop Hacks and Improve Electronic Data Security Act (SHIELD Act). See our SHIELD Act FAQs here.  The SHIELD Act was enacted in 2019 with the goal of strengthening protection for New York residents against data breaches affecting their private information.   The Act imposes expansive data security obligations and updated the State’s existing data breach notification requirements.

The Filters Fast breach affected the names, billing addresses, and credit card expiration dates and security codes of customers who purchased products on the Company’s website for nearly a year, between July 2019 and July 2020. Filters Fast was first made aware of the breach in February 2020, but after conducting an internal investigation concluded that a breach had not occurred. After receiving several additional reports of compromised data, however, the Company’s internal investigator concluded in late July 2020 that a breach had in fact occurred, and the website was patched. On August 14, 2020 – over a year after the breach had initially occurred, and approximately six months after the Company first became aware of it – notification of the breach was sent to affected customers.

“New Yorkers should never have to worry that their personal information will be attacked during a routine online checkout process,” said Attorney General James in her announcement of the settlement. “Online information security has been especially critical during the COVID-19 pandemic, during which New Yorkers have increasingly relied on online retailers, such as Filters Fast, to purchase basic household goods. My office is committed to protecting consumers, which is why we will continue to use every available tool to hold companies accountable when they fail to safeguard personal information.”

In addition to the settlement payment, the AG’s agreement with Filters Fast requires several improvements to the company’s policies and procedures to help prevent future data security incidents, such as:

  • Creating a comprehensive information security program that includes regular updates to keep pace with changes in technology and security threats, as well as regular reporting to the company’s CEO concerning security risks;
  • Designing an incident response and data breach notification plan that encompasses preparation, detection and analysis, containment, eradication, and recovery;
  • Adopting personal information safeguards and controls — including encryption, segmentation, penetration testing, logging and monitoring, virus protection policy, custom application code change reviews, authentication policy and procedures, management of service providers, and patch management; and
  • Ensuring that third-party security assessments take place over the next five years.

The SHIELD Act is far-reaching: it affects any business that holds private information of a New York resident — regardless of whether the organization does business in New York, and including small businesses. Under the Act, individuals and businesses that collect computerized data, including private information about New York residents, must implement and maintain reasonable administrative, physical and technical safeguards. The Act provides several safeguards which may be implemented to ensure compliance.

Data privacy and security risks continue to emerge with enforcement not far behind. Regardless of their location, organizations should be assessing and reviewing their data breach prevention and response activities, building robust data protection programs, and investing in written information security programs (WISPs).


The EU Commission is expected to adopt the long-awaited updated Standard Contractual Clauses (“SCCs”) on June 4, 2021. In the wake of the Schrems II decision invalidating the EU-U.S. Privacy Shield, the SCCs have played an increased role as an appropriate safeguard for transferring personal data from the European Economic Area to recipients in the U.S. and other countries without an EU Adequacy Decision. Globalization and the growth in outsourcing have created unanticipated transfer scenarios the original SCCs were unable to adequately address. For U.S. companies sending or receiving personal data from the European Economic Area, these new clauses will help accommodate an expanded set of transfer arrangements including processor to processor and processor to controller. Among other changes, it is anticipated the SCCs will address the data importer’s duties in situations where applicable laws affect its ability to comply with the SCCs, an issue raised in the Schrems II decision. Companies currently transferring personal data in reliance on existing SCCs will have a grace period in which to replace them with the new SCCs.