Co-authors: Nadine C. Abrams and Richard Mrizek 

In a ruling that may have a significant impact on the constant influx of biometric privacy suits under the Biometric Information Privacy Act (BIPA) in Illinois, the Illinois Supreme Court will soon weigh in on whether claims under Sections 15(b) and (d) of the BIPA, 740 ILCS 14/1, et seq., “accrue each time a private entity scans a person’s biometric identifier and each time a private entity transmits such a scan to a third party, respectively, or only upon the first scan and first transmission.” Adopting a “per-scan” theory of accrual or liability under the BIPA would lead to absurd and unjust results, argued a friend-of-the-court brief filed by Jackson Lewis in Cothron v. White Castle Systems, Inc., in the Illinois Supreme Court, on behalf of a coalition of trade associations whose more than 30,000 members employ approximately half of all workers in the State of Illinois.
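
To see why the brief characterizes a per-scan theory as producing absurd results, consider a rough, purely hypothetical calculation. The class size, scan frequency, and damages tier below are illustrative assumptions, not figures from the Cothron record; the sketch simply contrasts exposure when a claim accrues once per person versus once per scan.

```python
# Hypothetical illustration only: BIPA exposure under a "per-person"
# accrual theory versus a "per-scan" accrual theory. All inputs are
# assumptions for illustration, not facts from the Cothron record.

EMPLOYEES = 500            # assumed class size
SCANS_PER_DAY = 4          # assumed finger scans per employee per workday
WORKDAYS_PER_YEAR = 250    # assumed workdays per year
YEARS = 5                  # assumed period of unconsented scanning
NEGLIGENT_TIER = 1_000     # BIPA statutory damages per negligent violation

# One violation per person (liability accrues only on the first scan)
per_person_exposure = EMPLOYEES * NEGLIGENT_TIER

# One violation per scan (liability accrues on every scan)
total_scans = EMPLOYEES * SCANS_PER_DAY * WORKDAYS_PER_YEAR * YEARS
per_scan_exposure = total_scans * NEGLIGENT_TIER

print(f"Per-person theory: ${per_person_exposure:,}")  # $500,000
print(f"Per-scan theory:   ${per_scan_exposure:,}")    # $2,500,000,000
```

Even with modest assumptions, the per-scan theory multiplies potential exposure by several orders of magnitude, which is the core of the “absurd and unjust results” argument.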

To date, more than 1,450 class action lawsuits have been filed under BIPA. Businesses that collect, use, and store biometric data should be tracking the Cothron decision closely.  The full update on Jackson Lewis’s brief in the Cothron case before the Illinois Supreme Court is available here.


Some members of the California legislature want their state to remain the leader for data privacy and cybersecurity regulation in the U.S. This includes protections for biometric information, similar to those under the Biometric Information Privacy Act in Illinois, 740 ILCS 14 et seq. (BIPA). On February 17, 2022, State Senator Bob Wieckowski introduced SB 1189, which would add protections for biometric information on top of other statutory provisions, such as the California Privacy Rights Act (CPRA), which takes effect January 1, 2023.

If enacted, SB 1189 would significantly expand privacy and security protection for biometric information in California and likely influence additional legislative activity in the U.S. Notably, unlike some of the limitations on application in the California Consumer Privacy Act (CCPA), the Bill would apply to any private entity (defined as an individual, partnership, corporation, limited liability company, association, or similar group, however organized, other than the University of California). It could also open the door to a wave of litigation, similar to what organizations subject to the BIPA currently face.

SB 1189 includes a fairly broad definition of biometric information, tracking the definition under the CCPA that went into effect January 1, 2020:

(1) “Biometric information” means a person’s physiological, biological, or behavioral characteristics, including information pertaining to an individual’s deoxyribonucleic acid (DNA), that can be used or is intended to be used, singly or in combination with each other or with other identifying data, to establish individual identity.

(2) Biometric information includes, but is not limited to, imagery of the iris, retina, fingerprint, face, hand, palm, vein patterns, and voice recordings, from which an identifier template, such as a faceprint, a minutiae template, or a voiceprint, can be extracted, and keystroke patterns or rhythms, gait patterns or rhythms, and sleep, health, or exercise data that contain identifying information.

Many are familiar with or have encountered devices that scan fingerprints or a person’s face, which may capture or create biometric information. This definition appears to go beyond those more “traditional” technologies. So, for example, if you’ve developed a unique style for tapping away at your keyboard while at work, you might be creating biometric information. The contours of this definition are quite vague, so private entities should carefully consider the capturing of certain data sets and the capabilities of new devices, systems, equipment, etc.

The Bill would prohibit private entities from collecting, capturing, purchasing, etc. a person’s biometric information unless the private entity:

  • requires the biometric information either to: (i) provide a service requested or authorized by the subject of the biometric information, or (ii) satisfy another valid business purpose (as defined in the CCPA) which is included in the written public policy described below, AND
  • first (i) informs the person or their legally authorized representative, in writing, both that biometric information is being collected, stored, or used and of the specific purpose and length of time for which the biometric information is being collected, stored, or used, and (ii) receives a written release executed by the subject of the biometric information or their legally authorized representative.

In this regard, SB 1189 looks a lot like the BIPA, with some additional requirements for the written release. For example, the written release may not be combined with an employment contract or another consent form.

Under SB 1189, private entities in possession of biometric information also would be required to develop and make available to the public a written policy that establishes a retention schedule and guidelines for destroying biometric information. In general, destruction of the information would be required no later than one year after the individual’s last intentional interaction with the private entity. This is similar to the period required in the Texas biometric law.

In addition to requiring reasonable safeguards to protect biometric information, the Bill would place limitations on the disclosure of biometric information. Unless disclosed to complete a financial transaction requested by the data subject or disclosed as required by law, a written release would be required to disclose biometric information. The release would need to indicate the data to be disclosed, the reason for the disclosure, and the intended recipients.

Perhaps the most troubling provision of the Bill for private entities is section 1798.306. Again, looking a lot like the BIPA, SB 1189 would establish a private right of action permitting individuals to allege a violation of the law and bring a civil action for any of the following:

  • The greater of (i) statutory damages between $100 and $1,000 per violation per day, and (ii) actual damages (see the illustrative sketch after this list).
  • Punitive damages.
  • Reasonable attorney’s fees and litigation costs.
  • Any other relief, including equitable or declaratory relief, that the court determines appropriate.
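
As a rough, hypothetical sketch of how the “greater of” statutory-damages remedy could be computed under the Bill: the violation count, duration, per-day rate, and actual-damages figure below are illustrative assumptions, not values drawn from SB 1189.

```python
# Hypothetical sketch of SB 1189's "greater of" damages remedy: statutory
# damages between $100 and $1,000 per violation per day, or actual damages,
# whichever is greater. All inputs below are illustrative assumptions.

STATUTORY_MIN, STATUTORY_MAX = 100, 1_000  # per violation, per day

def statutory_damages(violations: int, days: int, daily_rate: int) -> int:
    """Statutory damages at a rate clamped to the Bill's per-day range."""
    rate = max(STATUTORY_MIN, min(daily_rate, STATUTORY_MAX))
    return violations * days * rate

# Assumed facts: 2 distinct violations continuing for 90 days at $250 per
# violation per day, plus $10,000 in provable actual damages.
statutory = statutory_damages(violations=2, days=90, daily_rate=250)
actual = 10_000

basis = max(statutory, actual)  # the plaintiff recovers the greater of the two
print(f"Statutory: ${statutory:,}; Actual: ${actual:,}; Recovery basis: ${basis:,}")
```

Punitive damages, attorney’s fees, and the other relief listed above could be sought in addition to whichever measure is greater.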

Though still early in the legislative process for SB 1189, its introduction illustrates a continued desire by state and local lawmakers to enact protections for biometric information. See, e.g., recent developments in New York, Maryland, and Oregon described in our Biometric Law Map. Before implementing technologies or systems that might involve biometric information, private entities need to carefully consider the emerging legislative landscape.

On January 24, 2022, New York Attorney General Letitia James announced a $600,000 settlement agreement with EyeMed Vision Care, a vision benefits company, stemming from a 2020 data breach compromising the personal information of approximately 2.1 million individuals across the United States, including nearly 99,000 in New York State (the “Incident”).

This settlement was the result of an enforcement action brought by the NY Attorney General under New York’s Stop Hacks and Improve Electronic Data Security Act (“SHIELD Act”). Enacted in 2019, the SHIELD Act aims to strengthen protections for New York residents against data breaches affecting their private information. The SHIELD Act imposes expansive data security obligations and updates New York’s existing data breach notification requirements. Our SHIELD Act FAQs are available here.

Notably, EyeMed found itself in the AG’s crosshairs not because of what it did after discovering the Incident, but instead because of what it failed to do beforehand.  Specifically, the AG alleged that, pre-Incident, EyeMed had not maintained reasonable safeguards in the areas of authentication, password management, logging and monitoring, and data retention.  The AG also alleged that EyeMed’s privacy policy had misrepresented the extent to which it protected the privacy, security, confidentiality, and integrity of personal information.

Based on these findings, the AG successfully secured—in addition to the $600,000 payment—EyeMed’s agreement to maintain a written information security program.  This program must include, at minimum, policies and procedures related to password management, authentication and account management, encryption, penetration testing, logging and monitoring, and data retention.  EyeMed is required to review this program annually and to provide training to its workforce on compliance with the program’s requirements.

The EyeMed breach stemmed from a common form of cyberattack in which the bad actor gains access to certain of an organization’s email accounts—and to the sensitive data therein.  In EyeMed’s case, the bad actor accessed emails and attachments containing a wide range of PHI and PII, including:

  • Names;
  • Contact information, including addresses;
  • Dates of birth;
  • Account information, including identification numbers for health insurance accounts and vision insurance accounts;
  • Full or partial Social Security Numbers;
  • Medicaid and Medicare numbers;
  • Driver’s license or other government ID numbers;
  • Birth or marriage certificates;
  • Medical diagnoses and conditions; and
  • Medical treatment information.

EyeMed first became aware of the bad actor’s activities on July 1, 2020—one (1) week after the attacker initially gained access to EyeMed’s email account—and subsequently blocked the bad actor’s access to this account.  After conducting an internal investigation and engaging a forensic cybersecurity firm (through outside counsel), EyeMed determined that the bad actor may have exfiltrated documents and information from the account.  On September 28, 2020, EyeMed began notifying affected individuals and regulators about the breach and offering them identity theft protection services.

The SHIELD Act is far-reaching.  It affects any business (including a small business) that holds private information of a New York resident—regardless of whether the organization does business in New York. Under the Act, individuals and businesses that collect computerized data, including private information about New York residents, must implement and maintain reasonable administrative, physical, and technical safeguards.

The fine and non-monetary requirements of the EyeMed settlement are significant and highlight the need for organizations to carefully craft—and regularly revisit—their written information security programs.  As the AG made clear when announcing this settlement, enforcing compliance with the SHIELD Act’s mandate that organizations maintain reasonable data security safeguards will be a focal point for her office moving forward.

The Massachusetts Information Privacy and Security Act (MIPSA) continues to advance through the state legislative process and is now before the full legislature. While the Act has several hurdles to clear before becoming law, it’s notable for two reasons. First, the comprehensive nature of the MIPSA exemplifies the direction state data protection laws are heading in the absence of a comprehensive federal consumer data protection law. Second, given the borderless nature of e-commerce, the most robust state consumer data protection law will likely become the de facto national consumer data protection law, and the MIPSA may take that title. This post highlights significant portions of the current version of the Act.

Who is protected? 

The MIPSA protects the personal information of Massachusetts residents.

Who is subject to the MIPSA?

The Act applies to an entity that has annual global gross revenue in excess of 25 million dollars; determines the purposes and means of processing of the personal information of not less than 100,000 individuals; or is a data broker. In addition, the entity must conduct business in the state or, if not physically present in the state, process personal information in the context of offering goods or services targeted at state residents or monitor the in-state behavior of residents. Where an entity does not otherwise meet these criteria, it may voluntarily certify to the state Attorney General that it is in compliance with and agrees to be bound by the MIPSA.

Are any entities exempt?

Massachusetts state agencies and government bodies, national securities associations, and registered futures associations are exempt.

What data is protected?

MIPSA applies to the personal information of a Massachusetts resident, which is defined as information that identifies, relates to, describes, is reasonably capable of being associated with, or could reasonably be linked, directly or indirectly, with an identified or identifiable individual. Personal information does not include de-identified information or publicly available information. For the limited purposes of a sale, personal information also includes information that identifies, relates to, describes, is reasonably capable of being associated with, or could reasonably be linked, directly or indirectly, with an identified or identifiable household.

Does the Act include special protections for Sensitive Information?

The Act carves out heightened protections for sensitive information. These include the right to notice of collection and use, and the right to limit use and disclosure to purposes necessary to perform the services or provide the goods requested, and for other controller internal uses as authorized by the Act.

Sensitive information is personal information that reveals an individual’s racial or ethnic origin, religious beliefs, philosophical beliefs, union membership, citizenship, or immigration status. It also includes biometric information or genetic information that is processed for the purpose of uniquely identifying an individual; personal information concerning a resident’s mental or physical health diagnosis or treatment, sex life or sexual orientation; specific geolocation information; personal information from a child; a Social Security Number, driver’s license number, military identification number, passport number, or state-issued identification card number; and a financial account number, credit or debit card number, with or without any required security code, access code, personal identification number or password, that would permit access to an individual’s financial account.

Is any personal information exempt from the Act?

Protected health information under HIPAA is exempt, as is certain data, information, and health records created under HIPAA and Massachusetts state law. Exempt data also includes data collected, processed, or regulated in connection with clinical trials, the Health Care Quality Improvement Act of 1986, the Patient Safety and Quality Improvement Act, the FCRA, the Driver’s Privacy Protection Act, FERPA, the Farm Credit Act, the GLBA, COPPA, and the Massachusetts Health Insurance Connector and Preferred Provider Arrangements.

Does the MIPSA apply to employee personal information or information collected in the B2B context?

The Act also exempts personal information collected and processed in the context of an individual acting as a job applicant to, an employee of, or an agent or independent contractor of a controller, processor, or third party, including emergency contact information and information used to administer benefits for another person relating to the individual.

Information collected and used in the course of an individual acting in a commercial context is exempt.

What are the controller’s obligations under the MIPSA?

The Act creates an affirmative obligation to implement appropriate technical and organizational safeguards to ensure the security of the information. In addition, the controller must have a lawful basis to process the personal information. Processing must be done in a fair and transparent manner, which includes providing appropriate privacy notices at or before the point of collection. The controller must collect personal information for an identified and legitimate purpose and processing should be limited to what is necessary to achieve the purpose. The information must be accurate and retained only as long as necessary to achieve the purpose for which it was collected. For processing that may involve a high risk of harm to individuals, the controller may be obligated to conduct a risk assessment. When engaging a processor, the controller must enter into a data processing agreement with the processor that contains mandated provisions designed to ensure the privacy and security of personal information.

What rights do protected individuals have?

Massachusetts residents have the right to know, access, port, delete, and correct their personal information, subject to certain limitations. The Act also provides the right to opt out of the sale of personal information and to limit the use and disclosure of sensitive information, in particular with respect to targeted advertising. The data controller is prohibited from discriminating against the individual for exercising any of these rights.

Can my organization be sued for violations of the law?

The MIPSA does not include a private cause of action for violations of the Act. However, the proposed bill also amends the state data breach notification law to provide residents with a private right of action where their personal information was subject to a data breach resulting from the entity’s failure to implement reasonable safeguards.

How will the law be enforced?

The state Attorney General is authorized to commence a civil investigation when there is reasonable cause to believe an entity has engaged in, is engaging in, or is about to engage in a violation of the Act. After notice, the entity will have 30 days to cure the violation. In the event the entity fails to cure, the Attorney General may seek a temporary restraining order, preliminary injunction, or permanent injunction to restrain any violations and may seek civil penalties of up to $7,500 for each violation.

Next steps?

The MIPSA sets a high bar for data protection practices. Whether enacted in whole or part, the Act provides a road map for where data protection laws are headed. Many of the 2022 proposed state laws follow or surpass the protections introduced by the CCPA. Preparing to meet each more comprehensive law will require continued data mapping, ongoing evaluation and development of written information security programs, heightened scrutiny of vendor relationships and agreements, risk assessments, and updated employee data protection and security awareness training.

We will continue to monitor the progress of this bill.

When Massachusetts issued its data security regulations in 2009 (Regulations), it led the way for states on data security. The Regulations became effective 12 years ago, almost to the day, March 1, 2010. The Bay State is now contemplating comprehensive privacy legislation, the Massachusetts Information Privacy and Security Act (MIPSA), similar to what has been enacted in California, Colorado, and Virginia. As we review this legislation, the MIPSA provides an important reminder, even if it is not ultimately enacted.

The MIPSA would provide individuals a private right of action if their personal information is subject to a breach of security under Massachusetts law caused by a failure to implement reasonable cybersecurity controls. Damages could be up to $500 per individual per incident or actual damages, whichever is greater. The CCPA contains a similar provision.

Under the MIPSA, if enacted in its current form and following a similar approach taken in neighboring Connecticut, controllers would be able to avoid punitive damages in such cases provided they:

  • created, maintained, and complied with a written cybersecurity program with administrative, physical, and technical safeguards that conforms to an industry-recognized framework, and
  • designed the program in accordance with the Regulations based on an appropriate scale and scope.

Examples of industry recognized frameworks under MIPSA would include:

  • National Institute of Standards and Technology’s (NIST) special publications 800-171 or 800-53
  • The Center for Internet Security’s (CIS) Critical Security Controls

The Wall Street Journal reported on Friday that the state legislature’s Joint Committee on Advanced Information Technology passed the MIPSA on a bipartisan vote with no objections. It now moves to the full legislature.

If you have waited 12 years to develop that perfect written information security program (WISP), this might be the time to apply the finishing touches. If you have opened a new business in or expanded to Massachusetts, or recently began collecting personal information of Massachusetts residents, a WISP is a critical compliance requirement. If the MIPSA is enacted, a WISP could play a significant role in minimizing exposure to your organization should it be sued in connection with a data breach.


On February 9, the Securities and Exchange Commission (“SEC”) voted to propose rule 206(4)-9 under the Advisers Act and 38a-2 under the Investment Company Act (collectively, “Proposed Rule”). In general, the Proposed Rule would require all advisers and funds to adopt and implement cybersecurity policies and procedures containing several elements. While acknowledging spending on cybersecurity measures in the financial services industry is considerable, the SEC suggested it may nonetheless be inadequate, citing a recent survey finding that 58% of financial firms self-reported “underspending” on cybersecurity measures.

For financial services firms, the stakes are particularly high—it is where the money is.

The Proposed Rule would apply to advisers and funds that are registered or required to be registered with the SEC. Covered advisers include those who provide a variety of services to their clients, such as: financial planning advice, portfolio management, pension consulting, selecting other advisers, publication of periodicals and newsletters, security rating and pricing, market timing, and educational seminars. Compliance will be particularly important for those advisors and funds serving the retirement plan industry and now facing increased cybersecurity scrutiny by plan fiduciaries under the Department of Labor’s cybersecurity guidance.

The SEC recognizes the level of risk can vary from adviser to adviser, but also notes that advisers on the lower end of the risk continuum do not have a pass on compliance. A data breach at an adviser that only offers advice on wealth allocation strategies may not have a significant negative effect on its clients due to the limited personal information it maintains. Compare that to a breach experienced by an adviser performing portfolio management services, which holds greater quantities of more sensitive personal information and may have a degree of control over client assets. But even if personal information is not acquired by hackers, a ransomware event could cause a disruption to the adviser’s services adversely impacting its clients, such as limiting their ability to access funds.

So, there is not a one-size-fits-all approach to addressing cybersecurity risks and the Proposed Rule would allow firms to tailor their cybersecurity policies and procedures to fit the nature and scope of their business and address their individual cybersecurity risks. This is not unlike other cybersecurity frameworks, such as the Security Rule for healthcare providers and plans under the Health Insurance Portability and Accountability Act.

According to the SEC’s Fact Sheet, the Proposed Rule would:

  • Require advisers and funds to adopt and implement written policies and procedures that are reasonably designed to address cybersecurity risks. This requirement includes a comprehensive, documented risk assessment of the adviser’s or fund’s business operations. At least annually, advisers and funds would need to review and evaluate the design and effectiveness of their cybersecurity policies and procedures, allowing them to be updated in the face of ever-changing cyber threats and technologies.
  • Require advisers to report significant cybersecurity incidents to the Commission on proposed Form ADV-C, with similar reporting for funds.
  • Enhance adviser and fund disclosures related to cybersecurity risks and incidents. For instance, the Proposed Rule would amend adviser and fund disclosure requirements to provide current and prospective advisory clients and fund shareholders with improved information regarding cybersecurity risks and cybersecurity incidents.
  • Require advisers and funds to maintain, make, and retain certain cybersecurity-related books and records. This would include records related to compliance with the Proposed Rule and the occurrence of cybersecurity incidents.

The SEC hopes the rules would promote a more comprehensive framework to address cybersecurity risks for advisers and funds, resulting in a reduction in risk and impact of a significant cybersecurity incident. But the SEC also hopes to give clients and investors better information with which to make investment decisions, and itself better information with which to conduct comprehensive monitoring and oversight of ever-evolving cybersecurity risks and incidents affecting advisers and funds.

Facial recognition, voiceprint, and other biometric-related technologies are booming, and they continue to infiltrate different facets of everyday life. The technology brings countless potential benefits, as well as significant data privacy and cybersecurity risks.

Whether it is facial recognition technology being used with COVID-19 screening tools and in law enforcement, continued use of fingerprint-based time management systems, or the use of various biometric identifiers such as voiceprint for physical security and access management, applications in the public and private sectors involving biometric identifiers and information continue to grow … so do concerns about the privacy and security of that information and civil liberties. Over the past few years, significant compliance and litigation risks have emerged that factor heavily into the deployment of biometric technologies, particularly facial recognition. This is particularly the case in Illinois under the Biometric Information Privacy Act (BIPA).

Read our Special Report, which discusses these concerns and the growing legislative activity. You can also access our Biometric Law Map.

In honor of Data Privacy Day, we provide the following “Top 10 for 2022.”  While the list is by no means exhaustive, it does provide some hot topics for organizations to consider in 2022.

  1. State Consumer Privacy Law Developments

On January 1, 2020, the CCPA ushered into the U.S. a range of new rights for consumers, including:

  • The right to request deletion of personal information;
  • The right to request that a business disclose the categories of personal information collected and the categories of third parties to which the information was sold or disclosed;
  • The right to opt out of the sale of personal information; and
  • The California consumer’s right to bring a private right of action against a business that experiences a data breach affecting their personal information as a result of the business’s failure to implement “reasonable safeguards.”

In November of 2020, California voters passed the California Privacy Rights Act (CPRA), which amends and supplements the CCPA, expanding compliance obligations for companies and consumer rights. Of particular note, the CPRA extends the employment-related personal information carve-out until January 1, 2023. The CPRA also introduces consumer rights relating to certain sensitive personal information, imposes an affirmative obligation on businesses to implement reasonable safeguards to protect certain consumer personal information, and prevents businesses from retaliating against employees for exercising their rights. The CPRA’s operative date is January 1, 2023, and draft implementation regulations are expected by July 1, 2022. Businesses should monitor CCPA/CPRA developments and ensure their privacy programs and procedures remain aligned with current CCPA compliance requirements. For practical guidance on navigating compliance, check out our newly updated CCPA/CPRA FAQs.

In addition to California developments, in 2021, Virginia and Colorado also passed consumer privacy laws similar in kind to the CCPA, both effective January 1, 2023 (together with the CPRA). While the three state laws share common principles, including consumer rights of deletion, access, correction, and data portability for personal data, they also contain key nuances, which pose challenges for broad compliance. Moreover, at least 26 states have considered or are considering similar consumer privacy laws, which will only further complicate the growing patchwork of state compliance requirements.

In 2022, businesses are strongly urged to prioritize their understanding of what state consumer privacy obligations they may have, and strategize for implementing policies and procedures to comply.

  2. Biometric Technology Related Litigation and Legislation

There was a continued influx of biometric privacy class action litigation in 2021, and this will likely continue in 2022. In early 2019, the Illinois Supreme Court handed down a significant decision concerning the ability of individuals to bring suit under Illinois’s Biometric Information Privacy Act (BIPA). In short, individuals need not allege actual injury or adverse effect beyond a violation of their rights under the BIPA to qualify as an aggrieved person and be entitled to seek liquidated damages, attorneys’ fees and costs, and injunctive relief under the Act.

Consequently, simply failing to adopt a policy required under the BIPA, collecting biometric information without a release, or sharing biometric information with a third party without consent could trigger liability under the statute. Potential damages are substantial, as the BIPA provides for statutory damages of $1,000 per negligent violation or $5,000 per intentional or reckless violation of the Act. There continues to be a flood of BIPA litigation, primarily against employers with biometric timekeeping/access systems that have failed to adequately notify and obtain written releases from their employees for such practices.

Biometric class action litigation has also been impacted by COVID-19. Screening programs in the workplace may involve the collection of biometric data, whether by a thermal scanner, facial recognition scanner, or other similar technology. In late 2020, plaintiffs’ lawyers filed a class action lawsuit on behalf of employees concerning their employer’s COVID-19 screening program, which is alleged to have violated the BIPA. According to the complaint, employees were required to undergo facial geometry scans and temperature scans before entering company warehouses, without prior consent from employees as required by law. The case remained alive and well at the start of 2022: despite significant attempts by the defense, a federal district judge in Illinois declined to dismiss the proposed class action, finding that the allegations relating to “possession” and “collection” of biometric data passed muster at that stage. Many businesses have been sued under the BIPA for similar COVID-related claims in the past year, and 2022 will likely see continued class action litigation in this space.

In 2021, biometric technology-related laws began to evolve at a rapid pace, signaling a continued trend into 2022. In July 2021, New York City established BIPA-like requirements for retail and hospitality businesses that collect and use “biometric identifier information” from customers. In September 2021, the City of Baltimore officially banned private use of facial recognition technology; Baltimore’s local ordinance prohibits persons (including residents, businesses, and most of the city government) from “obtaining, retaining, accessing, or using certain face surveillance technology or any information obtained from certain face surveillance technology.” Other localities, including Portland (Oregon) and San Francisco, have also established prohibitions on the use of biometric technology. State legislatures have also increased focus on biometric technology regulation. In addition to Illinois’s BIPA, Washington and Texas have similar laws, and states including Arizona, Florida, Idaho, Massachusetts, and New York have also proposed such legislation. The proposed biometric law in New York state would mirror Illinois’s BIPA, including its private right of action provision. In California, the CCPA also broadly defines biometric information as one of the categories of personal information protected by the law.

Additionally, states are increasingly amending their breach notification laws to add biometric information to the categories of personal information that require notification, including a 2021 amendment in Connecticut and 2020 amendments in California, D.C., and Vermont. Similar proposals across the U.S. are likely in 2022.

In response to the constantly evolving legislation related to biometric technology, we have created an interactive biometric law state map to help businesses that want to deploy these technologies, which inevitably require the collection, storage, and/or disclosure of biometric information, track their privacy and security compliance obligations.

  3. Ransomware Attacks

Ransomware attacks continued to make headlines in 2021, impacting large organizations, including Colonial Pipeline, Steamship Authority of Massachusetts, the NBA, JBS Foods, the D.C. Metropolitan Police Department, and many more. Ransomware attacks are nothing new, but they are increasing in severity. There has been an increase in the frequency of attacks and higher ransomware payments, in large part due to increased remote work and the associated security challenges. The healthcare industry in particular has been substantially impacted since the onset of the COVID-19 pandemic: a recent study by Comparitech found that ransomware attacks on the healthcare industry have resulted in financial losses of over $20 billion in impacted revenue, litigation, and ransomware payments, and that figure is still growing.

In fact, the FBI, jointly with the Cybersecurity and Infrastructure Security Agency (CISA), went so far as to issue a warning to be on high alert for ransomware attacks over the holidays in light of numerous targeted attacks over other holidays earlier in the year.

Moreover, in 2021, the National Institute of Standards and Technology (NIST) released a preliminary draft of its Cybersecurity Framework Profile for Ransomware Risk Management. The NIST framework provides steps for protecting against ransomware attacks, recovering from ransomware attacks, and determining your organization’s state of readiness to prevent and mitigate ransomware attacks.

Ransomware continues to present a significant threat to organizations as we move into 2022. Organizations may not be able to prevent all attacks, but it is important to remain vigilant and be aware of emerging trends.

Here are some helpful resources for ransomware attack prevention and response:

  4. Biden Administration Prioritizes Cybersecurity

In large part due to the significant threat of ransomware attacks discussed above, the Biden Administration has made clear that cybersecurity protections are a priority. In May of 2021, on the heels of the Colonial Pipeline ransomware attack that snarled the flow of gas on the east coast for days, the Biden Administration issued an Executive Order on “Improving the Nation’s Cybersecurity” (EO). The EO was in the works prior to the Colonial Pipeline cyberattack but was certainly prioritized as a result. The EO made a clear statement on the policy of the Administration: “It is the policy of my Administration that the prevention, detection, assessment, and remediation of cyber incidents is a top priority and essential to national and economic security.  The Federal Government must lead by example.  All Federal Information Systems should meet or exceed the standards and requirements for cybersecurity set forth in and issued pursuant to this order.” The EO mostly impacts the federal government and its agencies. However, several of the requirements in the EO will reach certain federal contractors and will also influence the private sector.

Shortly after the Biden Administration issued the EO, it followed in August 2021 with the issuance of a National Security Memo (NSM) with the intent of improving cybersecurity for critical infrastructure systems. This NSM established an Industrial Control Systems Cybersecurity Initiative (the “Initiative”) that will be a voluntary, collaborative effort between the federal government and members of the critical infrastructure community aimed at improving voluntary cybersecurity standards for companies that provide critical services.

The primary objective of the Initiative is to encourage, develop, and enable deployment of a baseline of security practices, technologies and systems that can provide threat visibility, indications, detection, and warnings that facilitate response capabilities in the event of a cybersecurity threat.  According to the President’s Memo, “we cannot address threats we cannot see.”

And most recently, in early January 2022, President Biden issued an additional NSM to improve the cybersecurity of National Security, Department of Defense, and Intelligence Community Systems.  “Cybersecurity is a national security and economic security imperative for the Biden Administration, and we are prioritizing and elevating cybersecurity like never before…Modernizing our cybersecurity defenses and protecting all federal networks is a priority for the Biden Administration, and this National Security Memorandum raises the bar for the cybersecurity of our most sensitive systems,” stated the White House in its issuance of the latest NSM.

The U.S. government will continue to ramp up efforts to strengthen its cybersecurity as we head into 2022, impacting both the public and private sectors. Businesses across all sectors should be evaluating their data privacy and security threats and vulnerabilities and adopting measures to address their risk and improve compliance.

  5. COVID-19 Privacy and Security Considerations

During 2020 and 2021, COVID-19 presented organizations large and small with new and unique data privacy and security considerations. And while we had high hopes that increased vaccination rates would put this pandemic in the rearview mirror, the latest omicron variant showed us otherwise. Most organizations, particularly in their capacity as employers, needed to adopt COVID-19 screening and testing measures resulting in the collection of medical and other personal information from employees and others. While the Supreme Court has stayed OSHA’s ETS mandating that employers with 100+ employees require COVID-19 vaccination and the Biden Administration ultimately withdrew the same, some localities have instituted mandates depending on industry, and many employers have voluntarily decided to institute vaccine requirements for employees. Ongoing vigilance will be needed to maintain the confidential and secure collection, storage, disclosure, and transmission of medical and COVID-19 related data that may now include tracking data related to vaccinations or the side effects of vaccines.

Several laws apply to the data organizations may collect in this instance. In the case of employees, for example, the Americans with Disabilities Act (ADA) requires maintaining the confidentiality of employee medical information, and this may include COVID-19 related data. Several state laws also have safeguard requirements and other protections for such data that organizations should be aware of when they or others on their behalf process that information.

Many employees will continue to telework during 2022 (and beyond). A remote workforce creates increased risks and vulnerabilities for employers in the form of sophisticated phishing email attacks or threat actors gaining unauthorized access through unsecured remote access tools. It also presents privacy challenges for organizations trying to balance business needs and productivity with expectations of privacy. These risks and vulnerabilities can be addressed and remediated through periodic risk assessments, robust remote work and bring your own device policies, and routine monitoring.

As organizations continue to work to create safe environments for the in-person return of workers, customers, students, patients and visitors, they may rely on various technologies such as wearables, apps, devices, kiosks, and AI designed to support these efforts. These technologies must be reviewed for potential privacy and security issues and implemented in a manner that minimizes legal risk.

Some reminders and best practices when collecting and processing information referred to above and rolling out these technologies include:

  • Complying with applicable data protection laws when data is collected, shared, secured and stored including the ADA, Genetic Information Nondiscrimination Act, CCPA, GDPR and various state laws. This includes providing required notice at collection under the California Consumer Privacy Act (CCPA), or required notice and a documented lawful basis for processing under the GDPR, if applicable.
  • Complying with contractual agreements regarding data collection; and
  • Contractually ensuring vendors who have access to or collect data on behalf of the organization implement appropriate measures to safeguard the privacy and security of that data.
  6. “New” EU Standard Contractual Clauses

In July of 2020, the Court of Justice of the European Union (CJEU) published its decision in Schrems II, which declared the EU-US Privacy Shield invalid for cross-border data transfers and affirmed the validity of standard contractual clauses (“SCCs”) as an adequate mechanism for transferring personal data from the EEA, subject to heightened scrutiny. However, the original SCCs were unable to adequately address the EU Commission’s concerns about the protection of personal data.

On June 4, 2021, the EU Commission adopted “new” modernized SCCs to replace the 2001, 2004, and 2010 versions in use up to that point, effective since September 27, 2021. The EU Commission updated the SCCs to address more complex processing activities, the requirements of the GDPR, and the Schrems II decision. These clauses are modular so they can be tailored to the type of transfer. If a data exporter transfers data from the EU to a U.S. organization, the U.S. organization must execute the new SCCs unless the parties rely on an alternate transfer mechanism or an exception exists. This applies regardless of whether the U.S. company receives or accesses the data as a data controller or processor. The original SCCs apply to controller-controller and controller-processor transfers of personal data from the EU to countries without a Commission adequacy decision. The updated clauses are expanded to also include processor-processor and processor-controller transfers. While the existing SCCs were designed for two parties, the new clauses can be executed by multiple parties. The clauses also include a “docking clause” so that new parties can be added to the SCCs throughout the life of the contract.

The obligations of the data importer are numerous and include, without limitation:

  • documenting the processing activities it performs on the transferred data,
  • notifying the data exporter if it is unable to comply with the SCCs,
  • returning or securely destroying the transferred data at the end of the contract,
  • applying additional safeguards to “sensitive data,”
  • adhering to purpose limitation, accuracy, minimization, retention, and destruction requirements,
  • notifying the exporter and data subject if it receives a legally binding request from a public authority to access the transferred data, if permitted, and
  • challenging a public authority access request if it reasonably believes the request is unlawful.

The SCCs require the data exporter to warrant there is no reason to believe local laws will prevent the importer from complying with its obligations under the SCCs. In order to make this representation, both parties must conduct and document a risk assessment of the proposed transfer.

If an organization that transfers data cross-border has not already done so, it should be implementing the new procedures and documents for the SCCs, unless it is relying on an alternate transfer mechanism or an exception exists. Organizations will also need to review any ongoing transfers made in reliance on the old SCCs and take steps to comply. As with new transfers, this will require a documented risk assessment and a comprehensive understanding of the organization’s process for accessing and transferring personal data protected under GDPR. For additional guidance on the new EU SCCs, our comprehensive FAQs are available here.

  7. TCPA

In April 2021, the U.S. Supreme Court issued a monumental decision with significant impact on the future of Telephone Consumer Protection Act (TCPA) class action litigation. The Court narrowly ruled that, to qualify as an “automatic telephone dialing system”, a device must be able to either “store a telephone number using a random or sequential generator or to produce a telephone number using a random or sequential number generator”. The underlying decision of the Ninth Circuit was reversed and remanded.

The Supreme Court unanimously concluded, in a decision written by Justice Sotomayor, that to qualify as an “automatic telephone dialing system” under the TCPA, a device must have the capacity either to store, or to produce, a telephone number using a random or sequential number generator.

“Expanding the definition of an autodialer to encompass any equipment that merely stores and dials telephone numbers would take a chainsaw to these nuanced problems when Congress meant to use a scalpel,” Justice Sotomayor pointed out in rejecting the Ninth Circuit’s broad interpretation of the law.

Moreover, Sotomayor noted that, “[t]he statutory context confirms that the autodialer definition excludes equipment that does not “us[e] a random or sequential number generator.”” The TCPA’s restrictions on the use of autodialers include using an autodialer to call certain “emergency telephone lines” and lines “for which the called party is charged for the call”. The TCPA also prohibits the use of an autodialer “in such a way that two or more telephone lines of a multiline business are engaged simultaneously.” The Court narrowly concluded that “these prohibitions target a unique type of telemarketing equipment that risks dialing emergency lines randomly or tying up all the sequentially numbered lines at a single entity.”

The Supreme Court’s decision resolved a growing circuit split, where several circuits had previously interpreted the definition of an ATDS broadly to encompass any equipment that merely stores and dials telephone numbers, while other circuits provided a narrower interpretation, in line with the Supreme Court’s ruling. It was expected the Supreme Court’s decision would help resolve the ATDS circuit split and provide greater clarity and certainty for parties facing TCPA litigation. In the six months following the Supreme Court’s decision, the Institute of Legal Reform documented a 31% drop in TCPA filings, compared to the six months prior to the ruling. Nonetheless, many claims based on broad ATDS definitions are still surviving the early stages of litigation in the lower courts, and some states have enacted (or are considering) “mini-TCPAs” which include a broader definition of ATDS. While the Supreme Court’s decision was considered a win for defendants facing TCPA litigation, organizations are advised to review and update their telemarketing and/or automatic dialing practices to ensure TCPA compliance as they move into 2022.

  8. Global Landscape of Data Privacy & Security

2021 was a significant year for the global landscape of data privacy and security. As discussed above, on June 4th, the European Commission adopted new standard contractual clauses for the transfer of personal data from the EU to “third countries”, including the U.S. On August 20, China passed its first comprehensive privacy law, the Personal Information Protection Law (PIPL), similar in kind to the EU’s GDPR. The law took effect in November of 2021. In addition, China published 1) Security Protection Regulations on the Critical Information Infrastructure and 2) the Data Security Law, which aim to regulate data activities, implement effective data safeguards, protect individual and entity legitimate rights and interests, and ensure state security; both were effective as of September 2021. Finally, Brazil enacted the Lei Geral de Proteção de Dados Pessoais (LGPD), its first comprehensive data protection regulation, again with GDPR-like principles. The LGPD became enforceable in August of 2021.

In 2022, U.S. organizations may face increased data protection obligations as a result of where they have offices, facilities, or employees; whose data they collect; where the data is stored; whether it is received from outside the U.S.; and how it is processed or shared. These factors may trigger country-specific data protection obligations such as notice and consent requirements, vendor contractual obligations, data localization or storage concerns, and safeguarding requirements. Some of these laws may apply to data collection activities in a country regardless of whether the U.S. business is located there.

  9. Federal Consumer Privacy Law

Numerous comprehensive data protection laws have been proposed at the federal level in recent years. These laws have generally stalled due to debate over federal preemption and a private right of action. And while, every year, we ask ourselves whether this will be the year, 2022 may indeed be the year the U.S. enacts a federal consumer privacy law. 2022 had barely begun when a coalition that includes the U.S. Chamber of Commerce, together with local business organizations in over 20 states, issued a letter to Congress highlighting the importance of enacting a federal consumer privacy law as soon as possible.

“Data is foundational to America’s economic growth and keeping society safe, healthy and inclusive…Fundamental to the use of data is trust,” the coalition noted. “A national privacy law that is clear and fair to business and empowering to consumers will foster the digital ecosystem necessary for America to compete.”

Moreover, with California, Virginia, and Colorado all with comprehensive consumer privacy laws (as discussed above), and approximately half of U.S. states contemplating similar legislation, there is a growing patchwork of state laws that “threatens innovation and create consumer and business confusion,” as stated in the coalition’s letter to Congress.

Will 2022 be the year the U.S. government enacts a federal consumer privacy law? Only time will tell.  We will continue to update as developments unfold.

  10. Cyber Insurance

Over the past several years, if your organization experienced a cyberattack, such as ransomware or a diversion of funds due to a business email compromise (BEC), and you had cyber insurance, you likely were very thankful. However, if you are renewing that policy (or in the cyber insurance market for the first time), you are probably looking at much steeper rates, higher deductibles, and even co-insurance, compared to just a year or two ago. This is dependent on finding a carrier to provide competitive terms, although there are some steps organizations can take to improve insurability.

Claims paid under cyber insurance policies are significantly up, according to Marc Schein, CIC, CLCS, National Co-Chair of the Cyber Center of Excellence for Marsh McLennan Agency, who closely tracks cyber insurance trends. Mr. Schein identified the key drivers hardening the cyber insurance market: ransomware and business interruption.

According to Fitch Ratings’ Cyber Report 2020, insurance direct written premiums for the property and casualty industry increased 22% in the past year to over $2.7 billion, representing the demand for cyber coverage. The industry statutory direct loss plus defense and cost containment (DCC) ratio for standalone cyber insurance rose sharply in 2020 to 73% compared with an average of 42% for the previous five years (2015-2019). The average paid loss for a closed standalone cyber claim moved to $358,000 in 2020 from $145,000 in 2019.

The effects of these, other increases in claims, and losses from cyberattacks had a dramatic impact on cyber insurance. Perhaps the most concerning development for organizations in the cyber insurance market is the significantly increased scrutiny carriers are applying to an applicant’s insurability.

There are no silver bullets, but implementing administrative, physical and technical safeguards to protect personal information may dramatically reduce the chances of a cyberattack, and that is music to an underwriter’s ears. As an organization heads into 2022, ensuring such safeguards are instituted and regularly reviewed, can go a long way.

*      *     *     *     *

For these reasons and others, we believe 2022 will be a significant year for privacy and data security.

Happy Privacy Day!

Few want to get past the COVID-19 pandemic more than leaders of federal and state unemployment benefit departments. For the last 2 years they have been successfully targeted for fraud and data breaches, racking up billions in losses. Thousands of employees across the country, including yours truly, have had false claims submitted in their name.

Why is this happening? It appears to be a combination of factors, most leading back to one driving force – COVID-19. Congress’s passage of rich unemployment compensation benefits, specifically the Pandemic Unemployment Assistance (PUA) program, to offset the economic carnage stemming from the pandemic created a significant incentive for criminal hackers. During the same time, the number of workers in state unemployment offices went down due to layoffs, while the number of applications for unemployment benefits skyrocketed. Couple that with an expansion of benefits to workers without traditional pay stubs (e.g., gig workers) making verification harder, and the data security gaps and challenges regularly facing state agencies and organizations generally, and there is a perfect storm for fraud and data breaches to proliferate.

Here’s a rundown of just some of the losses reported by Yahoo!news:

  • Oregon – $24 million in 2020
  • Washington – $646 million in 2020
  • California – $20 billion, since the start of the pandemic through October 2021
  • Federal – $87.3 billion since the start of the pandemic through September 30, 2021, per the DOL (relying on a historical improper payment rate of 10%).

What are some of the effects? There is, of course, a significant loss of taxpayer dollars, not to mention all the time spent trying to resolve the fraud, getting the much-needed benefits to those whose benefits were delayed due to the fraud, and implementing stronger controls.

With so many employees learning of and reporting false unemployment claims being submitted in their name, employers across the country have had to jump in to help. Frequently, many employees at a single company reported fraud at the same time, making it seem as if the company was the victim of a breach. While it is always important to appropriately investigate suspected data incidents, a compromise to the employer’s systems generally was not the reason for the employees’ reports in these cases.

Is it coming to an end? Maybe not. On Friday, Pennsylvania’s Department of Labor and Industry (L&I) reported it is investigating “sophisticated attacks” on its systems. According to reports,

unemployment recipients stopped receiving their checks, and L&I telephone agents told [them] they were among numerous Pennsylvanians whose direct-deposit banking information had been changed

What can affected organizations and individuals do? Affected federal and state agencies have been and continue to be taking steps to minimize these attacks and the resulting fraud. One of those steps is to deploy facial recognition technologies to more strongly verify the identities of claimants. By late summer, more than half of the states in the U.S. had contracted with ID.me to provide ID verification services. For private sector organizations, the deployment of such technologies to verify identities of customers and employees faces a growing web of regulation. Other efforts to curb this kind of activity include steps all organizations might consider, like enabling multi-factor authentication (MFA). This is something the PA L&I wished it had done. Hopefully, pandemics are not regular occurrences. But planning for business interruption is critical.

For organizations and their employees affected by unemployment fraud, it is important to quickly report incidents and follow recommended steps by the applicable agency. Below are just a few of the online resources that may be helpful.

The California Consumer Privacy Act (CCPA), considered one of the most expansive U.S. privacy laws to date, went into effect on January 1, 2020. The CCPA places significant limitations on the collection and sale of a consumer’s personal information and provides consumers new and expansive rights with respect to their personal information.

Less than one year later, on November 3, 2020, a majority of California residents voted in favor of Proposition 24, which included the California Privacy Rights Act (CPRA). The CPRA builds upon the CCPA’s extensive framework of privacy rights and obligations, both expanding and modifying key aspects of the CCPA, and generally becomes effective January 1, 2023.

Click here to read our CCPA/CPRA FAQs

We substantially updated our prior CCPA FAQs to cover many of the CPRA changes. Our hope is they help businesses learn more about the obligations they may have and strategies for implementing policies and procedures to comply.