Governor Newsom recently signed two significant bills focused on protecting digital likeness rights: Assembly Bill (AB) 1836 and Assembly Bill (AB) 2602. These measures aim to address the complex issues surrounding the commercial use of an individual’s digital likeness and to establish guidelines for responsible AI use in the digital age.

California AB 1836 addresses the use of likeness and digital replica rights for various individuals and establishes regulatory safeguards for digital replicas and avatars used in commercial settings. The bill outlines the following key provisions:

  • The law defines digital replicas as any digital representation of an individual that is created using their likeness, voice, or other personal attributes.
  • Explicit consent must be obtained from individuals before their digital replicas can be used for any commercial purpose. Consent must be documented and cannot be implied or assumed.
  • The law restricts the use of digital replicas in contexts that could mislead or deceive consumers, including political endorsements, commercial advertisements, and other public statements without the individual’s explicit consent.
  • Violations of AB 1836 can result in significant penalties, including fines and potential civil lawsuits. The bill empowers individuals to seek damages if their digital replicas are used without consent.

AB 2602 complements AB 1836 by further strengthening the legal framework surrounding digital replicas. AB 2602 specifically addresses the following aspects:

  • Sets forth stringent privacy protections for individuals whose digital replicas are used in any capacity. This includes safeguarding personal data and ensuring that digital replicas are not exploited for unauthorized purposes.
  • Mandates that any use of digital replicas must be accompanied by clear disclosures indicating the nature of the replica and the purpose for which it is being used. This ensures that consumers are informed and not misled.
  • Imposes harsher penalties for violations, including higher fines and longer statutes of limitations for filing civil lawsuits. It also provides for criminal charges in severe cases of misuse.
  • Requires businesses using digital replicas to undergo regular third-party audits to verify compliance with AB 2602. These audits will help maintain transparency and accountability.

While California has taken a pioneering step with AB 1836 and AB 2602, other states have also enacted or proposed legislation to address digital replica rights. The following are examples of how different states handle these rights:

  • New York: New York has robust laws protecting individuals’ rights to their likeness and voice. The state’s Civil Rights Law Sections 50 and 51 provide individuals with the right to control the commercial use of their image and voice. Explicit consent is required for any commercial usage, similar to AB 1836.
  • Florida: Florida’s statutes also protect individuals’ rights to their likeness. The Florida Statutes Section 540.08 mandates that explicit consent must be obtained for using an individual’s name, photograph, or likeness for commercial purposes. The law provides a framework similar to California’s AB 1836.
  • Illinois: Illinois has the Right of Publicity Act, which prohibits the unauthorized use of an individual’s identity for commercial purposes. The Act is comprehensive, covering various aspects of an individual’s persona, including their voice, signature, photograph, and likeness. Violations can lead to considerable fines and damages.
  • Texas: Texas recognizes an individual’s right to control the use of their likeness and voice through the Texas Property Code Section 26.001, which requires written consent for commercial use. The law is designed to protect individuals from unauthorized exploitation of their persona.

Employers and businesses must be aware of the following takeaways from AB 1836, AB 2602, and other state legislation to ensure compliance and avoid potential legal repercussions:

  • Obtain Clear Consent: Employers must implement mechanisms to obtain clear and documented consent from employees or any other individuals whose digital replicas will be used for commercial activities, such as marketing, and should consider following a similar practice whenever using an employee’s digital replica in other contexts.
  • Review Existing Practices: It is essential for businesses to review their current practices involving digital replicas and ensure they align with the new legal requirements. This includes updating contracts and privacy policies to meet the standards set by different states.
  • Train Employees: Businesses should provide training to employees on the implications of these laws and the importance of obtaining consent before using digital replicas. This training should cover the specific requirements of the states in which the business operates.
  • Monitor Compliance: Establish a compliance monitoring system to regularly check that all practices involving digital replicas adhere to the provisions of AB 1836, AB 2602, and other relevant state legislation. Regular audits and updates can help maintain compliance across multiple jurisdictions.

California Assembly Bills 1836 and 2602 mark significant developments in the realm of digital replica rights, emphasizing the need for explicit consent, transparency, and enhanced privacy protections. If you have questions about AB 1836 and 2602 or related issues, contact a Jackson Lewis attorney to discuss.

Artificial Intelligence (AI) has created numerous opportunities for growth and economic development throughout California. However, the unregulated use of AI can open a Pandora’s Box of undesirable consequences, and a regulatory framework that produces inconsistent results likely will create problems of its own. Acknowledging this, the most recent session of the California Legislature included a bevy of bills aimed at regulating the use of AI, among them a measure establishing a formal, legal definition of AI to be used across various California statutes.

On September 28, 2024, Governor Newsom signed Assembly Bill (AB) 2885, which defines AI as

an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.

The purpose of this definition is to standardize the definition of AI across various California statutes, including the California Business and Professions Code, Education Code, and Government Code.  According to the California legislature, this definition is broad enough to cover all conceivable uses of AI, yet it limits what is considered AI solely to “engineered or machine-based systems” (i.e., not biological organisms).  Moving forward, we can expect the legislature to continue using this definition of AI as it navigates the novel legal issues that arise in our ever-evolving technological world.

The amendments made by this bill take effect January 1, 2025.

Announcing its fourth ransomware cybersecurity investigation and settlement, the Office for Civil Rights (OCR) also observed there has been a 264% increase in large ransomware breaches since 2018.

Here, the OCR reached an agreement with a medium-sized private healthcare provider to resolve potential violations of the HIPAA Security Rule following a ransomware attack. The settlement included a payment of $250,000 and a promise by the covered entity to take certain steps regarding the security of protected health information (PHI).

“Cybercriminals continue to target the health care sector with ransomware attacks. Health care entities that do not thoroughly assess the risks to electronic protected health information and regularly review the activity within their electronic health record system leave themselves vulnerable to attack, and expose their patients to unnecessary risks of harm,” said OCR Director Melanie Fontes Rainer.

In this case, the OCR announcement states that nearly 300,000 patients were affected by the ransomware attack. As in most OCR investigations under similar circumstances, the agency examines the covered entity’s compliance with the Security Rule. And, as described in many of its settlements, the OCR focuses on the administrative, physical, and/or technical standards it believes the covered entity or business associate failed to satisfy. By focusing now on the actions described below, a covered entity facing an OCR investigation, perhaps because of a ransomware or other data breach, likely will be in a stronger, more defensible position.

These actions include: 

  • Conduct an accurate and thorough risk analysis to determine the potential risks and vulnerabilities to the confidentiality, integrity, and availability of its ePHI; 
  • Implement a risk management plan to address and mitigate security risks and vulnerabilities identified in their risk analysis; 
  • Develop a written process to regularly review records of information system activity, such as audit logs, access reports, and security incident tracking reports; 
  • Develop policies and procedures for responding to an emergency or other occurrence that damages systems that contain ePHI; 
  • Develop written procedures to assign a unique name and/or number for identifying and tracking user identity in its systems that contain ePHI; and 
  • Review and revise, if necessary, written policies and procedures to comply with the HIPAA Privacy and Security Rules.  

The OCR also recommends the following steps to mitigate or prevent cyber-threats: 

  • Review all vendor and contractor relationships to ensure business associate agreements are in place as appropriate and address breach/security incident obligations. 
  • Integrate risk analysis and risk management into business processes, conducting them regularly and whenever new technologies and business operations are planned.
  • Ensure audit controls are in place to record and examine information system activity. 
  • Implement regular review of information system activity. 
  • Utilize multi-factor authentication to ensure only authorized users are accessing ePHI. 
  • Encrypt ePHI to guard against unauthorized access (a minimal illustration follows this list).
  • Incorporate lessons learned from incidents into the overall security management process. 
  • Provide training specific to the organization and to job responsibilities on a regular basis; reinforce workforce members’ critical role in protecting privacy and security.
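To make the encryption item above concrete, here is a minimal sketch, assuming the third-party Python cryptography package (pip install cryptography) and a hypothetical sample record. It is only an illustration of encrypting data at rest; a real ePHI program would also address key management, encryption in transit, and access controls.

```python
# Minimal sketch: encrypting a record at rest with symmetric encryption.
# Assumes the third-party "cryptography" package; the sample record is hypothetical.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, keys belong in a managed key store
cipher = Fernet(key)

record = b'{"patient_id": "A-1001", "result": "normal"}'  # hypothetical ePHI record
token = cipher.encrypt(record)          # ciphertext safe to store at rest
assert cipher.decrypt(token) == record  # round-trips back to the original bytes
```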

Of course, taking these steps should include documenting that you took them. During an OCR investigation, the agency is not going to take your word for the good work that you and your team did. You will need to be able to show the steps taken, and that means written policies and procedures, written assessments, sign-in sheets for training and the materials covered during the training, etc.

HIPAA covered entities and business associates are not all the same, and some will be expected to have a more robust program than others. The good news is that the regulations contemplate this risk-based approach to compliance. But all covered entities and business associates need to take some action in these areas to protect the PHI they collect and maintain.

If there is one thing artificial intelligence (AI) systems need, it is data, and lots of it, as training is essential for an AI model to succeed at a given use case. A recent investigation by Australia’s privacy regulator into the country’s largest medical imaging provider, I-MED Radiology Network, illustrates concerns about the use of medical data to train AI systems. The investigation may offer important insights for healthcare providers in the U.S. that are also trying to leverage the benefits of AI and that grapple with where those applications intersect with privacy and data security laws, including the Health Insurance Portability and Accountability Act (HIPAA).

The Australian Case: I-MED Radiology’s Alleged AI Data Misuse

The Office of the Australian Information Commissioner (OAIC) has initiated an inquiry into allegations that I-MED Radiology Network shared patient chest x-rays with Harrison.ai, a health technology company, to train AI models without first obtaining patient consent. According to reports, a leaked email indicates that Harrison.ai distanced itself from responsibility for patient consent, asserting that compliance with privacy regulations was I-MED’s obligation. Harrison.ai has since stated that the data used was de-identified and that it complied with all legal obligations.

Under Australian privacy law, particularly the Australian Privacy Principles (APPs), personal information may be disclosed only for its intended purpose or for a secondary use that the patient would reasonably expect. It remains unclear whether training AI on medical data qualifies as a “reasonable expectation” for secondary use.

The OAIC’s preliminary inquiries into I-MED Radiology may ultimately clarify how medical data can be used in AI contexts under Australian law, and may offer insights for healthcare providers across borders, including those in the United States.

HIPAA Considerations for U.S. Providers Using AI

The investigation of I-MED raises significant issues that U.S. healthcare providers, subject to HIPAA, should consider, especially given the growing adoption of AI tools in medical diagnostics and treatment. To date, the U.S. Department of Health and Human Services (HHS) has not provided any specific guidance for HIPAA covered entities or business associates concerning AI. In April 2024, HHS publicly shared its plan for promoting responsible use of AI in automated and algorithmic systems by state, local, tribal, and territorial governments in the administration of public benefits. In October 2023, HHS and the Health Sector Cybersecurity Coordination Center (HC3) published a white paper entitled AI-Augmented Phishing and the Threat to the Health Sector. More is expected.

HIPAA regulates the privacy and security of protected health information (PHI), generally requiring covered entities to obtain patient consent or authorization before using or disclosing PHI for purposes outside of certain exceptions, such as treatment, payment, or healthcare operations (TPO).

In the context of AI, the use of de-identified data for research or development purposes, such as training AI systems, can generally proceed without specific patient authorization, provided the data meets HIPAA’s strict de-identification standards. HIPAA generally defines de-identified information as data from which identifying information has been removed in such a way that it cannot be linked back to the individual.

However, U.S. healthcare providers must ensure that de-identification is properly executed, particularly when AI is involved, as the re-identification risks in AI models can be heightened due to the vast amounts of data processed and the sophisticated methods used to analyze it. Therefore, even when de-identified data is used, entities should carefully evaluate the robustness of their de-identification methods and consider whether additional safeguards are needed to mitigate any risks of re-identification.
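As a rough, hedged illustration of what basic identifier removal looks like in practice, the sketch below drops direct identifier fields from a record before it is shared for model training. The field names are hypothetical, and this step alone does not satisfy HIPAA’s Safe Harbor or Expert Determination standards; dates, geographic detail, and rare attributes can still enable re-identification, which is why the additional safeguards discussed above matter.

```python
# Hypothetical sketch: dropping direct identifier fields before sharing data
# for AI training. Not, by itself, HIPAA-grade de-identification.
DIRECT_IDENTIFIERS = {
    "name", "mrn", "ssn", "email", "phone", "address", "date_of_birth",
}

def strip_direct_identifiers(record: dict) -> dict:
    """Return a copy of the record with direct identifier fields removed."""
    return {k: v for k, v in record.items() if k.lower() not in DIRECT_IDENTIFIERS}

print(strip_direct_identifiers({
    "name": "Jane Doe",
    "mrn": "12345678",
    "date_of_birth": "1980-04-02",
    "study_type": "chest x-ray",
    "finding": "no acute abnormality",
}))
# {'study_type': 'chest x-ray', 'finding': 'no acute abnormality'}
```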

Risk of Regulatory Scrutiny

While HIPAA does not currently impose specific obligations on AI use beyond general privacy and security requirements, the I-MED case highlights how AI-driven data practices can attract regulatory attention. U.S. healthcare providers should be prepared for similar scrutiny from federal and state regulators as AI becomes more integrated into healthcare systems.

In addition, there is increasing pressure on policymakers to update healthcare privacy laws, including HIPAA, to address the unique challenges posed by AI and machine learning. Providers should stay informed about potential regulatory changes and proactively implement AI governance frameworks that ensure compliance with both current and emerging legal standards.

Conclusion: Lessons for U.S. Providers

The ongoing investigation into I-MED Radiology’s alleged misuse of medical data for AI training underscores the importance of ensuring legal compliance, patient transparency, and robust data governance in AI applications. For U.S. healthcare providers subject to HIPAA, the case offers several key takeaways:

  1. Develop/Expand Governance to Address AI: AI technologies, including generative AI, are affecting all parts of an organization, from core service delivery to IT, HR, and marketing. Different use cases will drive varied considerations, making a clear yet adaptable governance structure important for ensuring compliance and minimizing organizational risk.
  2. Ensure proper de-identification: When using de-identified data for AI training, healthcare entities should verify that their de-identification methods meet HIPAA’s stringent standards and account for AI’s re-identification risks.
  3. Monitor evolving AI regulations: With increased regulatory attention on AI, healthcare providers should prepare for potential legal developments and enhance their AI governance frameworks accordingly.

By staying proactive, U.S. healthcare providers can harness the power of AI while maintaining compliance with privacy laws and safeguarding patient trust.

According to the California legislature, audio recordings, video recordings, and still images can be compelling evidence of the truth.  However, the proliferation of Artificial Intelligence (AI), specifically, generative AI, has made it drastically easier to create fake content that is almost impossible to distinguish from authentic content.  To address this concern, California’s Governor signed Senate Bill (SB) 942, which requires businesses that provide generative AI systems to make accessible tools to detect whether content was created by AI.

SB 942 defines “covered provider” as “a person [or business] that creates, codes, or otherwise produces generative artificial intelligence systems[, and] that has over 1,000,000 monthly visitors or users and is publicly accessible within the geographic boundaries of the state.”  Under SB 942, a covered provider must offer a publicly accessible AI detection tool at no cost. This tool allows users to assess whether the content was created or altered by AI and provides system provenance data (i.e., information explaining where the data originated) without revealing personal information.

Moreover, AI-generated content must include clear and conspicuous disclosures identifying it as such. Latent disclosures must also convey information about the content’s origin and authenticity, detectable by the AI detection tool.

While this law will not end the challenges employers face trying to distinguish deepfakes from reality, it might help them avoid some critical missteps. Recall the disruption experienced by a school community in Pikesville, Maryland, and in particular its high school principal, when a recording suggested the principal had made racially insensitive and antisemitic remarks. It took several months for the Baltimore County Police Department to investigate and conclude that the recording was a fake, a “deepfake,” generated by AI technology. The increased transparency that SB 942 could bring may have reduced or eliminated the flood of calls to the school, the heightened security, and the employment actions taken against the principal.

Violations of the act can result in civil penalties of $5,000 per violation, enforceable by the Attorney General, city attorneys, or county counsels. This means that certain California businesses that provide generative AI services should create a plan for implementing an AI detection tool that allows consumers to distinguish between AI-generated and human-created content.

Fortunately, technologies exist and are being developed to help organizations address these transparency issues. For example, the Coalition for Content Provenance and Authenticity (C2PA) “addresses the prevalence of misleading information online through the development of technical standards for certifying the source and history (or provenance) of media content.” C2PA may be used to embed metadata into AI-generated content to help verify its source and other information.
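For a sense of the general idea, here is a minimal sketch of embedding provenance-style metadata in an AI-generated image and reading it back. It is not a C2PA implementation (real provenance manifests are cryptographically signed under the C2PA specification); it assumes the Pillow imaging library and uses hypothetical field names.

```python
# Simplified illustration of attaching provenance-style metadata to an
# AI-generated PNG. Not C2PA-conformant; field names are hypothetical.
# Requires the Pillow library (pip install Pillow).
import json
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def tag_ai_generated(in_path: str, out_path: str, generator: str) -> None:
    """Embed a small JSON provenance note in the PNG's text metadata."""
    note = {
        "ai_generated": True,
        "generator": generator,
        "disclosure": "This content was created or altered by an AI system.",
    }
    metadata = PngInfo()
    metadata.add_text("provenance", json.dumps(note))
    with Image.open(in_path) as img:
        img.save(out_path, pnginfo=metadata)

def read_provenance(path: str):
    """Return the embedded provenance note, if any."""
    with Image.open(path) as img:
        raw = img.text.get("provenance")  # PNG text chunks
    return json.loads(raw) if raw else None
```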

 The requirements of SB 942 take effect January 1, 2026.

Virtually all organizations have an obligation to safeguard the personal data they maintain against unauthorized access or use. Failure to comply with such obligations can lead to significant financial and reputational harm.

In a recent settlement agreement with the SEC, a New York-based registered transfer agent, Equiniti Trust Company LLC, formerly known as American Stock Transfer & Trust Company LLC, agreed to pay $850K to settle charges that it failed to assure client securities and funds were protected against theft or misuse.

Equiniti suffered not one, but two separate cyber intrusions in 2022 and 2023, respectively, resulting in a total loss of $6.6 million in client funds. According to the director of the SEC’s San Francisco regional office, Monique Winkler, the Company “failed to provide the safeguards necessary to protect its clients’ funds and securities from the types of cyber intrusions that have become a near-constant threat to companies and the markets.” The cyber intrusions in question were business email compromise (BEC) attacks.

Business Email Compromises

BEC attacks are typically perpetrated by gaining unauthorized access to a company’s email account through compromised credentials or by email spoofing (i.e., creating slight variations on legitimate addresses to deceive victims into thinking the fake account is authentic). Once inside the account, threat actors can wreak all sorts of havoc, including manipulating existing payment instructions to redirect funds.
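One low-tech control that follows from how spoofing works is screening sender domains for near-misses of trusted domains. The sketch below, with a hypothetical trusted-domain list and threshold, shows the idea; real email security programs rely on controls such as SPF, DKIM, DMARC, and gateway filtering that go well beyond this illustration.

```python
# Sketch: flag sender addresses whose domain closely resembles, but does not
# exactly match, a trusted domain (a common BEC spoofing pattern).
# The trusted-domain list and similarity threshold are hypothetical.
from difflib import SequenceMatcher

TRUSTED_DOMAINS = {"example.com", "examplebank.com"}

def is_suspicious_sender(address: str, threshold: float = 0.85) -> bool:
    """Return True for lookalike domains; False for exact or unrelated ones."""
    domain = address.rsplit("@", 1)[-1].lower()
    if domain in TRUSTED_DOMAINS:
        return False
    return any(
        SequenceMatcher(None, domain, trusted).ratio() >= threshold
        for trusted in TRUSTED_DOMAINS
    )

print(is_suspicious_sender("payments@examp1e.com"))  # True  (lookalike of example.com)
print(is_suspicious_sender("payments@example.com"))  # False (exact match)
print(is_suspicious_sender("info@unrelated.org"))    # False (not similar)
```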

The Incidents

In the first incident, an unknown threat actor, pretending to be an employee of a U.S.-based public issuer client of American Stock Transfer, instructed the Company to (i) issue millions of new shares of the issuer, (ii) liquidate those shares, and (iii) send the proceeds to an overseas bank.  In accordance with these instructions, American Stock Transfer transferred roughly $4.78 million to several bank accounts located in Hong Kong.

Just seven months later, in an unrelated incident, an unknown threat actor was able to create fake accounts with the Company by using stolen Social Security numbers of various American Stock Transfer accountholders. Despite differences in the name and other personal information on the accounts, these newly created, fraudulent accounts were automatically linked by American Stock Transfer to legitimate client accounts based solely on the matching Social Security numbers. This improper linking of accounts allowed the threat actor to liquidate securities held in the legitimate accounts and transfer approximately $1.9 million to external bank accounts.

In its August 2024 Order, the SEC stated that in both of the above-mentioned incidents, American Stock Transfer “did not assure that it held securities in its custody and possession in safekeeping and handled them in a manner reasonably free from risk of theft, and did not assure that it protected funds in its custody and possession against misuse.” The SEC found that the Company’s earlier safeguard efforts, which included (1) notifying employees about a rapid increase in fraud attempts industry-wide; (2) requiring employees involved in processing client payments to always perform a call-back to the client number on file to verify requests; and (3) warning employees to pay particular attention to email domains and addresses and ensure they match the intended sender, were insufficient. Although these steps identified mitigation measures, the Company fell short of taking the additional steps needed to actually implement the safeguards and procedures outlined for its employees.

Takeaways

This settlement agreement highlights the risks associated with a growing threat of cyber intrusions, including BEC attacks, and the increasing need for financial institutions to ensure that robust security measures are in place.

BEC attacks target large and small organizations alike, and with very sophisticated threat actors, an attack can go undetected for long periods of time. Organizations must take proactive steps to protect their systems before it is too late. Such steps may include, for example, the use of multi-factor authentication (MFA), periodic security audits, and preparation of incident response plans. Moreover, it is critical for organizations not only to implement measures to prevent these attacks, but also to be prepared to respond when they occur.

Jackson Lewis’ Financial Services and Privacy, Data, and Cybersecurity groups will continue to track this development.  Please contact a Jackson Lewis attorney with any questions.

Data privacy and security risk and compliance issues relating to exchanges of personal information during merger, acquisition, and similar transactions can sometimes be overlooked. In 2023, we summarized an enforcement action resulting in a $400,000 settlement following a data breach that affected personal information obtained during a transaction.

California aims to bolster its California Consumer Privacy Act (CCPA) to more clearly address certain obligations under the CCPA during transactions. Awaiting Governor Newsom’s signature is Assembly Bill (AB) 1824, which seeks to protect elections made by consumers to opt out of the sale or sharing of their personal information following a transaction. More specifically, when a business receives personal information from another business as an asset that is part of a merger, acquisition, bankruptcy, or other transaction, and the transferee business assumes control of all or part of the transferor, the transferee business must comply with a consumer’s opt-out elections made to the transferor business.

With this change, suppose a consumer properly opts out of Company A’s sale of personal information, and Company A is later acquired and controlled by Company B. In this case, under AB 1824, Company B would be obligated to abide by the consumer’s opt-out election provided to Company A. Among the many issues that come with the transfer of confidential and personal information during a transaction, due diligence should include a process to capture and communicate the opt-out elections of consumers of the transferor business.
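For illustration only, the sketch below shows one way a transferee’s systems could preserve opt-out elections when consumer records from the transferor are merged in. The field names and matching-by-email logic are hypothetical, and real diligence and data mapping would be considerably more involved.

```python
# Hypothetical sketch: merging transferor (Company A) consumer records into a
# transferee (Company B) database while preserving opt-out elections made to
# either business. Field names and matching logic are illustrative only.
from dataclasses import dataclass

@dataclass
class ConsumerRecord:
    email: str
    opted_out_of_sale_or_sharing: bool = False

def merge_consumer_records(
    transferee: dict[str, ConsumerRecord],
    transferor: dict[str, ConsumerRecord],
) -> dict[str, ConsumerRecord]:
    """Merge records, keeping an opt-out if either business received one."""
    merged = dict(transferee)
    for email, incoming in transferor.items():
        existing = merged.get(email)
        opted_out = incoming.opted_out_of_sale_or_sharing or (
            existing is not None and existing.opted_out_of_sale_or_sharing
        )
        merged[email] = ConsumerRecord(email, opted_out)
    return merged
```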

If signed, the amendments made by AB 1824 would take effect January 1, 2025.

One of our recent posts discussed the uptick in AI risks reported in SEC filings, as analyzed by Arize AI. There, we highlighted the importance of strong governance for mitigating some of these risks, but we didn’t address the specific risks identified in those SEC filings. We discuss them briefly here as they are risks likely facing most organizations that either are exploring, developing, and/or have already deployed AI in some way, shape, or form. 

Arize AI’s “The Rise of Generative AI in SEC filings” reviewed the most recent annual financial reports, as of May 1, 2024, filed by US-based companies in the Fortune 500. The report is filled with interesting statistics, including an evaluation of the AI risks identified by the reporting entities. Perhaps the most telling statistic is how quickly companies have moved to identify these risks in their reports:

Looking at the subsequent annual financial reports filed in 2012 reveals a surge in companies disclosing cyber and information security as a risk factor. However, the jump in those disclosures – 86.9% between 2010 and 2012 – is easily dwarfed by the 473.5% increase in companies citing AI as a risk factor between 2022 and 2024.

Arize AI Report, Page 10.

The Report organizes the AI risks identified into four basic categories: competitive impacts, general harms, regulatory compliance, and data security.

In the case of competitive risks, understandably, an organization’s competitor being first to market with a compelling AI application is a risk to the organization’s business. Similarly, the increasing availability and quality of AI products and services may soften demand for the products and services of organizations that had been leaders in the space. At the same time, competitive forces may be at play in attracting the best talent on the market, something that, of course, AI recruiting tools can help to achieve.

The general harms noted by many in the Fortune 500 revolve around issues we hear a lot about – 

  • Does the AI perform as advertised?
  • What types of reputational harm could affect a company when its use of AI is claimed to be biased, inaccurate, inconsistent, unethical, etc.?
  • Will the goals of desired use cases be achieved/performed in a manner that sufficiently protects against violations of privacy, IP, and other rights and obligations? 
  • Can organizations stop harmful or offensive content from being generated? 

Not to be forgotten, the third category is regulatory risk. Unfortunately, this category is likely to get worse before it gets better, if it ever does. A complex patchwork is forming, composed of international, federal, state, and local laws, as well as industry-specific guidelines. Meeting the challenges of these regulatory risks often depends largely on the particular use case. For example, an AI-powered productivity management application used to assess and monitor remote workers may come with significantly different regulatory compliance requirements than an automated employment decision tool (AEDT) used in the recruiting process. Similarly, leveraging generative AI to help shape customer outreach in the hospitality or retail industries certainly will raise different regulatory considerations than deploying it in the healthcare, pharmaceutical, or education industries. And industry-specific regulation may not be the end of the story. Generally applicable state laws will add their own layers of complexity. In one form or another, several states have already enacted measures to address the use of AI, including California, Colorado, Illinois, Tennessee, and Utah, in addition to the well-known New York City law.

Last, but certainly not least, are data security risks. Two forms of this risk are worth noting: the data needed to fuel AI, and the use of AI as a tool to refine attacks by cyber threat actors on individuals and information systems. Because vast amounts of data often are necessary for AI models to be successful, organizations have serious concerns about what data may be used, including the risk of inadvertent disclosures of confidential and personal information. With different departments or divisions in an organization making their own use of AI, their approaches to data privacy and security may not be entirely aligned. Nuances in the law can amplify these risks.

While many are using AI to help secure information systems, cyber threat actors with access to essentially the same technology have different purposes in mind. Earlier this year we discussed the use of AI to enhance phishing attacks. In October 2023, the U.S. Department of Health and Human Services (HHS) and the Health Sector Cybersecurity Coordination Center (HC3) published a white paper entitled AI-Augmented Phishing and the Threat to the Health Sector (the HC3 Paper). While many have been using ChatGPT and similar platforms to leverage generative AI capabilities to craft client emails, lay out vacation itineraries, support coding efforts, and help write school papers, threat actors have been hard at work using the technology for other purposes.

Making this even easier for attackers, tools such as FraudGPT have been developed specifically for nefarious purposes. FraudGPT is a generative AI tool that can be used to craft malware and text for phishing emails. It is available on the dark web and on Telegram for a relatively cheap price – a $200 per month or $1,700 per year subscription fee – which makes it well within the price range of even moderately sophisticated cybercriminals.

Thinking about these categories of risks identified by the Fortune 500, we believe, can be instructive for any organization trying to leverage the power of AI to help advance its business. As we noted in our prior post, adopting appropriate governance structures will be necessary for identifying and taking steps to manage these risks. Of course, the goal will be to eliminate them, but that may not always be possible. However, an organization’s defensible position can be substantially improved through taking prudent steps in the course of developing and/or deploying AI.

A little more than three years ago, the U.S. Department of Labor (DOL) posted cybersecurity guidance on its website for ERISA plan fiduciaries. That guidance extended only to ERISA-covered retirement plans, despite health and welfare plans facing similar risks to participant data.

Last Friday, the DOL’s Employee Benefits Security Administration (EBSA) issued Compliance Assistance Release No. 2024-01. The EBSA’s purpose for the guidance was simple – confirm that the agency’s 2021 guidance generally applies to all ERISA-covered employee benefit plans, including health and welfare plans. In doing so, EBSA reiterated its view of the expanding role for ERISA plan fiduciaries relating to protecting plan data:

“Responsible plan fiduciaries have an obligation to ensure proper mitigation of cybersecurity risks.”

In 2021, we outlined the DOL’s requirements for plan fiduciaries here, and in a subsequent post discussed DOL audit activity that followed shortly after the DOL issued its newly minted cybersecurity requirements.

As noted in our initial post, the EBSA’s best practices included:

  • Maintain a formal, well documented cybersecurity program.
  • Conduct prudent annual risk assessments.
  • Implement a reliable annual third-party audit of security controls.
  • Follow strong access control procedures.
  • Ensure that any assets or data stored in a cloud or managed by a third-party service provider are subject to appropriate security reviews and independent security assessments.
  • Conduct periodic cybersecurity awareness training.
  • Have an effective business resiliency program addressing business continuity, disaster recovery, and incident response.
  • Encrypt sensitive data, stored and in transit.

Indeed, the substance of the guidance is largely the same, as indicated above, and still covers three areas – Tips for Hiring a Service Provider, Cybersecurity Program Best Practices, and Online Security Tips (for plan participants). What is different are some of the issues raised by the new plans to which the expanded guidance applies – health and welfare plans. Here are some examples.

  • The plans covered by the DOL’s guidance. As noted, the DOL’s cybersecurity guidance now extends to health and welfare plans. This includes plans such as medical, dental, and vision plans. It also includes other familiar benefit plans for employees, including plans that provide life and AD&D insurance, LTD benefits, business travel insurance, certain employee assistance programs and wellness programs, most health flexible spending arrangements, health reimbursement arrangements, and other benefit plans covered by ERISA. Recall that an “employee welfare benefit plan” under ERISA generally includes:

“any plan, fund, or program…established or maintained by an employer or by an employee organization…for the purpose of providing for its participants or their beneficiaries, through the purchase of insurance or otherwise…medical, surgical, or hospital care or benefits, or benefits in the event of sickness, accident, disability, death or unemployment, or vacation benefits, apprenticeship or other training programs, or day care centers, scholarship funds, or prepaid legal services.”

A threshold compliance step for ERISA fiduciaries, therefore, will be to identify the plans in scope. However, cybersecurity should be a significant compliance concern for just about any benefit offered to employees, whether covered by ERISA or not.

  • Identifying service providers. It is tempting to focus on a plan’s most prominent service providers – the insurance carrier, claims administrator, etc. However, the DOL’s guidance extends to all service providers, such as brokers, consultants, auditors, actuaries, wellness providers, concierge services, cloud storage companies, etc. Fiduciaries will need to identify what individuals and/or entities are providing services to the plan.
  • Understanding the features of plan administration. The nature and extent of plan administration for retirement plans as compared to health and welfare plans often is significantly different, despite both being covered by ERISA, which imposes a similar set of compliance requirements. For instance, retirement plans tend to collect personal information only about the employee, although there may be a beneficiary or two. However, health and welfare plans, particularly medical plans, often cover an employee’s spouse and dependents. Additionally, for many companies, different groups of employees monitor retirement plans versus health and welfare plans. And, of course, more often than not, there are different vendors servicing these categories of employee benefit plans.
  • What about HIPAA? Since 2003, certain group health plans have had to comply with the privacy and security regulations issued under the Health Insurance Portability and Accountability Act of 1996 (HIPAA). The DOL’s cybersecurity guidance, however, raises several distinct issues. First, the DOL’s recent pronouncements concerning cybersecurity are directed at fiduciaries, who as a result may need to take a more active role in compliance efforts. Second, obligations under the DOL’s guidance are not limited to group health plans or plans that reimburse the cost of health care. As noted above, popular benefits for employees such as life and disability benefits are covered by the DOL cybersecurity guidance, not HIPAA. Third, the DOL guidance appears to require greater oversight and monitoring of plan service providers than HIPAA requires of business associates. In several places, the Office for Civil Rights’ guidance on HIPAA compliance states that covered entities are not required to monitor a business associate’s HIPAA compliance. See, e.g., here and here.

The EBSA’s Compliance Assistance Release No. 2024-01 significantly expands the scope of compliance for ERISA fiduciaries with respect to their employee benefit plans and cybersecurity, and by extension the service providers to those plans. Third-party plan service providers and plan fiduciaries should begin taking reasonable and prudent steps to implement safeguards that will adequately protect plan data. EBSA’s guidance should help the responsible parties get there, along with the plan fiduciaries and plan sponsors’ trusted counsel and other advisors.

Organizations across the spectrum rely heavily on website tracking technologies to understand user behavior, enhance customer experience, and drive growth.  The convenience and insights these technologies offer come with a caveat, however: They can land your organization in hot water if not managed in careful compliance with fast-evolving law.

Recent history is rife with litigation and regulatory actions targeting organizations that employ website tracking technologies like session replay, cookies, and pixels.  When used without proper care and consideration, these tools expose organizations to substantial litigation and regulatory risk.

Hundreds of lawsuits were filed over the past few years alleging the use of various website tracking technologies violates wiretap and video privacy laws and constitutes a tortious invasion of privacy.

Website tracking technologies have also garnered regulatory attention from state and federal regulators, including, recently, the Office of the New York State Attorney General (OAG), which has published guidance titled “Website Privacy Controls: A Guide For Business” (the “Guide”). 

The Guide notes that the impetus for its creation was that:

Unfortunately, not all businesses have taken appropriate steps to ensure that their disclosures are accurate and that privacy controls work as described. An investigation by the Office of the New York State Attorney General (OAG) identified more than a dozen popular websites, together serving tens of millions of visitors each month, with privacy controls that were effectively broken. Visitors to these websites who attempted to disable tracking technologies would nevertheless continue to be tracked. The OAG also encountered websites with privacy controls and disclosures that were confusing and even potentially misleading.

The Guide highlights common mistakes the OAG identified through its investigation, including:

  • Uncategorized or miscategorized tags and cookies;
  • Misconfigured tools that allow tracking even when a consumer has tried to disable it;
  • Hardcoded tags that have not been configured to work with the sites’ privacy controls; and
  • Cookieless tracking, using forms of tracking that may be outside the scope of the site’s consent-management tool.

To mitigate the risk these mistakes pose, the Guide recommends:

  • Designating a qualified individual to oversee the implementation and management of website tracking;
  • Taking appropriate steps to identify the types of data that will be collected and how the data will be used and shared;
  • Conducting reviews regularly to ensure tags and tools are properly configured;
  • Ensuring privacy controls are accurate; and
  • Avoiding misleading language in privacy disclosures.

Website tracking technologies are here to stay and can provide enormous value to the organizations that utilize them.  It has become clear, however, that such organizations must maintain thoughtful controls to manage the associated risks.  Regulators and the plaintiffs’ bar are homed in on website privacy compliance and, unlike in many other areas of compliance, non-compliance is public—i.e., anyone can visit your site, review your privacy disclosures (or lack thereof), check what features your site offers that may involve the automatic collection of data, and even run scans to determine what tracking technologies are in use on your site.  Organizations that don’t take proactive steps to ensure their websites are compliant therefore become “low-hanging fruit” for claims and enforcement actions.
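As a rough illustration of how visible these practices are, the sketch below fetches a page with Python’s standard library and lists the third-party script sources it loads. The URL is hypothetical, and real tracker audits use purpose-built tools that also examine cookies, pixels, and network calls made after the page loads.

```python
# Sketch: list third-party <script src=...> URLs on a page using only the
# standard library. Illustrative only; the example URL is hypothetical.
from html.parser import HTMLParser
from urllib.parse import urlparse
from urllib.request import urlopen

class ScriptSourceParser(HTMLParser):
    """Collect the src attribute of every <script> tag encountered."""
    def __init__(self) -> None:
        super().__init__()
        self.sources: list[str] = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            src = dict(attrs).get("src")
            if src:
                self.sources.append(src)

def third_party_scripts(url: str) -> list[str]:
    """Return script URLs served from a different host than the page itself."""
    site = urlparse(url).netloc
    with urlopen(url) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    parser = ScriptSourceParser()
    parser.feed(html)
    return [s for s in parser.sources
            if urlparse(s).netloc and urlparse(s).netloc != site]

# Example (hypothetical URL):
# print(third_party_scripts("https://www.example.com"))
```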

If you have concerns about the tracking technologies in use on your website, Jackson Lewis’s Privacy, Data & Cybersecurity team can assist, including by helping you assess your current website tracking risk and develop a plan to better manage it.