Virtually all organizations have an obligation to safeguard the personal data they maintain against unauthorized access or use.  Failure to comply with these obligations can lead to significant financial and reputational harm.

In a recent settlement agreement with the SEC, Equiniti Trust Company LLC, a New York-based registered transfer agent formerly known as American Stock Transfer & Trust Company LLC, agreed to pay $850,000 to settle charges that it failed to assure that client securities and funds were protected against theft or misuse.

Equiniti suffered not one but two separate cyber intrusions, in 2022 and 2023, resulting in a total loss of $6.6 million in client funds.  According to Monique Winkler, director of the SEC’s San Francisco regional office, the Company “failed to provide the safeguards necessary to protect its clients’ funds and securities from the types of cyber intrusions that have become a near-constant threat to companies and the markets.”  The cyber intrusions in question were business email compromise (BEC) attacks.

Business Email Compromises

BEC attacks are typically perpetrated by gaining unauthorized access to a company’s email account through compromised credentials or by email spoofing (i.e., creating slight variations on legitimate addresses to deceive victims into thinking the fake account is authentic).  Once inside the account, threat actors can wreak all sorts of havoc, including manipulating existing payment instructions to redirect funds.
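
To make the lookalike-domain pattern concrete, below is a minimal, illustrative Python sketch of one way an organization might flag sender domains that nearly, but not exactly, match a list of trusted domains.  The domain list, threshold, and function name are hypothetical examples, not controls described in the SEC’s order.

```python
# Illustrative sketch only: flags sender domains that closely resemble,
# but do not exactly match, a set of trusted domains (a common BEC
# spoofing pattern).  The trusted domains and threshold are hypothetical.
from difflib import SequenceMatcher

TRUSTED_DOMAINS = {"example-issuer.com", "example-bank.com"}  # hypothetical

def is_suspicious_sender(address: str, threshold: float = 0.85) -> bool:
    """Return True if the sender's domain is a near-miss of a trusted domain."""
    domain = address.rsplit("@", 1)[-1].lower()
    if domain in TRUSTED_DOMAINS:
        return False  # exact match: not a lookalike
    # A near-identical but non-matching domain (e.g., a swapped or added
    # character) is a red flag worth manual verification.
    return any(
        SequenceMatcher(None, domain, trusted).ratio() >= threshold
        for trusted in TRUSTED_DOMAINS
    )

print(is_suspicious_sender("ceo@examp1e-issuer.com"))  # True: lookalike domain
print(is_suspicious_sender("ceo@example-issuer.com"))  # False: exact match
```

A check like this supplements, but does not replace, procedural controls such as call-back verification of payment instructions.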

The Incidents

In the first incident, an unknown threat actor, pretending to be an employee of a U.S.-based public issuer client of American Stock Transfer, instructed the Company to (i) issue millions of new shares of the issuer, (ii) liquidate those shares, and (iii) send the proceeds to an overseas bank.  In accordance with these instructions, American Stock Transfer transferred roughly $4.78 million to several bank accounts located in Hong Kong.

Just seven months later, in an unrelated incident, an unknown threat actor was able to create fake accounts with the Company by using stolen Social Security numbers of various American Stock Transfer accountholders.  Despite differences in the name and other personal information on the accounts, these newly created, fraudulent accounts were automatically linked by American Stock Transfer to legitimate client accounts based solely on the matching Social Security numbers.  This improper linking of accounts allowed the threat actor to liquidate securities held in the legitimate accounts and transfer approximately $1.9 million to external bank accounts.

In its August 2024 Order, the SEC stated that in both of the above-mentioned instances, American Stock Transfer “did not assure that it held securities in its custody and possession in safekeeping and handled them in a manner reasonably free from risk of theft, and did not assure that it protected funds in its custody and possession against misuse.”  The SEC found that the Company’s earlier safeguards were insufficient.  Those efforts included (1) notifying employees about a rapid industry-wide increase in fraud attempts; (2) requiring employees involved in processing client payments to always perform a call-back to the client number on file to verify requests; and (3) warning employees to pay particular attention to email domains and addresses and to ensure they match the intended sender.  Although these steps identified mitigation measures, the Company fell short of taking the additional steps needed to actually implement the safeguards and procedures outlined for its employees.

Takeaways

This settlement agreement highlights the risks associated with the growing threat of cyber intrusions, including BEC attacks, and the increasing need for financial institutions to ensure that robust security measures are in place.

BEC attacks target large and small organizations alike, and with very sophisticated threat actors, an attack can go undetected for long periods of time.  Organizations must take proactive steps to protect their systems before it is too late.  Such steps may include, for example, use of Multi-Factor Authentication (MFA), periodic security audits, and preparation of incident response plans.  Moreover, it is critical for organizations to not only implement measures to prevent these attacks, but also to be prepared to respond when they occur.

Jackson Lewis’ Financial Services and Privacy, Data, and Cybersecurity groups will continue to track this development.  Please contact a Jackson Lewis attorney with any questions.

Data privacy and security risk and compliance issues relating to exchanges of personal information during merger, acquisition, and similar transactions can sometimes be overlooked. In 2023, we summarized an enforcement action resulting in a $400,000 settlement following a data breach that affected personal information obtained during a transaction.

California aims to bolster the California Consumer Privacy Act (CCPA) to more clearly address certain obligations under the CCPA during transactions. Awaiting Governor Newsom’s signature is Assembly Bill (AB) 1824, which seeks to protect elections made by consumers to opt out of the sale or sharing of their personal information following a transaction. More specifically, when a business receives personal information from another business as an asset that is part of a merger, acquisition, bankruptcy, or other transaction, and the transferee business assumes control of all or part of the transferor, the transferee business must comply with a consumer’s opt-out elections made to the transferor business.

With this change, suppose a consumer properly opts out of Company A’s sale of personal information, and Company A is later acquired by and controlled by Company B.  In this case, under AB 1824, Company B would be obligated to abide by the consumer’s opt-out election provided to Company A. Among the many issues that come with the transfer of confidential and personal information during a transaction, due diligence should consider a process to capture and communicate the opt-out elections of consumers of the transferor business.
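
As an illustration only, the following Python sketch shows one way a transferee business might carry forward the transferor’s opt-out elections into its own consumer records.  The record layout and matching on email address are assumptions made for illustration; they are not requirements stated in AB 1824 or the CCPA.

```python
# Hypothetical illustration: carrying forward opt-out elections received by
# an acquired (transferor) business into the acquirer's (transferee's)
# consumer records.  Field names and the matching key are assumptions.
from dataclasses import dataclass

@dataclass
class ConsumerRecord:
    email: str
    opted_out_of_sale_or_sharing: bool = False

def carry_forward_opt_outs(transferee_records, transferor_opt_outs):
    """Apply the transferor's opt-out elections to matching transferee records."""
    opted_out = {email.lower() for email in transferor_opt_outs}
    for record in transferee_records:
        if record.email.lower() in opted_out:
            record.opted_out_of_sale_or_sharing = True  # honor the prior election
    return transferee_records

company_a_opt_outs = ["pat@example.com"]  # elections received by the transferor
company_b_records = [ConsumerRecord("pat@example.com"), ConsumerRecord("lee@example.com")]
for r in carry_forward_opt_outs(company_b_records, company_a_opt_outs):
    print(r.email, r.opted_out_of_sale_or_sharing)
```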

If signed, the amendments made by AB 1824 would take effect January 1, 2025.

One of our recent posts discussed the uptick in AI risks reported in SEC filings, as analyzed by Arize AI. There, we highlighted the importance of strong governance for mitigating some of these risks, but we didn’t address the specific risks identified in those SEC filings. We discuss them briefly here because they are risks likely facing most organizations that are exploring, developing, or have already deployed AI in some way, shape, or form.

Arize AI’s “The Rise of Generative AI in SEC filings” reviewed the most recent annual financial reports as of May 1, 2024, filed by US-based companies in the Fortune 500. The report is filled with interesting statistics, including an evaluation of the AI risks identified by the reporting entities. Perhaps the most telling statistic is how quickly companies have moved to identify these risks in their reports:

Looking at the subsequent annual financial reports filed in 2012 reveals a surge in companies disclosing cyber and information security as a risk factor. However, the jump in those disclosures – 86.9% between 2010 and 2012 – is easily dwarfed by the 473.5% increase in companies citing AI as a risk factor between 2022 and 2024.

Arize AI Report, Page 10.

The Report organizes the AI risks identified into four basic categories: competitive impacts, general harms, regulatory compliance, and data security.

In the case of competitive risks, understandably, an organization’s competitor being first to market with a compelling AI application is a risk to the organization’s business. Similarly, the increasing availability and quality of AI products and services may soften the demand for the products and services of organizations that had been leaders in the space. At the same time, competitive forces may be at play in attracting the best talent on the market, something that, of course, AI recruiting tools can help to achieve.

The general harms noted by many in the Fortune 500 revolve around issues we hear a lot about – 

  • Does the AI perform as advertised?
  • What types of reputational harm could affect a company when its use of AI is claimed to be biased, inaccurate, inconsistent, unethical, etc.?
  • Will the goals of desired use cases be achieved/performed in a manner that sufficiently protects against violations of privacy, IP, and other rights and obligations? 
  • Can organizations stop harmful or offensive content from being generated? 

Not to be forgotten, the third category is regulatory risk. Unfortunately, this category is likely to get worse before it gets better, if it ever does. A complex patchwork is forming, composed of international, federal, state, and local requirements, as well as industry-specific guidelines. Meeting the challenges of these regulatory risks often depends largely on the particular use case. For example, an AI-powered productivity management application used to assess and monitor remote workers may come with significantly different regulatory compliance requirements than an automated employment decision tool (AEDT) used in the recruiting process. Similarly, leveraging generative AI to help shape customer outreach in the hospitality or retail industries certainly will raise different regulatory considerations than if deployed in the healthcare, pharmaceutical, or education industries. And industry-specific regulation may not be the end of the story. Generally applicable state laws will add their own layers of complexity. In one form or another, states including California, Colorado, Illinois, Tennessee, and Utah have already enacted measures to address the use of AI, in addition to the well-known New York City law.

Last, but certainly not least, are data security risks. Two forms of this risk are worth noting – the data needed to fuel AI and the use of AI as a tool to refine attacks by cyber threat actors on individuals and information systems. Because vast amounts of data often are necessary for AI models to be successful, organizations have serious concerns about what data may be used, including with respect to inadvertent disclosures of confidential and personal information. With different departments or divisions in an organization making their own use of AI, their approaches to data privacy and security may not be entirely aligned. Nuances in the law can amplify these risks.

While many are using AI to help secure information systems, cyber threat actors with access to essentially the same technology have different purposes in mind. Earlier this year we discussed the use of AI to enhance phishing attacks. In October 2023, the U.S. Department of Health and Human Services (HHS) and the Health Sector Cybersecurity Coordination Center (HC3) published a white paper entitled AI-Augmented Phishing and the Threat to the Health Sector (the “HC3 Paper”). While many have been using ChatGPT and similar platforms to leverage generative AI capabilities to craft client emails, lay out vacation itineraries, support coding efforts, and help write school papers, threat actors have been hard at work using the technology for other purposes.

Making this even easier for attackers, tools such as FraudGPT have been developed specifically for nefarious purposes. FraudGPT is a generative AI tool that can be used to craft malware and texts for phishing emails. It is available on the dark web and on Telegram for a relatively cheap price – a $200 per month or $1,700 per year subscription fee – which puts it well within the price range of even moderately sophisticated cybercriminals.

Thinking about these categories of risks identified by the Fortune 500, we believe, can be instructive for any organization trying to leverage the power of AI to help advance its business. As we noted in our prior post, adopting appropriate governance structures will be necessary for identifying and taking steps to manage these risks. Of course, the goal will be to eliminate them, but that may not always be possible. However, an organization’s defensible position can be substantially improved by taking prudent steps in the course of developing and/or deploying AI.

A little more than three years ago, the U.S. Department of Labor (DOL) posted cybersecurity guidance on its website for ERISA plan fiduciaries. That guidance extended only to ERISA-covered retirement plans, despite health and welfare plans facing similar risks to participant data.

Last Friday, the DOL’s Employee Benefits Security Administration (EBSA) issued Compliance Assistance Release No. 2024-01. The EBSA’s purpose for the guidance was simple – confirm that the agency’s 2021 guidance generally applies to all ERISA-covered employee benefit plans, including health and welfare plans. In doing so, EBSA reiterated its view of the expanding role for ERISA plan fiduciaries relating to protecting plan data:

“Responsible plan fiduciaries have an obligation to ensure proper mitigation of cybersecurity risks.”

In 2021, we outlined the DOL’s requirements for plan fiduciaries here, and in a subsequent post discussed DOL audit activity that followed shortly after the DOL issued its newly minted cybersecurity requirements.

As noted in our initial post, the EBSA’s best practices included:

  • Maintain a formal, well documented cybersecurity program.
  • Conduct prudent annual risk assessments.
  • Implement a reliable annual third-party audit of security controls.
  • Follow strong access control procedures.
  • Ensure that any assets or data stored in a cloud or managed by a third-party service provider are subject to appropriate security reviews and independent security assessments.
  • Conduct periodic cybersecurity awareness training.
  • Have an effective business resiliency program addressing business continuity, disaster recovery, and incident response.
  • Encrypt sensitive data, stored and in transit.
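
On the last point, the sketch below is a minimal illustration of encrypting sensitive data at rest using the Python “cryptography” package’s Fernet recipe.  It is illustrative only; key management and in-transit protections (such as TLS) are separate controls not shown, and the EBSA guidance does not prescribe any particular tool.

```python
# Minimal, illustrative sketch of encrypting data at rest with the
# "cryptography" package's Fernet recipe (symmetric, authenticated
# encryption).  Key management and in-transit protections are not shown.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in practice, store and rotate keys securely
f = Fernet(key)

token = f.encrypt(b"participant benefit election data")  # ciphertext safe to store
assert f.decrypt(token) == b"participant benefit election data"  # requires the same key
```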

Indeed, the substance of the guidance is largely the same, as indicated above, and still covers three areas – Tips for Hiring a Service Provider, Cybersecurity Program Best Practices, and Online Security Tips (for plan participants).  What is different are some of the issues raised by the plans to which the expanded guidance now applies – health and welfare plans.  Here are some examples.

  • The plans covered by the DOL’s guidance. As noted, the DOL’s cybersecurity guidance now extends to health and welfare plans. This includes plans such as medical, dental, and vision plans. It also includes other familiar benefit plans for employees, including plans that provide life and AD&D insurance, LTD benefits, business travel insurance, certain employee assistance programs and wellness programs, most health flexible spending arrangements, health reimbursement arrangements, and other benefit plans covered by ERISA. Recall that an “employee welfare benefit plan” under ERISA generally includes:

“any plan, fund, or program…established or maintained by an employer or by an employee organization…for the purpose of providing for its participants or their beneficiaries, through the purchase of insurance or otherwise…medical, surgical, or hospital care or benefits, or benefits in the event of sickness, accident, disability, death or unemployment, or vacation benefits, apprenticeship or other training programs, or day care centers, scholarship funds, or prepaid legal services.”

A threshold compliance step for ERISA fiduciaries, therefore, will be to identify the plans in scope. However, cybersecurity should be a significant compliance concern for just about any benefit offered to employees, whether covered by ERISA or not.

  • Identifying service providers. It is tempting to focus on a plan’s most prominent service providers – the insurance carrier, claims administrator, etc. However, the DOL’s guidance extends to all service providers, such as brokers, consultants, auditors, actuaries, wellness providers, concierge services, cloud storage companies, etc. Fiduciaries will need to identify what individuals and/or entities are providing services to the plan.
  • Understanding the features of plan administration. The nature and extent of plan administration for retirement plans as compared to health and welfare plans often is significantly different, despite both being covered by ERISA, which imposes a similar set of compliance requirements. For instance, retirement plans tend to collect personal information only about the employee, although there may be a beneficiary or two. However, health and welfare plans, particularly medical plans, often cover an employee’s spouse and dependents. Additionally, for many companies, different groups of employees monitor retirement plans versus health and welfare plans. And, of course, more often than not, there are different vendors servicing these categories of employee benefit plans.
  • What about HIPAA? Since 2003, certain group health plans have had to comply with the privacy and security regulations issued under the Health Insurance Portability and Accountability Act of 1996 (HIPAA). The DOL’s cybersecurity guidance, however, raises several distinct issues. First, the DOL’s recent pronouncements concerning cybersecurity are directed at fiduciaries, who as a result may need to take a more active role in compliance efforts. Second, obligations under the DOL’s guidance are not limited to group health plans or plans that reimburse the cost of health care. As noted above, popular benefits for employees such as life and disability benefits are covered by the DOL cybersecurity rule, not HIPAA. Third, the DOL guidance appears to require greater oversight and monitoring of plan service providers than HIPAA requires of business associates. In several places, the Office for Civil Rights’ guidance for HIPAA compliance states that covered entities are not required to monitor a business associate’s HIPAA compliance. See, e.g., here and here.

The EBSA’s Compliance Assistance Release No. 2024-01 significantly expands the scope of compliance for ERISA fiduciaries with respect to their employee benefit plans and cybersecurity, and by extension the service providers to those plans. Third-party plan service providers and plan fiduciaries should begin taking reasonable and prudent steps to implement safeguards that will adequately protect plan data. EBSA’s guidance should help the responsible parties get there, along with the plan fiduciaries and plan sponsors’ trusted counsel and other advisors.

Organizations across the spectrum rely heavily on website tracking technologies to understand user behavior, enhance customer experience, and drive growth.  The convenience and insights these technologies offer come with a caveat, however: They can land your organization in hot water if not managed in careful compliance with fast-evolving law.

Recent history is rife with litigation and regulatory actions targeting organizations that employ website tracking technologies like session replay, cookies, and pixels.  When used without proper care and consideration, these tools expose organizations to substantial litigation and regulatory risk.

Hundreds of lawsuits were filed over the past few years alleging the use of various website tracking technologies violates wiretap and video privacy laws and constitutes a tortious invasion of privacy.

Website tracking technologies have also garnered regulatory attention from state and federal regulators, including, recently, the Office of the New York State Attorney General (OAG), which has published guidance titled “Website Privacy Controls: A Guide For Business” (the “Guide”). 

The Guide notes that the impetus for its creation was that:

Unfortunately, not all businesses have taken appropriate steps to ensure that their disclosures are accurate and that privacy controls work as described. An investigation by the Office of the New York State Attorney General (OAG) identified more than a dozen popular websites, together serving tens of millions of visitors each month, with privacy controls that were effectively broken. Visitors to these websites who attempted to disable tracking technologies would nevertheless continue to be tracked. The OAG also encountered websites with privacy controls and disclosures that were confusing and even potentially misleading.

The Guide highlights common mistakes the OAG identified through its investigation, including:

  • Uncategorized or miscategorized tags and cookies;
  • Misconfigured tools that allow tracking even when a consumer has tried to disable it;
  • Hardcoded tags that have not been configured to work with the sites’ privacy controls; and
  • Cookieless tracking, using forms of tracking that may be outside the scope of the site’s consent-management tool.

To mitigate the risk these mistakes pose, the Guide recommends:

  • Designating a qualified individual to oversee the implementation and management of website tracking;
  • Taking appropriate steps to identify the types of data that will be collected and how the data will be used and shared;
  • Conducting reviews regularly to ensure tags and tools are properly configured;
  • Ensuring privacy controls are accurate; and
  • Avoiding misleading language in privacy disclosures.
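
As an illustration of how privacy controls can be made accurate in practice, the Python sketch below shows one way server-side code might gate a tracking tag on a visitor’s recorded consent so that a declined or missing election results in no tag being emitted.  The cookie name, its format, and the tag snippet are assumptions for illustration; real consent-management platforms differ.

```python
# Hypothetical sketch: only render a tracking tag when the visitor's
# consent record affirmatively allows the "analytics" category.  The
# cookie name/format and the snippet below are illustrative assumptions.
import json

ANALYTICS_SNIPPET = "<script src='https://tracker.example/tag.js'></script>"  # placeholder

def render_tracking_tags(cookies: dict) -> str:
    """Return tag markup only for categories the visitor has consented to."""
    try:
        consent = json.loads(cookies.get("consent_prefs", "{}"))
    except json.JSONDecodeError:
        consent = {}
    if not isinstance(consent, dict):
        consent = {}
    # Default to no tracking when consent is absent, declined, or unreadable.
    if consent.get("analytics") is True:
        return ANALYTICS_SNIPPET
    return ""

print(render_tracking_tags({"consent_prefs": '{"analytics": false}'}))  # prints nothing
```

Periodic reviews can then confirm that no tag fires outside a path like this one, which is the kind of misconfiguration the OAG flagged.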

Website tracking technologies are here to stay and can provide enormous value to the organizations that utilize them.  It has become clear, however, that such organizations must maintain thoughtful controls to manage the associated risks.  Regulators and the plaintiffs’ bar are homed in on website privacy compliance and, unlike in many other areas of compliance, non-compliance is public—i.e., anyone can visit your site, review your privacy disclosures (or lack thereof), check what features your site offers that may involve the automatic collection of data, and even run scans to determine what tracking technologies are in use on your site.  Organizations that don’t take proactive steps to ensure their websites are compliant therefore become “low-hanging fruit” for claims and enforcement actions.

If you have concerns about the tracking technologies in use on your website, Jackson Lewis’s Privacy, Data & Cybersecurity team can assist, including by helping you assess your current website tracking risk and develop a plan to better manage it.

With organizations holding more and more data digitally, there is an increased need to ensure data remains accessible across the organization at any given time. To that end, many organizations use tools that synchronize the organization’s data across various databases, applications, cloud services, and mobile devices, which involves updating data in real-time or at scheduled intervals to ensure that changes made in one location are reflected in all other locations where the data is stored. Data syncing ensures that the organization’s data is consistent and up to date across different systems, devices, or platforms. 

For organizations, data syncing improves collaboration among employees, allows real-time access and updates to information from multiple devices, and fosters seamless teamwork, irrespective of location or the devices being used. Consistent data across devices reduces the risk of errors, discrepancies, or outdated information, improving the accuracy and reliability of data used for decision-making and reporting. Data syncing also facilitates data backup and recovery, which allows quick recovery of data in case of misplaced or malfunctioning devices. Overall, data syncing helps organizations operate more efficiently, make better decisions, and protect their data, ultimately leading to improved business performance and competitiveness in today’s digital age.

While data syncing provides seamless integration and accessibility across multiple devices, organizations must be mindful of the potential data privacy and security risks, which are illustrated by a recent experiment conducted with syncing accounts.

In this experiment, a digital forensic team logged into the same syncing account on a smartphone and a laptop, and the team disabled the sync option on both devices. By doing so, text messages—for example—that are sent and received on one device should not appear on another device with the same syncing account. Despite this, the forensic team reported that they were still receiving incoming messages on both the phone and the laptop. Aside from logging out of the syncing account entirely, the team was unable to locate a method to completely disable message syncing.

Setting aside the accuracy of the experiment itself and whether the devices used were properly updated, this experiment underscores the broader implications for organizations that fail to actively manage their data syncing programs.

Key Takeaways

Verify that sync settings are functioning properly. It may be tempting for an organization to set up a robust data syncing tool and simply assume that it is working as intended. This strategy—as illustrated by the experiment—can lead to unintended results that can put the organization at significant risk. If an employee with access to sensitive personal information transfers to a new position at the organization—where such access is no longer required—an improperly configured data syncing tool could permit this employee to continue to have sensitive personal information available on their devices, which could lead to significant unauthorized access and potential use of that data. Periodic audits of data syncing tools can help manage this risk and ensure that data syncing features are working as intended.

Address data privacy and security concerns. Data syncing across an organization’s devices will, in turn, increase the number of devices that potentially contain confidential information, which creates substantial data privacy and security risk. These new devices will expand the organization’s data breach footprint and require updates to data mapping assessments (e.g., due to having more locations where confidential information is stored). Syncing can also inadvertently cause data to be transferred to devices that are not compliant with certain legal or regulatory frameworks (e.g., syncing protected health information to a mobile device that lacks encryption). While ensuring that the software’s data syncing features are working as intended, the organization should also ensure that it has robust policies and procedures in place to regulate how data is created, shared, and stored on the organization’s devices.

Take care when employees depart. Data syncing features can also present issues when handling employees who depart from an organization, as these employees could potentially use their company-owned or personal devices to retain the organization’s data and continue to receive that data on a going-forward basis. Take, for example, an employee who has syncing enabled on a laptop belonging to the organization; the employee’s employment ends, but the employee refuses to return the laptop. Assuming the laptop does not have remote wipe capabilities, even if the company disables syncing on the former employee’s laptop, there is a potential risk that the organization’s data could continue to be transmitted to that laptop long after the employee is no longer authorized to access the data. As a result, it is important that the organization implement appropriate safeguards to secure its confidential information from unauthorized access, including the ability to remotely wipe a device holding the organization’s data, as well as a clearly delineated process for ensuring that a departed employee no longer has access to the organization’s data.
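
As a simplified illustration of the kind of periodic check that can support these safeguards, the Python sketch below flags devices that are still syncing data for departed users.  The device inventory and its fields are hypothetical; in practice this information would come from the organization’s MDM or data syncing tool.

```python
# Hypothetical audit sketch: flag devices where syncing remains enabled
# for users who have left the organization.  The inventory structure is
# an illustrative assumption, not a real MDM or sync-tool API.
from dataclasses import dataclass

@dataclass
class Device:
    owner: str
    sync_enabled: bool
    owner_active: bool  # False once the employee has departed

def flag_risky_devices(inventory):
    """Return devices still syncing data for departed (inactive) users."""
    return [d for d in inventory if d.sync_enabled and not d.owner_active]

inventory = [
    Device("active.employee@example.com", sync_enabled=True, owner_active=True),
    Device("former.employee@example.com", sync_enabled=True, owner_active=False),
]
for device in flag_risky_devices(inventory):
    print(f"Review, disable sync, and consider remote wipe for: {device.owner}")
```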

While data syncing tools provide significant value and convenience, it is important for organizations to carefully consider the risks associated with data syncing and take thoughtful, proactive steps to mitigate this risk.

A recent Forbes article summarizes a potentially problematic aspect of AI which highlights the importance of governance and the quality of data when training AI models.  It is called “model collapse.”  It turns out that over time, when AI models use data that earlier AI models created (rather than data created by humans), something is lost in the process at each iteration and the AI model can fail.

According to the Forbes article:

Model collapse, recently detailed in a Nature article by a team of researchers, is what happens when AI models are trained on data that includes content generated by earlier versions of themselves. Over time, this recursive process causes the models to drift further away from the original data distribution, losing the ability to accurately represent the world as it really is. Instead of improving, the AI starts to make mistakes that compound over generations, leading to outputs that are increasingly distorted and unreliable.

As the researchers published in Nature who observed this effect noted:

In our work, we demonstrate that training on samples from another generative model can induce a distribution shift, which—over time—causes model collapse. This in turn causes the model to mis-perceive the underlying learning task. To sustain learning over a long period of time, we need to make sure that access to the original data source is preserved and that further data not generated by LLMs remain available over time. The need to distinguish data generated by LLMs from other data raises questions about the provenance of content that is crawled from the Internet: it is unclear how content generated by LLMs can be tracked at scale. One option is community-wide coordination to ensure that different parties involved in LLM creation and deployment share the information needed to resolve questions of provenance. Otherwise, it may become increasingly difficult to train newer versions of LLMs without access to data that were crawled from the Internet before the mass adoption of the technology or direct access to data generated by humans at scale.
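
To make the dynamic concrete, here is a deliberately simplified toy simulation, not the setup used in the Nature study: each “generation” fits a simple Gaussian model to samples drawn from the previous generation’s model rather than from the original data, so estimation error compounds across generations.

```python
# Toy illustration of recursive training: each generation fits a Gaussian
# to samples drawn from the previous generation's fitted model instead of
# the original data.  Over enough generations the fitted parameters drift
# from the original distribution and the tails are progressively lost.
import random
import statistics

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(500)]  # generation 0: "original" data

for generation in range(1, 11):
    mu = statistics.fmean(data)
    sigma = statistics.pstdev(data)
    print(f"generation {generation}: fitted mean={mu:+.3f}, std={sigma:.3f}")
    # The next generation trains only on the previous model's outputs.
    data = [random.gauss(mu, sigma) for _ in range(500)]
```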

These findings highlight several important considerations when using AI tools. One is maintaining a robust governance program that includes, among other things, measures to stay abreast of developing risks. We’ve heard a lot about hallucinations. Model collapse is a relatively new and potentially devastating challenge to the promise of AI. It raises an issue similar to the concerns with hallucinations, namely, that the value of the results received from a generative AI tool, one that an organization comes to rely on, can significantly diminish over time.

Another related consideration is the need to be continually vigilant about the quality of the data being used. Trying to distinguish and preserve human-generated content may become more difficult over time as sources of data will be increasingly rooted in AI-generated content. The consequences could be significant, as the Forbes piece notes:

[M]odel collapse could exacerbate issues of bias and inequality in AI. Low-probability events, which often involve marginalized groups or unique scenarios, are particularly vulnerable to being “forgotten” by AI models as they undergo collapse. This could lead to a future where AI is less capable of understanding and responding to the needs of diverse populations, further entrenching existing biases and inequalities.

Accordingly, organizations need to build strong governance and controls around the data on which their (or their vendors’) AI models were and continue to be trained. That need is only made clearer considering that the potential for model collapse is just one of a number of risks and challenges facing organizations when developing and/or deploying AI.

While the craze over generative AI, ChatGPT, and the fear of employees in the professions landing on breadlines in the imminent future may have subsided a bit, many concerns remain about how best to use and manage AI. Of course, these concerns are not specific to Fortune 500 companies.

A recent story in CIODive reports that most Fortune 500 businesses have identified AI as a potential risk factor in their SEC filings. As the article suggests, many organizations are grappling with how to use AI and derive a discernable benefit amid many present challenges. Perhaps the most critical challenge, as organizations toil to find and deliver effective use cases, is a lack of effective governance, leaving business leaders and risk managers concerned. No doubt, organizations below the Fortune 500 are facing the same obstacles, likely with fewer resources.

Putting a structure around the use of AI in an organization is no easy task. There are so many questions:

  • Who in the organization leads the effort? Is it one person, a group? From what areas of expertise? With what level of institutional knowledge?
  • As an organization, do we have a sufficient understanding of AI? How deep is our bench? Does our third-party service provider(s)?
  • If we engage a third party to help, what questions should we ask? What should we cover in the agreement? Can we shift some of the liability?
  • What is the ongoing quality of our data? Does it include inherent biases? Can we adjust for that?
  • How do we measure success, ROI?
  • Who is authorized to use AI or generative AI, and under what circumstances?
  • How do we train the AI tool? How do we train employees or others to use the tool?
  • Have we adequately addressed privacy and security of confidential and personal information?
  • What kind of recordkeeping policies and procedures should we adopt?
  • Have we appropriately considered potential ethical issues surrounding the development and use of the AI?
  • How do we keep up with the rapidly emerging law and compliance obligations relating to the development and deployment of AI? What requirements are specific to our industry?
  • How do we approach notice, transparency, safety, etc.?
  • How do we track what different groups in the organization are doing with AI, the problems they are having, and the ones they may not be aware of?  

On top of this list being incomplete, organizations also should be thinking about whether and how these and other considerations may be shaped by the particular use case. That is, for example, deploying a generative AI tool to develop content for a marketing campaign likely presents significantly different challenges to wrestle with than, say, permitting sales and other employees to use AI notetakers, or permitting the HR department to use AI tools to source, select, and assess candidates and employees in the workplace.

For sure, the development and deployment of AI will continue to face significant headwinds in the near future. While no governance structure eliminates all risk, addressing some of the questions above and others should help to manage that risk, which many organizations inside and outside the Fortune 500 recognize.   

The Swiss Federal Council has added the U.S. to the list of countries with an adequate level of data protection. Effective September 15, 2024, U.S. organizations that certify to the Swiss–U.S. Data Privacy Framework (DPF) can commence receiving transfers of personal data from Switzerland without implementing additional safeguards.

While U.S. organizations were permitted to certify to the DPF as early as July 10, 2023, transfers of personal data to the U.S. solely in reliance on the Swiss-U.S. DPF were delayed until Switzerland’s recognition of adequacy for the Swiss-U.S. DPF. Transfers to certified organizations required additional safeguards (e.g., standard contractual clauses). With a formal adequacy decision, transfers to U.S. companies certified to the DPF may now proceed without additional safeguards.

Similar to the invalidated Swiss-U.S. Privacy Shield, the Swiss-U.S. Data Privacy Framework is administered by the U.S. Department of Commerce, and U.S. organizations must certify to participate. The certification process includes submitting an application and a privacy policy conforming to the Swiss-U.S. DPF Principles, certifying adherence to the Swiss-U.S. DPF Principles, and identifying an independent recourse mechanism. Transferred personal data subject to the DPF includes HR-related data, client or customer data, and personal data collected in the business-to-business context. For purposes of the DPF, a transfer means not only a transmission of personal data from Switzerland to the U.S. but also access to personal data located in Switzerland (e.g., on a server) from the U.S.

If you have questions about transatlantic transfers of personal data or related issues, please reach out to a member of our Privacy, Data, and Cybersecurity practice group. For more information on the Swiss-U.S. Data Privacy Framework, please see our earlier blog post.

Illinois continues to enact legislation regulating artificial intelligence (AI) and generative AI technologies.

  • A little less than a year ago, Gov. JB Pritzker signed H.B. 2123 into law. That law, becoming effective January 1, 2024, expanded the state’s Civil Remedies for Nonconsensual Dissemination of Private Sexual Images Act to permit persons about whom “digitally altered sexual images” (a form of “deepfake”) are published without consent to sue for damages and/or seek expanded injunctive relief.
  • We recently summarized amendments to the Illinois Human Rights Act that added certain uses of AI and generative AI by covered employers that could constitute civil rights violations.
  • Here we briefly discuss two more recently enacted laws focused on the impact AI and generative AI technologies have on individuals’ digital likeness and publicity rights.

It is not uncommon for organizations to involve their employees along with other individuals in marketing and promotional or other commercial activities. Whether it is seeking employee participation in television advertisements, radio spots, as influencers in social media, or other interactions with consumers, using an employee’s image or likeness can have significant beneficial impacts on the branding and promotion of an organization. Expanding digital technologies, powered by AI and generative AI, can vastly expand the marketing and promotional options organizations have, including through the use of video, voice prints, etc. The ubiquity of these technologies, their ease of use, and near-instantaneous path to wide distribution bring tremendous opportunities, but also significant risk.

In recent legislative sessions, Illinois passed two significant bills – House Bill (HB) 4762 and House Bill (HB) 4875 – designed to protect individuals’ digital likeness and publicity rights.

HB 4875

HB 4875 amends Illinois’ existing Right of Publicity Act to protect against the unauthorized use of “digital replicas” amid the widespread adoption of artificial intelligence and generative AI technologies. A “digital replica” means:

a newly created, electronic representation of the voice, image, or likeness of an actual individual created using a computer, algorithm, software, tool, artificial intelligence, or other technology that is fixed in a sound recording or audiovisual work in which that individual did not actually perform or appear, and which a reasonable person would believe is that particular individual’s voice, image, or likeness being imitated.

Unauthorized use of a digital replica generally means doing so without consent. Indeed, the new law provides that “a person may not knowingly distribute, transmit, or make available to the general public a sound recording or audiovisual work with actual knowledge that the work contains an unauthorized digital replica.” Notably, this proscription is not contingent on there being a commercial purpose.

Importantly, in addition to holding persons liable for knowingly distributing, transmitting, or making available to the general public works containing unauthorized digital replicas, the law also holds individuals or entities liable if they materially contribute to, induce, or facilitate a violation of the law by another party, knowing that the other party is in violation.

Organizations that have obtained consent from workers regarding the use of their name and likeness may want to reconsider the language in those consents to ensure they are capturing these technologies along with the traditional photos, videos, and similar content. This law takes effect January 1, 2025.

HB 4762

HB 4762, also known as the Digital Voice and Likeness Protection Act, seeks to safeguard individuals from unauthorized use of their digital replicas. This bill addresses the growing concern over the misuse of digital likenesses created through advanced technologies, including generative AI.

The Act stipulates that a provision in an agreement between an individual and any other person for the performance of personal or professional services is unenforceable and against public policy if it satisfies all of the following:

  • allows for the creation and use of a digital replica of the individual’s voice or likeness in place of work the individual would otherwise have performed in person;
  • does not include a reasonably specific description of the intended uses of the digital replica; and
  • the individual was not either: (i) represented by counsel in negotiating the agreement that governs the use of the digital replica, or (ii) represented by a labor union where the terms of the applicable collective bargaining agreement cover the use of digital replicas.

This Act applies to agreements entered into after its effective date, August 9, 2024.

If you have questions about the application of HB 4762 and HB 4875 or related issues, contact a Jackson Lewis attorney to discuss.