With organizations holding more and more data digitally, there is an increased need to ensure data remains accessible across the organization at any given time. To that end, many organizations use tools that synchronize the organization’s data across various databases, applications, cloud services, and mobile devices. These tools update data in real time or at scheduled intervals so that changes made in one location are reflected in all other locations where the data is stored. Data syncing thus ensures that the organization’s data is consistent and up to date across different systems, devices, and platforms.

For organizations, data syncing improves collaboration among employees, allows real-time access and updates to information from multiple devices, and fosters seamless teamwork, irrespective of location or the devices being used. Consistent data across devices reduces the risk of errors, discrepancies, or outdated information, improving the accuracy and reliability of data used for decision-making and reporting. Data syncing also facilitates data backup and recovery, which allows quick recovery of data in case of misplaced or malfunctioning devices. Overall, data syncing helps organizations operate more efficiently, make better decisions, and protect their data, ultimately leading to improved business performance and competitiveness in today’s digital age.

While device syncing provides seamless integration and accessibility across multiple devices, organizations must be mindful of the potential data privacy and security risks, which are illustrated by a recent experiment conducted with syncing accounts. 

In this experiment, a digital forensic team logged into the same syncing account on a smartphone and a laptop, and the team disabled the sync option on both devices. By doing so, text messages—for example—that are sent and received on one device should not appear on another device with the same syncing account. Despite this, the forensic team reported that they were still receiving incoming messages on both the phone and the laptop. Aside from logging out of the syncing account entirely, the team was unable to locate a method to completely disable message syncing.

Setting aside the accuracy of the experiment itself and whether the devices used were properly updated, this experiment underscores the broader implications for organizations that fail to actively manage their data syncing programs.

Key Takeaways

Verify that sync settings are functioning properly. It may be tempting for an organization to set up a robust data syncing tool and simply assume that it is working as intended. This strategy—as illustrated by the experiment—can lead to unintended results that can put the organization at significant risk. If an employee with access to sensitive personal information transfers to a new position at the organization—where such access is no longer required—an improperly configured data syncing tool could permit this employee to continue to have sensitive personal information available on their devices, which could lead to significant unauthorized access and potential use of that data. Periodic audits of data syncing tools can help manage this risk and ensure that data syncing features are working as intended.
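By way of illustration only, a periodic audit of this kind can be as simple as comparing each device's reported sync configuration against the data categories its owner's current role permits. The short Python sketch below is hypothetical and not tied to any particular syncing or mobile device management product; the inventory format, roles, and data categories are invented for the example.

    from dataclasses import dataclass

    @dataclass
    class Device:
        owner: str
        role: str              # the owner's current role
        sync_enabled: bool     # sync state reported by the device
        data_scopes: set[str]  # data categories currently synced to the device

    # Hypothetical policy: the data categories each role is allowed to sync.
    ALLOWED_SCOPES = {
        "hr_analyst": {"email", "hr_records"},
        "sales_rep": {"email", "crm"},
    }

    def audit(devices: list[Device]) -> list[str]:
        """Flag devices syncing data beyond what the owner's current role permits."""
        findings = []
        for device in devices:
            allowed = ALLOWED_SCOPES.get(device.role, set())
            excess = device.data_scopes - allowed
            if device.sync_enabled and excess:
                findings.append(
                    f"{device.owner}: still syncing {sorted(excess)} "
                    f"despite current role '{device.role}'"
                )
        return findings

    # Example: an employee who moved from HR to sales but whose laptop still syncs HR records.
    inventory = [Device("jdoe", "sales_rep", True, {"email", "crm", "hr_records"})]
    for finding in audit(inventory):
        print(finding)

In practice, the inventory would come from whatever management console or reporting the organization's syncing tool provides, but the core of the audit is the same comparison of actual sync state against intended policy.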

Address data privacy and security concerns. Data syncing across an organization’s devices will, in turn, increase the number of devices that potentially contain confidential information, which creates substantial data privacy and security risk. These new devices will expand the organization’s data breach footprint and require updates to data mapping assessments (e.g., due to having more locations where confidential information is stored). Syncing can also inadvertently cause data to be transferred to devices that are not compliant with certain legal or regulatory frameworks (e.g., syncing protected health information to a mobile device that lacks encryption). In addition to verifying that data syncing features are working as intended, the organization should ensure that it has robust policies and procedures in place to regulate how data is created, shared, and stored on the organization’s devices.

Take care when employees depart. Data syncing features can also present issues when handling employees who depart from an organization, as these employees could potentially use their company-owned or personal devices to retain the organization’s data and continue to receive that data on a going-forward basis. Take, for example, an employee who has syncing enabled on an organization-owned laptop; the employee’s employment ends, but the employee refuses to return the laptop. Assuming the laptop does not have remote wipe capabilities, even if the company disables syncing on the former employee’s laptop, there is a risk that the organization’s data could continue to be transmitted to that laptop long after the employee is no longer authorized to access the data. As a result, it is important that the organization implement appropriate safeguards to secure its confidential information from unauthorized access, including the ability to remotely wipe a device holding the organization’s data, as well as a clearly delineated process for ensuring that a departed employee no longer has access to the organization’s data.

While data syncing tools provide significant value and convenience, it is important for organizations to carefully consider the risks associated with data syncing and take thoughtful, proactive steps to mitigate this risk.

A recent Forbes article summarizes a potentially problematic aspect of AI, called “model collapse,” which highlights the importance of governance and the quality of data used to train AI models. It turns out that, over time, when AI models are trained on data that earlier AI models created (rather than data created by humans), something is lost at each iteration and the model can fail.

According to the Forbes article:

Model collapse, recently detailed in a Nature article by a team of researchers, is what happens when AI models are trained on data that includes content generated by earlier versions of themselves. Over time, this recursive process causes the models to drift further away from the original data distribution, losing the ability to accurately represent the world as it really is. Instead of improving, the AI starts to make mistakes that compound over generations, leading to outputs that are increasingly distorted and unreliable.

As the researchers who observed this effect noted in their Nature article:

In our work, we demonstrate that training on samples from another generative model can induce a distribution shift, which—over time—causes model collapse. This in turn causes the model to mis-perceive the underlying learning task. To sustain learning over a long period of time, we need to make sure that access to the original data source is preserved and that further data not generated by LLMs remain available over time. The need to distinguish data generated by LLMs from other data raises questions about the provenance of content that is crawled from the Internet: it is unclear how content generated by LLMs can be tracked at scale. One option is community-wide coordination to ensure that different parties involved in LLM creation and deployment share the information needed to resolve questions of provenance. Otherwise, it may become increasingly difficult to train newer versions of LLMs without access to data that were crawled from the Internet before the mass adoption of the technology or direct access to data generated by humans at scale.
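To make the mechanism concrete, the toy Python sketch below (a simplification constructed for illustration, not the researchers' experiment) repeatedly fits a simple statistical model to its own synthetic output. Generation after generation, the fitted distribution drifts and typically narrows, losing rare "tail" values first, which is the same recursive effect the Nature team describes at far larger scale.

    import numpy as np

    rng = np.random.default_rng(seed=0)

    # Generation 0: "human" data drawn from the true distribution.
    data = rng.normal(loc=0.0, scale=1.0, size=200)

    for generation in range(1, 51):
        # "Train" a model on the current data by fitting its mean and spread,
        # then discard that data and keep only what the model can regenerate.
        mu, sigma = data.mean(), data.std()
        data = rng.normal(loc=mu, scale=sigma, size=200)
        if generation % 10 == 0:
            # The spread tends to shrink and the mean drifts: later generations
            # stop resembling the original distribution.
            print(f"generation {generation:2d}: mean={mu:+.3f}, std={sigma:.3f}")

Real LLM training pipelines are vastly more complex, but the same logic applies: once each new model learns primarily from the previous model's output rather than from original human data, errors and omissions compound rather than average out.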

These findings highlight several important considerations when using AI tools. One is maintaining a robust governance program that includes, among other things, measures to stay abreast of developing risks. We’ve heard a lot about hallucinations. Model collapse is a relatively new and potentially devastating challenge to the promise of AI. It raises a concern similar to hallucinations: the value of the results from a generative AI tool that an organization comes to rely on can significantly diminish over time.

Another related consideration is the need to be continually vigilant about the quality of the data being used. Distinguishing and preserving human-generated content may become more difficult over time as sources of data are increasingly rooted in AI-generated content. The consequences could be significant, as the Forbes piece notes:

[M]odel collapse could exacerbate issues of bias and inequality in AI. Low-probability events, which often involve marginalized groups or unique scenarios, are particularly vulnerable to being “forgotten” by AI models as they undergo collapse. This could lead to a future where AI is less capable of understanding and responding to the needs of diverse populations, further entrenching existing biases and inequalities.

Accordingly, organizations need to build strong governance and controls around the data on which their (or their vendors’) AI models were, and continue to be, trained. That need is all the clearer given that model collapse is only one of many risks and challenges facing organizations when developing and/or deploying AI.

While the craze over generative AI, ChatGPT, and the fear of employees in the professions landing on breadlines in the imminent future may have subsided a bit, many concerns remain about how best to use and manage AI. Of course, these concerns are not specific to Fortune 500 companies.

A recent story in CIODive reports that most Fortune 500 businesses have identified AI as a potential risk factor in their SEC filings. As the article suggests, many organizations are grappling with how to use AI and derive a discernable benefit amid many present challenges. Perhaps the most critical challenge, as organizations toil to find and deliver effective use cases, is a lack of effective governance, leaving business leaders and risk managers concerned. No doubt, organizations below the Fortune 500 are facing the same obstacles, likely with fewer resources.

Putting a structure around the use of AI in an organization is no easy task. There are so many questions:

  • Who in the organization leads the effort? Is it one person, a group? From what areas of expertise? With what level of institutional knowledge?
  • As an organization, do we have a sufficient understanding of AI? How deep is our bench? Do our third-party service providers?
  • If we engage a third party to help, what questions should we ask? What should we cover in the agreement? Can we shift some of the liability?
  • What is the ongoing quality of our data? Does it include inherent biases? Can we adjust for that?
  • How do we measure success, ROI?
  • Who is authorized to use AI or generative AI, and under what circumstances?
  • How do we train the AI tool? How do we train employees or others to use the tool?
  • Have we adequately addressed privacy and security of confidential and personal information?
  • What kind of recordkeeping policies and procedures should we adopt?
  • Have we appropriately considered potential ethical issues surrounding the development and use of the AI?
  • How do we keep up with the rapidly emerging law and compliance obligations relating to the development and deployment of AI? What requirements are specific to our industry?
  • How do we approach notice, transparency, safety, etc.?
  • How do we track what different groups in the organization are doing with AI, the problems they are having, and the ones they may not be aware of?  

This list is far from complete, and organizations also should be thinking about whether and how these and other considerations may be shaped by the particular use case. Deploying a generative AI tool to develop content for a marketing campaign, for example, likely presents significantly different challenges than, say, permitting sales and other employees to use AI notetakers, or permitting the HR department to use AI to source, select, and assess candidates and employees in the workplace.

For sure, the development and deployment of AI will continue to face significant headwinds in the near future. While no governance structure eliminates all risk, addressing some of the questions above and others should help to manage that risk, which many organizations inside and outside the Fortune 500 recognize.   

The Swiss Federal Council has added the U.S. to the list of countries with an adequate level of data protection. Effective September 15, 2024, U.S. organizations that certify to the Swiss–U.S. Data Privacy Framework (DPF) can commence receiving transfers of personal data from Switzerland without implementing additional safeguards.

While U.S. organizations were permitted to certify to the DPF as early as July 10, 2023, transfers of personal data to the U.S. solely in reliance on the Swiss-U.S. DPF were delayed until Switzerland’s recognition of adequacy for the Swiss-U.S. DPF. Transfers to certified organizations required additional safeguards (e.g., standard contractual clauses). With a formal adequacy decision, transfers to U.S. companies certified to the DPF may now proceed without additional safeguards.

Similar to the invalidated Swiss-U.S. Privacy Shield, the Swiss-U.S. Data Privacy Framework is administered by the U.S. Department of Commerce, and U.S. organizations must certify to participate. The certification process includes submitting an application and a privacy policy conforming to the Swiss-U.S. DPF Principles, certifying adherence to those Principles, and identifying an independent recourse mechanism. Transferred personal data subject to the DPF includes HR-related data, client or customer data, and personal data collected in the business-to-business context. For purposes of the DPF, a transfer means not only a transmission of personal data from Switzerland to the U.S. but also access from the U.S. to personal data located in Switzerland (e.g., on a server).

If you have questions about transatlantic transfers of personal data or related issues, please reach out to a member of our Privacy, Data, and Cybersecurity practice group. For more information on the Swiss-U.S. Data Privacy Framework, please see our earlier blog post.

Illinois continues to enact legislation regulating artificial intelligence (AI) and generative AI technologies.

  • A little less than a year ago, Gov. JB Pritzker signed H.B. 2123 into law. That law, effective January 1, 2024, expanded the state’s Civil Remedies for Nonconsensual Dissemination of Private Sexual Images Act to permit persons about whom “digitally altered sexual images” (a form of “deepfake”) are published without consent to sue for damages and/or seek expanded injunctive relief.
  • We recently summarized amendments to the Illinois Human Rights Act that added certain uses of AI and generative AI by covered employers that could constitute civil rights violations.
  • Here we briefly discuss two more recently enacted laws focused on the impact AI and generative AI technologies have on individuals’ digital likeness and publicity rights.

It is not uncommon for organizations to involve their employees along with other individuals in marketing, promotional, or other commercial activities. Whether it is seeking employee participation in television advertisements, radio spots, as influencers in social media, or other interactions with consumers, using an employee’s image or likeness can have significant beneficial impacts on the branding and promotion of an organization. Emerging digital technologies, powered by AI and generative AI, can vastly expand the marketing and promotional options organizations have, including through the use of video, voice prints, and more. The ubiquity of these technologies, their ease of use, and near-instantaneous path to wide distribution bring tremendous opportunities, but also significant risk.

In recent legislative sessions, Illinois passed two significant bills – House Bill (HB) 4762 and House Bill (HB) 4875 – designed to protect individuals’ digital likeness and publicity rights.

HB 4875

HB 4875 amends Illinois’ existing Right of Publicity Act to protect against the unauthorized use of “digital replicas” amid the widespread adoption of artificial intelligence and generative AI technologies. A “digital replica” means:

a newly created, electronic representation of the voice, image, or likeness of an actual individual created using a computer, algorithm, software, tool, artificial intelligence, or other technology that is fixed in a sound recording or audiovisual work in which that individual did not actually perform or appear, and which a reasonable person would believe is that particular individual’s voice, image, or likeness being imitated.

Unauthorized use of a digital replica generally means use without the individual’s consent. Indeed, the new law provides that “a person may not knowingly distribute, transmit, or make available to the general public a sound recording or audiovisual work with actual knowledge that the work contains an unauthorized digital replica.” Notably, this proscription is not contingent on there being a commercial purpose.

Importantly, in addition to holding persons liable for knowingly distributing, transmitting, or making available to the general public works containing unauthorized digital replicas, the law also holds individuals or entities liable if they materially contribute to, induce, or facilitate a violation of the law by another party, knowing that the other party is in violation.

Organizations that have obtained consent from workers regarding the use of their name and likeness may want to reconsider the language in those consents to ensure they are capturing these technologies along with the traditional photos, videos, and similar content. This law takes effect January 1, 2025.

HB 4762

HB 4762, also known as the Digital Voice and Likeness Protection Act, seeks to safeguard individuals from unauthorized use of their digital replicas. This bill addresses the growing concern over the misuse of digital likenesses created through advanced technologies, including generative AI.

The Act stipulates that a provision in an agreement between an individual and any other person for the performance of personal or professional services is unenforceable and against public policy if it satisfies all of the following:

  • allows for the creation and use of a digital replica of the individual’s voice or likeness in place of work the individual would otherwise have performed in person;
  • does not include a reasonably specific description of the intended uses of the digital replica; and
  • the individual was not either: (i) represented by counsel in negotiating the agreement that governs the use of the digital replica, or (ii) represented by a labor union where the terms of the applicable collective bargaining agreement cover the use of digital replicas.

The Act applies to agreements entered into after its effective date of August 9, 2024.

If you have questions about the application of HB 4762 and HB 4875 or related issues, contact a Jackson Lewis attorney to discuss.

Following laws enacted in jurisdictions such as Colorado, New York City, Tennessee, and the state’s own Artificial Intelligence Video Interview Act, on August 9, 2024, Illinois’ Governor signed House Bill (HB) 3773, also known as the “Limit Predictive Analytics Use” bill. The bill amends the Illinois Human Rights Act (Act) by adding certain uses of artificial intelligence (AI), including generative AI, to the long list of actions by covered employers that could constitute civil rights violations. 

The amendments made by HB 3773 take effect January 1, 2026, and add two new definitions to the law.

“Artificial intelligence,” which according to the amendments means:

a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

The definition of AI includes “generative AI,” which has its own definition:

an automated computing system that, when prompted with human prompts, descriptions, or queries, can produce outputs that simulate human-produced content, including, but not limited to, the following: (1) textual outputs, such as short answers, essays, poetry, or longer compositions or answers; (2) image outputs, such as fine art, photographs, conceptual art, diagrams, and other images; (3) multimedia outputs, such as audio or video in the form of compositions, songs, or short-form or long-form audio or video; and (4) other content that would be otherwise produced by human means.

The plethora of AI tools available for use in the workplace continues unabated as HR professionals and managers vie to adopt effective and efficient solutions for finding the best candidates, assessing their performance, and otherwise improving decision making concerning human capital. In addition to understanding whether an organization is covered by a regulation of AI, such as HB 3773, it is important to determine whether the technology being deployed falls within the law’s scope. Assuming the tool or application is not being developed in-house, this analysis will require, among other things, working closely with the third-party vendor providing the tool or application to understand its capabilities and risks.

According to the amendments, covered employers can violate the Act in two ways. First, an employer’s use of AI with respect to recruitment, hiring, promotion, renewal of employment, selection for training or apprenticeship, discharge, discipline, tenure, or the terms, privileges, or conditions of employment may constitute a violation if it has the effect of subjecting employees to discrimination on the basis of classes protected under the Act. The same may be true for employers that use zip codes as a proxy for protected classes under the Act.

Second, a covered employer that fails to provide notice to an employee that the employer is using AI for the purposes described above may be found to have violated the Act.

Unlike the Colorado or New York City laws, the amendments to the Act do not require an impact assessment or bias audit. They also do not provide any specifics concerning the notice requirement. However, the amendments require the Illinois Department of Human Rights (IDHR) to adopt regulations necessary for implementation and enforcement. These regulations will include rules concerning the notice, such as the time period and means for providing it.

We are sure to see more regulation in this space. While it is expected that some common threads will exist among the various rules and regulations concerning AI and generative AI, organizations leveraging these technologies will need to be aware of the differences and assess what additional compliance steps may be needed.

Organizations that have questions about compliance with HB 3773, or other AI measures and related issues, should contact a Jackson Lewis attorney to discuss.

On June 25, 2024, Rhode Island became the 20th state to enact a comprehensive consumer data protection law, the Rhode Island Data Transparency and Privacy Protection Act (“RIDTPPA”). The state joins Kentucky, Maryland, Minnesota, Nebraska, New Hampshire, and New Jersey in passing consumer data privacy laws this year.

The RIDTPPA takes effect on January 1, 2026.

To whom does the law apply?

The law applies to two types of organizations, defined as “controllers”:

1. For-profit entities that conduct business in the state of Rhode Island or that produce products or services that are targeted to residents of the state and that during the preceding calendar year did any of the following:

  • Controlled or processed the personal data of not less than thirty-five thousand (35,000) customers, excluding personal data controlled or processed solely for the purpose of completing a payment transaction, or
  • Controlled or processed the personal data of not less than ten thousand (10,000) customers and derived more than twenty percent (20%) of their gross revenue from the sale of personal data.

2. A commercial website or internet service provider conducting business in Rhode Island or with customers in Rhode Island or that is otherwise subject to Rhode Island jurisdiction and that collects, stores, and sells customers’ personally identifiable information.

Who is protected by the law?

Customer means an individual residing in Rhode Island who is acting in an individual or household context. The definition of customer does not include an individual acting in a commercial or employment context.

What data is protected by the law?

The law protects personal data, which is defined as any information that is linked or reasonably linkable to an identified or identifiable individual and does not include de-identified data or publicly available information.

RIDTPPA contains numerous exceptions for specific types of data, including data that meets the definition of protected health information under HIPAA, personal data collected, processed, sold, or disclosed pursuant to the federal Gramm-Leach-Bliley Act, and personal data regulated by the federal Family Educational Rights and Privacy Act.

The law also provides heightened protection for sensitive data, which means personal data revealing racial or ethnic origin, religious beliefs, mental or physical health condition or diagnosis, sex life, sexual orientation, or citizenship or immigration status; the processing of genetic or biometric data for the purpose of uniquely identifying an individual; the personal data of a known child; or precise geolocation data.

What are the rights of customers?

Under the law, customers have the following rights with respect to data collected by for-profit entities that conduct business in the state or produce products or services targeted to residents of the state and meet one of the relevant thresholds:

  • Confirm whether a controller is processing their personal data and access that data.
  • Correct inaccuracies in the data a controller is processing.
  • Have personal data deleted unless the retention of the personal data is permitted or required by law.
  • Port personal data.
  • Opt out of the processing of personal data for targeted advertising, the sale of personal data, or profiling in furtherance of automated decisions that produce legal or similarly significant effects concerning the customer.

Under the law, customers also have a right to receive notice from commercial websites or internet service providers of their data collection activities.

What obligations do controllers have?

Both categories of controllers under Rhode Island’s law are required to provide a notice of data collection activities. Controllers that are for-profit entities conducting business in the state or producing products or services targeted to residents of the state and that meet one of the relevant thresholds have the following additional obligations:

  • Limit collection of personal data to what is adequate, relevant, and reasonably necessary in relation to the purposes for which the data are processed.
  • Establish, implement, and maintain reasonable administrative, technical, and physical data security practices to protect the confidentiality, integrity, and accessibility of personal data.
  • Obtain consent prior to processing a customer’s sensitive personal data.
  • Conduct and document a data privacy and protection assessment for processing activities that represent heightened risk.
  • Contractually obligate any processors who will process personal data on behalf of the organization to adhere to specific data protection obligations including ensuring the security of the processing.

How is the law enforced?

The statute will be enforced by the Rhode Island Attorney General and does not provide for a right to cure. The statute does not create a private right of action.

If you have questions about Rhode Island’s privacy law or related issues, please reach out to a member of our Privacy, Data, and Cybersecurity practice group to discuss.

On May 24, 2024, Minnesota’s governor signed an omnibus bill, HF 4757, which included the new Consumer Data Privacy Act. The state joins Kentucky, Nebraska, New Hampshire, New Jersey, and Rhode Island in passing consumer data privacy laws this year.

Minnesota’s law takes effect July 31, 2025, except that postsecondary institutions and nonprofit corporations governed by Minnesota Statutes, chapter 317A, are not required to comply until July 31, 2029.

To whom does the law apply?

The law applies to legal entities that conduct business in the state of Minnesota or that provide products or services that are targeted to residents of the state and that during the preceding calendar year did any of the following:

  • Controlled or processed personal data of 100,000 consumers or more, excluding personal data controlled or processed solely for the purpose of completing a payment transaction, or
  • Derived over 25 percent of gross revenue from the sale of personal data and processed or controlled personal data of 25,000 consumers or more.

Companies that are deemed a “small business” as defined by the United States Small Business Administration under the Code of Federal Regulations, title 13, part 121, are exempt from compliance with the exception that they must not sell a consumer’s sensitive data without the consumer’s prior consent.

Who is protected by the law?

Consumer means an individual who is a resident of the State of Minnesota. The definition of consumer does not include an individual acting in a commercial or employment context.

What data is protected by the law?

The law protects personal data, which is defined as any information that is linked or reasonably linkable to an identified or identifiable individual. Personal data excludes de-identified data and publicly available information.

The Consumer Data Privacy Act contains numerous exceptions for specific types of data, including data that meets the definition of protected health information under HIPAA, personal data collected, processed, sold, or disclosed pursuant to the federal Gramm-Leach-Bliley Act, and personal data regulated by the federal Family Educational Rights and Privacy Act.

The law also provides heightened protection for sensitive data, which means personal data revealing racial or ethnic origin, religious beliefs, mental or physical health condition or diagnosis, sexual orientation, or citizenship or immigration status; the processing of biometric data or genetic information for the purpose of uniquely identifying an individual; the personal data of a known child; or specific geolocation data.

What are the rights of consumers?

Under the law, consumers have the following rights:

  • Confirm whether a controller is processing their personal data.
  • Access personal data a controller is processing.
  • Correct inaccuracies in data a controller is processing.
  • Have personal data deleted unless the retention of the personal data is required by law.
  • Obtain a list of the categories of third parties to which the controller discloses personal data.
  • Port personal data.
  • Opt out of the processing of personal data for targeted advertising, the sale of personal data, or profiling in furtherance of automated decisions that produce legal or similarly significant effects concerning a consumer.

What obligations do controllers have?

Controllers under Minnesota’s law have the following obligations:

  • Provide consumers with a reasonably accessible, clear, and meaningful privacy notice.
  • Limit the collection of personal data to what is adequate, relevant, and reasonably necessary in relation to the purposes for which the data are processed.
  • Establish, implement, and maintain reasonable administrative, technical, and physical data security practices to protect the confidentiality, integrity, and accessibility of personal data.
  • Document and maintain a description of the policies and procedures to comply with the law.
  • Conduct and document a data privacy and protection assessment for high-risk processing activities.
  • Contractually obligate service providers who will process personal data on behalf of the organization to adhere to specific data protection obligations including ensuring the security of the processing.

How is the law enforced?

The statute will be enforced by Minnesota’s attorney general. Prior to filing an enforcement action, the attorney general must provide the controller or processor with a warning letter identifying the specific provisions alleged to be violated. If after 30 days of issuance of the letter the attorney general believes the violation has not been cured, an enforcement action may be filed. The right to cure sunsets on January 31, 2026.

The statute specifies that it does not create a private right of action.

If you have questions about Minnesota’s privacy law or related issues, please reach out to a member of our Privacy, Data, and Cybersecurity practice group to discuss.

In 2020, Daniel Anderl, the son of Federal Judge Esther Salas, was shot and killed by a man targeting the judge. It is believed the man found the judge’s home address online. In reaction to the murder, New Jersey enacted “Daniel’s Law,” which prohibits the disclosure of the home address and unpublished telephone number of certain government officials and their immediate family members. The law took effect on January 12, 2022, and was retroactive to December 10, 2021. However, compliance with certain provisions of the law and its amendments was not required until January 2023.

Though the full law has been in effect for a little over a year, 2024 saw over 100 lawsuits filed against entities that publish addresses and related information online. The complaints commonly allege individuals such as judges or police officers suffered harm, including threats made to the individual plaintiffs, because a business did not timely remove protected information when requested.

Here is what businesses need to know about complying with Daniel’s Law.

Who is protected?

Daniel’s Law provides protection to “Covered Persons” – defined as active and retired federal and state court judges, prosecutors, and law enforcement members and their immediate family members residing in the same household.

What does the law require?

Covered Persons or someone authorized by a Covered Person may seek the redaction or nondisclosure of the home address or unpublished phone number of the Covered Person from certain records and Internet postings.

Companies that disclose on the Internet or “otherwise make available” such information are required to cease disclosures within 10 business days after receiving a request from a Covered Person or their authorized agent.

What are the penalties?

Pursuant to 2023 amendments, courts may award “actual damages, but not less than liquidated damages computed at the rate of $1,000 for each violation” of the law for failure to respond to requests to remove Covered Persons’ information. Courts may also award punitive damages and reasonable attorney’s fees.

What can businesses do?

A business that maintains and publishes personal information on the Internet or otherwise makes it available should develop and implement an internal policy and processes to handle and respond to requests in a timely manner. This should include contacting vendors and service providers to whom information was disclosed to ensure it is also removed from vendor and service provider sites.

If you have questions regarding compliance with Daniel’s Law or related issues, contact a Jackson Lewis attorney to discuss.

On August 2, 2024, Governor Pritzker signed Senate Bill (SB) 2979, which amends the Illinois Biometric Information Privacy Act, 740 ILCS 14/1, et seq. (BIPA). The bill, which passed both the Illinois House and Senate by an overwhelming majority, confirms that a private entity that more than once collects or discloses the same biometric identifier or biometric information from the same person via the same method of collection in violation of the Act has committed a single violation for which an aggrieved person is entitled to, at most, one recovery. SB 2979 adds the following clarifying language into Section 20 of the BIPA, which is the section of the statute that identifies the damages a prevailing party may recover under the Act:

(b) For purposes of subsection (b) of Section 15, a private entity that, in more than one instance, collects, captures, purchases, receives through trade, or otherwise obtains the same biometric identifier or biometric information from the same person using the same method of collection in violation of subsection (b) of Section 15 has committed a single violation of subsection (b) of Section 15 for which the aggrieved person is entitled to, at most, one recovery under this Section.

(c) For purposes of subsection (d) of Section 15, a private entity that, in more than one instance, discloses, rediscloses, or otherwise disseminates the same biometric identifier or biometric information from the same person to the same recipient using the same method of collection in violation of subsection (d) of Section 15 has committed a single violation of subsection (d) of Section 15 for which the aggrieved person is entitled to, at most, one recovery under this Section regardless of the number of times the private entity disclosed, redisclosed, or otherwise disseminated the same biometric identifier or biometric information of the same person to the same recipient.

The amendment takes effect immediately.

Background

In Cothron v. White Castle System, Inc., 2023 IL 128004, the Illinois Supreme Court held that claims under Sections 15(b) and (d) of the BIPA accrue “with every scan or transmission” of alleged biometric identifiers or biometric information.  Yet, the Illinois Supreme Court, in deciding the issue of claim accrual under Sections 15(b) and (d) of the BIPA, acknowledged that there was some ambiguity about how its holding should be construed in connection with Section 20 of the BIPA, which outlines the damages that a prevailing party may recover. Notably, the Illinois Supreme Court acknowledged, “there is no language in the Act suggesting legislative intent to authorize a damages award that would result in the financial destruction of a business,” which would be the result if the legislature intended to award statutory damages on a “per-scan” basis. The Court went on to say that “policy-based concerns about potentially excessive damage awards under the Act are best addressed by the legislature” and expressly “suggest[ed] that the legislature review these policy concerns and make clear its intent regarding the assessment of damages under the Act.”

SB 2979 was introduced in the Illinois Senate on January 31, 2024, in response to the invitation from the Illinois Supreme Court and clarifies the General Assembly’s intention regarding the assessment of damages under the BIPA.

Electronic Signatures

In addition, the bill also adds “electronic signature” to the definition of written release, clarifying that an electronic signature constitutes a valid written release under Section 15(b)(3) of the BIPA. An electronic signature is defined in SB 2979 as “an electronic sound, symbol, or process attached to or logically associated with a record and executed or adopted by a person with the intent to sign a record.”

If you have questions about SB 2979 or related issues, please contact a member of our Privacy, Data, and Cybersecurity group.