If you are looking for a high-level summary of California laws regulating artificial intelligence (AI), check out the two legal advisories issued by California Attorney General Rob Bonta. The first advisory informs consumers and entities of their rights and obligations under the state’s consumer protection, civil rights, competition, and data privacy laws. The second advisory focuses on healthcare entities.

“AI might be changing, innovating, and evolving quickly, but the fifth largest economy in the world is not the wild west; existing California laws apply to both the development and use of AI,” said Attorney General Bonta.

The advisories summarize existing California laws that may apply to entities that develop, sell, or use AI. They also address several new California AI laws that went into effect on January 1, 2025.

The first advisory points to several existing laws, such as California’s Unfair Competition Law and Civil Rights Laws, designed to protect consumers from unfair and fraudulent business practices, anticompetitive harm, discrimination and bias, and abuse of their data.

California’s Unfair Competition Law, for example, protects the state’s residents against unlawful, unfair, or fraudulent business acts or practices. The advisory notes that “AI provides new tools for businesses and consumers alike, and also creates new opportunity to deceive Californians.” Under a similar federal law, the Federal Trade Commission (FTC) recently ordered an online marketer to pay $1 million to resolve allegations that it deceptively claimed its AI product could make websites compliant with accessibility guidelines. Considering the explosive growth of AI products and services, organizations should be revisiting their procurement and vendor assessment practices to be sure they are appropriately vetting vendors of AI systems.

Additionally, the California Fair Employment and Housing Act (FEHA) protects Californians from harassment or discrimination in employment or housing based on a number of protected characteristics, including sex, race, disability, age, criminal history, and veteran or military status. These FEHA protections extend to AI systems developed for and used in the workplace. Expect new regulations soon, as the California Civil Rights Council continues to mull proposed AI regulations under the FEHA.

Recognizing that “data is the bedrock underlying the massive growth in AI,” the advisory points to the state’s constitutional right to privacy, applicable to both government and private entities, as well as to the California Consumer Privacy Act (CCPA). Of course, California has several other privacy laws that may need to be considered when developing and deploying AI systems – the California Invasion of Privacy Act (CIPA), the Student Online Personal Information Protection Act (SOPIPA), and the Confidentiality of Medical Information Act (CMIA).

Beyond these existing laws, the advisory also summarizes new laws in California directed at AI, including:

  • Disclosure Requirements for Businesses
  • Unauthorized Use of Likeness
  • Use of AI in Election and Campaign Materials
  • Prohibition and Reporting of Exploitative Uses of AI

The second advisory recounts many of the same risks and concerns about AI as relevant to the healthcare sector. Consumer protection, anti-discrimination, patient privacy, and other concerns all are challenges entities in the healthcare sector face when developing or deploying AI. The advisory provides examples of applications of AI systems in healthcare that may be unlawful. Here are a couple:

  • Denying health insurance claims using AI or other automated decisionmaking systems in a manner that overrides doctors’ views about necessary treatment.
  • Using generative AI or other automated decisionmaking tools to draft patient notes, communications, or medical orders that include erroneous or misleading information, including information based on stereotypes relating to race or other protected classifications.

The advisory also addresses data privacy, reminding readers that the state’s CMIA may be more protective in some respects than the popular federal healthcare privacy law, HIPAA. It also discusses recent changes to the CMIA that require providers, electronic health record (EHR) companies, and digital health companies to enable patients to keep their reproductive and sexual health information confidential and separate from the rest of their medical records. These and other requirements need to be taken into account when incorporating AI into EHRs and related applications.

In both advisories, the Attorney General makes clear that in addition to the laws referenced above, other California laws—including tort, public nuisance, environmental and business regulation, and criminal law—apply to AI. In short:  

Conduct that is illegal if engaged in without the involvement of AI is equally unlawful if AI is involved, and the fact that AI is involved is not a defense to liability under any law.

Both advisories provide a helpful summary of laws potentially applicable to AI systems, and can be useful resources when building policies and procedures around the development and/or deployment of AI systems.  

This month, the New Jersey Attorney General’s office (NJAG) added to nationwide efforts to regulate artificial intelligence technologies, or at least to clarify how existing law applies to them; in this case, the New Jersey Law Against Discrimination, N.J.S.A. § 10:5-1 et seq. (LAD). In short, the NJAG’s guidance states:

the LAD applies to algorithmic discrimination in the same way it has long applied to other discriminatory conduct.  

If you are not familiar with it, the LAD generally applies to employers, housing providers, places of public accommodation, and certain other entities. The law prohibits discrimination on the basis of actual or perceived race, religion, color, national origin, sexual orientation, pregnancy, breastfeeding, sex, gender identity, gender expression, disability, and other protected characteristics. According to the NJAG’s guidance, the LAD protections extend to algorithmic discrimination (discrimination that results from the use of automated decision-making tools) in employment, housing, places of public accommodation, credit, and contracting.

Citing a recent Rutgers survey, the NJAG pointed to high levels of adoption of AI tools by NJ employers. According to the survey, 63% of NJ employers use one or more tools to recruit job applicants and/or make hiring decisions. These AI tools are broadly defined in the guidance to include:

any technological tool, including but not limited to, a software tool, system, or process that is used to automate all or part of the human decision-making process…such as generative AI, machine-learning models, traditional statistical tools, and decision trees.

The NJAG guidance examines some ways that AI tools may contribute to discriminatory outcomes.

  • Design. Here, the choices a developer makes in designing an AI tool could, purposefully or inadvertently, result in unlawful discrimination. The results can be influenced by the output the tool provides, the model or algorithms the tool uses, and the inputs the tool assesses, any of which can introduce bias into the automated decision-making tool.
  • Training. Because AI tools need to be trained to learn the intended correlations or rules relating to their objectives, the datasets used for such training may contain biases or reflect institutional and systemic inequities that can affect the outcome. Thus, the datasets used in training can drive unlawful discrimination.
  • Deployment. The NJAG also observed that AI tools could be used to purposely discriminate, or to make decisions for which the tool was not designed. These and other deployment issues could lead to bias and unlawful discrimination.

The NJAG notes that its guidance does not impose any new or additional requirements beyond those included in the LAD, nor does it establish any rights or obligations for any person beyond what exists under the LAD. However, the guidance makes clear that covered entities can violate the LAD even if they have no intent to discriminate (or do not understand the inner workings of the tool) and, just as the EEOC noted in guidance the federal agency issued under Title VII, even if a third party was responsible for developing the AI tool. Importantly, under NJ law, this includes disparate treatment and disparate impact that may result from the design or usage of AI tools.

As we have noted, it is critical for organizations to assess, test, and regularly evaluate the AI tools they seek to deploy in their organizations for many reasons, including to avoid unlawful discrimination. These measures should include working closely with developers to vet the design and testing of their automated decision-making tools before they are deployed. In fact, the NJAG specifically noted many of these steps as ways organizations may decrease the risk of liability under the LAD. Maintaining a well-thought-out governance strategy for managing this technology can go a long way toward minimizing legal risk, particularly as the law develops in this area.

A massive data breach hit one of the country’s largest education software providers. According to EducationWeek, PowerSchool provides school software products to more than 16,000 customers, largely K-12 schools, that serve 50 million students in the United States. According to reports, PowerSchool informed customers that, on December 28, 2024, it became aware of a cybersecurity incident involving unauthorized access to certain information through one of its community-focused customer support portals, PowerSource. The unauthorized access affected PowerSchool’s Student Information System (“SIS”).

According to one of its communications to customers, PowerSchool stated:

While we are unaware of and do not expect any actual or attempted misuse of personal information or any financial harm to impacted individuals as a result of this incident, PowerSchool will be providing credit monitoring to affected adults and identity protection services to affected minors in accordance with regulatory and contractual obligations. The particular information compromised will vary by impacted customer. We anticipate that only a subset of impacted customers will have notification obligations.

Needless to say, PowerSchool customers likely have lots of questions and concerns about next steps. The Q&A below is intended to help school communities and other affected entities strategize about next steps.

Is this just a PowerSchool problem?

There certainly are steps PowerSchool should be taking. As a service provider that processes the personal information of its customers, conducting a prompt investigation and informing data owners of critical information relating to the breach top the list. Additionally, each customer’s service agreement with PowerSchool may include broader obligations for the vendor. Providing ongoing support and mitigating potential harm also can reasonably be expected. But, schools and other PowerSchool customers may have obligations of their own.  

What should potentially affected PowerSchool customers be doing?

There are several items to consider:

Look at your incident response plan. If you have an incident response plan, it may provide steps to help keep your team organized and focused. If you do not have one, consider developing one in the future.

Gather information. As noted above, PowerSchool has already put out information concerning the breach, and more is likely to come. But there may be other helpful information for you online from trusted sources. For example, a BleepingComputer article provides (i) information on determining whether your school district was affected, and (ii) a link to a “detailed guide written by Romy Backus, SIS Specialist at the American School of Dubai, [that] explains how to check the PowerSchool SIS logs to determine if data was stolen.”

Be ready to communicate with your school community. Teachers, parents, students, former students, and others will have a lot of questions about the incident. According to a report by Infosecurity Magazine,

A message to parents by the Howard-Suamico School District in Wisconsin, US, seen by news outlet NBC 26, read: “PowerSchool confirmed that this was not a ransomware attack but it did pay a ransom to prevent the data from being released.”

If a ransom was paid to a threat actor, there is no way to confirm that the data has not or will not be released or used for an impermissible purpose. For this and other reasons, it will be critical to have a plan for delivering prompt, consistent, and accurate messaging about the breach as soon as possible. Having a limited number of persons responsible for responding to questions can help to avoid misinformation and maintain consistent messaging.

As the investigation proceeds, PowerSchool likely will be providing more information about notifications, ID theft and credit monitoring services, and other aspects of the continued response to the incident. Affected schools and other PowerSchool customers will need to be ready to receive that information and decide how best to convey it to their communities. In the event decisions need to be made by a school’s Board, think ahead and take the steps necessary to arrange those meetings so decisions can be made appropriately, thoughtfully, and in a timely manner. Feel free to contact our incident response attorneys as we have helped many schools and school districts navigate challenging communications in similar incidents.

Get a handle on your legal and contractual rights and obligations. State breach notification laws generally place the obligation to notify affected persons and others on the owner of the personal information compromised in the breach, not the service provider that had the breach. In many cases, however, a vendor causing a data breach may take on the obligation to provide such notifications, but the owner of the data still will be on the hook if that process is not performed in a compliant manner.

Of course, state notification laws vary from state to state. Examples of these variations include the definition of personal information, exceptions to the notification requirement, timeframes for notification, and requirements for ID theft and credit monitoring services. Reports noted above indicate that PowerSchool may be supporting the notification process. However, because the breach is affecting customers differently (e.g., different personal information affected, different state laws), PowerSchool may rely on instructions from customers about whether and how to comply with certain aspects of the notification requirements.

Note also that some states may have issued specific regulatory requirements for school districts and their vendors. For example, in New York, regulations issued by the State Education Department (SED) and adopted by its Board of Regents in 2020 require school districts and state-supported schools to develop and implement robust data security and privacy programs to protect any personally identifiable information (“PII”) relating to students, teachers, and principals. Among other things, the NY regulations require vendors that suffer a breach to notify the affected schools within seven (7) calendar days. The schools must in turn notify SED within ten (10) calendar days of receipt of notification of a breach from the vendor, and the schools must notify the affected individuals of the breach without unreasonable delay, but in no case later than sixty (60) days after discovery or receipt of breach notification from the vendor.
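
To make this cascading timeline concrete, here is a minimal sketch in Python. It is illustrative only: it assumes straightforward calendar-day counting from the dates described above, and actual deadlines should be confirmed against the regulations and with counsel.

```python
from datetime import date, timedelta

# Hypothetical date for a New York school district incident.
vendor_discovers_breach = date(2024, 12, 28)

# Vendor must notify the affected school within 7 calendar days.
vendor_notice_due = vendor_discovers_breach + timedelta(days=7)

# Assume the school receives the vendor's notice on the deadline.
school_receives_notice = vendor_notice_due

# School must notify SED within 10 calendar days of receiving the notice.
sed_notice_due = school_receives_notice + timedelta(days=10)

# Individuals must be notified without unreasonable delay, and in no case
# later than 60 days after discovery or receipt of the vendor's notice.
individual_notice_due = school_receives_notice + timedelta(days=60)

print(f"Vendor -> school notice due: {vendor_notice_due}")
print(f"School -> SED notice due:    {sed_notice_due}")
print(f"Individual notices due by:   {individual_notice_due}")
```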

Just as the law varies, the services agreement a school negotiated with PowerSchool may vary from PowerSchool’s standard form. Affected PowerSchool customers should be reviewing those agreements to assess their rights and obligations in areas such as information security, data breach response, and indemnity.

Evaluate insurance protections. Some organizations may have purchased “cyber” or “breach response” insurance which could cover some of the costs related to responding to the breach or defending litigation that may follow. Affected PowerSchool customers should review their policies with their brokers to understand the potential coverage and what steps, if any, they need to take to confirm coverage.

What can individuals potentially affected by the PowerSchool breach do now?

It may take some time before notifications are sent to individuals affected by the breach. However, there are some resources that individuals could examine to consider their options now. Databreaches.net pulled together some helpful resources for potentially affected individuals, such as teachers, parents, and former students. Access that here.

When the dust clears from the PowerSchool incident, what should schools do going forward?

This is not the first vendor incident that has affected schools and it will not be the last. There are many steps schools and any organizations should consider taking following a vendor’s breach affecting the organization’s data. However, for the moment, affected schools and customers should focus on the incident at hand. When the time comes, they should consult with experienced legal counsel and information security experts to be sure they have adopted reasonable safeguards at a minimum to protect their data, and that they have assessed whether their vendors are doing the same.

* * *

For organizations large and small, incidents like this can be a significant disruption. To minimize that disruption, organizations may want and need to communicate with their applicable communities, and should do so confidently, but carefully. More information can be very helpful, but too much information and information that is repetitive can be confusing and frustrating. Organizations should involve key persons internally and possibly seek outside expertise and counsel to reach an appropriate balance in their response strategy and communications.

Ask any chief information security officer (CISO), cyber underwriter or risk manager, or cybersecurity attorney about what controls are critical for protecting an organization’s information systems, and you’ll likely find multifactor authentication (MFA) at or near the top of every list. Government agencies responsible for helping to protect the U.S. and its information systems and assets (e.g., CISA, FBI, Secret Service) send the same message. But that message may be evolving a bit as criminal threat actors have started to exploit weaknesses in MFA.

According to a recent report in Forbes, for example, threat actors are harnessing AI to break through multifactor authentication strategies designed to prevent new account fraud. “Know Your Customer” procedures are critical for validating the identity of customers in certain industries, such as financial services and telecommunications. Employers increasingly face similar issues when recruiting employees, discovering after making the hiring decision that the person doing the work may not be the person interviewed for the position.

Threat actors have leveraged a new AI deepfake tool that can be acquired on the dark web to bypass the biometric systems that have been used to stop new account fraud. According to the Forbes article, the process goes something like this:

1. Bad actors use one of the many generative AI websites to create and download a fake image of a person.

2. Next, they use the tool to synthesize a fake passport or a government-issued ID by inserting the fake photograph…

3. Malicious actors then generate a deepfake video (using the same photo) where the synthetic identity pans their head from left to right. This movement is specifically designed to match the requirements of facial recognition systems. If you pay close attention, you can certainly spot some defects. However, these are likely ignored by facial recognition because videos are prone to have distortions due to internet latency issues, buffering or just poor video conditions.

4. Threat actors then initiate a new account fraud attack where they connect to a cryptocurrency exchange and proceed to upload the forged document. The account verification system then asks them to perform facial recognition, where the tool enables attackers to connect the video to the camera’s input.

5. Following these steps, the verification process is completed, and the attackers are notified that their account has been verified.

Sophisticated AI tools are not the only MFA vulnerability. In December 2024, the Cybersecurity & Infrastructure Security Agency (CISA) issued best practices for mobile communications. Among its recommendations, CISA advised mobile phone users, in particular highly targeted individuals:

Do not use SMS as a second factor for authentication. SMS messages are not encrypted—a threat actor with access to a telecommunication provider’s network who intercepts these messages can read them. SMS MFA is not phishing-resistant and is therefore not strong authentication for accounts of highly targeted individuals.
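
For organizations moving away from SMS codes, app-based time-based one-time passwords (TOTP) are a common next step because the shared secret never transits the carrier network. Below is a minimal sketch using the open-source pyotp library (our choice for illustration; CISA’s guidance does not prescribe a particular tool, and note that TOTP, while resistant to SIM swapping and SMS interception, is still not phishing-resistant in the way FIDO2/WebAuthn hardware keys are):

```python
import pyotp

# Enrollment: generate a per-user secret and hand it to the user's
# authenticator app (typically rendered as a QR code). The secret lives
# on the server and the user's device; nothing travels over SMS.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print("Provisioning URI:",
      totp.provisioning_uri(name="user@example.com", issuer_name="ExampleCorp"))

# Login: the user submits the 6-digit code currently displayed in their
# app, and the server verifies it against the shared secret.
code = totp.now()  # in practice, supplied by the user from their device
assert totp.verify(code)  # True within the current time window
```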

In its 2023 Internet Crime Report, the FBI reported more than 1,000 “SIM swapping” investigations. A SIM swap is just another technique by threat actors involving the “use of unsophisticated social engineering techniques against mobile service providers to transfer a victim’s phone service to a mobile device in the criminal’s possession.”

In December, Infosecurity Magazine reported on another vulnerability in MFA. In fact, there are many reports about various vulnerabilities with MFA.

Are we recommending against the use of MFA? Certainly not. Our point is simply to offer a reminder that there are no silver bullets for achieving security of information systems and that AI is not only used by the good guys. An information security program, preferably one that is written (a WISP), requires continuous vigilance, and not just from the IT department, as new technologies are leveraged to bypass older technologies.

In 2024, Israel became the latest jurisdiction to enact comprehensive privacy legislation, largely inspired by the EU’s General Data Protection Regulation (“GDPR”). On August 5, 2024, Israel’s parliament, the Knesset, voted to approve the enactment of Amendment No. 13 (“the Amendment”) to the Israel Privacy Protection Law (“IPPL”). The Amendment, which will take effect on August 15, 2025, is considered an overhaul of the IPPL, which has been left largely untouched since 1996.

Key Features of the Amendment include:

  • Expansion of key definitions in the law
    • Personal Information – Expanded to include any “data related to an identified or identifiable person”.
    • Highly Sensitive Information – Replaces the IPPL’s current definition of “sensitive information” and is similar in kind to the GDPR’s Special Categories of Data. Types of information that qualify as highly sensitive information under the Amendment include biometric data, genetic data, location and traffic data, criminal records, and assessments of personality types.
    • Data Processing – The Amendment broadens the definition of processing to include any operation on information, including receipt, collection, storage, copying, review, disclosure, exposure, transfer, conveyance, or granting access.
    • Database Controller – The IPPL previously used the term “database owner”; akin to the GDPR, the Amendment changes the term to database controller, defined as the person or entity that determines the purposes of processing personal information in the database.
    • Database Holder – Similar to the GDPR’s “processor”, the Amendment includes the term database holder which is defined as an entity “external to the data controller that processes information on behalf of the data controller”, which due to the broad definition of data processing, captures a broad set of third-party service providers.
  • Mandatory Appointment of a Privacy Protection Officer & Data Security Officer
    • Entities that meet certain criteria based on size and industry (inclusive of both data controllers and processors) will be required to implement a new role in their organization, the Privacy Protection Officer, equivalent to the GDPR’s Data Protection Officer (DPO) and tasked with ensuring compliance with the IPPL and promoting data security and privacy protection initiatives within the organization. Likewise, the obligation to appoint a Data Security Officer, which applied to certain organizations prior to the Amendment, has now been expanded to cover a broader set of entities.
  • Expansion of Enforcement Authority
    • The Privacy Protection Authority (“PPA”), Israel’s privacy regulator, has been given broader enforcement authority, including a significant increase in financial penalties based on the number of data subjects impacted by a violation, the type of violation, and the violating entity’s financial turnover. Financial penalties are capped at 5% of annual turnover for larger organizations and could reach millions of dollars (e.g., a data processor that processes data without the controller’s permission in a database of 1,000,000 data subjects, at 8 ILS per data subject, could be fined 8,000,000 ILS, approximately $2.5 million USD; see the sketch after this list). Penalties for small and micro businesses are capped at 140,000 ILS ($45,000 USD) per year. Other enhancements to the PPA’s authority include expansive investigative and supervisory powers as well as increased authority for the Head of the PPA to issue warnings and injunctions.
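
As a rough illustration of how the per-data-subject calculation interacts with the turnover cap, here is a simplified sketch in Python. It relies only on the figures quoted above; the Amendment’s actual penalty schedule has additional inputs (violation type, entity category), so treat this as a back-of-the-envelope aid, not a computation of actual exposure. The 500M ILS turnover figure is a hypothetical.

```python
def illustrative_ippl_penalty(data_subjects: int, per_subject_ils: float,
                              annual_turnover_ils: float) -> float:
    """Simplified model: per-data-subject penalty, capped at 5% of annual turnover."""
    base_penalty = data_subjects * per_subject_ils
    turnover_cap = 0.05 * annual_turnover_ils
    return min(base_penalty, turnover_cap)

# The example above: 1,000,000 data subjects at 8 ILS each, for a processor
# with a hypothetical 500M ILS annual turnover. The cap is 25M ILS, so the
# full 8,000,000 ILS penalty (approx. $2.5M USD) applies.
print(illustrative_ippl_penalty(1_000_000, 8, 500_000_000))  # 8000000.0
```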

Additional updates in the Amendment include expansion of the notice obligation in the case of a data breach, increased rights for data subjects, extension of the statute of limitations, and exemplary damages. In future segments on the IPPL leading up to the August 2025 effective date, we will dive deeper into some of the key features of the Amendment, which is certain to have an impact on entities with customers and/or employees in Israel.

Data privacy and security regulation is growing rapidly around the world, including in Israel. This legislative activity, combined with the growing public awareness of data privacy rights and concerns, makes the development of a meaningful data protection program an essential component of business operations.

The Indiana Attorney General’s Office (OAG) filed a detailed complaint on December 23, 2024 (the Complaint), which arose out of the following patient complaint:

The OAG received a consumer complaint stating that the consumer had contacted Arlington Westend Dental on multiple occasions to receive copies of their x-rays, but Arlington Westend Dental stated it no longer had the x-rays because someone “hacked” their systems.

Under both federal and state law, patients generally have rights to their medical records. In fact, over the last several years, the federal Office for Civil Rights (OCR), which enforces the HIPAA Privacy and Security Rules, has vigorously enforced these rights. In October 2024, the agency announced its 50th enforcement action, touting a $70,000 settlement, coincidentally with another dental practice.

It should be no surprise that the patient sought redress from the OAG, particularly after being told the reason for the lack of records was a “hacking” of the dental practice’s systems. At that point, according to the Complaint, the patient had not received notice of the incident. However, the facts that follow in the Complaint may be surprising for some.

According to the Complaint:

  • A ransomware attack occurred in October 2020. Because no forensic investigation was performed, the scope of the incident could not be determined.
  • The ransomware attack was not reported to the OAG as required by law; it was discovered during the investigation. When it was ultimately reported, the report indicated that the incident was not an intrusion, “but an incident of data being lost when the on-site internal hard drive of the server got formatted by mistake.”
  • The OAG obtained recordings of customer service calls from the dental practice’s software vendor that told a different story about the incident, confirming facts consistent with a ransomware attack, encryption of all records on the impacted server, and the existence of a ransom note.

The OAG’s findings about the ransomware incident prompted further investigation into the practice’s compliance with HIPAA generally. According to the Complaint, the practice had one set of HIPAA policies located at one of its six locations, with no evidence of implementation. No risk assessment had been conducted. In addition to a lack of evidence of regulatory compliance with policy and procedure obligations under HIPAA, the OAG also learned that the practice “repeatedly disclosed PHI in public replies to online patient reviews and made public posts disclosing PHI and identifying individuals, including minor children, as patients of [the practice] without patient authorization.”

The OAG included in the Complaint examples of the photographs of patients made public by the practice and some of the responses to online reviews. Here is one of those responses:

Ms. [redacted] I am sorry to hear that you are upset with the treatment that your husband received at our office. We strive for nothing but the best care for our patients. And let me assure you that your husband got very good dental care. Your husband came in as an emergency because of pain and infection and wanted to have the tooth extracted. We took time out of our busy schedule to take care of him and provide the same-day treatment, for which most people are grateful. He was already in so much pain as you stated when he came in, which means he already had severe infection. We treated the infection by extracting the tooth which was the source of the infection. The doctor also prescribed antibiotics and pain medication. I don’t understand why you would say that we did not take the whole tooth out. We have a post-op X-ray that shows the entire tooth has been extracted. Perhaps you should seek professional opinion of another dentist rather than giving us an unfair review based upon your vague and uninformed assumptions.

Clearly, a lot went wrong here, and there are some serious allegations by the OAG about how this incident and the investigation were handled by the practice. But there are some recurring lessons for providers, particularly smaller and midsized practices, that include:

  • Having a set of HIPAA policies in a drawer that no one in the practice sees will do little to support an argument for HIPAA compliance.
  • Complaints about untimely or inadequate responses to requests for patient records will get the attention of federal and state agencies and, if substantiated, will likely lead to penalties.
  • While they can be upsetting and possibly disruptive to the practice, responding to patient reviews online and in social media can be serious traps for the unwary. We have seen it play out badly for providers here, here, and here.

We have helped many small to midsized providers, including dental practices, work through the issues and avoid these kinds of settlements and enforcement actions.

On November 8, 2024, the California Privacy Protection Agency (CPPA) voted to advance proposed regulations concerning automated decisionmaking technology. While the comment period is ongoing and we do not have final rules, we are taking a look at some key provisions to help businesses begin to assess the potential effects of these rules if made final as is. In this post, we look at what “automated decisionmaking technology” means.

What is automated decisionmaking technology (ADMT)?

According to the proposed regulation, automated decisionmaking technology (ADMT) would mean:  

any technology that processes personal information and uses computation to execute a decision, replace human decisionmaking, or substantially facilitate human decisionmaking.

So, the first thing to note is that, for purposes of these proposed regulations, an ADMT under the CCPA proposed rules must involve the processing of personal information. Under the CCPA, however, while personal information is defined broadly, there are several exceptions. One is that neither deidentified nor aggregate consumer information constitutes personal information. Another is that protected health information covered by the Health Insurance Portability and Accountability Act (HIPAA) is not considered personal information. And there are other exceptions to consider.

Understanding these exceptions may help businesses narrow the impact of these regulations on their organizations. For example, technology facilitating human decisionmaking to process claims under a HIPAA-covered group health plan might fall outside of these regulations.

The proposed regulations also would define what it means to “substantially facilitate human decisionmaking.” We encounter a similar concept in some other AI regulation, such as Local Law 144 in New York City and the Colorado Artificial Intelligence Act (CAIA). Under these proposed regulations, if the technology’s output is a key factor in a human’s decisionmaking, it will be considered to be substantially facilitating human decisionmaking. The proposed regulations provide the following example,

using automated decisionmaking technology to generate a score about a consumer that the human reviewer uses as a primary factor to make a significant decision about them.

(emphasis added). Note the score need not be “the” primary factor, only “a” primary factor. Perhaps this will be clarified in the final rule. But one can read this language as similar to the “substantial factor” description used when assessing “high-risk artificial intelligence systems” under the CAIA. Under the NYC law, however, substantially assisting or replacing discretionary decisionmaking requires relying solely on the output, weighting the output more than any other factor, or using the output to overrule conclusions derived from other factors, including human decision-making. This is a small but potentially significant distinction in how AI regulation applies across jurisdictions, one that organizations will have to track.

ADMTs Include Profiling

The proposed regulations would make clear that ADMTs include profiling, defined as:

any form of automated processing of personal information to evaluate certain personal aspects relating to a natural person and in particular to analyze or predict aspects concerning that natural person’s intelligence, ability, aptitude, performance at work, economic situation; health, including mental health; personal preferences, interests, reliability, predispositions, behavior, location, or movements.

Over the last few years, many employers have deployed a range of devices and applications that may include “technologies” (under the proposed regulations – “software or programs, including those derived from machine learning, statistics, other data-processing techniques, or artificial intelligence”) that may constitute “profiling.” These devices and applications help support employers’ efforts to source, recruit, monitor, track, and assess the performance of employees, applicants, and others. Examples include (i) dashcams deployed throughout company fleets to promote safety, improve performance, and reduce costs, and (ii) performance management platforms that, among other things, are used to evaluate employee productivity.

Technologies that are NOT ADMTs

Technologies that do not execute a decision, replace human decisionmaking, or substantially facilitate human decisionmaking would not be ADMTs, according to the proposed regulations, such as: web hosting, domain registration, networking, caching, website-loading, data storage, firewalls, anti-virus, anti-malware, spam- and robocall-filtering, spellchecking, calculators, databases, spreadsheets, or similar technologies.

Businesses would need to be careful applying these exceptions. Using a spreadsheet to run regression analyses on top-performing managers to determine their common characteristics which then are used to make promotion decisions concerning more junior employees would be a use of an ADMT. That would not be the case if the spreadsheet were merely used to tabulate final scores on performance evaluations.
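
To make the distinction concrete, here is a hedged sketch (hypothetical data and names, our illustration rather than language from the proposed regulations). The first function fits a model on top performers and scores junior employees, output a human would likely use as a key factor in promotion decisions, which points toward ADMT; the second merely totals scores a human already assigned, which points away from it:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def score_for_promotion(manager_traits: np.ndarray,
                        is_top_performer: np.ndarray,
                        junior_traits: np.ndarray) -> np.ndarray:
    """Likely ADMT: the model's score substantially facilitates a human's
    promotion decision about each junior employee."""
    model = LogisticRegression().fit(manager_traits, is_top_performer)
    return model.predict_proba(junior_traits)[:, 1]  # promotion scores

def tabulate_evaluations(scores_by_category: np.ndarray) -> np.ndarray:
    """Likely not ADMT: the spreadsheet only totals scores that human
    reviewers already assigned on performance evaluations."""
    return scores_by_category.sum(axis=1)
```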

There certainly will be more to come concerning the regulation of AI, including under the CCPA. Organizations using these technologies will need to monitor these developments.

Governor Kathy Hochul signed several bills last month designed to strengthen protections for the personal data of consumers. One of those bills (S2659B) makes important changes to the notification timing requirements under the Empire State’s breach notification law, Section 899-aa of the New York General Business Law. The bill took effect immediately upon signing, on December 21, 2024.

All fifty states have enacted at least one data breach notification law. Some states, such as California, have more than one statute – a generally applicable statute and one applying to certain health care entities. Over the years, many of these states have updated their laws in different respects. For example, some have expanded the definition of personal information, resulting in broader categories of personal information triggering a potential notification requirement if breached. Others have added requirements to notify one or more state agencies. And some have modified the specific notification requirements, such as the timing of notification. That is one of the changes New York made to its law.

Prior to the change, a business subject to the New York statute that experienced a covered breach would be required to provide notification to affected individuals:

in the most expedient time possible and without unreasonable delay.

There was no outside time frame by which the notice had to be provided. The bill added a 30-day deadline. So, now, the law requires the breached entity to provide notification

in the most expedient time possible and without unreasonable delay, provided that such notification shall be made within thirty days after the breach has been discovered.

Notably, prior to the change, the law excluded from this timing requirement the legitimate needs of law enforcement and “any measures necessary to determine the scope of the breach and restore the integrity of the systems.” The legitimate needs of law enforcement exception remains in the law; the exceptions for determining the scope of the breach and restoring system integrity do not.

S2659B also made a change to the state agencies that must be notified in the event of a breach under the statute. Under the prior law, if any New York residents were to be notified under the State’s breach notification law, the state attorney general, the department of state, and the division of state police all needed to be notified. The new law adds the Department of Financial Services to the list.

With breach notification requirements under federal law, the laws in all states and several localities, and increasingly embedded in contract obligations, it can be difficult to stay up to date, particularly if the company is in the middle of handling a breach. In addition to being required in some scenarios, this is one more reason why we recommend maintaining an incident response plan. Such a plan is a good place to track these kinds of developments for the company’s incident response team.

As the healthcare sector continues to be a top target for cyber criminals, the Office for Civil Rights (OCR) issued proposed updates to the HIPAA Security Rule (scheduled to be published in the Federal Register on January 6). It looks like substantial changes are in store for covered entities and business associates alike, including healthcare providers, health plans, and their business associates.

According to the OCR, cyberattacks against the U.S. health care and public health sectors continue to grow and threaten the provision of health care, the payment for health care, and the privacy of patients and others. The OCR reported that in 2023 over 167 million people were affected by large breaches of health information, a 1002% increase from 2018. Further, seventy-nine percent of the large breaches reported to the OCR in 2023 were caused by hacking. Since 2019, large breaches caused by successful hacking and ransomware attacks have increased 89% and 102%, respectively.

The proposed Security Rule changes are numerous and include some of the following items:

  • All Security Rule policies, procedures, plans, and analyses will need to be in writing.
  • Create and maintain a technology asset inventory and a network map that illustrates the movement of ePHI throughout the regulated entity’s information systems on an ongoing basis, but at least once every 12 months.
  • More specificity needed for risk analysis. For example, risk assessments must be in writing and include action items such as identification of all reasonably anticipated threats to ePHI confidentiality, integrity, and availability and potential vulnerabilities to information systems.
  • 24-hour notice to regulated entities when a workforce member’s access to ePHI or certain information systems is changed or terminated.
  • Stronger incident response procedures, including: (I) written procedures to restore the loss of certain relevant information systems and data within 72 hours, (II) written security incident response plans and procedures, including testing and revising plans.
  • Conduct a compliance audit every 12 months.
  • Business associates to verify Security Rule compliance to covered entities through an analysis by a subject matter expert at least once every 12 months.
  • Require encryption of ePHI at rest and in transit, with limited exceptions.
  • New express requirements would include: (I) deploying anti-malware protection, and (II) removing extraneous software from relevant electronic information systems.
  • Require the use of multi-factor authentication, with limited exceptions.
  • Require review and testing of the effectiveness of certain security measures at least once every 12 months.
  • Business associates to notify covered entities upon activation of their contingency plans without unreasonable delay, but no later than 24 hours after activation.
  • Group health plans must include in plan documents certain requirements for plan sponsors: comply with the Security Rule; ensure that any agent to whom they provide ePHI agrees to implement the administrative, physical, and technical safeguards of the Security Rule; and notify their group health plans upon activation of their contingency plans without unreasonable delay, but no later than 24 hours after activation.

After reviewing the proposed changes, concerned stakeholders may submit comments to OCR for consideration within 60 days after January 6 by following the instructions outlined in the proposed rule. We support clients in developing and submitting comments to help shape the final rule, as well as in complying with the requirements under the rule once made final.

As the year comes to a close, here are some of the highlights from the Workplace Privacy, Data Management & Security Report, with our most popular topics and posts from 2024.

Expanding State Privacy Laws

This year saw a further expansion of state comprehensive consumer data privacy laws. These legislative measures aim to enhance the protection of consumer data, ensuring greater transparency and accountability for businesses that collect and process personal information. Several states introduced robust frameworks designed to safeguard consumer privacy. Whether you are an attorney, an executive, or a leader in human resources, marketing, operations, risk management, or, of course, IT, it is vital to stay informed about these evolving legal standards and their implications for both businesses and consumers.

Read more on these developments:

Bluegrass State Becomes Third State to Pass a Comprehensive Consumer Privacy Data Law in 2024

Maryland Passes Comprehensive Data Privacy Law, Joining the Swelling State Ranks

Minnesota Passes a Comprehensive Consumer Data Privacy Law

Nebraska Adds to the List of States That Have Enacted a Comprehensive Consumer Data Privacy Law

New Hampshire Passes Comprehensive Consumer Data Privacy Law

New Jersey Legislature Enacts the First Consumer Privacy Law of 2024

Rhode Island Passes a Comprehensive Consumer Data Privacy Law

Growing AI Regulation

In 2024, the landscape of artificial intelligence (AI) regulation experienced significant changes, reflecting the rapid advancements and widespread adoption of AI technologies across various industries. Regulators have increasingly focused on addressing the ethical, legal, and privacy implications of AI, leading to new laws and amendments aimed at safeguarding individuals’ rights and ensuring transparency in AI deployment. One example at the federal level is the use of AI when conducting background checks and potential Fair Credit Reporting Act (FCRA) implications. A notable example at the state level is Illinois which made significant amendments to its Human Rights Act, setting a precedent for other states by incorporating specific provisions related to AI.

Read more about these developments:

AI Regulation Continues to Grow as Illinois Amends its Human Rights Act

AI Notetakers – Evaluating the Risks Along with the Benefits

3 Key Risks When Using AI for Performance Management and Ways to Mitigate Them

AI and Other Decision-Making Tools: Does the Fair Credit Reporting Act Apply?

Data Breach Risks Escalate

Businesses faced significant regulatory and legislative developments pertaining to data breaches in 2024, reflecting the growing need to protect sensitive information in an increasingly digital world. Key updates include the strengthening of breach notification requirements by multiple states, such as Utah, and the emphasis on multi-factor authentication to prevent unauthorized access. The rising scrutiny and evolving legal landscape underscore the necessity for businesses to implement robust cybersecurity measures and comply with updated data breach notification laws to mitigate risks and avoid severe penalties.

Read more about these developments:

Utah Updates to Breach Notification Requirements Take Effect

Multi-factor Authentication (MFA) Bypassed to Permit Data Breach

Website Tracking Concerns for Business

In 2024, the scrutiny surrounding website tracking technologies has intensified significantly. It has become critical for businesses to understand the evolving legal landscape of online tracking practices. Increased regulatory pressure and new legislative measures across different states have highlighted the need for businesses to implement robust privacy policies. These policies must comply not only with state-specific regulations but also with broader federal guidelines, ensuring the protection of consumer data and transparency in data collection. Moreover, recent guidance from the New York Attorney General and other regulatory bodies has emphasized that non-compliance can lead to severe penalties, making it imperative for online retailers and all businesses employing website tracking technologies to stay abreast of the latest legal requirements and best practices.

Read more about these developments:

California Invasion of Privacy Act Violations Aimed at Online Retailers

The Spotlight Shines Even Brighter: New York Attorney General Publishes Guidance On Businesses’ Use Of Website Tracking Technologies

Litigation Under Wiretap Law and What Website Owners Need to Know

Administrative Guidance on Cybersecurity

This year, several administrative agencies issued guidance on cybersecurity, emphasizing the critical importance of protecting sensitive data and ensuring robust security measures across various sectors. The Department of Labor (DOL) expanded fiduciary obligations to include cybersecurity for health and welfare plans, reflecting a growing recognition of the vulnerabilities and risks associated with inadequate cybersecurity practices. When plan fiduciaries set out to assess their plan service providers, they might consider amendments the Securities and Exchange Commission (SEC) made in 2024 to Regulation S-P, which regulates many of those same service providers. If a service provider is subject to Regulation S-P, confirming that it complies with the SEC requirements for an incident response plan and other cybersecurity policies and procedures would help fiduciaries satisfy their obligation to make prudent selections.

Read more about these developments:

DOL Expands Fiduciary Obligations for Cybersecurity to Health and Welfare Plans

Why Retirement Plan Sponsors and Fiduciaries Need to Know about the SEC Cybersecurity Amendments

The Broadening Data Security Mandate: SEC Incident Response Plan and Data Breach Notification Requirements

Jackson Lewis will continue to track important developments in privacy, data management, and cybersecurity in the new year. If you have questions about these or other related issues, contact a Jackson Lewis attorney to discuss.