On November 8, 2024, the California Privacy Protection Agency (CPPA) voted to advance proposed regulations concerning automated decisionmaking technology. While the comment period is ongoing and final rules have not been adopted, we are taking a look at some key provisions to help businesses begin to assess the potential effects of these rules if they are finalized as proposed. In this post, we look at what “automated decisionmaking technology” means.

What is automated decisionmaking technology (ADMT)?

According to the proposed regulations, automated decisionmaking technology (ADMT) would mean:

any technology that processes personal information and uses computation to execute a decision, replace human decisionmaking, or substantially facilitate human decisionmaking.

So, the first thing to note is that, for purposes of these proposed regulations, an ADMT must involve the processing of personal information. Under the CCPA, while personal information is defined broadly, there are several exceptions. One is that neither deidentified nor aggregate consumer information constitutes personal information. Another is that protected health information covered by the Health Insurance Portability and Accountability Act (HIPAA) is not considered personal information. And there are other exceptions to consider.

Understanding these exceptions may help businesses narrow the impact of these regulations on their organizations. For example, technology facilitating human decisionmaking to process claims under a HIPAA-covered group health plan might fall outside the scope of these regulations.

The proposed regulations also would define what it means to “substantially facilitate human decisionmaking.” A similar concept appears in other AI regulations, such as Local Law 144 in New York City and the Colorado Artificial Intelligence Act (CAIA). Under these proposed regulations, if the technology’s output is a key factor in a human’s decisionmaking, the technology would be considered to substantially facilitate human decisionmaking. The proposed regulations provide the following example:

using automated decisionmaking technology to generate a score about a consumer that the human reviewer uses as a primary factor to make a significant decision about them.

(emphasis added). Note that the score need not be “the” primary factor, only “a” primary factor. Perhaps this will be clarified in the final rule. One can read this language as similar to the “substantial factor” standard used to assess “high-risk artificial intelligence systems” under the CAIA. Under the NYC law, by contrast, substantially assisting or replacing discretionary decisionmaking requires relying solely on the output, weighting the output more heavily than any other factor, or using the output to overrule conclusions derived from other factors, including human decisionmaking. This is a small but potentially significant distinction affecting the application of AI regulation across jurisdictions, and one that organizations will have to track.

ADMTs Include Profiling

The proposed regulations would make clear that ADMTs include profiling, defined as:

any form of automated processing of personal information to evaluate certain personal aspects relating to a natural person and in particular to analyze or predict aspects concerning that natural person’s intelligence, ability, aptitude, performance at work, economic situation; health, including mental health; personal preferences, interests, reliability, predispositions, behavior, location, or movements.

Over the last few years, many employers have deployed a range of devices and applications that may include “technologies” (defined under the proposed regulations as “software or programs, including those derived from machine learning, statistics, other data-processing techniques, or artificial intelligence”) that may constitute “profiling.” These devices and applications help support employers’ efforts to source, recruit, monitor, track, and assess the performance of employees, applicants, and others. Examples include (i) dashcams deployed throughout company fleets to promote safety, improve performance, and reduce costs, and (ii) performance management platforms used, among other things, to evaluate employee productivity.

Technologies that are NOT ADMTs

According to the proposed regulations, technologies that do not execute a decision, replace human decisionmaking, or substantially facilitate human decisionmaking would not be ADMTs. Examples include web hosting, domain registration, networking, caching, website-loading, data storage, firewalls, anti-virus, anti-malware, spam- and robocall-filtering, spellchecking, calculators, databases, spreadsheets, and similar technologies.

Businesses would need to be careful applying these exceptions. Using a spreadsheet to run regression analyses on top-performing managers to determine their common characteristics, which then are used to make promotion decisions concerning more junior employees, would be a use of an ADMT. That would not be the case if the spreadsheet were merely used to tabulate final scores on performance evaluations.
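To make that distinction more concrete, here is a minimal, hypothetical sketch in Python. The data, variable names, and the promotion_score function are illustrative assumptions, not anything drawn from the proposed regulations; the point is simply to contrast a model whose output could serve as a primary factor in a significant decision with arithmetic that merely tabulates scores a human reviewer has already assigned.

```python
# Minimal, hypothetical sketch of the ADMT distinction discussed above.
# All names and data below are illustrative assumptions, not drawn from
# the proposed regulations. Requires Python 3.10+ for linear_regression.
from statistics import linear_regression, mean

# (1) Likely ADMT territory: fit a simple model to hypothetical data about
# top-performing managers (tenure in years vs. performance score), then use
# the model's output as a primary factor in promotion decisions about
# more junior employees.
manager_tenure = [3, 5, 7, 10, 12]          # hypothetical predictor
manager_performance = [72, 78, 85, 90, 94]  # hypothetical outcome
model = linear_regression(manager_tenure, manager_performance)

def promotion_score(tenure_years: float) -> float:
    """Model output a reviewer might rely on as a primary factor."""
    return model.slope * tenure_years + model.intercept

print(round(promotion_score(6), 1))  # computed score informing a significant decision

# (2) Likely NOT an ADMT use: merely tabulating final scores that a human
# reviewer has already assigned on a performance evaluation.
evaluation_scores = [4, 5, 3, 4]
print(mean(evaluation_scores))  # arithmetic only; no decision is executed or facilitated by a model
```

Running the sketch prints the model-driven promotion score in the first scenario and the simple average of human-assigned evaluation scores in the second.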

There certainly will be more to come concerning the regulation of AI, including under the CCPA. Organizations using these technologies will need to monitor these developments.


Joseph J. Lazzarotti is a principal in the Berkeley Heights, New Jersey, office of Jackson Lewis P.C. He founded and currently co-leads the firm’s Privacy, Data and Cybersecurity practice group, edits the firm’s Privacy Blog, and is a Certified Information Privacy Professional (CIPP) with the International Association of Privacy Professionals. Trained as an employee benefits lawyer focused on compliance, Joe is also a member of the firm’s Employee Benefits practice group.

In short, his practice focuses on the matrix of laws governing the privacy, security, and management of data, as well as the impact and regulation of social media. He also counsels companies on compliance, fiduciary, taxation, and administrative matters with respect to employee benefit plans.

Privacy and cybersecurity experience – Joe counsels multinational, national and regional companies in all industries on the broad array of laws, regulations, best practices, and preventive safeguards. The following are examples of areas of focus in his practice:

  • Advising health care providers, business associates, and group health plan sponsors concerning HIPAA/HITECH compliance, including risk assessments, policies and procedures, incident response plan development, vendor assessment and management programs, and training.
  • Coaching hundreds of companies through the investigation, remediation, notification, and overall response to data breaches of all kinds, including those involving PHI, PII, and payment card data.
  • Helping organizations address questions about the application and implementation of, and overall compliance with, the European Union’s General Data Protection Regulation (GDPR) and, in particular, its implications in the U.S., together with preparing for the California Consumer Privacy Act.
  • Working with organizations to develop and implement video, audio, and data-driven monitoring and surveillance programs. For instance, in the transportation and related industries, Joe has worked with numerous clients on fleet management programs involving the use of telematics, dash-cams, event data recorders (EDR), and related technologies. He also has advised many clients in the use of biometrics including with regard to consent, data security, and retention issues under BIPA and other laws.
  • Assisting clients with growing state data security mandates to safeguard personal information, including steering clients through detailed risk assessments and converting those assessments into practical “best practice” risk management solutions, including written information security programs (WISPs). Related work includes compliance advice concerning FTC Act, Regulation S-P, GLBA, and New York Reg. 500.
  • Advising clients about best practices for electronic communications, including in social media, as well as when communicating under a “bring your own device” (BYOD) or “company owned personally enabled device” (COPE) environment.
  • Conducting various levels of privacy and data security training for executives and employees.
  • Supporting organizations through mergers, acquisitions, and reorganizations with regard to the handling of employee and customer data, and the safeguarding of that data during the transaction.
  • Representing organizations in matters involving inquiries into privacy and data security compliance before federal and state agencies including the HHS Office of Civil Rights, Federal Trade Commission, and various state Attorneys General.

Benefits counseling experience – Joe’s work in the benefits counseling area covers many areas of employee benefits law. Below are some examples of that work:

  • Advising employers and plan sponsors, as part of the firm’s Health Care Reform Team, regarding the establishment, administration, and operation of fully insured and self-funded health and welfare plans to comply with ERISA, IRC, ACA/PPACA, HIPAA, COBRA, ADA, GINA, and other related laws.
  • Guiding clients through the selection of plan service providers, along with negotiating service agreements with vendors to address plan compliance and operations, while leveraging data security experience to ensure plan data is safeguarded.
  • Counseling plan sponsors on day-to-day compliance and administrative issues affecting plans.
  • Assisting in the design and drafting of benefit plan documents, including severance and fringe benefit plans.
  • Advising plan sponsors concerning employee benefit plan operation, administration, and the correction of operational errors.

Joe speaks and writes regularly on current employee benefits, data privacy, and cybersecurity topics, and his work has been published in leading business and legal journals and media outlets, such as The Washington Post, Inside Counsel, Bloomberg, The National Law Journal, Financial Times, Business Insurance, HR Magazine, and NPR, as well as the ABA Journal, The American Lawyer, Law360, Bender’s Labor and Employment Bulletin, the Australian Privacy Law Bulletin, and the Privacy and Data Security Law Journal.

Joe served as a judicial law clerk for the Honorable Laura Denvir Stith on the Missouri Court of Appeals.