On November 8, 2024, the California Privacy Protection Agency (CPPA) voted to advance proposed regulations concerning automated decisionmaking technology. While the comment period is ongoing and we do not have final rules, we are taking a look at some key provisions to help businesses begin to assess the potential effects of these rules if made final.
2024 Wrap-Up of the Workplace Privacy, Data Management & Security Report
As the year comes to a close, here are some of the highlights from the Workplace Privacy, Data Management & Security Report, featuring our most popular topics and posts from 2024.
Expanding State Privacy Laws
This year saw a further expansion of state comprehensive consumer data privacy laws. These legislative measures aim to enhance the…
AI and Other Decision-Making Tools: Does the Fair Credit Reporting Act Apply?
The Consumer Financial Protection Bureau (CFPB) recently issued guidance titled Consumer Financial Protection Circular 2024-06: Background Dossiers and Algorithmic Scores for Hiring, Promotion, and Other Employment Decisions and Consumer Financial Protection Circular 2023-03: Adverse action notification requirements and the proper use of the CFPB’s sample forms provided in Regulation…
Automated Decision Making Changes Coming to California’s FEHA Regulations
The California Civil Rights Council published its most recent version of proposed revisions to the Fair Employment and Housing Act (FEHA) regulations, which address automated decision-making, and extended the comment period by 30 days. You can read more about the proposed revisions here from Jackson Lewis attorneys Sayaka Karitani and Robert Yang.
California Passes Legislation Protecting Performers’ Digital Rights
Governor Newsom recently signed two significant bills focused on protecting digital likeness rights: Assembly Bill (AB) 1836 and Assembly Bill (AB) 2602. These legislative measures aim to address the complex issues surrounding the commercial use of an individual’s digital likeness and establish guidelines for responsible AI use in the digital age.
California AB 1836 addresses…
California Seeks to Have Consistent Definition of Artificial Intelligence
Artificial Intelligence (AI) has created numerous opportunities for growth and economic development throughout California. However, the unregulated use of AI can open a Pandora’s Box of undesirable consequences, and a regulatory framework that leads to inconsistent results will likely create problems of its own. Acknowledging this, the most recent California legislative session included a bevy of bills…
Investigation of AI Training by Australian Radiology Provider Provides Important Reminder for U.S. Healthcare Providers
If there is one thing artificial intelligence (AI) systems need, it is data, and lots of it, as training data is essential for achieving success in a given use case. A recent investigation by Australia’s privacy regulator into the country’s largest medical imaging provider, I-MED Radiology Network, illustrates concerns about the use of medical data to…
California Establishes AI Transparency Act
According to the California legislature, audio recordings, video recordings, and still images can be compelling evidence of the truth. However, the proliferation of Artificial Intelligence (AI), specifically, generative AI, has made it drastically easier to create fake content that is almost impossible to distinguish from authentic content. To address this concern, California’s Governor signed Senate…
Exploring AI Risks Reported in SEC Filings Can Be Helpful For Many Organizations, Including SMBs
One of our recent posts discussed the uptick in AI risks reported in SEC filings, as analyzed by Arize AI. There, we highlighted the importance of strong governance for mitigating some of these risks, but we didn’t address the specific risks identified in those SEC filings. We discuss them briefly here as they are risks…
Can Your AI Model Collapse?
A recent Forbes article summarizes a potentially problematic aspect of AI that highlights the importance of governance and data quality when training AI models. It is called “model collapse.” It turns out that over time, when AI models train on data that earlier AI models created (rather than data created by humans), something is…