After years of development and extensive stakeholder engagement, California has finalized groundbreaking cybersecurity audit regulations under the California Consumer Privacy Act (CCPA). These new requirements may significantly impact how covered businesses protect consumer data.

The New Regulations

The California Privacy Protection Agency (CPPA) Board approved comprehensive amendments to CCPA regulations covering cybersecurity audits, risk assessments, and automated decision-making technology (ADMT), among other things. The regulations were subsequently approved by the California Office of Administrative Law on September 23, 2025, marking the completion of a rulemaking process that began in November 2024.

When Does the Audit Requirement Apply?

Not all businesses subject to the CCPA must conduct cybersecurity audits. According to the regulations, the requirement applies only to businesses whose data processing presents a “significant risk” to consumer security, defined by specific thresholds:

Businesses must conduct annual cybersecurity audits if they fall into one of two buckets (a simple decision sketch follows the list):

  1. They derive 50% or more of their annual revenue in the preceding calendar year from selling or sharing consumers’ personal information, OR
  2. They have over $25 million in annual gross revenue (a threshold adjusted every two years; currently $26,625,000) AND, in the preceding calendar year, processed either:
    • Personal information of more than 250,000 California consumers or households, OR
    • Sensitive personal information of more than 50,000 California consumers or households.
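
For illustration, here is a minimal sketch of that applicability test, using the thresholds stated above. The function and parameter names are hypothetical, and the sketch is no substitute for reviewing the regulations themselves.

    # Hypothetical applicability check based on the thresholds described above.
    GROSS_REVENUE_THRESHOLD = 26_625_000  # current inflation-adjusted figure

    def cybersecurity_audit_required(
        pct_revenue_from_selling_or_sharing_pi: float,  # preceding calendar year
        annual_gross_revenue: float,
        consumers_with_pi: int,            # CA consumers/households, preceding year
        consumers_with_sensitive_pi: int,  # CA consumers/households, preceding year
    ) -> bool:
        # Bucket 1: business models centered on data monetization
        if pct_revenue_from_selling_or_sharing_pi >= 0.50:
            return True
        # Bucket 2: larger businesses processing data at volume
        if annual_gross_revenue > GROSS_REVENUE_THRESHOLD and (
            consumers_with_pi > 250_000 or consumers_with_sensitive_pi > 50_000
        ):
            return True
        return False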

These thresholds ensure that the audit requirement focuses on businesses handling substantial volumes of consumer data or those whose business models center on data monetization.

Effective Dates and Compliance Deadlines

The regulations officially take effect on January 1, 2026. However, businesses have staggered deadlines for submitting their first cybersecurity audit certifications to the CPPA based on their revenue size (a sketch of the tiers follows the list):

  • April 1, 2028: Businesses with annual revenue over $100 million in 2026.
  • April 1, 2029: Businesses with annual revenue between $50 million and $100 million in 2027.
  • April 1, 2030: Businesses with annual revenue under $50 million in 2028.
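
A compact sketch of those tiers, assuming the boundary treatment shown in the comments (the regulations, not this sketch, control edge cases):

    from datetime import date

    def first_certification_deadline(annual_revenue: float) -> date:
        # Revenue bands and dates are from the post; exact boundary
        # treatment (e.g., revenue of exactly $50M) is an assumption.
        if annual_revenue > 100_000_000:
            return date(2028, 4, 1)   # revenue measured for 2026
        if annual_revenue >= 50_000_000:
            return date(2029, 4, 1)   # revenue measured for 2027
        return date(2030, 4, 1)       # revenue measured for 2028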

This phased approach gives businesses time to establish robust audit processes and implement necessary cybersecurity improvements before their first submission deadline.

What the Audit Requirement Entails

The regulations establish detailed requirements for conducting comprehensive cybersecurity audits, the results of which must be provided to a member of the business’s executive management team who has direct responsibility for the business’s cybersecurity program. Here’s a summary of what businesses must do:

Auditor Qualifications: Audits must be conducted by qualified, objective, independent professionals—either internal or external—using recognized auditing standards such as those adopted by the American Institute of CPAs. Auditors must possess expertise in cybersecurity and auditing methodologies.

Audit Scope: The cybersecurity audit must comprehensively evaluate the business’s cybersecurity program across 18 key areas, including:

  • Secure user authentication and access controls
  • Encryption of personal information
  • Account management systems
  • Personal information inventory and management
  • Secure hardware and software configuration
  • Vulnerability scanning and penetration testing
  • Audit-log management and network monitoring
  • Network defenses and segmentation
  • Antivirus and anti-malware protections
  • Vendor and third-party risk management
  • Data retention schedules and secure disposal
  • Incident response capabilities
  • Cybersecurity training programs
  • Breach and incident review for the audit period

Even businesses not subject to the mandatory audit requirement should view the 18 standards as a framework for evaluating their own cybersecurity programs, as the CPPA may use these criteria when assessing CCPA compliance more broadly.

Documentation Requirements: Businesses must prepare detailed audit reports documenting the review scope, policies assessed, evaluation criteria, supporting documentation, identified compliance gaps, and remediation plans. All audit records must be retained for five years.

Annual Certification: Companies must submit written certifications of compliance to the CPPA on an annual basis, signed under penalty of perjury by appropriate executive leadership.

Flexibility for Existing Audits: Importantly, businesses may leverage cybersecurity audits conducted for other regulatory purposes—such as NIST Cybersecurity Framework 2.0 assessments—provided they meet all CCPA requirements. This allows companies to avoid duplicative efforts where existing audits are sufficiently comprehensive.

What This Means for Your Business

Businesses subject to the audit requirement should begin preparation now by identifying qualified audit personnel, establishing appropriate internal reporting structures, conducting comprehensive inventories of personal information processing activities, and documenting current cybersecurity practices. The clock is ticking toward those first compliance deadlines in 2028.

As artificial intelligence (AI), particularly generative AI, becomes increasingly woven into our professional and personal lives—from personalized travel itineraries to reviewing resumes to summarizing investigation notes and reports—questions about who or what controls our data and how it’s used are ever present. AI systems survive and thrive on information, and that intersection of AI and privacy elevates the need for data protection.

Recent regulations issued by the California Privacy Protection Agency (CPPA) under the California Consumer Privacy Act (CCPA) begin to erect those protections. Among its various provisions, the CCPA now specifically addresses automated decision-making technology (ADMT), attempting to bring transparency to these tools and, among other things, give consumers the ability to push back on algorithms making significant decisions about them.

As a starting point, it is important to define ADMT. Under the CCPA, it means any technology that processes personal information and uses computation to replace, or substantially replace, human decision-making. For this purpose, “replace” means to make a decision without human involvement. To constitute human involvement, a human must:

  1. know how to interpret and use the technology’s output to make the decision;
  2. review and analyze the output of the technology, and any other information that is relevant to make or change the decision; and
  3. have the authority to make or change the decision based on their analysis in (2).

CCPA-covered businesses that use ADMT to make “significant decisions” about consumers have several new compliance obligations to navigate. A “significant decision” is defined as a decision that has important consequences for a consumer’s life, opportunities, or access to essential services. CCPA regulations define these decisions as those that result in the provision or denial of:

  • Financial or lending services (e.g., credit approval, loan eligibility)
  • Housing (e.g., rental applications, mortgage decisions)
  • Education enrollment or opportunities (e.g., admissions decisions)
  • Employment or independent contracting opportunities or compensation (e.g., hiring, promotions, work assignments)
  • Healthcare services (e.g., treatment eligibility, insurance coverage)

These decisions are considered “significant” because they directly affect a consumer’s economic, health, or personal well-being.

When such businesses use ADMT to make significant decisions, they generally must do the following:

  • Provide an opt-out right for consumers.
  • Provide a pre-use notice that clearly explains the business’s use of ADMT, in plain language.
  • Provide consumers with the ability to request information about the business’s use of ADMT.

Businesses using ADMT for significant decisions before January 1, 2027, must comply by January 1, 2027. Businesses that begin using ADMT after January 1, 2027, must comply immediately when the use begins.

Businesses will need to examine these new requirements carefully, including how they fit into the existing CCPA compliance framework, along with exceptions that may apply. For example, in the case of a consumer’s right to opt-out of ADMT, a business may not be required to make that right available.

If a business provides consumers with a method to appeal the ADMT decision to a human reviewer who has the authority to overturn the decision, opt-out is not required. Additionally, the right to opt-out of ADMT in connection with certain admission, acceptance, or hiring decisions is not required if the following are satisfied:

  • the business uses ADMT solely for the business’s assessment of the consumer’s ability to perform at work or in an educational program to determine whether to admit, accept, or hire them; and
  • the ADMT works as intended for the business’s proposed use and does not unlawfully discriminate based upon protected characteristics.

Likewise, the right to opt-out of ADMT is not required for certain allocation/assignment of work and compensation decisions, if:

  • the business uses the ADMT solely for the allocation/assignment of work or compensation; and
  • the ADMT works as intended for the business’s purpose and does not unlawfully discriminate based upon protected characteristics.

As many businesses are realizing, successfully deploying AI requires a coordinated approach that achieves more than the desired output. It includes understanding a complex regulatory environment, of which data privacy and security is a significant part.

A new Senate bill, the AI-Related Job Impacts Clarity Act (S. 3108), would create a federal reporting framework for how artificial intelligence (AI) is affecting employment in the United States.

The aim is to produce timely, public data on AI-driven layoffs, hiring, unfilled roles, and retraining, with the Department of Labor (through the Bureau of Labor Statistics) responsible for collecting and publishing regular reports.

The bill is only in its early stages, but the following is an overview of the proposed law to date.

Who is covered?

Initially, “covered entities” include publicly traded companies and federal agencies. The bill also contemplates bringing certain non-publicly traded companies into scope through rulemaking within 180 days of enactment. That rulemaking must be conducted in consultation with the U.S. Securities and Exchange Commission (SEC) and the Department of the Treasury and must consider factors such as workforce size, revenue, industry classification, and overall employment impact, while ensuring any requirements are proportionate and protect proprietary or personally identifiable information.

What must be reported and when?

Under the proposed law, covered entities would make quarterly disclosures to the Secretary of Labor no later than 30 days after each quarter’s end. The required content focuses on AI-related job impacts in the United States (including territories), specifically:

  • The number of individuals laid off substantially due to AI replacing or automating their job functions.
  • The number of individuals hired substantially due to the incorporation of AI.
  • The number of previously occupied positions the company decided not to refill substantially due to AI automation.
  • The number of individuals being retrained or assisted in retraining substantially due to AI.

For each disclosure item, companies must include the relevant North American Industry Classification System (NAICS) codes. 

How will reporting be collected and used?

The Secretary would be permitted to integrate these disclosures into existing Department of Labor or Census Bureau surveys and allow companies to comply via those surveys. If the Census Bureau runs the survey independently, it must share the AI-impact data with Labor each quarter to enable reporting. Labor must publish quarterly summaries and an annual year-end rollup, plus every other quarter a net-impact analysis that combines disclosure data with other relevant information. Reports and underlying data must be published on the BLS website and submitted to Congress within 60 days after each quarter’s end. 

Jackson Lewis will continue to track this and other legislation related to AI. If you have questions about this bill or related issues, contact a Jackson Lewis attorney to discuss.

Leaders charged with safeguarding data privacy and cybersecurity often assume that size equates to security—that large, well-resourced organizations must have airtight defenses against cyberattacks and data breaches. It’s a natural assumption: mature enterprises tend to have robust policies, advanced technology, and deep security teams. Yet, as recent events remind us, even the biggest organizations can be compromised. Sophistication and scale do not guarantee immunity.

On October 21, 2025, the New York Department of Financial Services (DFS) issued guidance on managing risks associated with third-party service providers, urging the entities they regulate to take a more active role in assessing and monitoring their vendors’ cybersecurity practices.

The message is clear: strong internal controls are only as good as the weakest external connection. An organization’s exposure to risk extends well beyond its own systems and policies. It’s a message that entities beyond those regulated by DFS should heed. Consider, for example, the DOL mandate that affects any organization sponsoring an ERISA-covered employee benefit plan – fiduciaries must assess the cybersecurity of plan service providers.

DFS emphasizes that third-party relationships—whether for data hosting, software development, cloud services, or payment processing—must be governed by a structured risk-management framework. The guidance highlights several key components: thorough vendor due diligence before onboarding, contractual provisions addressing cybersecurity responsibilities, ongoing monitoring of vendors’ controls, and incident-response coordination. These expectations are not new, but DFS’s renewed attention signals that regulators continue to see third-party risk as a critical vulnerability.

Importantly, the guidance reminds organizations that performing these steps is not just a compliance exercise—it’s a form of self-protection. Even when a company has invested heavily in its own cybersecurity defenses, it can still be affected by a breach through a vendor’s compromised system or careless employee. The reputational and financial fallout from such an event can be just as severe as if the company’s own network had been directly attacked.

Organizations can take several practical steps in response:

  • Assess vendor criticality and data access. Identify which vendors handle sensitive information or provide essential services. DFS suggests that entities classify vendors based on the vendor’s risk profile, considering factors such as system access, data sensitivity, location, and how critical the services are to its operations. Again, this is a step all organizations should consider when evaluating their vendors.
  • Require detailed cybersecurity questionnaires or certifications. Review vendors’ security controls, policies, and incident-response plans.
  • Incorporate strong contract provisions. Ensure that agreements specify breach notification timelines, audit rights, and responsibilities for remediation costs. The DFS guidance includes several examples of baseline contract provisions, including how AI may be used in the course of performing services. There also are other important provisions DFS does not specifically call out, such as indemnification, insurance requirements, and limitations of liability. Organizations should have qualified counsel review these critical provisions to help ensure contract terms do not stray too far from initial proposals and assurances.
  • Monitor continuously. Risk assessments should not be one-time exercises; regular reviews and periodic attestations help keep oversight current. Third-party service providers experience personnel changes, system updates, new offerings, and financial challenges during the term of a services agreement. These and other factors are likely to have an impact on data privacy and cybersecurity efforts.
  • Plan for the worst. Integrate vendors into incident-response exercises so all parties understand roles and communication channels in a breach.

By taking these steps, organizations not only strengthen their own resilience but also build a defensible position if litigation follows a third-party breach. Courts and regulators increasingly look for evidence that a company acted reasonably in selecting and managing its vendors.

The DFS guidance serves as a reminder that in today’s interconnected environment, no organization can outsource accountability for cybersecurity. Vigilant oversight of third-party relationships is not simply a best practice—it’s an operational necessity.

Key Takeaways

  • Outlines basic steps to determine whether a business may need to perform a risk assessment under the California Consumer Privacy Act (CCPA) in connection with its use of dashcams
  • Provides a resource for exploring the basic requirements for conducting and reporting risk assessments

If you have not reviewed the recently approved, updated CCPA regulations, you might want to soon. There are several new requirements, along with many modifications and clarifications to existing rules. In this post, we discuss a new requirement – performing risk assessments – in the context of dashcam and related fleet management technologies.

In short, when performing a risk assessment, the business needs to assess whether the risk to consumer privacy from the processing of personal information outweighs the benefits to consumers, the business, others, and the public, and, if so, restrict or prohibit that processing, as appropriate.

Of course, the first step to determine whether a business needs to perform a risk assessment under the CCPA is to determine whether the CCPA applies to the business. We discussed those basic requirements in Part 1 of our post on risk assessments under the CCPA.

If you are still reading, you have probably determined that your organization is a “business” covered by the CCPA and, possibly, your business is using certain fleet management technologies, such as dashcam or other vehicle tracking technologies. Even if that is not the case, the remainder of this post may be of interest for “businesses” under the CCPA that are curious about examples applying the new risk assessment requirement.

As discussed in Part 1 of our post on the basics of CCPA risk assessments, businesses are required to perform risk assessments when their processing of personal information presents “significant risk” to consumer privacy. The regulations set out certain types of processing activities involving personal information that would trigger a risk assessment. Depending on the nature and scope of the dashcam technology deployed, a business should consider whether a risk assessment is required.

Dashcams and similar devices increasingly come with an array of features. As the name suggests, these devices include cameras that can record activity inside and outside the vehicle. They also can be equipped with audio recording capabilities permitting the recording of voice in and outside the vehicle. Additionally, dashcams can play a role in logistics, as they often include GPS technology, and they can contribute significantly to worker and public safety through telematics. In general, telematics help businesses understand how the vehicle is being driven – acceleration, hard stops, swerving, etc. More recently, dashcams can have biometrics and AI technologies embedded in them. A facial scan can help determine if the driver is authorized to be driving that vehicle. AI technology also might be used to help determine if the driver is driving safely – is the driver falling asleep, eating, using their phone, wearing a seatbelt, and so on.

Depending on how a dashcam is equipped or configured, businesses subject to the CCPA should consider whether the dashcam involves the processing of personal information that requires a risk assessment.

For instance, a risk assessment is required when processing “sensitive personal information.” Remember that sensitive personal information includes, among other elements, precise geolocation data and biometric information for identifying an individual. While the regulations include an exception for certain employment-related processing, businesses would have to assess whether those exceptions apply.

Another example of processing personal information that requires a risk assessment is profiling a consumer through “systematic observation” of that consumer when they are acting in their capacity as an educational program applicant, job applicant, student, employee, or independent contractor for the business. The regulations define “systematic observation” to mean:

methodical and regular or continuous observation. This includes, for example, methodical and regular or continuous observation using Wi-Fi or Bluetooth tracking, radio frequency identification, drones, video or audio recording or live-streaming, technologies that enable physical or biological identification or profiling; and geofencing, location trackers, or license-plate recognition.

The regulation also defines profiling as:

any form of automated processing of personal information to evaluate certain personal aspects (including intelligence, ability, aptitude, predispositions) relating to a natural person and in particular to analyze or predict aspects concerning that natural person’s performance at work, economic situation, health (including mental health), personal preferences, interests, reliability, predispositions, behavior, location, or movements.

Considering the range of use cases for vehicle/fleet tracking technologies, and depending on their capabilities and configurations, it is conceivable that in some cases the processing of personal information by such technology could be considered a “significant risk,” requiring a risk assessment under the CCPA.
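
To make the analysis concrete, here is a hypothetical pre-screen that maps dashcam capabilities to the triggers discussed above. The feature names and trigger labels are illustrative shorthand, not regulatory terms of art, and any real determination requires legal analysis of the specific deployment.

    def dashcam_triggers(features: set[str]) -> list[str]:
        # Flag risk-assessment triggers a given dashcam configuration may implicate.
        triggers = []
        if "gps" in features:
            triggers.append("sensitive PI: precise geolocation")
        if features & {"facial_scan", "driver_biometrics"}:
            triggers.append("sensitive PI: biometric identification")
        if features & {"continuous_video", "continuous_audio", "telematics"}:
            triggers.append("profiling workers via systematic observation")
        return triggers

    # Example: a dashcam with GPS and continuous in-cab video
    print(dashcam_triggers({"gps", "continuous_video"}))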

In that case, Part 2 of our post on risk assessments outlines the steps a business needs to take to conduct a risk assessment, including what must be included in the required risk assessment report and how to timely certify the assessment to the California Privacy Protection Agency.

It is important to note that this is only one of a myriad of potential processing activities that businesses engage in that might trigger a risk assessment requirement. Businesses will need to identify those activities and assess next steps. If the business finds comparable activities, it may be able to minimize the risk assessment burden by conducting a single assessment for those comparable activities.

Again, the new CCPA regulations represent a fundamental shift toward proactive privacy governance under the CCPA. Rather than simply reacting to consumer requests and data breaches, covered businesses must now systematically evaluate and document the privacy implications of their data processing activities before they begin. With compliance deadlines approaching in 2026, organizations should begin now to establish the cross-functional processes, documentation practices, and governance structures necessary to meet these new obligations.


As we discussed in Part 1 of this post, the California Privacy Protection Agency (CPPA) has approved significant updates to California Consumer Privacy Act (CCPA) regulations, which were formally approved by the California Office of Administrative Law on September 23, 2025. We began to outline the requirements for a significant new obligation under the CCPA – namely, the obligation to conduct a risk assessment for certain activities involving the processing of personal information.

In Part 1, we summarized the rules that determine when a risk assessment requirement would apply – that is, when covered businesses process personal information that presents a “significant risk.” In this Part 2, we will summarize the requirements for conducting a compliant risk assessment. These include:

  • Determining which stakeholders should be involved in the risk assessment process and how
  • Establishing appropriate purposes and objectives for conducting the risk assessment
  • Satisfying timing and record keeping obligations
  • Preparing risk assessment reports that meet certain content requirements
  • Timely submitting certifications of required risk assessments to the CPPA

Who Must Be Involved in the Risk Assessment?

The regulations emphasize a collaborative, multi-stakeholder approach to risk assessments. Businesses must involve relevant stakeholders whose duties include the specific processing activity that necessitated the risk assessment. For example, a business should include the person who determined how to collect the personal information for the processing that triggered the risk assessment obligation. A business also may include third parties involved in the risk assessment process, such as experts in detecting and mitigating bias in automated decision-making technology (ADMT).

Establishing appropriate purposes and objectives for conducting the risk assessment

According to the new regulations:

The goal of a risk assessment is restricting or prohibiting the processing of personal information if the risks to consumer privacy outweigh the benefits resulting from processing to the consumer, the business, other stakeholders, and the public.

In working toward that goal, businesses need to identify the purpose of the risk assessment. That purpose cannot be generic – “we are conducting this risk assessment to improve our services.” Rather, the stated purpose must be more specific. Suppose a business would like to systematically observe an employee when processing store purchases (whether physically at the register or online as a call center employee) in an effort to decrease consumer wait times. The business would need to do more than simply state the purpose as “improving service”; it might instead identify decreasing consumer wait times for processing purchases as the relevant purpose.

Satisfying timing and record keeping obligations

In general, risk assessments must be completed before initiating the processing activity that triggers the requirement. This proactive approach ensures that businesses evaluate privacy risks before they materialize rather than retrofitting assessments after the fact.

Note that businesses may need to conduct a risk assessment for activities they initiated prior to January 1, 2026. More specifically, in the case of processing activities triggering a risk assessment requirement (see Part 1) that the business initiated prior to January 1, 2026 and that continues after January 1, 2026, the business must conduct and document a risk assessment no later than December 31, 2027.

Once completed, risk assessments must be reviewed and updated at least every three years. However, if material changes occur to the processing activity, businesses must update the assessment within 45 days of the change. Material changes might include significant increases in the volume of personal information processed, new uses of the data, or changes to the technologies employed.

Businesses must retain risk assessment documentation for as long as the processing continues or for five years after completing the assessment, whichever is longer. This extended retention period recognizes that risk assessments may be relevant to future enforcement actions or litigation.
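
A minimal sketch of these timing rules, assuming the dates and periods stated above (helper names are hypothetical, and leap-day edge cases are ignored):

    from datetime import date, timedelta

    LEGACY_DEADLINE = date(2027, 12, 31)  # processing begun before Jan 1, 2026

    def assessment_due(processing_start: date) -> date:
        # New activities: assess before processing begins; activities initiated
        # before Jan 1, 2026 and continuing afterward: assess by Dec 31, 2027.
        if processing_start < date(2026, 1, 1):
            return LEGACY_DEADLINE
        return processing_start

    def next_review(last_assessment: date, material_change: date | None = None) -> date:
        # Review at least every three years; a material change requires an
        # update within 45 days.
        routine = last_assessment.replace(year=last_assessment.year + 3)
        if material_change is not None:
            return min(routine, material_change + timedelta(days=45))
        return routine

    def retain_until(assessment_done: date, processing_ends: date) -> date:
        # Retain records for the longer of the life of the processing or
        # five years after completing the assessment.
        return max(processing_ends, assessment_done.replace(year=assessment_done.year + 5))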

Preparing risk assessment reports that meet certain content requirements

Importantly, risk assessments must result in documented reports that reflect the input and analysis of diverse perspectives. The regulations require identifying the individuals who provided information for the assessment (excluding legal counsel to preserve attorney-client privilege) as well as the date, names, and positions of those who reviewed and approved the assessment. This documentation requirement ensures accountability and demonstrates that the assessment received appropriate organizational attention.

Specifically, the regulations prescribe detailed content requirements for risk assessment reports. Each assessment must document the following elements (a structural sketch follows the list):

  • The specific purpose of processing in concrete terms rather than generic descriptions. As noted above, businesses cannot simply state that they process data “for business purposes” but must articulate the precise objectives, such as “to provide personalized product recommendations based on browsing history and purchase patterns.”
  • The categories of personal and sensitive personal information processed, including documentation of the minimum necessary information required to achieve the stated purpose. This requirement operationalizes data minimization principles by forcing businesses to justify each category of data collected.
  • The operational elements of the processing, including the method of collecting personal information, retention periods, the number of consumers affected, and any disclosures to consumers about the processing. This provides a comprehensive view of the data lifecycle. In the case of ADMT, any assumptions or limitations on the logic and how the business will use the ADMT output need to be included.
  • The benefits from the processing to both the business and consumers. Businesses must articulate what value the processing creates, whether through improved services, enhanced security, cost savings, or other outcomes.
  • The negative impacts to consumers’ privacy associated with the processing. This critical element requires honest assessment of risks such as unauthorized access, discriminatory outcomes, loss of autonomy, surveillance concerns, or reputational harm.
  • Safeguards the business will implement to mitigate identified negative impacts. These might include technical controls like encryption and access restrictions; organizational measures like privacy training and incident response plans; or procedural safeguards like human review of automated decisions.
  • Whether the business will proceed with the processing after weighing the benefits against the risks. The CPPA has explicitly stated that the goal of risk assessments is to restrict or prohibit processing when risks to consumer privacy outweigh the benefits. This represents a substantive requirement, not merely a documentation exercise.
  • The individuals who provided information for the assessment (excluding legal counsel), along with the date, names, and positions of those who reviewed and approved it. This creates an audit trail demonstrating organizational engagement with the process.
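
One way to internalize these requirements is as a checklist. The hypothetical data structure below mirrors the elements listed above; the field names are illustrative, not drawn from the regulations.

    from dataclasses import dataclass, field

    @dataclass
    class RiskAssessmentReport:
        specific_purpose: str                  # concrete, non-generic purpose
        pi_categories: list[str]               # personal information processed
        sensitive_pi_categories: list[str]     # sensitive PI processed
        operational_elements: dict[str, str]   # collection method, retention, etc.
        benefits: list[str]                    # to the business and consumers
        negative_impacts: list[str]            # privacy risks identified
        safeguards: list[str]                  # mitigations to be implemented
        proceed_with_processing: bool          # do benefits outweigh the risks?
        contributors: list[str] = field(default_factory=list)  # excluding legal counsel
        approvals: list[tuple[str, str, str]] = field(default_factory=list)  # (date, name, position)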

Note that businesses may leverage risk assessments prepared for other regulatory frameworks, such as data protection impact assessments under the GDPR or privacy threshold analyses for federal agencies. However, those other assessments must contain the required information or be supplemented with any outstanding elements.

Timely submitting certifications of required risk assessments to the CPPA

Businesses required to complete a risk assessment must submit certain information to the CPPA. The submission requirements to the CPPA follow a phased schedule. For risk assessments conducted in 2026 and 2027, businesses must submit required information to the CPPA by April 1, 2028. For assessments conducted after 2027, submissions are due by April 1 of the following year. These submissions must include a point of contact, timing of the risk assessment, categories of personal and sensitive personal information covered, and identification of the executive management team member responsible for the assessment’s compliance.
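
Sketched in code, the phased schedule looks like this (dates from the post; the function itself is illustrative):

    from datetime import date

    def submission_due(assessment_year: int) -> date:
        # Assessments conducted in 2026 or 2027 are due April 1, 2028;
        # later assessments are due April 1 of the following year.
        if assessment_year in (2026, 2027):
            return date(2028, 4, 1)
        return date(assessment_year + 1, 4, 1)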

As noted in Part 1, the new CCPA regulations represent a fundamental shift toward proactive privacy governance under the CCPA. Rather than simply reacting to consumer requests and data breaches, covered businesses must now systematically evaluate and document the privacy implications of their data processing activities before they begin. With compliance deadlines approaching in 2026, organizations should begin now to establish the cross-functional processes, documentation practices, and governance structures necessary to meet these new obligations.

The California Privacy Protection Agency (CPPA) has adopted significant updates to the California Consumer Privacy Act (CCPA) regulations, which were formally approved by the California Office of Administrative Law on September 23, 2025. These comprehensive regulations address automated decision-making technology, cybersecurity audits, and risk assessments, with compliance deadlines beginning in 2026. Among these updates, the risk assessment requirements represent a substantial new compliance obligation for many businesses subject to the CCPA.

Of course, as a threshold matter, businesses must first determine whether they are subject to the CCPA. For businesses that are not sure whether the CCPA applies to them, our earlier discussion here may be helpful. If your business is subject to the CCPA, read on.

When Is a Risk Assessment Required?

The new regulations require businesses to conduct risk assessments when their processing of personal information presents “significant risks” to consumer privacy. The CPPA has defined specific processing activities that trigger this requirement:

  • Selling or sharing personal information.
  • Processing “sensitive personal information.” However, there is a narrow exception for limited human resources-related uses such as payroll, benefits administration, and legally mandated reporting. Employers will have to examine carefully which activities are excluded and which are not. Sensitive personal information under the CCPA includes precise geolocation, racial or ethnic origin, religious beliefs, genetic data, biometric information, health information, sexual orientation, and citizenship status, among other categories.
  • Using automated decision-making technology (ADMT) to make significant decisions about consumers. Significant decisions include those resulting in the provision or denial of financial services, lending, housing, education enrollment, employment opportunities, compensation, or healthcare services. More on ADMT to come.
  • Profiling a consumer through “systematic observation” when they are acting in their capacity as an educational program applicant, job applicant, student, employee, or independent contractor for the business. Systematic observation means methodical and regular or continuous observation, such as through Wi-Fi or Bluetooth tracking, radio frequency identification, drones, video or audio recording or live-streaming, technologies that enable physical or biological identification or profiling; and geofencing, location trackers, or license-plate recognition. Businesses engaged in workplace monitoring and using performance management applications may need to consider those activities under this provision.
  • Profiling a consumer based upon their presence in a “sensitive location.” A sensitive location means the following physical places: healthcare facilities including hospitals, doctors’ offices, urgent care facilities, and community health clinics; pharmacies; domestic violence shelters; food pantries; housing/emergency shelters; educational institutions; political party offices; legal services offices; union offices; and places of worship.
  • Processing personal information to train ADMT for significant decisions, or to train facial recognition, biometric, or other technology to verify identity. This recognizes the heightened privacy risks associated with developing systems that may later be deployed at scale.

What is Involved in Completing a Risk Assessment?

For businesses engaged in activities with personal information that will require a risk assessment, it is important to note that there are a number of steps set forth in the new CCPA regulations for performing those assessments. These include:

  • Determining which stakeholders should be involved in the risk assessment process and the nature of that involvement
  • Establishing appropriate purposes and objectives for conducting the risk assessment
  • Satisfying timing and record keeping obligations
  • Preparing risk assessment reports that meet certain content requirements
  • Timely submitting certifications of required risk assessments to the CPPA

In Part 2 of this post, we will discuss the requirements above to help businesses that have to perform one or more risk assessments develop a process for doing so.

The new CCPA regulations represent a fundamental shift toward proactive privacy governance under the CCPA. Rather than simply reacting to consumer requests and data breaches, covered businesses must now systematically evaluate and document the privacy implications of their data processing activities before they begin. With compliance deadlines approaching in 2026, organizations should begin now to establish the cross-functional processes, documentation practices, and governance structures necessary to meet these new obligations.

According to Cybersecurity Dive, artificial intelligence is no longer experimental technology: more than 70% of S&P 500 companies now identify AI as a material risk in their public disclosures, per a recent report from The Conference Board. In 2023, that figure was 12%.

The article reports that major companies are no longer just testing AI in isolated pilots; they’re embedding it across core business systems including product design, logistics, credit modeling, and customer-facing interfaces. At the same time, it is important to note, these companies acknowledge confronting significant security and privacy challenges, among others, in their public disclosures.

  • Reputational Risk: Leading the way is reputational risk, with more than a third of companies worried about potential brand damage. This concern centers on scenarios like service breakdowns, mishandling of consumer privacy, or customer-facing AI tools that fail to meet expectations.
  • Cybersecurity Risk: One in five S&P 500 companies explicitly cite cybersecurity concerns related to AI deployment. According to Cybersecurity Dive, AI technology expands the attack surface, creating new vulnerabilities that malicious actors can exploit. Compounding these risks, companies face dual exposure—both from their own AI implementations and from third-party AI applications.
  • Regulatory Risk: Companies are also navigating a rapidly shifting legal landscape as state and federal governments scramble to establish guardrails while supporting continued innovation.

One of the biggest drivers of these risks, perhaps, is a lack of governance. PwC’s 2025 Annual Corporate Director’s Survey reveals that only 35% of corporate boards have formally integrated AI into their oversight responsibilities—a clear indication that governance structures are struggling to keep pace with technological deployment.

Not surprisingly, innovation seems to be moving quite a bit faster than governance. That gap is contributing to various risks identified by most of the S&P 500. Extrapolating that reality, there is a good chance that small and mid-sized companies are in a similar position. Enhancing governance, such as through sensible risk assessment, robust security frameworks, training, etc., may help to narrow that gap.

Governor Gavin Newsom recently signed SB 446 into law, introducing significant changes to California’s data breach notification requirements. The bill establishes deadlines for notifying consumers and the state’s Attorney General when personal information of California residents has been involved in a data breach.

What’s Changed Under SB 446

Previously, California law required businesses to notify affected individuals of data breaches “without unreasonable delay.” Under SB 446, businesses must notify affected individuals within 30 calendar days of discovering or being notified of a data breach. However, the law includes some flexibility to accommodate the practical realities of incident response. Specifically, businesses may delay notification when necessary for legitimate law enforcement purposes or to determine the full scope of the breach and restore the integrity of data systems.

For breaches affecting more than 500 California residents, existing law requires businesses to notify the California Attorney General. SB 446 adds a deadline for those notifications. Specifically, the California Attorney General must be notified within 15 calendar days of notifying affected consumers of a security breach (again, for breaches affecting more than 500 California residents).
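
For incident response planning, the new deadlines reduce to simple calendar-day math. A hedged sketch, assuming straightforward day counting and ignoring the statute’s law-enforcement and scope-determination delay provisions:

    from datetime import date, timedelta

    def consumer_notice_deadline(discovery: date) -> date:
        # Notify affected individuals within 30 calendar days of discovering
        # (or being notified of) the breach.
        return discovery + timedelta(days=30)

    def attorney_general_deadline(consumer_notice: date, ca_residents_affected: int) -> date | None:
        # For breaches affecting more than 500 California residents, notify the
        # Attorney General within 15 calendar days of notifying consumers.
        if ca_residents_affected > 500:
            return consumer_notice + timedelta(days=15)
        return None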

Considerations for Businesses

All 50 states and several cities have breach notification laws, and notification requirements also exist under federal law, such as HIPAA and banking regulations. Over the years, many of those laws have been updated in several respects – notification deadlines, definitions of personal information, requirements to provide ID theft services and credit monitoring, etc. It is imperative to stay on top of these legal and compliance obligations in order to help maintain preparedness.

SB 446 takes effect January 1, 2026, giving businesses a few months to review and update their incident response plans. Organizations handling California residents’ personal information should act now to ensure they can meet the 30-day notification requirement. This includes establishing clear internal procedures for breach detection, assessment, documentation, and notification.
