As we explored in Part 1 of this series, AI-enabled smart glasses are rapidly evolving from niche wearables into powerful tools with broad workplace appeal — but their innovative capabilities bring equally significant legal and privacy concerns. Modern smart glasses blend high-resolution cameras, always-on microphones, and real-time AI assistants into a hands-free wearable that can capture, analyze, and even transcribe ambient information around the wearer. These features — from continuous audio capture to automated transcription — create scenarios where bystanders (co-workers, customers, etc.) may be recorded or have their conversations documented without ever knowing it, raising fundamental questions about consent and the boundaries of lawful observation.

Part 2 shifts focus to how these core capabilities intersect with consent requirements and note-taking practices under federal and state wiretapping and recording laws. In many jurisdictions, recording or transcribing a conversation without the express permission of all participants — particularly where devices can run discreetly in the background — can trigger two-party (or all-party) consent obligations and potential statutory violations. Likewise, the promise of AI-assisted note-taking — where every spoken word in a meeting could be saved, indexed, and shared — brings not just operational benefits but significant legal and business risk. Understanding how the unique sensing and recording features of smart glasses intersect with these consent and note-taking issues is essential for any organization contemplating deployment.

The Risk

AI glasses with continuous recording, AI note-taking, or voice transcription capabilities can easily violate state wiretapping laws. Twelve states require all parties to consent to the audio recording of confidential communications, including California, Connecticut, Florida, Illinois, Maryland, Massachusetts, Montana, New Hampshire, Pennsylvania, and Washington. Even in one-party consent states, recording in locations where individuals have a reasonable expectation of privacy can violate surveillance laws. Going one step further, consider a wearer standing close enough to capture a conversation between two unrelated persons, neither of whom has consented.

The rise of AI note-taking capabilities in smart glasses makes this risk particularly acute. Unlike traditional recording, which often requires deliberate action, AI glasses can passively capture and transcribe conversations throughout the day, creating permanent, searchable records of discussions that participants never knew were being documented. Smart glasses that record continuously with no visible indicator amplify this concern.

Relevant Use Cases

  • Sales representatives wearing AI glasses that automatically transcribe client meetings without explicit consent from all parties
  • Managers using glasses with AI note-taking features during performance reviews, disciplinary meetings, or interviews
  • Medical professionals recording patient consultations through smart glasses for AI-generated documentation
  • Employees wearing glasses during phone calls where the other party is in a two-party consent state
  • Anyone wearing recording-capable glasses in restrooms, locker rooms, medical facilities, or other areas with heightened privacy expectations
  • Workers using AI transcription features during confidential business discussions or trade secret conversations
  • OSHA inspectors using AI glasses (announced for expanded deployment in 2025) to record workplace inspections without proper protocols

Why It Matters

Violations of two-party consent laws carry criminal penalties, including potential jail time, as well as civil liability. The fact that many AI glasses lack obvious recording indicators—or have only tiny LED lights that are easily missed—compounds the risk. AI-generated transcripts created without consent, or even awareness, raise myriad issues, some of which are outlined here. The ease with which these devices can continuously record and transcribe conversations raises particular concerns given the increasing regulatory emphasis on data minimization.

Practical Compliance Considerations

The compliance challenges surrounding AI glasses are significant, but manageable with proper planning:

  • Implement clear policies: Develop clear policies about when and where AI glasses with recording capabilities can be worn
  • Get consent: Obtain explicit verbal or written consent from all parties before activating recording features—consent banners on video calls may not suffice for glasses
  • Provide notice: Provide visible notification that recording is occurring (though many AI glasses lack adequate indicators)
  • Establish technical limits and safeguards: Implement geofencing or technical controls to automatically disable recording features in prohibited areas
  • Monitor usage: Maintain detailed logs of when recording features are activated and by whom
  • Train users: Train employees on state-specific wiretapping laws, especially when traveling or conducting interstate communications
  • Increase awareness of device features and capabilities: For AI note-taking features, ensure participants know transcription is occurring and can opt out
  • Leverage existing policies: Apply existing privacy and security controls, such as access and retention, relating to transcripts generated from the wearables.
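
To make the geofencing control above concrete, here is a minimal sketch of how a device-management agent might gate recording on the wearer's location. The zone data, names, and functions are hypothetical; real mobile-device-management platforms expose their own geofencing interfaces.

```python
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class Zone:
    name: str
    lat: float
    lon: float
    radius_m: float  # recording is disabled within this radius

def distance_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    # Haversine great-circle distance in meters
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))

def recording_allowed(lat: float, lon: float, prohibited_zones: list[Zone]) -> bool:
    """Return False when the wearer is inside any prohibited zone."""
    return all(distance_m(lat, lon, z.lat, z.lon) > z.radius_m for z in prohibited_zones)
```

In practice a policy engine would pair a check like this with logging (see the monitoring bullet above) so that every enable/disable decision leaves an audit trail.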

Conclusion

AI glasses represent transformative technology with genuine business value, from hands-free information access to enhanced productivity and innovative customer experiences. The 210% growth in smart glasses shipments in 2024 demonstrates their appeal. But the legal risks are real and growing.

The key is to approach the deployment of AI glasses (and deployment of similar technologies) with eyes wide open—understanding both the capabilities of the technology and the complex legal frameworks that govern its use. With thoughtful policies, robust technical controls, ongoing compliance monitoring, and respect for privacy rights, organizations can harness the benefits of AI glasses while managing the risks.

Following failed congressional attempts to limit state AI laws, on December 11, 2025, the President issued an Executive Order titled Ensuring a National Policy Framework for Artificial Intelligence. The Order represents federal intervention into the growing landscape of state-level AI regulation. According to the Administration, a patchwork of state laws has created inconsistent and burdensome compliance obligations, particularly for startups and organizations operating across multiple jurisdictions. The Order claims that certain current state AI laws not only restrict innovation but could also force AI developers to incorporate “ideological bias.”

The EO provides the following example:

a new Colorado law banning “algorithmic discrimination” may even force AI models to produce false results in order to avoid a “differential treatment or impact” on protected groups.

To address these concerns, the Executive Order establishes a new AI Litigation Task Force within the Department of Justice. This group is charged with challenging state AI laws that conflict with the federal policy of promoting minimally burdensome, innovation-focused AI governance.

The Administration anticipates litigation against states whose laws it believes unconstitutionally regulate interstate commerce, impose unlawfully compelled speech, or require model outputs to be modified in ways that conflict with federal law. Within 90 days, the Department of Commerce must also publish a public evaluation identifying specific state laws considered “onerous” or inconsistent with the national policy framework, including those that require disclosures or reporting obligations the Administration argues may infringe the First Amendment. The Colorado AI Act and the California Consumer Privacy Act’s ADMT Regulations will very likely make the list.

The Order further ties compliance with federal AI policy to federal funding. States that maintain AI laws deemed inconsistent with federal objectives may become ineligible for certain Broadband Equity, Access, and Deployment (BEAD) funds, and federal agencies are directed to explore conditioning other discretionary grants on a state’s willingness to refrain from enforcing its AI regulations during funding periods. This introduces a significant financial dimension to federal-state tensions and may influence how aggressively states choose to regulate AI going forward.

In addition, the Order directs federal agencies to begin steps that lay the groundwork for federal preemption. The Federal Communications Commission must consider creating a national reporting and disclosure standard that would override conflicting state requirements, while the Federal Trade Commission is instructed to clarify that state laws compelling alterations to truthful AI outputs may be preempted under federal prohibitions on deceptive practices. These efforts suggest a shift toward a unified federal approach that could substantially reshape or displace existing state obligations.

The effects of the EO remain uncertain. Organizations have been grappling with a rapid proliferation of state AI laws governing areas such as notice, transparency, nondiscrimination, fairness, safety, accuracy, and vendor management stemming from automated decision-making. For covered organizations, these AI developments also intersect with long-standing civil rights laws, like Title VII and similar state laws, and well-established guardrails to prevent employment discrimination, like the Uniform Guidelines on Employee Selection Procedures, which continue to shape how AI-enabled selection tools must be assessed for compliance. 

If federal litigation succeeds or preemptive standards emerge, some existing obligations may shrink or change. At the same time, organizations should expect a period of regulatory instability as states and the federal government contest the limits of their respective authority. Organizations that have invested heavily in state-specific compliance frameworks may need to revisit or revise them, while AI developers could face shifting expectations around disclosure, output modification, and fairness-related requirements.

The Executive Order also directs federal advisors to prepare legislative recommendations for a uniform federal AI framework. Although the Administration proposes broad federal preemption, it indicates that certain topics—such as child safety protections and state AI procurement rules—should remain within state authority. This signals a coming debate in Congress over how much room states should retain to regulate AI-related issues.

Finally, the Order is almost certain to face legal challenges from states, which may argue that the Administration is exceeding its authority, infringing on state sovereignty, or coercively attaching conditions to federal funding. Litigation could take years to resolve, leaving covered organizations to navigate an evolving legal environment where both federal and state rules remain in flux—underscoring the importance of developing AI governance approaches that are flexible, regularly revisited, and attentive to how AI tools interact with existing employment discrimination laws and privacy requirements, for example. The bottom line is that the Executive Order marks the beginning of an aggressive federal push to standardize AI regulation nationwide, with substantial consequences for compliance, risk management, and future governance. Covered organizations should monitor developments closely and prepare for a shifting regulatory landscape.

Smart glasses with AI capabilities have evolved from futuristic concept to everyday reality. The market exploded in 2024, with global smart glasses shipments surging 210% year-over-year, driven primarily by Meta’s Ray-Ban smart glasses. From the consumer-focused Meta Ray-Ban Display (featuring a built-in heads-up display announced in September 2025) to Meta’s partnership with Oakley for athletic glasses, enterprise solutions like RealWear and Vuzix for industrial use, and developer-focused options like Brilliant Labs’ Frame glasses, these devices promise to revolutionize how we interact with the world.

But with innovation comes risk. Modern AI glasses can record video and audio, process conversations in real-time with AI assistants, perform visual analysis of everything you see, generate meeting summaries, create searchable transcripts, and transmit data to cloud servers—often without obvious visual indicators. For businesses deploying these technologies and individuals using them in professional settings, the compliance landscape is treacherous.

In Part 1 of this series, we address biometric data collection.

The Risk

AI glasses increasingly incorporate biometric data collection capabilities that trigger strict privacy regulations. This includes facial recognition through camera feeds, voiceprint capture through AI transcription (see upcoming Part 2 in this series for AI specific risks), eye tracking and gaze analysis, and even the processing of images that could be used to identify individuals. Under laws like California’s Consumer Privacy Act (CCPA), Illinois’ Biometric Information Privacy Act (BIPA), and the EU’s General Data Protection Regulation (GDPR), biometric data receives heightened protection.

The 2024 Charlotte Tilbury settlement established that virtual try-on features using facial geometry may constitute biometric data collection under BIPA, potentially requiring separate notifications and annual consent reaffirmation. This and other precedents extend directly to AI glasses that process visual and audio data that can constitute biometric information.

Relevant Use Cases

  • Retail employees using AI glasses that analyze customer faces or body language for personalized service recommendations
  • Security personnel deploying glasses with facial recognition capabilities for identification
  • Healthcare providers using glasses that process patient images, potentially capturing biometric identifiers
  • Any workplace use where AI processes images or voices of employees, customers, or the public
  • Industrial workers whose AI glasses capture and analyze faces or voices of colleagues during recorded training sessions

Why It Matters

BIPA provides for statutory damages of $1,000 to $5,000 per violation, along with attorneys’ fees. Following the Illinois Supreme Court’s 2023 Cothron decision, each scan or transmission could constitute a separate violation—though a 2024 amendment limited this to one violation per person per collection method. The $51.75 million Clearview AI settlement in 2025 demonstrates the scale of exposure: with biometric data from millions of individuals, companies face bankruptcy-level liability.
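
The statutory-damages arithmetic scales quickly. The following is an illustrative back-of-envelope sketch using the figures quoted above, not a damages model: the default of one violation per person reflects the 2024 amendment's per-person, per-collection-method accrual.

```python
def bipa_exposure(num_people: int,
                  per_violation_damages: int = 1_000,
                  violations_per_person: int = 1) -> int:
    """Rough statutory exposure: people x violations x damages per violation.
    $1,000 is the negligent-violation floor; $5,000 applies to intentional
    or reckless violations. Attorneys' fees are not modeled."""
    return num_people * violations_per_person * per_violation_damages

# e.g., 10,000 affected individuals at the $1,000 negligent-violation floor
exposure = bipa_exposure(10_000)  # 10,000,000
```

Even at the statutory floor and with the amended one-violation-per-person accrual, a modest employee or customer population produces eight-figure exposure.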

While BIPA may be the most prominent of the biometric laws in the United States, it certainly is not the only one. Measures to regulate the collection, use, and disclosure of biometric information exist in states such as California, Colorado, Texas, and Washington, as well as in several cities, including New York City and Portland, OR.

For a summary of these requirements, see our Biometrics white paper.

Practical Compliance Considerations

The compliance challenges surrounding AI glasses are significant, but manageable with proper planning:

  • Address Applicable Notice, Consent, and Policy Requirements: Organizations may need to create detailed, written policies governing when, where, and how AI glasses may be used. Address recording features, AI processing, data transmission, and specify prohibited uses. Include clear guidance on consumer versus enterprise devices. And, of course, consider applicable notice, consent, and record retention policies.
  • Conduct Privacy Impact Assessments: Before deploying AI glasses, evaluate privacy risks specific to your industry, geography, and use cases. Consider biometric data collection, workplace surveillance, third-party AI processing, and cross-border data transfers. Note such risk assessments may be required, see here and here.
  • Implement Technical Controls: Use device management solutions to control which features can be activated in which locations. Consider geofencing to automatically disable recording in sensitive areas like bathrooms, break rooms, confidential meeting spaces, and healthcare facilities.
  • Vet Vendors and AI Services: Understand where data goes, who processes it, how long it’s retained, what security controls exist, and whether vendors will sign appropriate agreements (BAAs for HIPAA, DPAs for GDPR, etc.). Negotiate contracts that protect your organization and comply with your obligations.
  • Train Rigorously: Ensure all users understand the legal implications of AI glasses, including consent requirements, prohibited uses, data handling obligations, and discovery implications. Training should be role-specific and regularly updated.
  • Monitor Regulatory Developments: Regulation is evolving rapidly concerning biometrics, as well as AI tools that leverage that information for additional capabilities. The EU AI Act took effect in 2024, California increased its AI-regulatory environment in 2024-2025, and federal AI legislation is under consideration. State workplace surveillance laws are proliferating. Stay current with legal developments.
  • Establish Clear Lines of Responsibility: Designate who is responsible for AI glasses compliance, including legal review, privacy assessment, security controls, HR considerations, policy enforcement, and incident response.
  • Consult Legal Counsel: Given the complexity and variability of the regulatory environment, work with attorneys familiar with privacy, employment, biometric, and AI regulations before rolling out these wearables.

Conclusion

AI glasses represent transformative technology with genuine business value, from hands-free information access to enhanced productivity and innovative customer experiences. The 210% growth in smart glasses shipments in 2024 demonstrates their appeal. But the legal risks are real and growing.

Organizations that fail to address these compliance concerns face not just regulatory penalties, but class action litigation (BIPA damages alone can reach millions), reputational harm, loss of customer trust, and the erosion of employee confidence.

The key is to approach the deployment of AI glasses (and deployment of similar technologies) with eyes wide open—understanding both the capabilities of the technology and the complex legal frameworks that govern its use. With thoughtful policies, robust technical controls, ongoing compliance monitoring, and respect for privacy rights, organizations can harness the benefits of AI glasses while managing the risks.


After years of development and extensive stakeholder engagement, California has finalized groundbreaking cybersecurity audit regulations under the California Consumer Privacy Act (CCPA). These new requirements may significantly impact how covered businesses protect consumer data.

The New Regulations

The California Privacy Protection Agency (CPPA) Board approved comprehensive amendments to CCPA regulations covering cybersecurity audits, risk assessments, and automated decision-making technology (ADMT), among other things. The regulations were subsequently approved by the California Office of Administrative Law on September 23, 2025, marking the completion of a rulemaking process that began in November 2024.

When Does the Audit Requirement Apply?

Not all businesses subject to the CCPA must conduct cybersecurity audits. According to the regulations, the requirement applies only to businesses whose data processing presents a “significant risk” to consumer security, defined by specific thresholds:

Businesses must conduct annual cybersecurity audits if they fall into one of two buckets:

  1. They derive 50% or more of their annual revenue in the preceding calendar year from selling or sharing consumers’ personal information, OR
  2. They have over $25 million in annual gross revenue (adjusted every two years; currently $26,625,000) AND, in the preceding calendar year, processed either:
    • Personal information of more than 250,000 California consumers or households, OR
    • Sensitive personal information of more than 50,000 California consumers or households.

These thresholds ensure that the audit requirement focuses on businesses handling substantial volumes of consumer data or those whose business models center on data monetization.
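
The two buckets lend themselves to a simple decision check. This sketch mirrors the thresholds described above; the parameter names are ours, and the revenue threshold default is the currently adjusted figure, which changes every two years.

```python
def audit_required(share_of_revenue_from_selling_sharing: float,
                   annual_gross_revenue: float,
                   ca_consumers_pi: int,
                   ca_consumers_sensitive_pi: int,
                   revenue_threshold: float = 26_625_000) -> bool:
    """Return True if either 'significant risk' bucket is met.
    Consumer counts include California consumers or households."""
    # Bucket 1: 50%+ of annual revenue from selling/sharing personal information
    bucket_1 = share_of_revenue_from_selling_sharing >= 0.50
    # Bucket 2: gross revenue over the adjusted threshold AND volume triggers
    bucket_2 = annual_gross_revenue > revenue_threshold and (
        ca_consumers_pi > 250_000 or ca_consumers_sensitive_pi > 50_000
    )
    return bucket_1 or bucket_2
```

For example, a business with $30 million in gross revenue processing personal information of 300,000 California consumers would fall into bucket 2 even if it sells no data at all.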

Effective Dates and Compliance Deadlines

The regulations officially take effect on January 1, 2026. However, businesses have staggered deadlines for submitting their first cybersecurity audit certifications to the CPPA based on their revenue size:

  • April 1, 2028: Businesses with annual revenues over $100 million (first audit covering 2026).
  • April 1, 2029: Businesses with annual revenues between $50 million and $100 million (first audit covering 2027).
  • April 1, 2030: Businesses with annual revenues under $50 million (first audit covering 2028).

This phased approach gives businesses time to establish robust audit processes and implement necessary cybersecurity improvements before their first submission deadline.
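
For planning purposes, the phased schedule reduces to a simple revenue-tier lookup. This is a sketch of the schedule above; the treatment of revenue exactly at the $50 million or $100 million boundary is our assumption, and the regulations control.

```python
def first_certification_deadline(annual_revenue: float) -> str:
    """Map a business's annual revenue tier to its first cybersecurity
    audit certification due date under the CPPA's phased schedule."""
    if annual_revenue > 100_000_000:
        return "2028-04-01"  # first audit covers 2026
    if annual_revenue > 50_000_000:
        return "2029-04-01"  # first audit covers 2027
    return "2030-04-01"      # first audit covers 2028
```

Working backward from the returned date (audits cover the preceding audit year) gives each tier roughly two years from the January 1, 2026 effective date to stand up its audit program.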

What the Audit Requirement Entails

The regulations establish detailed requirements for conducting comprehensive cybersecurity audits, the results of which must be provided to a member of the business’s executive management team who has direct responsibility for the business’s cybersecurity program. Here’s a summary of what businesses must do:

Auditor Qualifications: Audits must be conducted by qualified, objective, independent professionals—either internal or external—using recognized auditing standards such as those adopted by the American Institute of CPAs. Auditors must possess expertise in cybersecurity and auditing methodologies.

Audit Scope: The cybersecurity audit must comprehensively evaluate the business’s cybersecurity program across 18 key areas, including:

  • Secure user authentication and access controls
  • Encryption of personal information
  • Account management systems
  • Personal information inventory and management
  • Secure hardware and software configuration
  • Vulnerability scanning and penetration testing
  • Audit-log management and network monitoring
  • Network defenses and segmentation
  • Antivirus and anti-malware protections
  • Vendor and third-party risk management
  • Data retention schedules and secure disposal
  • Incident response capabilities
  • Cybersecurity training programs
  • Breach and incident review for the audit period

Even businesses not subject to the mandatory audit requirement should view the 18 standards as a framework for evaluating their own cybersecurity programs, as the CPPA may use these criteria when assessing CCPA compliance more broadly.

Documentation Requirements: Businesses must prepare detailed audit reports documenting the review scope, policies assessed, evaluation criteria, supporting documentation, identified compliance gaps, and remediation plans. All audit records must be retained for five years.

Annual Certification: Companies must submit written certifications of compliance to the CPPA on an annual basis, signed under penalty of perjury by appropriate executive leadership.

Flexibility for Existing Audits: Importantly, businesses may leverage cybersecurity audits conducted for other regulatory purposes—such as NIST Cybersecurity Framework 2.0 assessments—provided they meet all CCPA requirements. This allows companies to avoid duplicative efforts where existing audits are sufficiently comprehensive.

What This Means for Your Business

Businesses subject to the audit requirement should begin preparation now by identifying qualified audit personnel, establishing appropriate internal reporting structures, conducting comprehensive inventories of personal information processing activities, and documenting current cybersecurity practices. The clock is ticking toward those first compliance deadlines in 2028.

As artificial intelligence (AI), particularly generative AI, becomes increasingly woven into our professional and personal lives—from personalized travel itineraries to reviewing resumes to summarizing investigation notes and reports—questions about who or what controls our data and how it’s used are ever present. AI systems survive and thrive on information and that intersection of AI and privacy elevates the need for data protection.

Recent regulations issued by the California Privacy Protection Agency (CPPA) under the California Consumer Privacy Act (CCPA) begin to erect those protections. Among its various provisions, the CCPA now specifically addresses automated decision-making technologies (ADMT), attempting to bring transparency and consumer rights to, among other things, push back on algorithms making significant decisions about them.

As a starting point, it is important to define ADMT. Under the CCPA, it means any technology that processes personal information and uses computation to replace, or substantially replace, human decision-making. For this purpose, “replace” means to make a decision without human involvement. To qualify as human involvement, a human must:

  1. know how to interpret and use the technology’s output to make the decision;
  2. review and analyze the output of the technology, and any other information that is relevant to make or change the decision; and
  3. have the authority to make or change the decision based on their analysis in (2).

CCPA-covered businesses that use ADMT to make “significant decisions” about consumers have several new compliance obligations to navigate. A “significant decision” is defined as a decision that has important consequences for a consumer’s life, opportunities, or access to essential services. CCPA regulations define these decisions as those that result in the provision or denial of:

  • Financial or lending services (e.g., credit approval, loan eligibility)
  • Housing (e.g., rental applications, mortgage decisions)
  • Education enrollment or opportunities (e.g., admissions decisions)
  • Employment or independent contracting opportunities or compensation (e.g., hiring, promotions, work assignments)
  • Healthcare services (e.g., treatment eligibility, insurance coverage)

These decisions are considered “significant” because they directly affect a consumer’s economic, health, or personal well-being.

When such businesses use ADMT to make significant decisions, they generally must do the following:

  • Provide an opt-out right for consumers.
  • Provide a pre-use notice that clearly explains the business’s use of ADMT, in plain language.
  • Provide consumers with the ability to request information about the business’s use of ADMT.

Businesses using ADMT for significant decisions before January 1, 2027, must comply by January 1, 2027. Businesses that begin using ADMT after January 1, 2027, must comply immediately when the use begins.

Businesses will need to examine these new requirements carefully, including how they fit into the existing CCPA compliance framework, along with exceptions that may apply. For example, in the case of a consumer’s right to opt-out of ADMT, a business may not be required to make that right available.

If a business provides consumers with a method to appeal the ADMT decision to a human reviewer who has the authority to overturn the decision, opt-out is not required. Additionally, the right to opt-out of ADMT in connection with certain admission, acceptance, or hiring decisions, is not required if the following are satisfied:

  • the business uses ADMT solely for the business’s assessment of the consumer’s ability to perform at work or in an educational program to determine whether to admit, accept, or hire them; and
  • the ADMT works as intended for the business’s proposed use and does not unlawfully discriminate based upon protected characteristics.

Likewise, the right to opt-out of ADMT is not required for certain allocation/assignment of work and compensation decisions, if the business:

  • uses the ADMT solely for the business’s allocation/assignment of work or compensation; and
  • the ADMT works for the business’s purpose and does not unlawfully discriminate based upon protected characteristics.
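
Taken together, the appeal safe harbor and the two use-based exceptions reduce to a short decision rule. The following is a simplified sketch of the logic summarized above; the parameter names are ours, and the actual regulatory conditions are more detailed than these booleans suggest.

```python
def admt_optout_required(provides_human_appeal: bool,
                         solely_assesses_work_or_admission_fitness: bool,
                         solely_allocates_work_or_compensation: bool,
                         works_as_intended: bool,
                         does_not_unlawfully_discriminate: bool) -> bool:
    """Return True if the business must offer the ADMT opt-out right."""
    # Appeal to a human reviewer with authority to overturn: no opt-out needed.
    if provides_human_appeal:
        return False
    # Both use-based exceptions require that the ADMT work as intended and
    # not unlawfully discriminate based on protected characteristics.
    evaluation_ok = works_as_intended and does_not_unlawfully_discriminate
    if evaluation_ok and (solely_assesses_work_or_admission_fitness
                          or solely_allocates_work_or_compensation):
        return False
    return True
```

Note that the use-based exceptions fail if either evaluation condition fails: an ADMT that works as intended but produces unlawfully discriminatory outcomes does not qualify, and the opt-out right must be offered.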

As many businesses are realizing, successfully deploying AI requires a coordinated approach to achieve more than getting the desired output. It includes understanding a complex regulatory environment of which data privacy and security is a significant part.

A new Senate bill, the AI-Related Job Impacts Clarity Act (S. 3108), would create a federal reporting framework for how artificial intelligence (AI) is affecting employment in the United States.

The aim is to produce timely, public data on AI-driven layoffs, hiring, unfilled roles, and retraining, with the Department of Labor (through the Bureau of Labor Statistics) responsible for collecting and publishing regular reports.

The bill is only in its early stages, but the following is an overview of the proposed law to date.

Who is covered?

Initially, “covered entities” include publicly traded companies and federal agencies. The bill also contemplates bringing certain non-publicly traded companies into scope through rulemaking within 180 days of enactment. That rulemaking must consult the U.S. Securities and Exchange Commission (SEC) and Treasury and consider factors such as workforce size, revenue, industry classification, and overall employment impact, while ensuring any requirements are proportionate and protect proprietary or personally identifiable information. 

What must be reported and when?

Under the proposed law, covered entities would make quarterly disclosures to the Secretary of Labor no later than 30 days after each quarter’s end. The required content focuses on AI-related job impacts in the United States (including territories), specifically:

  • The number of individuals laid off substantially due to AI replacing or automating their job functions.
  • The number of individuals hired substantially due to the incorporation of AI.
  • The number of previously occupied positions the company decided not to refill substantially due to AI automation.
  • The number of individuals retrained, or assisted in retraining, substantially due to AI.

For each disclosure item, companies must include the relevant North American Industry Classification System (NAICS) codes. 

How will reporting be collected and used?

The Secretary would be permitted to integrate these disclosures into existing Department of Labor or Census Bureau surveys and allow companies to comply via those surveys. If the Census Bureau runs the survey independently, it must share the AI-impact data with Labor each quarter to enable reporting. Labor must publish quarterly summaries and an annual year-end rollup, plus every other quarter a net-impact analysis that combines disclosure data with other relevant information. Reports and underlying data must be published on the BLS website and submitted to Congress within 60 days after each quarter’s end. 

Jackson Lewis will continue to track this and other legislation related to AI. If you have questions about this bill or related issues, contact a Jackson Lewis attorney to discuss.

Leaders charged with safeguarding data privacy and cybersecurity often assume that size equates to security—that large, well-resourced organizations must have airtight defenses against cyberattacks and data breaches. It’s a natural assumption: mature enterprises tend to have robust policies, advanced technology, and deep security teams. Yet, as recent events remind us, even the biggest organizations can be compromised. Sophistication and scale do not guarantee immunity.

On October 21, 2025, the New York Department of Financial Services (DFS) issued guidance on managing risks associated with third-party service providers, urging the entities they regulate to take a more active role in assessing and monitoring their vendors’ cybersecurity practices.

The message is clear: strong internal controls are only as good as the weakest external connection. An organization’s exposure to risk extends well beyond its own systems and policies. It’s a message that entities beyond those regulated by DFS should heed. Consider, for example, the DOL mandate that affects any organization sponsoring an ERISA-covered employee benefit plan – fiduciaries must assess the cybersecurity of plan service providers.

DFS emphasizes that third-party relationships—whether for data hosting, software development, cloud services, or payment processing—must be governed by a structured risk-management framework. The guidance highlights several key components: thorough vendor due diligence before onboarding, contractual provisions addressing cybersecurity responsibilities, ongoing monitoring of vendors’ controls, and incident-response coordination. These expectations are not new, but DFS’s renewed attention signals that regulators continue to see third-party risk as a critical vulnerability.

Importantly, the guidance reminds organizations that performing these steps is not just a compliance exercise—it’s a form of self-protection. Even when a company has invested heavily in its own cybersecurity defenses, it can still be affected by a breach through a vendor’s compromised system or careless employee. The reputational and financial fallout from such an event can be just as severe as if the company’s own network had been directly attacked.

Organizations can take several practical steps in response:

  • Assess vendor criticality and data access. Identify which vendors handle sensitive information or provide essential services. DFS suggests that entities classify vendors based on the vendor’s risk profile, considering factors such as system access, data sensitivity, location, and how critical the service is to the entity’s operations. Again, this is a step all organizations should consider when evaluating their vendors.
  • Require detailed cybersecurity questionnaires or certifications. Review vendors’ security controls, policies, and incident-response plans.
  • Incorporate strong contract provisions. Ensure that agreements specify breach notification timelines, audit rights, and responsibilities for remediation costs. The DFS guidance includes several examples of baseline contract provisions, including how AI may be used in the course of performing services. There also are other important provisions DFS does not specifically call out, such as indemnity, insurance requirements, and limitation of liability. Organizations should have qualified counsel review these critical provisions to help ensure contract terms do not stray too far from initial proposals and assurances.
  • Monitor continuously. Risk assessments should not be one-time exercises; regular reviews and periodic attestations help keep oversight current. Third-party service providers experience personnel changes, system updates, new offerings, and financial challenges during the term of a services agreement. These and other factors are likely to have an impact on data privacy and cybersecurity efforts.
  • Plan for the worst. Integrate vendors into incident-response exercises so all parties understand roles and communication channels in a breach.

By taking these steps, organizations not only strengthen their own resilience but also build a defensible position if litigation follows a third-party breach. Courts and regulators increasingly look for evidence that a company acted reasonably in selecting and managing its vendors.

The DFS guidance serves as a reminder that in today’s interconnected environment, no organization can outsource accountability for cybersecurity. Vigilant oversight of third-party relationships is not simply a best practice—it’s an operational necessity.

Key Takeaways

  • Outlines basic steps to determine whether a business may need to perform a risk assessment under the California Consumer Privacy Act (CCPA) in connection with its use of dashcams
  • Provides a resource for exploring the basic requirements for conducting and reporting risk assessments

If you have not reviewed the recently approved, updated CCPA regulations, you might want to soon. There are several new requirements, along with many modifications and clarifications to existing rules. In this post, we discuss a new requirement – performing risk assessments – in the context of dashcam and related fleet management technologies.

In short, when performing a risk assessment, the business needs to assess whether the risk to consumer privacy from the processing of personal information outweighs the benefits to consumers, the business, others, and the public, and, if so, restrict or prohibit that processing, as appropriate.

Of course, the first step to determine whether a business needs to perform a risk assessment under the CCPA is to determine whether the CCPA applies to the business. We discussed those basic requirements in Part 1 of our post on risk assessments under the CCPA.

If you are still reading, you have probably determined that your organization is a “business” covered by the CCPA and, possibly, your business is using certain fleet management technologies, such as dashcam or other vehicle tracking technologies. Even if that is not the case, the remainder of this post may be of interest for “businesses” under the CCPA that are curious about examples applying the new risk assessment requirement.

As discussed in Part 1 of our post on the basics of CCPA risk assessments, businesses are required to perform risk assessments when their processing of personal information presents “significant risk” to consumer privacy. The regulations set out certain types of processing activities involving personal information that would trigger a risk assessment. Depending on the nature and scope of the dashcam technology deployed, a business should consider whether a risk assessment is required.

Dashcams and similar devices increasingly come with an array of features. As the name suggests, these devices include cameras that can record activity inside and outside the vehicle. They also can be equipped with audio recording capabilities permitting the recording of voice in and outside the vehicle. Additionally, dashcams can play a role in logistics, as they often include GPS technology, and they can contribute significantly to worker and public safety through telematics. In general, telematics help businesses understand how the vehicle is being driven – acceleration, hard stops, swerving, etc. More recently, dashcams can have biometrics and AI technologies embedded in them. A facial scan can help determine if the driver is authorized to be driving that vehicle. AI technology also might be used to help determine if the driver is driving safely – is the driver falling asleep, eating, using their phone, wearing a seatbelt, and so on.

Depending on how a dashcam is equipped or configured, businesses subject to the CCPA should consider whether the dashcam involves the processing of personal information that requires a risk assessment.

For instance, a risk assessment is required when processing “sensitive personal information.” Remember that sensitive personal information includes, among other elements, precise geolocation data and biometric information for identifying an individual. While the regulations include an exception for certain employment-related processing, businesses would have to assess whether those exceptions apply.

Another example of processing personal information that requires a risk assessment is profiling a consumer through “systematic observation” of that consumer when they are acting in their capacity as an educational program applicant, job applicant, student, employee, or independent contractor for the business. The regulations define “systematic observation” to mean:

methodical and regular or continuous observation. This includes, for example, methodical and regular or continuous observation using Wi-Fi or Bluetooth tracking, radio frequency identification, drones, video or audio recording or live-streaming, technologies that enable physical or biological identification or profiling; and geofencing, location trackers, or license-plate recognition.

The regulation also defines profiling as:

any form of automated processing of personal information to evaluate certain personal aspects (including intelligence, ability, aptitude, predispositions) relating to a natural person and in particular to analyze or predict aspects concerning that natural person’s performance at work, economic situation, health (including mental health), personal preferences, interests, reliability, predispositions, behavior, location, or movements.

Considering the range of use cases for vehicle/fleet tracking technologies, and depending on their capabilities and configurations, it is conceivable that in some cases the processing of personal information by such technology could be considered a “significant risk,” requiring a risk assessment under the CCPA.

In that case, Part 2 of our post on risk assessments outlines the steps a business needs to take to conduct a risk assessment, including what must be included in the required risk assessment report, and timely certifying the assessment to the California Privacy Protection Agency.

It is important to note that this is only one of a myriad of potential processing activities that businesses engage in that might trigger a risk assessment requirement. Businesses will need to identify those activities and assess next steps. If the business finds comparable activities, it may be able to minimize the risk assessment burden by conducting a single assessment for those comparable activities.

Again, the new CCPA regulations represent a fundamental shift toward proactive privacy governance under the CCPA. Rather than simply reacting to consumer requests and data breaches, covered businesses must now systematically evaluate and document the privacy implications of their data processing activities before they begin. With compliance deadlines approaching in 2026, organizations should begin now to establish the cross-functional processes, documentation practices, and governance structures necessary to meet these new obligations.

As we discussed in Part 1 of this post, the California Privacy Protection Agency (CPPA) has approved significant updates to California Consumer Privacy Act (CCPA) regulations, which were formally approved by the California Office of Administrative Law on September 23, 2025. We began to outline the requirements for a significant new obligation under the CCPA – namely, the obligation to conduct a risk assessment for certain activities involving the processing of personal information.

In Part 1, we summarized the rules that determine when a risk assessment requirement would apply – that is, when covered businesses process personal information that presents a “significant risk.” In this Part 2, we will summarize the requirements for conducting a compliant risk assessment. These include:

  • Determining which stakeholders should be involved in the risk assessment process and how
  • Establishing appropriate purposes and objectives for conducting the risk assessment
  • Satisfying timing and record keeping obligations
  • Preparing risk assessment reports that meet certain content requirements
  • Timely submitting certifications of required risk assessments to the CPPA

Who Must Be Involved in the Risk Assessment?

The regulations emphasize a collaborative, multi-stakeholder approach to risk assessments. Businesses must involve relevant stakeholders whose duties include the specific processing activity that necessitated the risk assessment. For example, a business should include the person who determined how to collect the personal information for the processing that triggered the risk assessment obligation. A business also may include third parties involved in the risk assessment process, such as experts in detecting and mitigating bias in automated decision-making tools (ADMT).  

Establishing appropriate purposes and objectives for conducting the risk assessment

According to the new regulations:

The goal of a risk assessment is restricting or prohibiting the processing of personal information if the risks to consumer privacy outweigh the benefits resulting from processing to the consumer, the business, other stakeholders, and the public.

In working toward that goal, businesses need to identify the purpose of the risk assessment. That purpose cannot be generic – “we are conducting this risk assessment to improve our services.” Rather, the stated purpose must be more specific. Suppose a business would like to systematically observe an employee when processing store purchases (whether physically at the register or online as a call center employee) in an effort to decrease consumer wait times. The business would need to do more than simply state the purpose as “improving service”; instead, it might identify decreasing consumer wait times for processing purchases as the relevant purpose.

Satisfying timing and record keeping obligations

In general, risk assessments must be completed before initiating the processing activity that triggers the requirement. This proactive approach ensures that businesses evaluate privacy risks before they materialize rather than retrofitting assessments after the fact.

Note that businesses may need to conduct a risk assessment for activities they initiated prior to January 1, 2026. More specifically, in the case of processing activities triggering a risk assessment requirement (see Part 1) that the business initiated prior to January 1, 2026 and that continues after January 1, 2026, the business must conduct and document a risk assessment no later than December 31, 2027.

Once completed, risk assessments must be reviewed and updated at least every three years. However, if material changes occur to the processing activity, businesses must update the assessment within 45 days of the change. Material changes might include significant increases in the volume of personal information processed, new uses of the data, or changes to the technologies employed.

Businesses must retain risk assessment documentation for as long as the processing continues or for five years after completing the assessment, whichever is longer. This extended retention period recognizes that risk assessments may be relevant to future enforcement actions or litigation.

Preparing risk assessment reports that meet certain content requirements

Importantly, risk assessments must result in documented reports that reflect the input and analysis of diverse perspectives. The regulations require identifying the individuals who provided information for the assessment (excluding legal counsel to preserve attorney-client privilege) as well as the date, names, and positions of those who reviewed and approved the assessment. This documentation requirement ensures accountability and demonstrates that the assessment received appropriate organizational attention.

Specifically, the regulations prescribe detailed content requirements for risk assessment reports. Each assessment must document the following elements:

  • The specific purpose of processing in concrete terms rather than generic descriptions. As noted above, businesses cannot simply state that they process data “for business purposes” but must articulate the precise objectives, such as “to provide personalized product recommendations based on browsing history and purchase patterns.”
  • The categories of personal and sensitive personal information processed, including documentation of the minimum necessary information required to achieve the stated purpose. This requirement operationalizes data minimization principles by forcing businesses to justify each category of data collected.
  • The operational elements of the processing, including the method of collecting personal information, retention periods, the number of consumers affected, and any disclosures to consumers about the processing. This provides a comprehensive view of the data lifecycle. In the case of ADMT, any assumptions or limitations on the logic, and how the business will use the ADMT output, must be included.
  • The benefits from the processing to both the business and consumers. Businesses must articulate what value the processing creates, whether through improved services, enhanced security, cost savings, or other outcomes.
  • The negative impacts to consumers’ privacy associated with the processing. This critical element requires honest assessment of risks such as unauthorized access, discriminatory outcomes, loss of autonomy, surveillance concerns, or reputational harm.
  • Safeguards the business will implement to mitigate identified negative impacts. These might include technical controls like encryption and access restrictions; organizational measures like privacy training and incident response plans; or procedural safeguards like human review of automated decisions.
  • Whether the business will proceed with the processing after weighing the benefits against the risks. The CPPA has explicitly stated that the goal of risk assessments is to restrict or prohibit processing when risks to consumer privacy outweigh the benefits. This represents a substantive requirement, not merely a documentation exercise.
  • The individuals who provided information for the assessment (excluding legal counsel), along with the date, names, and positions of those who reviewed and approved it. This creates an audit trail demonstrating organizational engagement with the process.

Note that businesses may leverage risk assessments prepared for other regulatory frameworks, such as data protection impact assessments under the GDPR or privacy threshold analyses for federal agencies. However, those other assessments must contain the required information or be supplemented with any outstanding elements.

Timely submitting certifications of required risk assessments to the CPPA

Businesses required to complete a risk assessment must submit certain information to the CPPA on a phased schedule. For risk assessments conducted in 2026 and 2027, businesses must submit required information to the CPPA by April 1, 2028. For assessments conducted after 2027, submissions are due by April 1 of the following year. These submissions must include a point of contact, timing of the risk assessment, categories of personal and sensitive personal information covered, and identification of the executive management team member responsible for the assessment’s compliance.

As noted in Part 1, the new CCPA regulations represent a fundamental shift toward proactive privacy governance under the CCPA. Rather than simply reacting to consumer requests and data breaches, covered businesses must now systematically evaluate and document the privacy implications of their data processing activities before they begin. With compliance deadlines approaching in 2026, organizations should begin now to establish the cross-functional processes, documentation practices, and governance structures necessary to meet these new obligations.