U.S. organizations have long focused on federal requirements governing international data transfers. But a growing wave of state enforcement—particularly in Florida and Texas—signals that regulators are increasingly scrutinizing how companies move sensitive data outside the United States, especially when foreign adversaries may be involved. Recent developments suggest organizations should reassess their data flows, vendor relationships, and ownership structures to understand where sensitive information may ultimately land.

Federal Rule Raises the Stakes on Cross-Border Data Transfers

The Department of Justice (DOJ) took a significant step in 2024 when it began implementing regulations restricting certain outbound transfers of sensitive U.S. personal data to entities linked to “countries of concern,” including China, Iran, and North Korea. The rule targets transfers of large volumes of sensitive data—such as precise location data, biometric identifiers, genomic data, and other categories—where access by foreign adversaries could pose national security risks.

As discussed in our earlier analysis of the rule, the framework focuses on transactions involving “covered data” and “covered persons,” and in some cases prohibits transfers outright or requires companies to implement security controls, diligence processes, and recordkeeping obligations. Organizations subject to the rule must examine their vendor relationships, data brokerage arrangements, and service provider agreements to determine whether the transfers fall within the regulation’s scope.

Yet while the DOJ rule represents a significant federal development, enforcement activity suggests that federal regulators are only part of the story.

States Filling the Enforcement Gap

States are increasingly stepping into what some see as a federal enforcement gap. According to recent reports, states have launched more than a dozen investigations or lawsuits related to U.S. consumer data transfers to China or other foreign actors. These actions have targeted companies across multiple sectors—not just traditional data brokers, but also firms handling consumer electronics, genetic data, and online marketplaces.

State regulators often lack explicit authority over national security concerns. As a result, they are using other tools, including consumer protection laws, unfair or deceptive practices statutes, and state privacy statutes, to investigate companies whose data practices may expose Americans’ information to foreign entities.

Texas has been among the most aggressive jurisdictions, filing actions against several companies, illustrating how states may combine allegations related to privacy practices with broader consumer protection claims. Florida, meanwhile, is emerging as another focal point for state enforcement.

Florida Launches Dedicated Unit Targeting Foreign Data Risks

In February 2026, Florida Attorney General James Uthmeier announced the creation of a new enforcement team dedicated to investigating foreign access to Americans’ data. The initiative—called the Consumer Harm from International and Nefarious Actors (CHINA) unit—will pursue both civil and criminal investigations involving foreign corporations’ data practices.

The new unit plans to focus heavily on companies that collect sensitive personal information, including biometric and demographic data. Health care organizations, in particular, may face heightened scrutiny given the sensitivity of the information they handle.

According to the attorney general’s office, the unit will ramp up subpoenas, investigations, and lawsuits under Florida consumer protection laws. The effort is designed not only to address potential risks within Florida but also to serve as a model for other states considering similar initiatives.

Florida’s Investigation Into Lorex Signals Broader Scrutiny

Florida has already begun investigating companies suspected of exposing consumer data to foreign surveillance risks. One notable example is Lorex Corp., a surveillance camera manufacturer that has faced investigations and litigation in several states over alleged connections to Chinese ownership.

As part of Florida’s inquiry, authorities reportedly compelled the company to produce extensive information about its corporate structure, contracts, and software architecture. The investigation highlights a growing focus on how foreign ownership structures or technological dependencies could create pathways for sensitive data to leave the United States.

For organizations, the Lorex matter underscores a key compliance issue: regulators are looking beyond privacy notices and security practices to evaluate who ultimately has access to data—including corporate affiliates, overseas vendors, and parent companies.

Florida’s Offshore Data Law Adds Another Layer

Florida has also enacted legislation restricting certain transfers of health data outside the United States, sometimes referred to as the state’s “Offshore Data” restrictions. The law prohibits healthcare providers using certified electronic health record technology (CEHRT) from storing personal health information outside the United States, its territories, or Canada.

When combined with the DOJ rule and the state’s new enforcement unit, these laws create a regulatory environment in which organizations operating in Florida—or handling data about Florida residents—may face multiple overlapping compliance obligations.

Practical Takeaways for Organizations

These developments highlight a critical shift in how regulators view cross-border data transfers. Organizations should consider taking several steps:

  • Map data flows. Companies should understand where sensitive data is stored, processed, and transmitted—including by vendors and subcontractors (a simple inventory sketch follows this list).
  • Assess vendor and ownership risks. Regulators are paying closer attention to foreign ownership interests, corporate affiliations, and data access rights.
  • Review contracts and technical controls. Agreements with service providers should address cross-border data transfers and incorporate appropriate safeguards.
  • Monitor state developments. State enforcement efforts are expanding rapidly and may reach companies that previously focused primarily on federal requirements.
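
To make the data-mapping step concrete, a structured inventory helps. The TypeScript sketch below is illustrative only: the record fields, the flagForReview helper, and the two-letter country codes are assumptions for the example, not the DOJ rule’s legal definitions of covered data, covered persons, or countries of concern.

    // Illustrative data-flow inventory record; confirm the designated
    // countries of concern against the current DOJ regulation.
    interface DataFlow {
      system: string;             // e.g., "CRM", "HRIS"
      dataCategories: string[];   // e.g., ["precise geolocation", "biometrics"]
      recipient: string;          // vendor, affiliate, or parent company
      recipientCountry: string;   // where the data is ultimately accessible
      foreignOwnershipNoted: boolean;
    }

    const COUNTRIES_OF_CONCERN = new Set(["CN", "CU", "IR", "KP", "RU", "VE"]);

    // Flag flows that warrant legal review, whether because the data lands in
    // a country of concern or because the recipient has foreign ownership.
    function flagForReview(flows: DataFlow[]): DataFlow[] {
      return flows.filter(
        (f) => COUNTRIES_OF_CONCERN.has(f.recipientCountry) || f.foreignOwnershipNoted
      );
    }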

The combined pressure from federal regulators and an increasingly active group of state attorneys general suggests that scrutiny of foreign data transfers is likely to intensify. As states continue to explore creative ways to regulate cross-border data flows, organizations may find that compliance requires not only understanding where their data goes—but also who ultimately controls it.

As Data Privacy Day 2026 approaches, organizations face an inflection point in privacy, artificial intelligence, and cybersecurity compliance. The pace of technological adoption, in particular AI tools, continues to outstrip legal, governance, and risk frameworks. At the same time, regulators, plaintiffs, and businesses are increasingly focused on how data is collected, used, monitored, and safeguarded.

Below are our Top 10 Privacy, AI, and Cybersecurity Issues for 2026.

1. AI Governance Becomes Operational and Enforceable

AI governance in 2026 will be judged less by aspirational principles and more by documented processes, controls, and accountability. Organizations using AI for recruiting, managing performance, improving efficiency and security, and creating content, among a myriad of other use cases, will be expected to demonstrate how AI systems are developed, deployed, and governed, considering a global patchwork of existing and emerging laws and regulations affecting AI and related technologies.

Action items for 2026:

  • Maintain an enterprise AI inventory, including shadow or embedded AI features.
  • Classify AI systems by risk and use case (HR, monitoring, security, consumer-facing).
  • Establish cross-functional AI governance (legal, privacy/infosec, HR, marketing, finance, operations).
  • Implement documentation and review processes for high-risk AI systems.

2. AI-Driven Workplace Monitoring Under Scrutiny

AI-enabled monitoring tools (dashcams, performance management solutions, wearables, etc.) are increasingly used to track productivity, behavior, communications, and engagement. These tools raise heightened concerns around employee privacy, fairness, transparency, and proportionality, especially when AI generates insights or scores that influence employment decisions.

Regulators and plaintiffs are paying closer attention to whether monitoring involves over-collection by design, and whether AI outputs are explainable and defensible.

Action items for 2026:

  • Audit existing monitoring and productivity tools for AI functionality.
  • Assess whether monitoring practices align with data minimization principles.
  • Update employee notices and policies to clearly explain AI-driven monitoring.
  • Ensure human review and appeal mechanisms for AI-influenced decisions.

3. Biometrics Expand and So Does Legal Exposure

Biometric data collection continues to expand beyond fingerprints and facial recognition to include voiceprints, behavioral identifiers, and AI-derived biometric inferences. Litigation under Illinois’ Biometric Information Privacy Act (BIPA) remains active, but risk is spreading through broader definitions of sensitive data in state privacy laws.

Action items for 2026:

  • Identify all biometric and biometric-adjacent data collected directly or indirectly.
  • Review vendor tools to ensure compliance.
  • Update biometric notices, consent processes, and retention schedules.
  • Align biometric compliance efforts with broader privacy programs.

4. CIPA Litigation and Website Tracking Technologies Continue to Evolve

California Invasion of Privacy Act (CIPA) litigation related to session replay tools, chat features, analytics platforms, and tracking pixels remains a major risk area, even as legal theories evolve. AI-enhanced tracking tools that capture richer interactions only heighten exposure. Organizations often underestimate the privacy implications of seemingly routine website and chatbot technologies.

Action items for 2026:

  • Conduct a comprehensive audit of website and app tracking technologies (see the sketch after this list).
  • Reassess consent banners, disclosures, and opt-out mechanisms.
  • Evaluate AI-enabled chatbots and analytics for interception risks.
  • Monitor litigation trends and adjust risk tolerance accordingly.
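
As a first pass at the audit item above, a short script run in the browser console can enumerate the third-party resources a page loads. This is a minimal TypeScript sketch, not a substitute for dedicated scanning tools: it sees only script, iframe, and image sources present in the DOM and will miss dynamically injected trackers, fetch/XHR beacons, and cookies.

    // List third-party hosts referenced by script, iframe, and image tags.
    const firstParty = location.hostname;
    const thirdPartyHosts = new Set<string>();

    document.querySelectorAll("script[src], iframe[src], img[src]").forEach((el) => {
      const src = el.getAttribute("src");
      if (!src) return;
      try {
        const host = new URL(src, location.href).hostname;
        if (host !== firstParty) thirdPartyHosts.add(host);
      } catch {
        // ignore malformed URLs
      }
    });

    // Review each host against your approved vendor list.
    console.table([...thirdPartyHosts].sort());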

5. State Comprehensive Privacy Laws Enter an Implementation and Enforcement Phase

Organizations are no longer preparing for state privacy laws; they are living under them. The California Consumer Privacy Act (CCPA), along with other state laws, imposes increasing operational obligations.

California’s risk assessment requirements, cybersecurity audit mandates, and automated decision-making technology (ADMT) regulations represent a significant shift toward proactive compliance.

Action items for 2026:

  • Comply with annual review and update requirements.
  • Conduct CCPA-mandated risk assessments for high-risk processing.
  • Prepare for cybersecurity audit obligations and documentation expectations.
  • Inventory and assess ADMT used in employment, monitoring, and consumer contexts.

6. Data Minimization Becomes One of the Most Challenging Compliance Obligations

Data minimization has moved from an abstract compliance principle to a central operational challenge. Modern AI systems, monitoring tools, and security platforms are frequently architected to collect and retain expansive datasets by default, even when narrower data sets would suffice. This design approach increasingly conflicts with legal obligations that require organizations to limit data collection to what is necessary, proportionate, and purpose-specific, not only in terms of retention, but at the point of collection itself. As regulatory scrutiny intensifies, organizations must be prepared to explain why specific categories of data were collected, how those decisions align with defined business purposes, and whether less intrusive alternatives were reasonably available.

Action items for 2026:

  • Reassess data collection across AI, HR, and security systems.
  • Implement retention limits and transfer restrictions tied to business necessity and legal risk.
  • Challenge “collect now, justify later” deployments that rely on large-scale or continuous data exports.
  • Integrate data minimization and Bulk Data Transfer rule analysis into AI governance and system design reviews.

7. Importance of the DOJ Bulk Transfer Rule

In 2026, bulk sensitive data transfers are no longer a background compliance issue but a regulated risk category in their own right. Under the Department of Justice’s Bulk Data Transfer Rule, which took effect in 2025, organizations must closely assess whether large-scale transfers or access to U.S. sensitive personal or government-related data involve countries of concern or covered persons. The rule reaches a wide range of transactions, including vendor, employment, and service arrangements, and imposes affirmative obligations around due diligence, access controls, and ongoing monitoring.

Action items for 2026:

  • Update data mapping activities to include sensitive data collection and data storage.
  • Catalog where bulk data transfers occur, including transfers between internal systems, vendors, and cross-border environments.
  • Develop a compliance program that includes due diligence steps, vendor agreement language, and internal access controls.
  • Evaluate the purpose of each bulk transfer.

8. UK and EU Data Protection Law Reforms

Recent and proposed amendments to UK and EU data protection laws are designed to clarify or simplify compliance obligations for organizations, regardless of sector. Changes will impact both commercial and workplace data handling practices.   

UK: Data Use and Access Act (DUAA)

The UK has enacted the Data Use and Access Act, which amends key provisions of the UK General Data Protection Regulation (UK GDPR) and the Privacy and Electronic Communications Regulations (PECR). These reforms relate to subject access requests and complaints, automated processing, the lawful basis to process, cookies, direct marketing, and cross-border transfers, among others. Implementation is occurring in stages, with changes relating to subject access requests, complaints, and automated decision-making taking effect over the next few months.

EU: Digital Omnibus Regulation

The European Commission has proposed a Digital Omnibus Regulation, which introduces amendments to the EU General Data Protection Regulation. Proposed changes include redefining “personal data”, simplifying the personal data breach notification process, clarifying the data subject access process, and managing cookies.

Action items for 2026:

  • Review forthcoming guidance from the UK Information Commissioner’s Office.
    • Implement a data subject complaint process.
    • Review existing lawful bases and purposes for processing.
    • Prepare any necessary updates for employee training.
  • Monitor the progress of the proposed Digital Omnibus Regulation.
    • Review data inventories in the event the definition of personal data is revised.
    • Update data subject access response processes.
    • Review the use and nature of any cookies deployed on the organization’s website.

9. Vendor and Third-Party AI Risk Management Intensifies

Most organizations buy rather than build AI technologies. They buy from vendors such as recruiting platforms, notetaking tools, monitoring applications, cybersecurity providers, and analytics services—whose systems depend on large-scale data ingestion. From procurement to MSA negotiation to record retention obligations, novel and challenging issues arise as organizations seek to minimize third-party and fourth-party service provider risk. Importantly, vendor contracts have not kept pace with the nature of AI models or how to allocate risk.

Action items for 2026:

  • Update vendor diligence to include privacy, security, and AI-specific risk assessments.
  • Revise contracts to address AI training data, secondary use, audit rights, and allocation of liability.
  • Monitor downstream data sharing, model updates, and cross-border or large-scale data movements.

10. Privacy, AI, and Cybersecurity Fully Converge

In 2026, the lines between privacy, cybersecurity, and AI will continue to blur, leaving organizations that silo these disciplines to face increasing regulatory, litigation, and operational risk.

Action items for 2026:

  • Integrate privacy, AI governance, and cybersecurity leadership.
  • Harmonize risk assessments and reporting structures.
  • Align training and compliance messaging across functions.
  • Treat privacy and AI governance as enterprise risk issues.

As Data Privacy Day 2026 highlights, the challenge is no longer identifying emerging risks but managing them at scale, across systems, and in real time. AI, biometrics, monitoring technologies, and expanding privacy laws demand a more mature, integrated approach to compliance and governance.

A blend of evolving judicial interpretation, aggressive plaintiffs’ counsel, and decades-old statutory language has brought new life to the Florida Security of Communications Act (FSCA) as a vehicle for challenging commonplace website technologies.

At its core, the FSCA was enacted to protect privacy by prohibiting the unauthorized interception of wire, oral, or electronic communications — with far stricter requirements than federal law. Unlike the federal Wiretap Act (which allows one-party consent), Florida typically requires all-party consent before recording or intercepting electronic communications. The FSCA also generally prohibits the interception of any wire, oral, or electronic communications, as well as the use and disclosure of unlawfully intercepted communications “knowing or having reason to know that the information was obtained through the interception of a wire, oral, or electronic communication.”

The New Wave of FSCA Claims

For plaintiffs, an attractive provision of the FSCA is that actual damages need not be established to recover for violations. Under the FSCA, a plaintiff can recover liquidated damages of at least $1,000 for violations without a showing of actual harm, as well as punitive damages and attorneys’ fees. One need only examine the explosion of litigation under other laws with similar damages provisions (e.g., the California Invasion of Privacy Act (CIPA), Telephone Consumer Protection Act (TCPA), Illinois Biometric Information Privacy Act (BIPA), the Illinois Genetic Information Privacy Act (GIPA)) to see this model in action.

For years, courts were reluctant to apply the FSCA to digital technologies like website trackers or analytics tools. Courts routinely dismissed early FSCA lawsuits targeting session-replay software and cookies—finding that these tools didn’t intercept the “contents” of communications in a manner the statute was meant to reach. See Jacome v. Spirit Airlines, Inc., No. 2021-000947-CA-01 (Fla. 11th Cir. Ct. June 17, 2021). This view may be shifting.

Recent cases suggest courts may be more open to digital wiretapping-type claims brought in Florida than previously indicated.

  • A nationwide class action pending in the Southern District of Florida, Cobbs v. PetMed Express, Inc., alleges that PetMed Express, an online veterinary pharmacy, used embedded tracking technologies that enabled third-party companies to capture information about consumers’ prescription-related browsing and purchase activity on its website. The tracking tools allegedly intercepted URLs, search queries, and personally identifiable information such as email addresses and phone numbers. This case highlights the growing litigation risks associated with embedded website tracking technologies – particularly when sensitive data such as prescription or health-related information is involved.
  • In Magenheim v. Nike, Inc., filed in December 2025 in the Southern District of Florida, the plaintiffs allege that Nike triggered undisclosed tracking technologies on visitors’ web browsers immediately upon visiting the website – before users could review privacy disclosures or provide consent – and even when users enabled Global Privacy Control (GPC) signals or selected “do not share my data” on the site. The lawsuit seeks certification of a class of all Florida visitors to Nike’s website over the past two years and underscores the increasing litigation risk surrounding online privacy expectations and the handling of browser-based tracking data.
  • In a lawsuit filed against a large health system in Florida and pending before the U.S. District Court for the Middle District of Florida, the plaintiff, a patient of that health system, alleges that the hospital system embedded tracking technologies within its website and patient portal. As pleaded in the putative class action, the tracking tools allegedly intercepted patients’ online queries regarding symptoms, treatments, and other health-related content. The FSCA and federal Wiretap Act claims survived a motion to dismiss, in line with the growing trend of courts scrutinizing the use of tracking technologies – particularly in the health care context.

What Courts Are Grappling With

At the heart of these disputes are questions that courts nationwide are wrestling with:

  • What constitutes an “interception” under an analog-era statute when applied to digital data?
  • Do URLs, clicks, form inputs, and other web interactions qualify as the “contents” of communications protected by wiretapping laws?
  • When (and whether) is consent provided via privacy notices or cookie banners sufficient to defeat a statutory wiretapping claim?

Courts have reached different answers, leaving Florida businesses in limbo, with the uncertainty driving an increase in claims from plaintiffs.

What This Means for Your Business

Whether you operate a website, mobile app, or digital marketing campaign, the Florida FSCA litigation trend shows no signs of slowing. To mitigate risks and avoid becoming a target of wiretapping claims, consider the following practical steps:

1. Audit All Tracking Technologies

Inventory all third-party pixels, session-replay tools, analytics scripts, and email tracking. Understand what data they capture, when it is transmitted, and which third parties receive it.

2. Reevaluate Your Consent Mechanisms

Passive privacy disclosures may not be enough. Use clear, affirmative consent mechanisms (e.g., click-to-accept banners) that disclose what is collected and how it is used before any tracking occurs.
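
One way to operationalize this is to load no tracking code until an affirmative opt-in exists, and to honor the browser’s Global Privacy Control signal first. The TypeScript sketch below is a simplified illustration: the storage key, tracker URL, and banner button ID are placeholders, and navigator.globalPrivacyControl is a draft browser signal that may be absent, so it is read defensively.

    // Returns true if the browser is broadcasting a GPC opt-out signal.
    function optedOutViaGPC(): boolean {
      const nav = navigator as unknown as { globalPrivacyControl?: boolean };
      return nav.globalPrivacyControl === true;
    }

    // Inject the tracking script only after consent checks pass.
    function loadTrackers(): void {
      const s = document.createElement("script");
      s.src = "https://example.com/analytics.js"; // placeholder tracker URL
      s.async = true;
      document.head.appendChild(s);
    }

    function initTracking(): void {
      if (optedOutViaGPC()) return; // honor GPC before anything loads
      if (localStorage.getItem("tracking-consent") !== "granted") return;
      loadTrackers(); // nothing fires without a recorded affirmative opt-in
    }

    // Hypothetical click-to-accept banner button (markup not shown).
    document.getElementById("accept-tracking")?.addEventListener("click", () => {
      localStorage.setItem("tracking-consent", "granted");
      initTracking();
    });

    initTracking();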

3. Limit Data to What’s Necessary – Minimization

Where possible, restrict the capture of high-risk data (e.g., URLs revealing sensitive information or form content) and weigh whether aggressive tracking is essential for business purposes.

4. Update Privacy Policies and Terms

Make your data collection and sharing practices transparent and easily accessible. Regularly update legal disclosures to mirror how tools actually function.

5. Tighten Vendor Contracts

Ensure contracts with analytics, marketing, and tracking vendors allocate compliance responsibility and include indemnification clauses where appropriate.

6. Monitor Legal Developments

Florida’s legal landscape is shifting rapidly. Maintain awareness of new decisions and legislative changes that may clarify or expand FSCA applicability.

Conclusion

The surge of digital wiretapping claims under the Florida Security of Communications Act illustrates how old statutes can take on new life in an era of ubiquitous data collection. What once was a niche privacy theory now threatens to expose businesses — large and small — to class action exposure and costly litigation.

By understanding the evolving legal landscape and implementing proactive compliance strategies, companies can better safeguard their digital practices and reduce the risk of costly FSCA claims.

As we have discussed in prior posts, AI-enabled smart glasses are rapidly evolving from niche wearables into powerful tools with broad workplace appeal — but their innovative capabilities bring equally significant legal and privacy concerns.

  • In Part 1, we addressed compliance issues that arise when these wearables collect biometric information.
  • In Part 2, we covered all-party consent requirements and AI notetaking technologies.
  • In Part 3, we considered broader privacy and surveillance issues, including from a labor law perspective.

In this Part 4, we consider the potentially vast amount of personal and other confidential data that may be collected, visually and audibly, through everyday use of this technology. Cybersecurity and data security risks more broadly pose another major, and often underestimated, exposure from this technology.

The Risk

AI smart glasses collect, analyze, and transmit enormous volumes of sensitive data—often continuously, and typically transmitting it to cloud-based servers operated by third parties. This creates a perfect storm of cybersecurity risk, regulatory exposure, and breach notification obligations under laws in all 50 states, as well as the CCPA, GDPR, and numerous sector-specific regulations, such as HIPAA for the healthcare industry.

Unlike traditional cameras or recording devices, AI glasses are designed to collect and process data in real time. Even when users believe they are not “recording,” the devices may still be capturing visual, audio, and contextual information for AI analysis, transcription, translation, or object recognition. That data is frequently transmitted to third-party AI providers with unclear security controls, retention practices, and secondary-use restrictions.

Many AI glasses explicitly rely on third-party AI services. For example, Brilliant Labs’ Frame glasses use ChatGPT to power their AI assistant, Noa, and disclose that multiple large language models may be involved in processing. In practice, this means sensitive business conversations, images, and metadata may leave the organization entirely—often without IT, security, or legal teams fully understanding where the data goes or how it is protected.

Use Cases at Risk

  • Hospital workers going on rounds with their team equipped with AI glasses that access, capture, and record patients, charts, wounds, and family members in electronic format, triggering the HIPAA Security Rule and state law obligations
  • Financial services employees wearing AI glasses that capture customer financial data, account numbers, or investment information
  • Any workplace use involving personally identifiable information (PII), such as Social Security numbers, credit card data, or medical information, as well as confidential business information of the company and/or its customers
  • Attorneys and legal professionals using AI glasses during privileged communications, potentially risking waiver of attorney-client privilege
  • Employees connecting AI glasses to unsecured or public Wi-Fi networks, creating man-in-the-middle attack risks
  • Lost or stolen AI glasses that store unencrypted audio, video, or contextual data

Why It Matters

Data breaches involving biometric data, health information, or financial data carry outsized legal and financial consequences. With AI glasses, as a practical matter, an entity generally is less likely to face a large-scale data breach affecting hundreds of thousands or millions of people. However, a breach and exposure of sensitive patient images, discussions, or other data captured with AI glasses could be just as, if not more, harmful to the reputation of a health system, for example, than an attack by a criminal threat actor. Beyond reputational harm, incident response costs, litigation, and regulatory penalties also remain a significant risk factor.

Shadow AI (the unauthorized use of artificial intelligence tools by employees in the workplace) also poses potential data security, breach, and third-party risks. Many devices sync automatically to consumer cloud accounts with security practices that employers neither control nor audit. When an employee uses personal AI glasses for work, fundamental questions often go unanswered: Where is the data stored? Is it encrypted? Who has access? How long is it retained? Is it used to train AI models?

Finally, the use of AI glasses can diminish the effects of a powerful data security tool – data minimization. Businesses will need to grapple with the question of whether constant, ambient data collection and recording aligns with data minimization, a principle woven into data privacy laws such as the California Consumer Privacy Act.

Practical Compliance Considerations

  • Implement clear policies: Be deliberate about whether to permit these wearables in the workplace. And, if so, establish policies limiting when and where they may be used, and what recording features can be activated and under what circumstances.
  • Perform an assessment: Conduct security and privacy assessments of specific AI glasses models before deployment
  • Understand third-party service provider risks: Review security documentation, including encryption practices, access controls, and incident response commitments
  • Understand obligations to customers: Review services agreements concerning the collection, processing, and security obligations for handling customer personal and confidential business information
  • Update incident response plans: Factor in wearable device compromises
  • For HIPAA Covered Entities and Business Associates: Confirm that AI glasses meet HIPAA requirements
  • Evaluate cyber insurance coverage: Assess whether your policy (assuming you have a cyber policy!) covers breaches involving wearable technology and AI-related risks

Conclusion

AI smart glasses may feel futuristic and convenient, but from a data security and compliance perspective, they dramatically expand an organization’s attack surface. Without careful controls, these devices can quietly introduce breach risks, third-party data sharing, and regulatory exposure that outweigh their perceived benefits.

The key is to approach the deployment of AI glasses (and deployment of similar technologies) with eyes wide open—understanding both the capabilities of the technology and the complex legal frameworks that govern their use. With thoughtful policies, robust technical controls, ongoing compliance monitoring, and respect for privacy rights, organizations can harness the benefits of AI glasses while managing the risks.

As we have discussed in prior posts, AI-enabled smart glasses are rapidly evolving from niche wearables into powerful tools with broad workplace appeal — but their innovative capabilities bring equally significant legal and privacy concerns. In Part 1, we addressed compliance issues that arise when these wearables collect biometric information. In Part 2, we covered all-party consent requirements and AI notetaking technologies.

In this Part 3, we consider broader privacy and surveillance issues, including from a labor law perspective. Left uncontrolled, the nature and capabilities of AI smart glasses open the door to a range of circumstances in which legal requirements as well as societal norms could be violated, even inadvertently. At the same time, a pervasive surveillance environment fueled by technologies such as AI smart glasses may spur arguments by some employees that their right to engage in protected concerted activity has been infringed.

The Risk

When employers provide AI glasses to employees or permit their use in the workplace, they can potentially create continuous and/or intrusive surveillance conditions that may violate the privacy rights of individuals they encounter, including employees, customers, and others. Various state statutes and common law doctrines limit surveillance, and new laws are emerging that target workplace surveillance technologies. For example, California Assembly Bill 1331, introduced in early 2025, sought to limit employer surveillance and enhance employee privacy. The bill would have banned monitoring in private off-duty spaces (like bathrooms and lactation rooms) and prohibited surveillance of homes or personal vehicles. California Governor Newsom vetoed the bill in October.

However, other California law, notably the California Consumer Privacy Act (CCPA), regulates surveillance involving certain personal information. Under the CCPA, continuous surveillance may trigger a risk assessment obligation. See more about that here. The CCPA, like the comprehensive privacy laws adopted in several other states, requires covered entities to communicate about the personal information they collect from residents of those states. Covered entities that permit employees to use these devices in the course of their employment may need to better understand the type of personal information those employees’ glasses are collecting.

The National Labor Relations Act (NLRA) generally establishes a right of employees to act with co-workers to address work-related issues. Widespread surveillance and recording could chill protected concerted activity – employees might be less likely to engage with other employees about working conditions under such circumstances. Of course, introducing AI glasses in the workplace may trigger an obligation to bargain under the NLRA.

Relevant Use Cases

  • Warehouse workers using AI glasses for inventory management that also track movement patterns, productivity metrics, and conversations of coworkers
  • School employees who use AI glasses while interacting with minor students in a range of circumstances
  • Field service technicians wearing glasses that record all customer interactions as well as communications with coworkers
  • Office workers using AI glasses with note-taking features during internal meetings, capturing discussions among employees
  • Healthcare workers in a variety of settings, purposefully or inadvertently, capturing images or data of patients and their families
  • Manufacturing employees whose glasses document work processes while also recording conversations with coworkers

Why It Matters:

Connecticut, Delaware, and New York require employers to notify employees of certain electronic monitoring. California’s CCPA gives employees specific rights over their personal information, including the right to know what’s collected and the right to deletion. These protections were strengthened in recently updated regulations under the California Privacy Rights Act, which created, among other things, an obligation to conduct and report on risk assessments performed in connection with certain surveillance activities.

Union environments face additional scrutiny. Surveillance may constitute an unfair labor practice requiring collective bargaining. The NLRB has issued guidance limiting employers’ ability to ban workplace recordings because such bans can interfere with protected rights. However, continuous AI-powered surveillance could still create a chilling effect that violates labor law.

Practical Compliance Considerations:

  • Implement clear policies: Be deliberate about whether to permit these wearables in the workplace. And, if so, establish policies limiting when and where they may be used, and what recording features can be activated and under what circumstances.
  • Provide notice: Provide written notice about AI glasses capabilities, including what data is collected, how it’s processed, and how it may be used.
  • Perform an assessment: Conduct privacy impact/risk assessments before deploying AI glasses in the workplace, including when interacting with customers.
  • Consider bargaining obligations, protected concerted activity rights: If deploying AI glasses in union environments, engage in collective bargaining about their use and assess protected concerted activity rights.
  • Establish technical limits and safeguards: Consider implementing technical controls like automatic disabling of recording in break rooms, bathrooms, and areas designated for private conversations.

Conclusion

AI glasses represent transformative technology with genuine business value, from hands-free information access to enhanced productivity and innovative customer experiences. The 210% growth in smart glasses shipments in 2024 demonstrates their appeal. But the legal risks are real and growing.

The key is to approach the deployment of AI glasses (and deployment of similar technologies) with eyes wide open—understanding both the capabilities of the technology and the complex legal frameworks that govern its use. With thoughtful policies, robust technical controls, ongoing compliance monitoring, and respect for privacy rights, organizations can harness the benefits of AI glasses while managing the risks.

As we explored in Part 1 of this series, AI-enabled smart glasses are rapidly evolving from niche wearables into powerful tools with broad workplace appeal — but their innovative capabilities bring equally significant legal and privacy concerns. Modern smart glasses blend high-resolution cameras, always-on microphones, and real-time AI assistants into a hands-free wearable that can capture, analyze, and even transcribe ambient information around the wearer. These features — from continuous audio capture to automated transcription — create scenarios where bystanders (co-workers, customers, etc.) may be recorded or have their conversations documented without ever knowing it, raising fundamental questions about consent and the boundaries of lawful observation.

Part 2 shifts focus to how these core capabilities intersect with consent requirements and note-taking practices under U.S. federal and state wiretapping and recording laws. In many jurisdictions, recording or transcribing a conversation without the express permission of all participants — particularly where devices can run discreetly in the background — can potentially trigger two-party (or all-party) consent obligations and potential statutory violations. Likewise, the promise of AI-assisted note taking — where every spoken word in a meeting could be saved, indexed, and shared — brings not just operational benefits but significant legal and business risk. Understanding how the unique sensing and recording features of smart glasses intersect with these consent and notetaking issues is essential for any organization contemplating deployment.

The Risk

AI glasses with continuous recording, AI note-taking, or voice transcription capabilities can easily violate state wiretapping laws. Twelve states require all parties to consent to audio recording of confidential communications, including California, Florida, Illinois, Maryland, Massachusetts, Connecticut, Montana, New Hampshire, Pennsylvania, and Washington. Even in one-party consent states, recording in locations where individuals have reasonable expectations of privacy can violate surveillance laws. Going one step further, consider the possibility of the user being close enough to record a conversation between two unrelated persons.

The rise of AI note-taking capabilities in smart glasses makes this risk particularly acute. Unlike traditional recording that often requires deliberate action, AI glasses can passively capture and transcribe conversations throughout the day, creating permanent searchable records of discussions that participants never knew were being documented. Smart glasses that record continuously with no visible indicator amplify this concern.

Relevant Use Cases

  • Sales representatives wearing AI glasses that automatically transcribe client meetings without explicit consent from all parties
  • Managers using glasses with AI note-taking features during performance reviews, disciplinary meetings, or interviews
  • Medical professionals recording patient consultations through smart glasses for AI-generated documentation
  • Employees wearing glasses during phone calls where the other party is in a two-party consent state
  • Anyone wearing recording-capable glasses in restrooms, locker rooms, medical facilities, or other areas with heightened privacy expectations
  • Workers using AI transcription features during confidential business discussions or trade secret conversations
  • OSHA inspectors using AI glasses (announced for expanded deployment in 2025) to record workplace inspections without proper protocols

Why It Matters

Violations of two-party consent laws carry criminal penalties, including potential jail time, as well as civil liability. The fact that many AI glasses lack obvious recording indicators—or have only tiny LED lights that are easily missed—compounds the risk. AI-generated transcripts created without consent or even awareness raise a myriad of issues, some of which are outlined here. The ease with which these devices could continuously record and transcribe conversations raises particular concerns relating to increasing emphasis and regulation directed at data minimization.

Practical Compliance Considerations

The compliance challenges surrounding AI glasses are significant, but manageable with proper planning:

  • Implement clear policies: Develop clear policies about when and where AI glasses with recording capabilities can be worn
  • Get consent: Obtain explicit verbal or written consent from all parties before activating recording features—consent banners on video calls may not suffice for glasses
  • Provide notice: Provide visible notification that recording is occurring (though many AI glasses lack adequate indicators)
  • Establish technical limits and safeguards: Implement geofencing or technical controls to automatically disable recording features in prohibited areas (see the sketch after this list)
  • Monitor usage: Maintain detailed logs of when recording features are activated and by whom
  • Train users: Train employees on state-specific wiretapping laws, especially when traveling or conducting interstate communications
  • Increase awareness of device features and capabilities: For AI note-taking features, ensure participants know transcription is occurring and can opt out
  • Leverage existing policies: Apply existing privacy and security controls, such as access and retention, relating to transcripts generated from the wearables.
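
On the geofencing point, the TypeScript sketch below illustrates the control concept only. It is hypothetical: consumer smart glasses expose no standard control API today, so the zone list, the device handle, and the disableRecording hook stand in for whatever vendor or mobile device management (MDM) capabilities are actually available.

    // A prohibited zone is a circle around a point; coordinates are examples.
    interface Zone { name: string; lat: number; lon: number; radiusMeters: number; }

    const prohibitedZones: Zone[] = [
      { name: "Break room", lat: 40.7128, lon: -74.006, radiusMeters: 10 },
    ];

    // Haversine distance in meters between two coordinates.
    function metersBetween(lat1: number, lon1: number, lat2: number, lon2: number): number {
      const R = 6371000; // Earth radius in meters
      const toRad = (d: number) => (d * Math.PI) / 180;
      const dLat = toRad(lat2 - lat1);
      const dLon = toRad(lon2 - lon1);
      const a =
        Math.sin(dLat / 2) ** 2 +
        Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
      return 2 * R * Math.asin(Math.sqrt(a));
    }

    // Disable recording whenever the wearer's location falls inside any zone.
    function enforceRecordingPolicy(
      lat: number,
      lon: number,
      device: { disableRecording(): void } // assumed vendor/MDM hook
    ): void {
      for (const z of prohibitedZones) {
        if (metersBetween(lat, lon, z.lat, z.lon) <= z.radiusMeters) {
          device.disableRecording();
          return;
        }
      }
    }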

Conclusion

AI glasses represent transformative technology with genuine business value, from hands-free information access to enhanced productivity and innovative customer experiences. The 210% growth in smart glasses shipments in 2024 demonstrates their appeal. But the legal risks are real and growing.

The key is to approach the deployment of AI glasses (and deployment of similar technologies) with eyes wide open—understanding both the capabilities of the technology and the complex legal frameworks that govern its use. With thoughtful policies, robust technical controls, ongoing compliance monitoring, and respect for privacy rights, organizations can harness the benefits of AI glasses while managing the risks.

Key Takeaways

  • Outlines basic steps to determine whether a business may need to perform a risk assessment under the California Consumer Privacy Act (CCPA) in connection with its use of dashcams
  • Provides a resource for exploring the basic requirements for conducting and reporting risk assessments

If you have not reviewed the recently approved, updated CCPA regulations, you might want to soon. There are several new requirements, along with many modifications and clarifications to existing rules. In this post, we discuss a new requirement – performing risk assessments – in the context of dashcam and related fleet management technologies.

In short, when performing a risk assessment, the business needs to assess whether the risk to consumer privacy from the processing of personal information outweighs the benefits to consumers, the business, others, and the public, and, if so, restrict or prohibit that processing, as appropriate.

Of course, the first step to determine whether a business needs to perform a risk assessment under the CCPA is to determine whether the CCPA applies to the business. We discussed those basic requirements in Part 1 of our post on risk assessments under the CCPA.

If you are still reading, you have probably determined that your organization is a “business” covered by the CCPA and, possibly, your business is using certain fleet management technologies, such as dashcam or other vehicle tracking technologies. Even if that is not the case, the remainder of this post may be of interest for “businesses” under the CCPA that are curious about examples applying the new risk assessment requirement.

As discussed in Part 1 of our post on the basics of CCPA risk assessments, businesses are required to perform risk assessments when their processing of personal information presents “significant risk” to consumer privacy. The regulations set out certain types of processing activities involving personal information that would trigger a risk assessment. Depending on the nature and scope of the dashcam technology deployed, a business should consider whether a risk assessment is required.

Dashcams and similar devices increasingly come with an array of features. As the name suggests, these devices include cameras that can record activity inside and outside the vehicle. They also can be equipped with audio recording capabilities permitting the recording of voice in and outside the vehicle. Additionally, dashcams can play a role in logistics, as they often include GPS technology, and they can contribute significantly to worker and public safety through telematics. In general, telematics help businesses understand how the vehicle is being driven – acceleration, hard stops, swerving, etc. More recently, dashcams can have biometrics and AI technologies embedded in them. A facial scan can help determine if the driver is authorized to be driving that vehicle. AI technology also might be used to help determine if the driver is driving safely – is the driver falling asleep, eating, using their phone, wearing a seatbelt, and so on.

Depending on how a dashcam is equipped or configured, businesses subject to the CCPA should consider whether the dashcam involves the processing of personal information that requires a risk assessment.

For instance, a risk assessment is required when processing “sensitive personal information.” Remember that sensitive personal information includes, among other elements, precise geolocation data and biometric information used to identify an individual. While the regulations include an exception for certain employment-related processing, businesses would have to assess whether those exceptions apply.

Another example of processing personal information that requires a risk assessment is profiling a consumer through “systematic observation” of that consumer when they are acting in their capacity as an educational program applicant, job applicant, student, employee, or independent contractor for the business. The regulations define “systematic observation” to mean:

methodical and regular or continuous observation. This includes, for example, methodical and regular or continuous observation using Wi-Fi or Bluetooth tracking, radio frequency identification, drones, video or audio recording or live-streaming, technologies that enable physical or biological identification or profiling; and geofencing, location trackers, or license-plate recognition.

The regulation also defines profiling as:

any form of automated processing of personal information to evaluate certain personal aspects (including intelligence, ability, aptitude, predispositions) relating to a natural person and in particular to analyze or predict aspects concerning that natural person’s performance at work, economic situation, health (including mental health), personal preferences, interests, reliability, predispositions, behavior, location, or movements.

Considering the range of use cases for vehicle/fleet tracking technologies, and depending on their capabilities and configurations, it is conceivable that in some cases the processing of personal information by such technology could be considered a “significant risk,” requiring a risk assessment under the CCPA.

In that case, Part 2 of our post on risk assessments outlines the steps a business needs to take to conduct a risk assessment, including what must be included in the required risk assessment report, and timely certifying the assessment to the California Privacy Protection Agency.

It is important to note that this is only one of a myriad of potential processing activities that businesses engage in that might trigger a risk assessment requirement. Businesses will need to identify those activities and assess next steps. If the business finds comparable activities, it may be able to minimize the risk assessment burden by conducting a single assessment for those comparable activities.

Again, the new CCPA regulations represent a fundamental shift toward proactive privacy governance under the CCPA. Rather than simply reacting to consumer requests and data breaches, covered businesses must now systematically evaluate and document the privacy implications of their data processing activities before they begin. With compliance deadlines approaching in 2026, organizations should begin now to establish the cross-functional processes, documentation practices, and governance structures necessary to meet these new obligations.

The California Privacy Protection Agency (CPPA) has adopted significant updates to the California Consumer Privacy Act (CCPA) regulations, which were formally approved by the California Office of Administrative Law on September 23, 2025. These comprehensive regulations address automated decision-making technology, cybersecurity audits, and risk assessments, with compliance deadlines beginning in 2026. Among these updates, the risk assessment requirements represent a substantial new compliance obligation for many businesses subject to the CCPA.

Of course, as a threshold matter, businesses must first determine whether they are subject to the CCPA. For businesses that are not sure of whether the CCPA applies to them, our earlier discussion here may be helpful. If your business is subject to the CCPA, read on.

When Is a Risk Assessment Required?

The new regulations require businesses to conduct risk assessments when their processing of personal information presents “significant risks” to consumer privacy. The CPPA has defined specific processing activities that trigger this requirement (a simple checklist sketch follows the list):

  • Selling or sharing personal information.
  • Processing “sensitive personal information.” However, there is a narrow exception for limited human resources-related uses such as payroll, benefits administration, and legally mandated reporting. Employers will have to examine carefully which activities are excluded and which are not. Sensitive personal information under the CCPA includes precise geolocation, racial or ethnic origin, religious beliefs, genetic data, biometric information, health information, sexual orientation, and citizenship status, among other categories.
  • Using automated decision-making technology (ADMT) to make significant decisions about consumers. Significant decisions include those resulting in the provision or denial of financial services, lending, housing, education enrollment, employment opportunities, compensation, or healthcare services. More on ADMT to come.
  • Profiling a consumer through “systematic observation” when they are acting in their capacity as an educational program applicant, job applicant, student, employee, or independent contractor for the business. Systematic observation means methodical and regular or continuous observation, such as through Wi-Fi or Bluetooth tracking, radio frequency identification, drones, video or audio recording or live-streaming, technologies that enable physical or biological identification or profiling; and geofencing, location trackers, or license-plate recognition. Businesses engaged in workplace monitoring and using performance management applications may need to consider those activities under this provision.
  • Profiling a consumer based upon their presence in a “sensitive location.” A sensitive location means the following physical places: healthcare facilities including hospitals, doctors’ offices, urgent care facilities, and community health clinics; pharmacies; domestic violence shelters; food pantries; housing/emergency shelters; educational institutions; political party offices; legal services offices; union offices; and places of worship.
  • Processing personal information to train ADMT for significant decisions, or to train facial recognition, biometric, or other technology to verify identity. This recognizes the heightened privacy risks associated with developing systems that may later be deployed at scale.
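
The checklist below restates these triggers as a simple triage aid, sketched in TypeScript. It is an illustration, not a legal test: the flags simplify the regulatory text, and a positive result means the activity warrants review against the regulations themselves, not that an assessment is conclusively required.

    // Simplified flags mirroring the trigger list above.
    interface ProcessingActivity {
      sellsOrSharesPersonalInfo: boolean;
      processesSensitivePersonalInfo: boolean;
      admtForSignificantDecisions: boolean;
      systematicObservationOfWorkers: boolean;
      profilingInSensitiveLocations: boolean;
      trainsAdmtOrIdentityTech: boolean;
    }

    // Any single trigger is enough to flag the activity for closer review.
    function mayRequireRiskAssessment(a: ProcessingActivity): boolean {
      return (
        a.sellsOrSharesPersonalInfo ||
        a.processesSensitivePersonalInfo ||
        a.admtForSignificantDecisions ||
        a.systematicObservationOfWorkers ||
        a.profilingInSensitiveLocations ||
        a.trainsAdmtOrIdentityTech
      );
    }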

What is Involved in Completing a Risk Assessment?

For businesses engaged in activities with personal information that will require a risk assessment, it is important to note that there are a number of steps set forth in the new CCPA regulations for performing those assessments. These include:

  • Determining which stakeholders should be involved in the risk assessment process and the nature of that involvement.
  • Establishing appropriate purposes and objectives for conducting the risk assessment.
  • Satisfying timing and recordkeeping obligations.
  • Preparing risk assessment reports that meet certain content requirements.
  • Timely submitting certifications of required risk assessments to the CPPA.

In Part 2 of this post we will discuss the requirements above to help businesses that have to perform one or more risk assessments develop a process for doing so.

The new CCPA regulations represent a fundamental shift toward proactive privacy governance under the CCPA. Rather than simply reacting to consumer requests and data breaches, covered businesses must now systematically evaluate and document the privacy implications of their data processing activities before they begin. With compliance deadlines approaching in 2026, organizations should begin now to establish the cross-functional processes, documentation practices, and governance structures necessary to meet these new obligations.

Businesses across many industries naturally want to showcase their satisfied customers. Whether it’s a university featuring successful graduates, a retailer highlighting happy shoppers, or a healthcare facility showcasing thriving patients, these real-world testimonials can be powerful marketing tools. However, when it comes to healthcare providers subject to HIPAA, using patient images and information for promotional purposes requires careful navigation of both federal privacy rules and state law requirements.

In a recent case, the failure to comply with these requirements resulted in a $182,000 fine and a two-year compliance program for a Delaware nursing home, according to the resolution agreement.

The Office for Civil Rights (OCR), which enforces the HIPAA Privacy and Security Rules, recently announced an enforcement action that serves as an important reminder of these obligations. The case involved a nursing home that posted photographs of approximately 150 facility residents over a period of time to its social media page. These postings were part of a campaign to highlight the success residents were achieving at the nursing home. When a resident complained to OCR, the agency investigated and found the covered entity had not obtained the required HIPAA authorizations or complied with breach notification requirements. The enforcement actions that followed underscore that even seemingly benign marketing practices can trigger significant compliance issues under HIPAA.

Understanding HIPAA’s Authorization Requirements

Under HIPAA, covered entities may generally use and disclose protected health information (PHI) for treatment, payment, and healthcare operations, and certain other purposes, without patient authorization. Marketing activities, however, fall outside these permissible uses. In the OCR investigation, the covered entity didn’t simply share photographs—it also disclosed information about residents’ care to tell “success stories” of patients at their facilities. This combination of visual identification and health information, according to the OCR, constituted a use of PHI requiring express patient authorization under HIPAA.

The authorization requirement isn’t merely a technicality. HIPAA authorizations must meet specific regulatory standards, such as a clear description of the information to be disclosed, the purpose of the disclosure, and a date or event after which the authorization will cease to be valid. A patient’s informal agreement or willingness to participate doesn’t satisfy these requirements.

The Breach Notification Complication

The OCR investigation revealed another compliance failure: the covered entity did not provide the required breach notification. Under HIPAA’s Breach Notification Rule, a disclosure not permitted under the Privacy Rule can constitute a reportable breach requiring notification to affected individuals and potentially to OCR and the media. In other words, a marketing misstep can cascade beyond the failure to obtain an authorization into separate breach notification obligations.

Lessons from Social Media Cases

This isn’t an isolated concern. Similar issues have arisen when healthcare providers, such as dentists and other practitioners, responded to patient complaints on platforms like Google and Yelp. Well-intentioned responses that acknowledge treating a patient or try to resolve the patient’s concerns can violate HIPAA. These cases make clear that covered entities must think carefully about any use or disclosure of patient information outside the core functions of treatment, payment, and healthcare operations, even when the patient has already disclosed the same information publicly.

State Law Adds Another Layer, Including Regulation of AI and Biometrics

HIPAA compliance alone may not be sufficient, particularly because more stringent protections may exist under state law. Many states have statutes and common law obligations requiring consent before using a person’s image or likeness for commercial purposes, as well as specific requirements for what that consent must include. Covered entities must ensure they’re meeting both HIPAA authorization requirements and any applicable state law consent requirements. They also should understand the technologies they are using, including whether those technologies are inadvertently collecting biometric data.

Looking ahead, covered entities should be aware that several states have begun enacting or amending laws addressing how businesses can use digital replicas of individuals, particularly in the AI context. As healthcare organizations increasingly adopt AI technologies, questions about using patient images or data to create or train AI systems will require careful analysis under both existing HIPAA rules and these emerging state laws.

The Bottom Line

The message for HIPAA covered entities is clear: think before you post, promote, or publicize the good work you do for your patients. Even when patients are willing participants in marketing efforts, formal HIPAA authorizations and state law consents may be required. The cost of non-compliance—including financial settlements, required corrective action plans, and reputational harm—far exceeds the investment in proper authorization processes. When in doubt about whether patient information can be used for a particular purpose, covered entities should consult with privacy counsel to ensure full compliance with both federal and state requirements.

For businesses subject to the California Consumer Privacy Act (CCPA), a compliance step often overlooked is the requirement to annually update the business’s online privacy policy. Under Cal. Civ. Code § 1798.130(a)(5), CCPA-covered businesses must, among other things, update their online privacy policies at least once every 12 months. Note that the CCPA regulations establish content requirements for online privacy policies, one of which is that the policy must include “the date the privacy policy was last updated.” See 11 CCR § 7011(e)(4).
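For compliance teams that track policy metadata programmatically, even a small automated reminder can help keep the 12-month cycle from slipping. Below is a minimal Python sketch of such a check; the 365-day threshold, function name, and date handling are illustrative assumptions on our part, not anything the CCPA or its regulations prescribe.

```python
from datetime import date, timedelta

# Illustrative proxy for the "at least once every 12 months" update
# cycle in Cal. Civ. Code § 1798.130(a)(5). This threshold and helper
# are our own assumptions for demonstration purposes only.
REVIEW_CYCLE = timedelta(days=365)

def policy_needs_update(last_updated: date, today: date | None = None) -> bool:
    """Return True if the policy's 'last updated' date (a content
    requirement under 11 CCR § 7011(e)(4)) is roughly 12+ months old."""
    today = today or date.today()
    return (today - last_updated) > REVIEW_CYCLE

# Example: a policy last updated January 15, 2025 and checked on
# February 1, 2026 would be flagged as overdue for its annual review.
if policy_needs_update(date(2025, 1, 15), today=date(2026, 2, 1)):
    print("Privacy policy is overdue for its annual CCPA review.")
```

A check like this is no substitute for the substantive review described below; it simply surfaces the deadline so the review actually happens.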

As businesses grow, evolve, adopt new technologies, or otherwise change their online and offline practices and operations, CCPA-required privacy policies may no longer accurately or completely reflect the collection and processing of personal information. Consider, for example, the adoption of emerging technologies such as so-called “artificial intelligence” tools. These tools may be collecting, inferring, or processing personal information in ways that were not contemplated when the organization last updated its privacy policy.

The business also may have service providers that collect and process personal information on its behalf in ways that differ from when they first began providing services.

Simply put: If your business (or its service providers) has adopted any new technologies or otherwise changed how it collects or processes personal information, your privacy policy may need an update.

Practical Action Items for Businesses

Here are some steps businesses can take to comply with the annual privacy policy review and update requirement under the CCPA:

  • Inventory Personal Information
    Reassess what categories of personal information your organization collects, processes, sells, and shares. Consider whether new categories—such as biometric, geolocation, or video data—have been added.
  • Review Data Use Practices
    Confirm whether your uses of personal information have changed since the last policy update. This includes whether you are profiling, targeting, or automating decisions based on the data.
  • Assess Adoption of New Technologies, Including AI Tools
    Has your business adopted any new technologies or systems, such as AI applications? Examples may include:
    • AI notetakers, transcription, or summarization tools for use in meetings (e.g., Otter, Fireflies)
    • AI used for chatbots, personalized recommendations, or hiring assessments
  • Evaluate Third Parties and Service Providers
    Are you sharing or selling information to new third parties? Has your use of service providers changed, or have service providers changed their practices around the collection or processing of personal information?
  • Review Your Consumer Rights Mechanisms
    Are the methods for consumers to submit access, deletion, correction, or opt-out requests clearly stated and functioning properly?

These are only a few of the recent developments that may drive changes to an existing privacy policy. Businesses in certain industries, and particular departments within those businesses, may face additional considerations. Here are a few examples:

Retail Businesses

  • Loyalty programs collecting purchase history and predictive analytics data.
  • More advanced in-store cameras and mobile apps collecting biometric or geolocation information.
  • AI-driven customer service bots that gather interaction data.

Law Firms

  • Use of AI notetakers or transcription tools during client calls.
  • Remote collaboration tools that collect device or location data.
  • Marketing platforms that profile client interests based on website use.

HR Departments (Across All Industries)

  • AI tools used for resume screening and candidate profiling.
  • Digital onboarding platforms collecting sensitive identity data.
  • Employee monitoring software that tracks usage, productivity, or location.

The online privacy policy is not just a static compliance document—it’s a dynamic reflection of your organization’s data privacy practices. As technologies evolve and regulations expand, taking time once a year to reassess and update your privacy disclosures is not only a legal obligation in California but a strategic risk management step. And, while we have focused on the CCPA in this article, inaccurate or incomplete online privacy policies can elevate compliance and litigation risks under other laws, including the Federal Trade Commission Act and state protections against deceptive and unfair business practices.