As we have discussed in prior posts, AI-enabled smart glasses are rapidly evolving from niche wearables into powerful tools with broad workplace appeal — but their innovative capabilities bring equally significant legal and privacy concerns.

  • In Part 1, we addressed compliance issues that arise when these wearables collect biometric information.
  • In Part 2, we covered all-party consent requirements and AI notetaking technologies.
  • In Part 3, we considered broader privacy and surveillance issues, including from a labor law perspective.

In this Part 4, we consider the potentially vast amount of personal and other confidential data that may be collected, visually and audibly, through everyday use of this technology. More broadly, cybersecurity and data security risks pose another major, and often underestimated, exposure from this technology.

The Risk

AI smart glasses collect, analyze, and transmit enormous volumes of sensitive data—often continuously, and typically to cloud-based servers operated by third parties. This creates a perfect storm of cybersecurity risk, regulatory exposure, and breach notification obligations under laws in all 50 states, as well as the CCPA, GDPR, and numerous sector-specific regulations, such as HIPAA for the healthcare industry.

Unlike traditional cameras or recording devices, AI glasses are designed to collect and process data in real time. Even when users believe they are not “recording,” the devices may still be capturing visual, audio, and contextual information for AI analysis, transcription, translation, or object recognition. That data is frequently transmitted to third-party AI providers with unclear security controls, retention practices, and secondary-use restrictions.

Many AI glasses explicitly rely on third-party AI services. For example, Brilliant Labs’ Frame glasses use ChatGPT to power their AI assistant, Noa, and disclose that multiple large language models may be involved in processing. In practice, this means sensitive business conversations, images, and metadata may leave the organization entirely—often without IT, security, or legal teams fully understanding where the data goes or how it is protected.

Use Cases at Risk

  • Hospital workers making rounds with their teams while equipped with AI glasses that access, capture, view, and record patients, charts, wounds, and family members in electronic format, triggering the HIPAA Security Rule and state law obligations
  • Financial services employees wearing AI glasses that capture customer financial data, account numbers, or investment information
  • Any workplace use involving personally identifiable information (PII), such as Social Security numbers, credit card data, or medical information, as well as confidential business information of the company and/or its customers
  • Attorneys and legal professionals using AI glasses during privileged communications, potentially risking waiver of attorney-client privilege
  • Employees connecting AI glasses to unsecured or public Wi-Fi networks, creating man-in-the-middle attack risks
  • Lost or stolen AI glasses that store unencrypted audio, video, or contextual data

Why It Matters

Data breaches involving biometric data, health information, or financial data carry outsized legal and financial consequences. With AI glasses, as a practical matter, an entity generally is less likely to face a large-scale data breach affecting hundreds of thousands or millions of people. However, a breach exposing sensitive patient images, discussions, or other data captured with AI glasses could be just as harmful to the reputation of a health system, for example, as an attack by a criminal threat actor, if not more so. Beyond reputational harm, incident response costs, litigation, and regulatory penalties remain significant risks.

Shadow AI (the unauthorized use of artificial intelligence tools by employees in the workplace) also poses potential data security, breach, and third-party risks. Many devices sync automatically to consumer cloud accounts with security practices that employers neither control nor audit. When an employee uses personal AI glasses for work, fundamental questions often go unanswered: Where is the data stored? Is it encrypted? Who has access? How long is it retained? Is it used to train AI models?

Finally, the use of AI glasses can diminish the effects of a powerful data security tool – data minimization. Businesses will need to grapple with whether constant, ambient data collection and recording aligns with the principle of data minimization, which is woven into data privacy laws such as the California Consumer Privacy Act.

Practical Compliance Considerations

  • Implement clear policies: Be deliberate about whether to permit these wearables in the workplace. And, if so, establish policies limiting when and where they may be used, and what recording features can be activated and under what circumstances.
  • Perform an assessment: Conduct security and privacy assessments of specific AI glasses models before deployment
  • Understand third-party service provider risks: Review security documentation, including encryption practices, access controls, and incident response commitments
  • Understand obligations to customers: Review services agreements concerning the collection, processing, and security obligations for handling customer personal and confidential business information
  • Update incident response plans: Factor in wearable device compromises
  • For HIPAA Covered Entities and Business Associates: Confirm that AI glasses meet HIPAA requirements
  • Evaluate cyber insurance coverage: Assess whether your policy (assuming you have a cyber policy!) covers breaches involving wearable technology and AI-related risks

Conclusion

AI smart glasses may feel futuristic and convenient, but from a data security and compliance perspective, they dramatically expand an organization’s attack surface. Without careful controls, these devices can quietly introduce breach risks, third-party data sharing, and regulatory exposure that outweigh their perceived benefits.

The key is to approach the deployment of AI glasses (and deployment of similar technologies) with eyes wide open—understanding both the capabilities of the technology and the complex legal frameworks that govern their use. With thoughtful policies, robust technical controls, ongoing compliance monitoring, and respect for privacy rights, organizations can harness the benefits of AI glasses while managing the risks.

As we have discussed in prior posts, AI-enabled smart glasses are rapidly evolving from niche wearables into powerful tools with broad workplace appeal — but their innovative capabilities bring equally significant legal and privacy concerns. In Part 1, we addressed compliance issues that arise when these wearables collect biometric information. In Part 2, we covered all-party consent requirements and AI notetaking technologies.

In this Part 3, we consider broader privacy and surveillance issues, including from a labor law perspective. Left uncontrolled, the nature and capabilities of AI smart glasses open the door to a range of circumstances in which legal requirements, as well as societal norms, could be violated, even inadvertently. At the same time, a pervasive surveillance environment fueled by technologies such as AI smart glasses may spur arguments by some employees that their right to engage in protected concerted activity has been infringed.

The Risk

When employers provide AI glasses to employees or permit their use in the workplace, they can create continuous and/or intrusive surveillance conditions that may violate the privacy rights of individuals they encounter, including employees, customers, and others. Various state statutes and common law doctrines limit surveillance, and new laws are emerging that would target workplace surveillance technologies. For example, California Assembly Bill 1331, introduced in early 2025, sought to limit employer surveillance and enhance employee privacy. The bill would have banned monitoring in private off-duty spaces (like bathrooms and lactation rooms) and prohibited surveillance of homes or personal vehicles. California Governor Newsom vetoed the bill in October 2025.

However, other California law, notably the California Consumer Privacy Act (CCPA), regulates surveillance involving certain personal information. Under the CCPA, continuous surveillance may trigger a risk assessment obligation. See more about that here. The CCPA, like the comprehensive privacy laws adopted in several other states, requires covered entities to communicate about the personal information they collect from residents of those states. Covered entities that permit employees to use these devices in the course of their employment may need to better understand the types of personal information those employees’ glasses are collecting.

The National Labor Relations Act (NLRA) generally establishes a right of employees to act with co-workers to address work-related issues, a right enforced by the National Labor Relations Board (NLRB). Widespread surveillance and recording could chill protected concerted activity – employees might be less likely to engage with other employees about working conditions under such circumstances. Of course, introducing AI glasses in the workplace may also trigger an obligation to bargain under the NLRA.

Relevant Use Cases

  • Warehouse workers using AI glasses for inventory management that also track movement patterns, productivity metrics, and conversations of coworkers
  • School employees who use AI glasses while interacting with minor students in a range of circumstances
  • Field service technicians wearing glasses that record all customer interactions as well as communications with coworkers
  • Office workers using AI glasses with note-taking features during internal meetings, capturing discussions among employees
  • Healthcare workers in a variety of settings, purposefully or inadvertently, capturing images or data of patients and their families
  • Manufacturing employees whose glasses document work processes while also recording conversations with coworkers

Why It Matters

Connecticut, Delaware, and New York require employers to notify employees of certain electronic monitoring. California’s CCPA gives employees specific rights over their personal information, including the right to know what’s collected and the right to deletion. These protections were strengthened in recently updated regulations under the California Privacy Rights Act, which created, among other things, an obligation to conduct and report on risk assessments performed in connection with certain surveillance activities.

Union environments face additional scrutiny. Surveillance may constitute an unfair labor practice requiring collective bargaining. The NLRB has issued guidance limiting employers’ ability to ban workplace recordings because such bans can interfere with protected rights. However, continuous AI-powered surveillance could still create a chilling effect that violates labor law.

Practical Compliance Considerations

  • Implement clear policies: Be deliberate about whether to permit these wearables in the workplace. And, if so, establish policies limiting when and where they may be used, and what recording features can be activated and under what circumstances.
  • Provide notice: Provide written notice about AI glasses’ capabilities, including what data is collected, how it’s processed, and how it may be used.
  • Perform an assessment: Conduct privacy impact/risk assessments before deploying AI glasses in the workplace, including when interacting with customers.
  • Consider bargaining obligations, protected concerted activity rights: If deploying AI glasses in union environments, engage in collective bargaining about their use and assess protected concerted activity rights.
  • Establish technical limits and safeguards: Consider implementing technical controls like automatic disabling of recording in break rooms, bathrooms, and areas designated for private conversations.

Conclusion

AI glasses represent transformative technology with genuine business value, from hands-free information access to enhanced productivity and innovative customer experiences. The 210% growth in smart glasses shipments in 2024 demonstrates their appeal. But the legal risks are real and growing.

The key is to approach the deployment of AI glasses (and deployment of similar technologies) with eyes wide open—understanding both the capabilities of the technology and the complex legal frameworks that govern its use. With thoughtful policies, robust technical controls, ongoing compliance monitoring, and respect for privacy rights, organizations can harness the benefits of AI glasses while managing the risks.

New York State’s 2025 legislative session marked a notable moment in the evolution of artificial intelligence (AI) and privacy regulation. Governor Kathy Hochul signed the Responsible AI Safety and Education (RAISE) Act, creating one of the first state-level frameworks aimed specifically at the most advanced AI systems, while vetoing the proposed New York Health Information Privacy Act (NYHIPA), a bill that would have significantly expanded health data protections beyond existing federal law. Together, these developments provide important signals for businesses operating in or touching New York.

The RAISE Act

The RAISE Act amends the General Business Law to impose transparency and risk-management obligations on developers of certain high-end AI systems. The law is narrowly focused on “frontier models,” defined by extraordinarily high computational thresholds, generally models trained with more than 10²⁶ computational operations and over $100 million in compute costs.

For most businesses, this means the law will primarily affect developers and deployers of the most powerful AI systems rather than everyday enterprise automation tools.

Practical examples of AI technologies that could fall within scope include:

  • Large language models such as GPT-4-class, Claude-class, or Gemini-class systems trained at a massive scale;
  • Generative AI systems capable of producing highly realistic video or audio content, including synthetic voices or deepfake-quality media;
  • Advanced medical or scientific AI tools, such as models used to support diagnostics, drug discovery, or large-scale biological simulations that require substantial computational resources.

Covered “large developers” must implement and publish a safety and security protocol (with limited redactions), assess whether deployment poses an unreasonable risk of “critical harm,” and report certain safety incidents to the New York Attorney General within 72 hours, a considerably shorter window than the notification timelines under the changes to New York’s data breach laws that took effect at the end of 2024.

While the law does not create a private right of action, enforcement authority rests with the Attorney General, including significant civil penalties for violations.

The RAISE Act takes effect January 1, 2027.

For businesses that license or integrate frontier AI models from third parties, the RAISE Act is also relevant contractually. Vendors may pass through compliance obligations, audit rights, or usage restrictions as part of their efforts to meet statutory requirements.

Health Information Privacy Act Vetoed

Although NYHIPA was vetoed, its contents remain highly relevant, particularly for businesses in health, wellness, advertising, and AI-enabled consumer services. The bill would have applied broadly to any entity processing health-related information linked to a New York resident or someone physically present in the state, regardless of HIPAA status. This would have been a more expansive law than similar state health data laws in Washington and Nevada.

Key provisions included strict limits on processing health data without express authorization, detailed and standalone consent requirements, and explicit bans on consent practices that obscure or manipulate user decision-making. The bill would have excluded research, development, and marketing from “internal business operations”, meaning AI training or product improvement using health data could have required new authorization. Individuals would also have been granted robust access and deletion rights, including obligations to notify downstream service providers and third parties of deletion requests going back one year.

Takeaways for Businesses

Taken together, these developments reflect New York’s intent to play a leading role in AI and privacy governance. For businesses, the message is not one of immediate across-the-board compliance, but of strategic preparation.

Companies developing or deploying advanced AI should strengthen governance, documentation, and incident-response processes. Organizations handling health-adjacent data, especially data that falls outside of HIPAA, should continue monitoring legislative activity and assess whether existing consent flows, data uses, and vendor arrangements would withstand a future version of NYHIPA or similar state laws.

New York’s approach underscores a broader trend: even narrowly scoped laws can have a wide practical impact through contracts, product design, and risk management. Businesses that plan early will be best positioned as this regulatory landscape continues to evolve.

As artificial intelligence (AI) becomes more widely used in hiring and employment decisions, Illinois has taken a significant step to regulate how employers must inform workers about AI’s use. Effective January 1, 2026, House Bill 3773 amended the Illinois Human Rights Act (IHRA) to require, among other things, employer notice when AI influences or facilitates employment decisions. According to reporting from the National Federation of Independent Business, at a recent stakeholder meeting the Illinois Department of Human Rights (IDHR) discussed draft rules to implement the notification requirement. See Subpart J — Use of Artificial Intelligence in Employment.

When Notice Is Required — And When It Isn’t

Under draft Subpart J, notice would be required whenever an employer uses AI to influence or facilitate any “covered employment decision.” A covered employment decision means:

a decision with respect to recruitment, hiring, promotion, renewal of employment, selection for training or apprenticeship, discharge, discipline, tenure, or the terms, privileges, or conditions of employment.

The draft rules make clear that notice would be required regardless of whether the AI’s use has discriminatory effects — meaning even if the employer believes the technology is fair or unbiased, the notice obligation would still apply.

Examples that would trigger the notice requirement include:

  • Computer-based assessments, skills tests, or personality quizzes used to predict employee outcomes;
  • Resume screening or ranking by AI;
  • AI evaluation of facial expression, voice, or text in interviews;
  • Targeted job advertising driven by AI;
  • AI analysis of third-party data about workers or candidates.

Notice would not be required when an employer uses AI for general business tasks unrelated to influencing or facilitating covered employment decisions. For example:

  • Using AI to draft marketing content or internal reports;
  • Standard word processing, spreadsheets, firewalls, anti-spam systems, or other tools that do not infer, generate, or influence employment decisions as defined.  

When To Provide Notice

Timing matters, and the rules would distinguish between current and prospective employees:

  • For current employees, notice would be required annually, and within 30 days after the employer adopts or makes substantial updates to an AI system used for covered decisions.
  • For prospective employees, notice would be included with the job notice or posting.

These timing requirements aim to ensure transparency throughout the AI adoption lifecycle.

How Employers Must Provide Notice

The draft regulations specify multiple methods to maximize employee awareness and reduce the risk that workers or applicants miss the disclosure:

  • Inclusion in employee handbooks, manuals, or policy documents;
  • Posting in conspicuous physical locations where employer notices are typically displayed;
  • Posting on an employer’s intranet or external website where the employer customarily posts notices to prospective or current employees, including a conspicuous link on the homepage; and
  • Inclusion with any job notice or posting.

What the Notice Must Include

Subpart J’s draft content requirements for notice would go well beyond a simple “yes/no” that AI is used. Required elements would include:

  1. The AI system’s product name and, if applicable, developer or vendor;
  2. Which covered employment decisions the AI system influences or facilitates (e.g., hiring, discipline);
  3. The purpose of the AI system and the categories of personal information or employee data processed;
  4. The types of job positions the AI tool will be used for;
  5. A contact person — typically an HR representative — who can answer questions about the system and its use;
  6. How to request a reasonable accommodation related to the AI use; and
  7. Language from 775 ILCS 5/2-102(L) of the Illinois Human Rights Act.

Accessibility Requirements

Notably, the draft rules emphasize that notices must be accessible:

  • Plain language and a readable format;
  • Availability in languages commonly spoken by the employer’s workforce;
  • Reasonable accessibility for employees with disabilities.

This accessibility focus aligns with broader non-discrimination goals and reinforces meaningful notice beyond mere disclosure.

Context: Statute and Federal AI Policy

The notice requirement stems from Illinois’ 2024 amendments to the Human Rights Act in HB 3773, which added AI use to nondiscrimination protections and included a statutory notice mandate without detail — leaving specifics to IDHR regulations.

Other jurisdictions like Colorado and New York City also regulate AI and automated tools used in hiring — though Illinois’ approach stops short of mandatory bias audits or impact assessments.

At the federal level, the regulatory landscape is shifting. A December 2025 Executive Order (EO) titled Ensuring a National Policy Framework for Artificial Intelligence directs the U.S. Attorney General to establish an AI Litigation Task Force that will evaluate and potentially challenge state AI laws deemed “inconsistent” with federal policy.

Conclusion

Illinois’ draft Subpart J notice rules would establish a comprehensive, detailed disclosure framework for employers using AI in covered employment decisions — aiming for informed consent and transparency across the workforce.

However, with federal policy now pushing toward a national AI regime, state laws like Illinois’ may increasingly be scrutinized or even litigated in the coming years. Staying ahead of both state notice requirements and the evolving federal policy environment will be critical for employers using AI in hiring and workforce decisions.

As we explored in Part 1 of this series, AI-enabled smart glasses are rapidly evolving from niche wearables into powerful tools with broad workplace appeal — but their innovative capabilities bring equally significant legal and privacy concerns. Modern smart glasses blend high-resolution cameras, always-on microphones, and real-time AI assistants into a hands-free wearable that can capture, analyze, and even transcribe ambient information around the wearer. These features — from continuous audio capture to automated transcription — create scenarios where bystanders (co-workers, customers, etc.) may be recorded or have their conversations documented without ever knowing it, raising fundamental questions about consent and the boundaries of lawful observation.

Part 2 shifts focus to how these core capabilities intersect with consent requirements and note-taking practices under federal and state wiretapping and recording laws. In many jurisdictions, recording or transcribing a conversation without the express permission of all participants — particularly where devices can run discreetly in the background — can trigger two-party (or all-party) consent obligations and potential statutory violations. Likewise, the promise of AI-assisted note taking — where every spoken word in a meeting could be saved, indexed, and shared — brings not just operational benefits but significant legal and business risk. Understanding how the unique sensing and recording features of smart glasses intersect with these consent and notetaking issues is essential for any organization contemplating deployment.

The Risk

AI glasses with continuous recording, AI note-taking, or voice transcription capabilities can easily violate state wiretapping laws. Twelve states require all parties to consent to audio recording of confidential communications, including California, Connecticut, Florida, Illinois, Maryland, Massachusetts, Montana, New Hampshire, Pennsylvania, and Washington. Even in one-party consent states, recording in locations where individuals have reasonable expectations of privacy can violate surveillance laws. Going one step further, consider the possibility of the user being close enough to record a conversation between two unrelated persons.

The rise of AI note-taking capabilities in smart glasses makes this risk particularly acute. Unlike traditional recording, which often requires deliberate action, AI glasses can passively capture and transcribe conversations throughout the day, creating permanent, searchable records of discussions that participants never knew were being documented. Smart glasses that record continuously with no visible indicator amplify this concern.
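Operationally, the safest rule is to apply the law of the strictest state involved in a conversation. The following is a minimal illustrative sketch of that rule in Python, covering only the states named above (the statute count is twelve, so the list is not exhaustive); it is a sketch, not legal advice.

```python
# All-party ("two-party") consent states named above; illustrative, not exhaustive.
ALL_PARTY_CONSENT_STATES = {
    "CA", "CT", "FL", "IL", "MA", "MD", "MT", "NH", "PA", "WA",
}

def requires_all_party_consent(participant_states: set[str]) -> bool:
    """Apply the strictest-state rule: if any participant sits in an
    all-party consent state, obtain consent from everyone before recording."""
    return bool(participant_states & ALL_PARTY_CONSENT_STATES)

# Example: a call between New York (one-party) and California (all-party)
# still requires everyone's consent under the strictest-state rule.
assert requires_all_party_consent({"NY", "CA"})
assert not requires_all_party_consent({"NY", "TX"})
```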

Relevant Use Cases

  • Sales representatives wearing AI glasses that automatically transcribe client meetings without explicit consent from all parties
  • Managers using glasses with AI note-taking features during performance reviews, disciplinary meetings, or interviews
  • Medical professionals recording patient consultations through smart glasses for AI-generated documentation
  • Employees wearing glasses during phone calls where the other party is in a two-party consent state
  • Anyone wearing recording-capable glasses in restrooms, locker rooms, medical facilities, or other areas with heightened privacy expectations
  • Workers using AI transcription features during confidential business discussions or trade secret conversations
  • OSHA inspectors using AI glasses (announced for expanded deployment in 2025) to record workplace inspections without proper protocols

Why It Matters

Violations of two-party consent laws carry criminal penalties, including potential jail time, as well as civil liability. The fact that many AI glasses lack obvious recording indicators—or have only tiny LED lights that are easily missed—compounds the risk. AI-generated transcripts created without consent or even awareness raise a myriad of issues, some of which are outlined here. The ease with which these devices can continuously record and transcribe conversations also raises particular concerns given the increasing regulatory emphasis on data minimization.

Practical Compliance Considerations

The compliance challenges surrounding AI glasses are significant, but manageable with proper planning:

  • Implement clear policies: Develop clear policies about when and where AI glasses with recording capabilities can be worn
  • Get consent: Obtain explicit verbal or written consent from all parties before activating recording features—consent banners on video calls may not suffice for glasses
  • Provide notice: Provide visible notification that recording is occurring (though many AI glasses lack adequate indicators)
  • Establish technical limits and safeguards: Implement geofencing or technical controls to automatically disable recording features in prohibited areas
  • Monitor usage: Maintain detailed logs of when recording features are activated and by whom
  • Train users: Train employees on state-specific wiretapping laws, especially when traveling or conducting interstate communications
  • Increase awareness of device features and capabilities: For AI note-taking features, ensure participants know transcription is occurring and can opt out
  • Leverage existing policies: Apply existing privacy and security controls, such as access and retention controls, to transcripts generated by the wearables.

Conclusion

AI glasses represent transformative technology with genuine business value, from hands-free information access to enhanced productivity and innovative customer experiences. The 210% growth in smart glasses shipments in 2024 demonstrates their appeal. But the legal risks are real and growing.

The key is to approach the deployment of AI glasses (and deployment of similar technologies) with eyes wide open—understanding both the capabilities of the technology and the complex legal frameworks that govern its use. With thoughtful policies, robust technical controls, ongoing compliance monitoring, and respect for privacy rights, organizations can harness the benefits of AI glasses while managing the risks.

Following failed congressional attempts to limit state AI laws, on December 11, 2025, the President issued an Executive Order titled Ensuring a National Policy Framework for Artificial Intelligence. The Order represents federal intervention into the growing landscape of state-level AI regulation. According to the Administration, a patchwork of state laws has created inconsistent and burdensome compliance obligations, particularly for startups and organizations operating across multiple jurisdictions. The Order claims that certain current state AI laws not only restrict innovation but could also force AI developers to incorporate “ideological bias.”

The EO provides the following example:

a new Colorado law banning “algorithmic discrimination” may even force AI models to produce false results in order to avoid a “differential treatment or impact” on protected groups.

To address these concerns, the Executive Order establishes a new AI Litigation Task Force within the Department of Justice. This group is charged with challenging state AI laws that conflict with the federal policy of promoting minimally burdensome, innovation-focused AI governance.

The Administration anticipates litigation against states whose laws it believes unconstitutionally regulate interstate commerce, impose unlawfully compelled speech, or require model outputs to be modified in ways that conflict with federal law. Within 90 days, the Department of Commerce must also publish a public evaluation identifying specific state laws considered “onerous” or inconsistent with the national policy framework, including those that impose disclosure or reporting obligations the Administration argues may infringe the First Amendment; the Colorado AI Act and the California Consumer Privacy Act’s ADMT regulations will very likely make the list.

The Order further ties compliance with federal AI policy to federal funding. States that maintain AI laws deemed inconsistent with federal objectives may become ineligible for certain Broadband Equity, Access, and Deployment (BEAD) funds, and federal agencies are directed to explore conditioning other discretionary grants on a state’s willingness to refrain from enforcing its AI regulations during funding periods. This introduces a significant financial dimension to federal-state tensions and may influence how aggressively states choose to regulate AI going forward.

In addition, the Order directs federal agencies to begin steps that lay the groundwork for federal preemption. The Federal Communications Commission must consider creating a national reporting and disclosure standard that would override conflicting state requirements, while the Federal Trade Commission is instructed to clarify that state laws compelling alterations to truthful AI outputs may be preempted under federal prohibitions on deceptive practices. These efforts suggest a shift toward a unified federal approach that could substantially reshape or displace existing state obligations.

The effects of the EO remain uncertain. Organizations have been grappling with a rapid proliferation of state AI laws governing areas such as notice, transparency, nondiscrimination, fairness, safety, accuracy, and vendor management stemming from automated decision-making. For covered organizations, these AI developments also intersect with long-standing civil rights laws, like Title VII and similar state laws, and well-established guardrails to prevent employment discrimination, like the Uniform Guidelines on Employee Selection Procedures, which continue to shape how AI-enabled selection tools must be assessed for compliance. 

If federal litigation succeeds or preemptive standards emerge, some existing obligations may shrink or change. At the same time, organizations should expect a period of regulatory instability as states and the federal government contest the limits of their respective authority. Organizations that have invested heavily in state-specific compliance frameworks may need to revisit or revise them, while AI developers could face shifting expectations around disclosure, output modification, and fairness-related requirements.

The Executive Order also directs federal advisors to prepare legislative recommendations for a uniform federal AI framework. Although the Administration proposes broad federal preemption, it indicates that certain topics—such as child safety protections and state AI procurement rules—should remain within state authority. This signals a coming debate in Congress over how much room states should retain to regulate AI-related issues.

Finally, the Order is almost certain to face legal challenges from states, which may argue that the Administration is exceeding its authority, infringing on state sovereignty, or coercively attaching conditions to federal funding. Litigation could take years to resolve, leaving covered organizations to navigate an evolving legal environment where both federal and state rules remain in flux—underscoring the importance of developing AI governance approaches that are flexible, regularly revisited, and attentive to how AI tools interact with existing employment discrimination laws and privacy requirements, for example. The bottom line is that the Executive Order marks the beginning of an aggressive federal push to standardize AI regulation nationwide, with substantial consequences for compliance, risk management, and future governance. Covered organizations should monitor developments closely and prepare for a shifting regulatory landscape.

Smart glasses with AI capabilities have evolved from futuristic concept to everyday reality. The market exploded in 2024, with global smart glasses shipments surging 210% year-over-year, driven primarily by Meta’s Ray-Ban smart glasses. From the consumer-focused Meta Ray-Ban Display (featuring a built-in heads-up display announced in September 2025) to Meta’s partnership with Oakley for athletic glasses, enterprise solutions like RealWear and Vuzix for industrial use, and developer-focused options like Brilliant Labs’ Frame glasses, these devices promise to revolutionize how we interact with the world.

But with innovation comes risk. Modern AI glasses can record video and audio, process conversations in real-time with AI assistants, perform visual analysis of everything you see, generate meeting summaries, create searchable transcripts, and transmit data to cloud servers—often without obvious visual indicators. For businesses deploying these technologies and individuals using them in professional settings, the compliance landscape is treacherous.

In Part 1 of this series, we address biometric data collection.

The Risk

AI glasses increasingly incorporate biometric data collection capabilities that trigger strict privacy regulations. This includes facial recognition through camera feeds, voiceprint capture through AI transcription (see upcoming Part 2 in this series for AI-specific risks), eye tracking and gaze analysis, and even the processing of images that could be used to identify individuals. Under laws like California’s Consumer Privacy Act (CCPA), Illinois’ Biometric Information Privacy Act (BIPA), and the EU’s General Data Protection Regulation (GDPR), biometric data receives heightened protection.

The 2024 Charlotte Tilbury settlement established that virtual try-on features using facial geometry may constitute biometric data collection under BIPA, potentially requiring separate notifications and annual consent reaffirmation. This and other precedents extend directly to AI glasses that process visual and audio data that can constitute biometric information.

Relevant Use Cases

  • Retail employees using AI glasses that analyze customer faces or body language for personalized service recommendations
  • Security personnel deploying glasses with facial recognition capabilities for identification
  • Healthcare providers using glasses that process patient images, potentially capturing biometric identifiers
  • Any workplace use where AI processes images or voices of employees, customers, or the public
  • Industrial workers whose AI glasses capture and analyze faces or voices of colleagues during recorded training sessions

Why It Matters

BIPA provides for statutory damages of $1,000 to $5,000 per violation, along with attorneys’ fees. Following the Illinois Supreme Court’s 2023 Cothron decision, each scan or transmission could constitute a separate violation—though a 2024 amendment limited this to one violation per person per collection method. The $51.75 million Clearview AI settlement in 2025 demonstrates the scale of exposure: with biometric data from millions of individuals, companies face bankruptcy-level liability.

While BIPA may be the best known of the biometric laws in the United States, it certainly is not the only one. Measures to regulate the collection, use, and disclosure of biometric information exist in states such as California, Colorado, Texas, and Washington, as well as several cities, including New York City and Portland, Oregon.

For a summary of these requirements, see our Biometrics white paper.

Practical Compliance Considerations

The compliance challenges surrounding AI glasses are significant, but manageable with proper planning:

  • Address Applicable Notice, Consent, and Policy Requirements: Organizations may need to create detailed, written policies governing when, where, and how AI glasses may be used. Address recording features, AI processing, data transmission, and specify prohibited uses. Include clear guidance on consumer versus enterprise devices. And, of course, consider applicable notice, consent, and record retention policies.
  • Conduct Privacy Impact Assessments: Before deploying AI glasses, evaluate privacy risks specific to your industry, geography, and use cases. Consider biometric data collection, workplace surveillance, third-party AI processing, and cross-border data transfers. Note such risk assessments may be required, see here and here.
  • Implement Technical Controls: Use device management solutions to control which features can be activated in which locations. Consider geofencing to automatically disable recording in sensitive areas like bathrooms, break rooms, confidential meeting spaces, and healthcare facilities (an illustrative sketch follows this list).
  • Vet Vendors and AI Services: Understand where data goes, who processes it, how long it’s retained, what security controls exist, and whether vendors will sign appropriate agreements (BAAs for HIPAA, DPAs for GDPR, etc.). Negotiate contracts that protect your organization and comply with your obligations.
  • Train Rigorously: Ensure all users understand the legal implications of AI glasses, including consent requirements, prohibited uses, data handling obligations, and discovery implications. Training should be role-specific and regularly updated.
  • Monitor Regulatory Developments: Regulation is evolving rapidly concerning biometrics, as well as AI tools that leverage that information for additional capabilities. The EU AI Act took effect in 2024, California expanded its AI regulatory framework in 2024-2025, and federal AI legislation is under consideration. State workplace surveillance laws are proliferating. Stay current with legal developments.
  • Establish Clear Lines of Responsibility: Designate who is responsible for AI glasses compliance, including legal review, privacy assessment, security controls, HR considerations, policy enforcement, and incident response.
  • Consult Legal Counsel: Given the complexity and variability of the regulatory environment, work with attorneys familiar with privacy, employment, biometric, and AI regulations before rolling out these wearables.
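To illustrate the kind of geofencing control described above, here is a minimal, hypothetical sketch of a check that disables recording inside designated zones. The zone names, coordinates, and the `device` object with `enable_recording`/`disable_recording` methods are all assumptions for illustration; an actual deployment would rely on the device vendor's management APIs.

```python
from dataclasses import dataclass
from math import hypot

@dataclass
class RestrictedZone:
    name: str
    x_m: float       # zone center in site coordinates (meters); hypothetical
    y_m: float
    radius_m: float  # geofence radius

# Hypothetical zones where policy prohibits recording
RESTRICTED_ZONES = [
    RestrictedZone("break_room", 10.0, 42.0, 8.0),
    RestrictedZone("health_clinic", 85.0, 12.0, 15.0),
]

def recording_allowed(x_m: float, y_m: float) -> bool:
    """Return False when the wearer is inside any restricted geofence."""
    return not any(
        hypot(x_m - z.x_m, y_m - z.y_m) <= z.radius_m for z in RESTRICTED_ZONES
    )

def enforce_policy(x_m: float, y_m: float, device) -> None:
    """Toggle recording based on position. `device` stands in for the
    vendor's device-management handle; the method names are hypothetical."""
    if recording_allowed(x_m, y_m):
        device.enable_recording()
    else:
        device.disable_recording()  # automatic shutoff in sensitive areas
```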

Conclusion

AI glasses represent transformative technology with genuine business value, from hands-free information access to enhanced productivity and innovative customer experiences. The 210% growth in smart glasses shipments in 2024 demonstrates their appeal. But the legal risks are real and growing.

Organizations that fail to address these compliance concerns face not just regulatory penalties, but class action litigation (BIPA damages alone can reach millions), reputational harm, loss of customer trust, and the erosion of employee confidence.

The key is to approach the deployment of AI glasses (and deployment of similar technologies) with eyes wide open—understanding both the capabilities of the technology and the complex legal frameworks that govern its use. With thoughtful policies, robust technical controls, ongoing compliance monitoring, and respect for privacy rights, organizations can harness the benefits of AI glasses while managing the risks.

After years of development and extensive stakeholder engagement, California has finalized groundbreaking cybersecurity audit regulations under the California Consumer Privacy Act (CCPA). These new requirements may significantly impact how covered businesses protect consumer data.

The New Regulations

The California Privacy Protection Agency (CPPA) Board approved comprehensive amendments to CCPA regulations covering cybersecurity audits, risk assessments, and automated decision-making technology (ADMT), among other things. The regulations were subsequently approved by the California Office of Administrative Law on September 23, 2025, marking the completion of a rulemaking process that began in November 2024.

When Does the Audit Requirement Apply?

Not all businesses subject to the CCPA must conduct cybersecurity audits. According to the regulations, the requirement applies only to businesses whose data processing presents a “significant risk” to consumer security, defined by specific thresholds:

Businesses must conduct annual cybersecurity audits if they fall into one of two buckets:

  1. They derive 50% or more of their annual revenue in the preceding calendar year from selling or sharing consumers’ personal information, OR
  2. They have over $25 million in annual gross revenue (adjusted every two years; currently $26,625,000) AND process, in the preceding calendar year, either:
    • Personal information of more than 250,000 California consumers or households, OR
    • Sensitive personal information of more than 50,000 California consumers or households.

These thresholds ensure that the audit requirement focuses on businesses handling substantial volumes of consumer data or those whose business models center on data monetization.
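Expressed as logic, the two-bucket applicability test looks like the following illustrative sketch. The figures mirror the thresholds summarized above; the function name and inputs are assumptions for illustration, not a compliance determination.

```python
# Thresholds as summarized above (the $25M gross-revenue floor is adjusted
# every two years; the current adjusted figure is $26,625,000).
SALE_SHARE_REVENUE_FRACTION = 0.50
GROSS_REVENUE_THRESHOLD = 26_625_000
CONSUMER_PI_THRESHOLD = 250_000       # CA consumers/households, prior year
SENSITIVE_PI_THRESHOLD = 50_000

def audit_required(
    selling_sharing_revenue_fraction: float,
    annual_gross_revenue: float,
    consumers_with_pi: int,
    consumers_with_sensitive_pi: int,
) -> bool:
    """Rough two-bucket test for the CCPA annual cybersecurity audit."""
    bucket_one = selling_sharing_revenue_fraction >= SALE_SHARE_REVENUE_FRACTION
    bucket_two = annual_gross_revenue > GROSS_REVENUE_THRESHOLD and (
        consumers_with_pi > CONSUMER_PI_THRESHOLD
        or consumers_with_sensitive_pi > SENSITIVE_PI_THRESHOLD
    )
    return bucket_one or bucket_two

# Example: $40M revenue, PI of 300,000 California consumers -> audit required
assert audit_required(0.10, 40_000_000, 300_000, 0)
```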

Effective Dates and Compliance Deadlines

The regulations officially take effect on January 1, 2026. However, businesses have staggered deadlines for submitting their first cybersecurity audit certifications to the CPPA based on their revenue size:

  • April 1, 2028: Businesses whose annual gross revenues for 2026 exceed $100 million.
  • April 1, 2029: Businesses whose annual gross revenues for 2027 are between $50 million and $100 million.
  • April 1, 2030: Businesses whose annual gross revenues for 2028 are under $50 million.

This phased approach gives businesses time to establish robust audit processes and implement necessary cybersecurity improvements before their first submission deadline.

What the Audit Requirement Entails

The regulations establish detailed requirements for conducting comprehensive cybersecurity audits, the results of which must be provided to a member of the business’s executive management team who has direct responsibility for the business’s cybersecurity program. Here’s a summary of what businesses must do:

Auditor Qualifications: Audits must be conducted by qualified, objective, independent professionals—either internal or external—using recognized auditing standards such as those adopted by the American Institute of CPAs. Auditors must possess expertise in cybersecurity and auditing methodologies.

Audit Scope: The cybersecurity audit must comprehensively evaluate the business’s cybersecurity program across 18 key areas, including:

  • Secure user authentication and access controls
  • Encryption of personal information
  • Account management systems
  • Personal information inventory and management
  • Secure hardware and software configuration
  • Vulnerability scanning and penetration testing
  • Audit-log management and network monitoring
  • Network defenses and segmentation
  • Antivirus and anti-malware protections
  • Vendor and third-party risk management
  • Data retention schedules and secure disposal
  • Incident response capabilities
  • Cybersecurity training programs
  • Breach and incident review for the audit period

Even businesses not subject to the mandatory audit requirement should view the 18 standards as a framework for evaluating their own cybersecurity programs, as the CPPA may use these criteria when assessing CCPA compliance more broadly.

Documentation Requirements: Businesses must prepare detailed audit reports documenting the review scope, policies assessed, evaluation criteria, supporting documentation, identified compliance gaps, and remediation plans. All audit records must be retained for five years.

Annual Certification: Companies must submit written certifications of compliance to the CPPA on an annual basis, signed under penalty of perjury by appropriate executive leadership.

Flexibility for Existing Audits: Importantly, businesses may leverage cybersecurity audits conducted for other regulatory purposes—such as NIST Cybersecurity Framework 2.0 assessments—provided they meet all CCPA requirements. This allows companies to avoid duplicative efforts where existing audits are sufficiently comprehensive.

What This Means for Your Business

Businesses subject to the audit requirement should begin preparation now by identifying qualified audit personnel, establishing appropriate internal reporting structures, conducting comprehensive inventories of personal information processing activities, and documenting current cybersecurity practices. The clock is ticking toward those first compliance deadlines in 2028.

As artificial intelligence (AI), particularly generative AI, becomes increasingly woven into our professional and personal lives—from personalized travel itineraries to reviewing resumes to summarizing investigation notes and reports—questions about who or what controls our data and how it’s used are ever present. AI systems survive and thrive on information, and that intersection of AI and privacy elevates the need for data protection.

Recent regulations issued by the California Privacy Protection Agency (CPPA) under the California Consumer Privacy Act (CCPA) begin to erect those protections. Among its various provisions, the CCPA now specifically addresses automated decision-making technologies (ADMT), attempting to bring transparency to these tools and to give consumers rights to, among other things, push back on algorithms making significant decisions about them.

As a starting point, it is important to define ADMT. Under the CCPA, it means any technology that processes personal information and uses computation to replace, or substantially replace, human decision-making. For this purpose, “replace” means to make a decision without human involvement. To qualify as human involvement, a human must:

  1. know how to interpret and use the technology’s output to make the decision;
  2. review and analyze the output of the technology, and any other information that is relevant to make or change the decision; and
  3. have the authority to make or change the decision based on their analysis in (2).

CCPA-covered businesses that use ADMT to make “significant decisions” about consumers have several new compliance obligations to navigate. A “significant decision” is defined as a decision that has important consequences for a consumer’s life, opportunities, or access to essential services. CCPA regulations define these decisions as those that result in the provision or denial of:

  • Financial or lending services (e.g., credit approval, loan eligibility)
  • Housing (e.g., rental applications, mortgage decisions)
  • Education enrollment or opportunities (e.g., admissions decisions)
  • Employment or independent contracting opportunities or compensation (e.g., hiring, promotions, work assignments)
  • Healthcare services (e.g., treatment eligibility, insurance coverage)

These decisions are considered “significant” because they directly affect a consumer’s economic, health, or personal well-being.

When such businesses use ADMT to make significant decisions, they generally must do the following:

  • Provide an opt-out right for consumers.
  • Provide a pre-use notice that clearly explains the business’s use of ADMT, in plain language.
  • Provide consumers with the ability to request information about the business’s use of ADMT.

Businesses using ADMT for significant decisions before January 1, 2027, must comply by January 1, 2027. Businesses that begin using ADMT after January 1, 2027, must comply immediately when the use begins.

Businesses will need to examine these new requirements carefully, including how they fit into the existing CCPA compliance framework, along with exceptions that may apply. For example, in the case of a consumer’s right to opt-out of ADMT, a business may not be required to make that right available.

If a business provides consumers with a method to appeal the ADMT decision to a human reviewer who has the authority to overturn the decision, opt-out is not required. Additionally, the right to opt-out of ADMT in connection with certain admission, acceptance, or hiring decisions is not required if the following are satisfied (both exceptions are sketched in code after the lists below):

  • the business uses ADMT solely for the business’s assessment of the consumer’s ability to perform at work or in an educational program to determine whether to admit, accept, or hire them; and
  • the ADMT works as intended for the business’s proposed use and does not unlawfully discriminate based upon protected characteristics.

Likewise, the right to opt-out of ADMT is not required for certain allocation/assignment of work and compensation decisions, if the business:

  • uses the ADMT solely for the business’s allocation/assignment of work or compensation; and
  • the ADMT works for the business’s purpose and does not unlawfully discriminate based upon protected characteristics.
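Because these exceptions stack, it can help to see the opt-out analysis as explicit logic. The sketch below is a simplified, illustrative reading of the rules described above; each boolean input stands in for a fact-specific legal determination.

```python
def admt_opt_out_required(
    human_appeal_with_authority: bool,        # reviewer can overturn the decision
    solely_admission_acceptance_or_hiring: bool,
    solely_work_allocation_or_compensation: bool,
    works_as_intended: bool,
    no_unlawful_discrimination: bool,
) -> bool:
    """Simplified sketch of when the ADMT opt-out right must be offered."""
    # Exception 1: a human appeal route with authority to overturn
    if human_appeal_with_authority:
        return False
    # Exceptions 2 and 3: narrow-use exceptions, each conditioned on the tool
    # working as intended and not unlawfully discriminating
    narrow_use = (
        solely_admission_acceptance_or_hiring
        or solely_work_allocation_or_compensation
    )
    if narrow_use and works_as_intended and no_unlawful_discrimination:
        return False
    # Otherwise, the opt-out right generally must be provided
    return True

# Example: ADMT used solely to assess job applicants, validated as working
# as intended and non-discriminatory -> opt-out not required
assert not admt_opt_out_required(False, True, False, True, True)
```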

As many businesses are realizing, successfully deploying AI requires a coordinated approach that goes beyond achieving the desired output. It includes understanding a complex regulatory environment, of which data privacy and security is a significant part.