As we have discussed in prior posts, AI-enabled smart glasses are rapidly evolving from niche wearables into powerful tools with broad workplace appeal — but their innovative capabilities bring equally significant legal and privacy concerns.

  • In Part 1, we addressed compliance issues that arise when these wearables collect biometric information.
  • In Part 2, we covered all-party consent requirements and AI notetaking technologies.
  • In Part 3, we considered broader privacy and surveillance issues, including from a labor law perspective.

In this Part 4, we consider the potentially vast amount of personal and other confidential data that may be collected, visually and audibly, through everyday use of this technology. Cybersecurity and data security risks more broadly pose another major and often underestimated exposure associated with this technology.

The Risk

AI smart glasses collect, analyze, and transmit enormous volumes of sensitive data—often continuously, and typically transmitting it to cloud-based servers operated by third parties. This creates a perfect storm of cybersecurity risk, regulatory exposure, and breach notification obligations under laws in all 50 states, as well as the CCPA, GDPR, and numerous sector-specific regulations, such as HIPAA for the healthcare industry.

Unlike traditional cameras or recording devices, AI glasses are designed to collect and process data in real time. Even when users believe they are not “recording,” the devices may still be capturing visual, audio, and contextual information for AI analysis, transcription, translation, or object recognition. That data is frequently transmitted to third-party AI providers with unclear security controls, retention practices, and secondary-use restrictions.

Many AI glasses explicitly rely on third-party AI services. For example, Brilliant Labs’ Frame glasses use ChatGPT to power their AI assistant, Noa, and disclose that multiple large language models may be involved in processing. In practice, this means sensitive business conversations, images, and metadata may leave the organization entirely—often without IT, security, or legal teams fully understanding where the data goes or how it is protected.

Use Cases at Risk

  • Hospital workers going on rounds with their team equipped with AI glasses that access, capture, view, and record patients, charts, wounds, and family members in electronic format, triggering the HIPAA Security Rule and state law obligations
  • Financial services employees wearing AI glasses that capture customer financial data, account numbers, or investment information
  • Any workplace use involving personally identifiable information (PII), such as Social Security numbers, credit card data, or medical information, as well as confidential business information of the company and/or its customers
  • Attorneys and legal professionals using AI glasses during privileged communications, potentially risking waiver of attorney-client privilege
  • Employees connecting AI glasses to unsecured or public Wi-Fi networks, creating man-in-the-middle attack risks
  • Lost or stolen AI glasses that store unencrypted audio, video, or contextual data

Why It Matters

Data breaches involving biometric data, health information, or financial data carry outsized legal and financial consequences. With AI glasses, as a practical matter, an entity generally is less likely to face a large-scale data breach affecting hundreds of thousands or millions of people. However, the breach and exposure of sensitive patient images, discussions, or other data captured with AI glasses could be just as harmful, if not more so, to the reputation of a health system, for example, than an attack by a criminal threat actor. Beyond reputational harm, incident response costs, litigation, and regulatory penalties also remain significant risk factors.

Shadow AI (the unauthorized use of artificial intelligence tools by employees in the workplace) also poses potential data security, breach, and third-party risks. Many devices sync automatically to consumer cloud accounts with security practices that employers neither control nor audit. When an employee uses personal AI glasses for work, fundamental questions often go unanswered: Where is the data stored? Is it encrypted? Who has access? How long is it retained? Is it used to train AI models?

Finally, the use of AI glasses can diminish the effectiveness of a powerful data security tool – data minimization. Businesses will need to grapple with whether constant, ambient data collection and recording aligns with data minimization, a principle woven into data privacy laws such as the California Consumer Privacy Act.

Practical Compliance Considerations

  • Implement clear policies: Be deliberate about whether to permit these wearables in the workplace and, if so, establish policies limiting when and where they may be used, what recording features can be activated, and under what circumstances.
  • Perform an assessment: Conduct security and privacy assessments of specific AI glasses models before deployment
  • Understand third-party service provider risks: Review security documentation, including encryption practices, access controls, and incident response commitments
  • Understand obligations to customers: Review services agreements concerning the collection, processing, and security obligations for handling customer personal and confidential business information
  • Update incident response plans: Factor in wearable device compromises
  • For HIPAA Covered Entities and Business Associates: Confirm that AI glasses meet HIPAA requirements
  • Evaluate cyber insurance coverage: Assess whether your policy (assuming you have a cyber policy!) covers breaches involving wearable technology and AI-related risks

Conclusion

AI smart glasses may feel futuristic and convenient, but from a data security and compliance perspective, they dramatically expand an organization’s attack surface. Without careful controls, these devices can quietly introduce breach risks, third-party data sharing, and regulatory exposure that outweigh their perceived benefits.

The key is to approach the deployment of AI glasses (and deployment of similar technologies) with eyes wide open—understanding both the capabilities of the technology and the complex legal frameworks that govern its use. With thoughtful policies, robust technical controls, ongoing compliance monitoring, and respect for privacy rights, organizations can harness the benefits of AI glasses while managing the risks.

As we have discussed in prior posts, AI-enabled smart glasses are rapidly evolving from niche wearables into powerful tools with broad workplace appeal — but their innovative capabilities bring equally significant legal and privacy concerns. In Part 1, we addressed compliance issues that arise when these wearables collect biometric information. In Part 2, we covered all-party consent requirements and AI notetaking technologies.

In this Part 3, we consider broader privacy and surveillance issues, including from a labor law perspective. Left uncontrolled, the nature and capabilities of AI smart glasses open the door to a range of circumstances in which legal requirements as well as societal norms could be violated, even inadvertently. At the same time, a pervasive surveillance environment fueled by technologies such as AI smart glasses may spur arguments by some employees that their right to engage in protected concerted activity has been infringed.

The Risk

When employers provide AI glasses to employees or permit their use in the workplace, they can potentially create continuous and/or intrusive surveillance conditions that may violate the privacy rights of individuals they encounter, including employees, customers, and others. Various state statutes and common law doctrines limit surveillance, and new laws are emerging that target workplace surveillance technologies. For example, California Assembly Bill 1331, introduced in early 2025, sought to limit employer surveillance and enhance employee privacy. The bill would have banned monitoring in private off-duty spaces (like bathrooms and lactation rooms) and prohibited surveillance of homes or personal vehicles. California Governor Newsom vetoed this bill in October.

However, other California law, notably the California Consumer Privacy Act (CCPA), seeks to regulate surveillance involving certain personal information. Under the CCPA, continuous surveillance may trigger a risk assessment obligation. See more about that here. The CCPA and the comprehensive privacy laws of several other states require covered entities to communicate about the personal information they collect from residents of those states. Covered entities that permit employees to use these devices in the course of their employment may need to better understand the type of personal information those employees’ glasses are collecting.

The National Labor Relations Act (NLRA) generally establishes a right of employees to act with co-workers to address work-related issues. Widespread surveillance and recording could chill protected concerted activity – employees might be less likely to engage with other employees about working conditions under such circumstances. In addition, introducing AI glasses in the workplace may trigger an obligation to bargain under the NLRA.

Relevant Use Cases

  • Warehouse workers using AI glasses for inventory management that also track movement patterns, productivity metrics, and conversations of coworkers
  • School employees who use AI glasses while interacting with minor students in a range of circumstances
  • Field service technicians wearing glasses that record all customer interactions as well as communications with coworkers
  • Office workers using AI glasses with note-taking features during internal meetings, capturing discussions among employees
  • Healthcare workers in a variety of settings, purposefully or inadvertently, capturing images or data of patients and their families
  • Manufacturing employees whose glasses document work processes while also recording conversations with coworkers

Why It Matters

Connecticut, Delaware, and New York require employers to notify employees of certain electronic monitoring. California’s CCPA gives employees specific rights over their personal information, including the right to know what’s collected and the right to deletion. These protections were strengthened in recently updated regulations under the California Privacy Rights Act, which created, among other things, an obligation to conduct and report on risk assessments performed in connection with certain surveillance activities.

Union environments face additional scrutiny. Surveillance may constitute an unfair labor practice requiring collective bargaining. The NLRB has issued guidance limiting employers’ ability to ban workplace recordings because such bans can interfere with protected rights. However, continuous AI-powered surveillance could still create a chilling effect that violates labor law.

Practical Compliance Considerations

  • Implement clear policies: Be deliberate about whether to permit these wearables in the workplace and, if so, establish policies limiting when and where they may be used, what recording features can be activated, and under what circumstances.
  • Provide notice: Provide written notice about AI glasses capabilities, including what data is collected, how it’s processed, and how it may be used.
  • Perform an assessment: Conduct privacy impact/risk assessments before deploying AI glasses in the workplace, including when interacting with customers.
  • Consider bargaining obligations, protected concerted activity rights: If deploying AI glasses in union environments, engage in collective bargaining about their use and assess protected concerted activity rights.
  • Establish technical limits and safeguards: Consider implementing technical controls like automatic disabling of recording in break rooms, bathrooms, and areas designated for private conversations.

Conclusion

AI glasses represent transformative technology with genuine business value, from hands-free information access to enhanced productivity and innovative customer experiences. The 210% growth in smart glasses shipments in 2024 demonstrates their appeal. But the legal risks are real and growing.

The key is to approach the deployment of AI glasses (and deployment of similar technologies) with eyes wide open—understanding both the capabilities of the technology and the complex legal frameworks that govern its use. With thoughtful policies, robust technical controls, ongoing compliance monitoring, and respect for privacy rights, organizations can harness the benefits of AI glasses while managing the risks.

As we explored in Part 1 of this series, AI-enabled smart glasses are rapidly evolving from niche wearables into powerful tools with broad workplace appeal — but their innovative capabilities bring equally significant legal and privacy concerns. Modern smart glasses blend high-resolution cameras, always-on microphones, and real-time AI assistants into a hands-free wearable that can capture, analyze, and even transcribe ambient information around the wearer. These features — from continuous audio capture to automated transcription — create scenarios where bystanders (co-workers, customers, etc.) may be recorded or have their conversations documented without ever knowing it, raising fundamental questions about consent and the boundaries of lawful observation.

Part 2 shifts focus to how these core capabilities intersect with consent requirements and note-taking practices under federal and state wiretapping and recording laws. In many jurisdictions, recording or transcribing a conversation without the express permission of all participants — particularly where devices can run discreetly in the background — can trigger two-party (or all-party) consent obligations and potential statutory violations. Likewise, the promise of AI-assisted note taking — where every spoken word in a meeting could be saved, indexed, and shared — brings not just operational benefits but significant legal and business risk. Understanding how the unique sensing and recording features of smart glasses intersect with these consent and notetaking issues is essential for any organization contemplating deployment.

The Risk

AI glasses with continuous recording, AI note-taking, or voice transcription capabilities can easily violate state wiretapping laws. Twelve states require all parties to consent to the audio recording of confidential communications, including California, Connecticut, Florida, Illinois, Maryland, Massachusetts, Montana, New Hampshire, Pennsylvania, and Washington. Even in one-party consent states, recording in locations where individuals have a reasonable expectation of privacy can violate surveillance laws. Going one step further, consider the possibility of the user being close enough to record a conversation between two unrelated persons.

The rise of AI note-taking capabilities in smart glasses makes this risk particularly acute. Unlike traditional recording that often requires deliberate action, AI glasses can passively capture and transcribe conversations throughout the day, creating permanent searchable records of discussions that participants never knew were being documented. Smart glasses that record continuously with no visible indicator amplify this concern.

Relevant Use Cases

  • Sales representatives wearing AI glasses that automatically transcribe client meetings without explicit consent from all parties
  • Managers using glasses with AI note-taking features during performance reviews, disciplinary meetings, or interviews
  • Medical professionals recording patient consultations through smart glasses for AI-generated documentation
  • Employees wearing glasses during phone calls where the other party is in a two-party consent state
  • Anyone wearing recording-capable glasses in restrooms, locker rooms, medical facilities, or other areas with heightened privacy expectations
  • Workers using AI transcription features during confidential business discussions or trade secret conversations
  • OSHA inspectors using AI glasses (announced for expanded deployment in 2025) to record workplace inspections without proper protocols

Why It Matters

Violations of two-party consent laws carry criminal penalties, including potential jail time, as well as civil liability. The fact that many AI glasses lack obvious recording indicators—or have only tiny LED lights that are easily missed—compounds the risk. AI-generated transcripts created without consent or even awareness raise a myriad of issues, some of which are outlined here. The ease with which these devices could continuously record and transcribe conversations raises particular concerns relating to increasing emphasis and regulation directed at data minimization.

Practical Compliance Considerations

The compliance challenges surrounding AI glasses are significant, but manageable with proper planning:

  • Implement clear policies: Develop clear policies about when and where AI glasses with recording capabilities can be worn
  • Get consent: Obtain explicit verbal or written consent from all parties before activating recording features—consent banners on video calls may not suffice for glasses
  • Provide notice: Provide visible notification that recording is occurring (though many AI glasses lack adequate indicators)
  • Establish technical limits and safeguards: Implement geofencing or technical controls to automatically disable recording features in prohibited areas
  • Monitor usage: Maintain detailed logs of when recording features are activated and by whom
  • Train users: Train employees on state-specific wiretapping laws, especially when traveling or conducting interstate communications
  • Increase awareness of device features and capabilities: For AI note-taking features, ensure participants know transcription is occurring and can opt out
  • Leverage existing policies: Apply existing privacy and security controls, such as access and retention, to transcripts generated from the wearables

Conclusion

AI glasses represent transformative technology with genuine business value, from hands-free information access to enhanced productivity and innovative customer experiences. The 210% growth in smart glasses shipments in 2024 demonstrates their appeal. But the legal risks are real and growing.

The key is to approach the deployment of AI glasses (and deployment of similar technologies) with eyes wide open—understanding both the capabilities of the technology and the complex legal frameworks that govern its use. With thoughtful policies, robust technical controls, ongoing compliance monitoring, and respect for privacy rights, organizations can harness the benefits of AI glasses while managing the risks.

Key Takeaways

  • Outlines basic steps to determine whether a business may need to perform a risk assessment under the California Consumer Privacy Act (CCPA) in connection with its use of dashcams
  • Provides a resource for exploring the basic requirements for conducting and reporting risk assessments

If you have not reviewed the recently approved, updated CCPA regulations, you might want to soon. There are several new requirements, along with many modifications and clarifications to existing rules. In this post, we discuss a new requirement – performing risk assessments – in the context of dashcam and related fleet management technologies.

In short, when performing a risk assessment, the business needs to assess whether the risk to consumer privacy from the processing of personal information outweighs the benefits to consumers, the business, others, and the public, and, if so, restrict or prohibit that processing, as appropriate.

Of course, the first step to determine whether a business needs to perform a risk assessment under the CCPA is to determine whether the CCPA applies to the business. We discussed those basic requirements in Part 1 of our post on risk assessments under the CCPA.

If you are still reading, you have probably determined that your organization is a “business” covered by the CCPA and, possibly, your business is using certain fleet management technologies, such as dashcam or other vehicle tracking technologies. Even if that is not the case, the remainder of this post may be of interest for “businesses” under the CCPA that are curious about examples applying the new risk assessment requirement.

As discussed in Part 1 of our post on the basics of CCPA risk assessments, businesses are required to perform risk assessments when their processing of personal information presents “significant risk” to consumer privacy. The regulations set out certain types of processing activities involving personal information that would trigger a risk assessment. Depending on the nature and scope of the dashcam technology deployed, a business should consider whether a risk assessment is required.

Dashcams and similar devices increasingly come with an array of features. As the name suggests, these devices include cameras that can record activity inside and outside the vehicle. They also can be equipped with audio recording capabilities permitting the recording of voice in and outside the vehicle. Additionally, dashcams can play a role in logistics, as they often include GPS technology, and they can contribute significantly to worker and public safety through telematics. In general, telematics help businesses understand how the vehicle is being driven – acceleration, hard stops, swerving, etc. More recently, dashcams can have biometrics and AI technologies embedded in them. A facial scan can help determine if the driver is authorized to be driving that vehicle. AI technology also might be used to help determine if the driver is driving safely – is the driver falling asleep, eating, using their phone, wearing a seatbelt, and so on.

Depending on how a dashcam is equipped or configured, businesses subject to the CCPA should consider whether the dashcam involves the processing of personal information that requires a risk assessment.

For instance, a risk assessment is required when processing “sensitive personal information.” Remember that sensitive personal information includes, among other elements, precise geolocation data and biometric information used to identify an individual. While the regulations include an exception for certain employment-related processing, businesses would have to assess whether those exceptions apply.

Another example of processing personal information that requires a risk assessment is profiling a consumer through “systematic observation” of that consumer when they are acting in their capacity as an educational program applicant, job applicant, student, employee, or independent contractor for the business. The regulations define “systematic observation” to mean:

methodical and regular or continuous observation. This includes, for example, methodical and regular or continuous observation using Wi-Fi or Bluetooth tracking, radio frequency identification, drones, video or audio recording or live-streaming, technologies that enable physical or biological identification or profiling; and geofencing, location trackers, or license-plate recognition.

The regulation also defines profiling as:

any form of automated processing of personal information to evaluate certain personal aspects (including intelligence, ability, aptitude, predispositions) relating to a natural person and in particular to analyze or predict aspects concerning that natural person’s performance at work, economic situation, health (including mental health), personal preferences, interests, reliability, predispositions, behavior, location, or movements.

Considering the range of use cases for vehicle/fleet tracking technologies, and depending on their capabilities and configurations, it is conceivable that in some cases the processing of personal information by such technology could be considered a “significant risk,” requiring a risk assessment under the CCPA.

In that case, Part 2 of our post on risk assessments outlines the steps a business needs to take to conduct a risk assessment, including what must be included in the required risk assessment report, and timely certifying the assessment to the California Privacy Protection Agency.

It is important to note that this is only one of a myriad of potential processing activities that businesses engage in that might trigger a risk assessment requirement. Businesses will need to identify those activities and assess next steps. If the business finds comparable activities, it may be able to minimize the risk assessment burden by conducting a single assessment for those comparable activities.

Again, the new CCPA regulations represent a fundamental shift toward proactive privacy governance under the CCPA. Rather than simply reacting to consumer requests and data breaches, covered businesses must now systematically evaluate and document the privacy implications of their data processing activities before they begin. With compliance deadlines approaching in 2026, organizations should begin now to establish the cross-functional processes, documentation practices, and governance structures necessary to meet these new obligations.

The California Privacy Protection Agency (CPPA) has adopted significant updates to the California Consumer Privacy Act (CCPA) regulations, which were formally approved by the California Office of Administrative Law on September 23, 2025. These comprehensive regulations address automated decision-making technology, cybersecurity audits, and risk assessments, with compliance deadlines beginning in 2026. Among these updates, the risk assessment requirements represent a substantial new compliance obligation for many businesses subject to the CCPA.

Of course, as a threshold matter, businesses must first determine whether they are subject to the CCPA. For businesses that are not sure of whether the CCPA applies to them, our earlier discussion here may be helpful. If your business is subject to the CCPA, read on.

When Is a Risk Assessment Required?

The new regulations require businesses to conduct risk assessments when their processing of personal information presents “significant risks” to consumer privacy. The CPPA has defined specific processing activities that trigger this requirement:

  • Selling or sharing personal information.
  • Processing “sensitive personal information.” However, there is a narrow exception for limited human resources-related uses such as payroll, benefits administration, and legally mandated reporting. Employers will have to examine carefully which activities are excluded and which are not. Sensitive personal information under the CCPA includes precise geolocation, racial or ethnic origin, religious beliefs, genetic data, biometric information, health information, sexual orientation, and citizenship status, among other categories.
  • Using automated decision-making technology (ADMT) to make significant decisions about consumers. Significant decisions include those resulting in the provision or denial of financial services, lending, housing, education enrollment, employment opportunities, compensation, or healthcare services. More on ADMT to come.
  • Profiling a consumer through “systematic observation” when they are acting in their capacity as an educational program applicant, job applicant, student, employee, or independent contractor for the business. Systematic observation means methodical and regular or continuous observation, such as through Wi-Fi or Bluetooth tracking, radio frequency identification, drones, video or audio recording or live-streaming, technologies that enable physical or biological identification or profiling; and geofencing, location trackers, or license-plate recognition. Businesses engaged in workplace monitoring and using performance management applications may need to consider those activities under this provision.
  • Profiling a consumer based upon their presence in a “sensitive location.” A sensitive location means the following physical places: healthcare facilities including hospitals, doctors’ offices, urgent care facilities, and community health clinics; pharmacies; domestic violence shelters; food pantries; housing/emergency shelters; educational institutions; political party offices; legal services offices; union offices; and places of worship.
  • Processing personal information to train ADMT for significant decisions, or to train facial recognition, biometric, or other technology to verify identity. This recognizes the heightened privacy risks associated with developing systems that may later be deployed at scale.

What is Involved in Completing a Risk Assessment?

For businesses engaged in activities with personal information that will require a risk assessment, it is important to note that there are a number of steps set forth in the new CCPA regulations for performing those assessments. These include:

  • Determining which stakeholders should be involved in the risk assessment process and the nature of that involvement.
  • Establishing appropriate purposes and objectives for conducting the risk assessment.
  • Satisfying timing and recordkeeping obligations.
  • Preparing risk assessment reports that meet certain content requirements.
  • Timely submitting certifications of required risk assessments to the CPPA.

In Part 2 of this post we will discuss the requirements above to help businesses that have to perform one or more risk assessments develop a process for doing so.

The new CCPA regulations represent a fundamental shift toward proactive privacy governance under the CCPA. Rather than simply reacting to consumer requests and data breaches, covered businesses must now systematically evaluate and document the privacy implications of their data processing activities before they begin. With compliance deadlines approaching in 2026, organizations should begin now to establish the cross-functional processes, documentation practices, and governance structures necessary to meet these new obligations.

Businesses across many industries naturally want to showcase their satisfied customers. Whether it’s a university featuring successful graduates, a retailer highlighting happy shoppers, or a healthcare facility showcasing thriving patients, these real-world testimonials can be powerful marketing tools. However, when it comes to healthcare providers subject to HIPAA, using patient images and information for promotional purposes requires careful navigation of both federal privacy rules and state law requirements.

In a recent case, the failure to comply with these requirements resulted in a $182,000 fine and a two-year compliance program for a Delaware nursing home, according to the resolution agreement.

The Office for Civil Rights (OCR), which enforces the HIPAA Privacy and Security Rules, recently announced an enforcement action that serves as an important reminder of these obligations. The case involved a nursing home that posted photographs of approximately 150 facility residents over a period of time to its social media page. These postings were part of a campaign to highlight the success residents were achieving at the nursing home. When a resident complained to OCR, the agency investigated and found the covered entity had not obtained the required HIPAA authorizations or complied with breach notification requirements. The enforcement actions that followed underscore that even seemingly benign marketing practices can trigger significant compliance issues under HIPAA.

Understanding HIPAA’s Authorization Requirements

Under HIPAA, covered entities may generally use and disclose protected health information (PHI) for treatment, payment, and healthcare operations, and certain other purposes, without patient authorization. Marketing activities, however, fall outside these permissible uses. In the OCR investigation, the covered entity didn’t simply share photographs—it also disclosed information about residents’ care to tell “success stories” of patients at their facilities. This combination of visual identification and health information, according to the OCR, constituted a use of PHI requiring express patient authorization under HIPAA.

The authorization requirement isn’t merely a technicality. HIPAA authorizations must meet specific regulatory standards, such as a clear description of the information to be disclosed, the purpose of the disclosure, and a date or event after which the authorization will cease to be valid. A patient’s informal agreement or willingness to participate doesn’t satisfy these requirements.

The Breach Notification Complication

The OCR investigation revealed another compliance failure: not providing the required breach notification. Under HIPAA’s Breach Notification Rule, a disclosure not permitted under the Privacy Rule can constitute a reportable breach requiring notification to affected individuals and potentially to OCR and the media. This means that a marketing misstep can go beyond just failing to get an authorization.

Lessons from Social Media Cases

This isn’t an isolated concern. Similar issues have arisen when healthcare providers, such as dentists and other practitioners, responded to patient complaints on platforms like Google and Yelp. Well-intentioned responses that acknowledge treating a patient or try to resolve the patient’s concerns can violate HIPAA. These cases make clear that covered entities must think carefully about any use or disclosure of patient information outside the core functions of treatment, payment, and healthcare operations, even when the patient may have disclosed the same information already.

State Law Adds Another Layer, Including for Regulation of AI and Biometrics

HIPAA compliance alone may not be sufficient, particularly when potentially more stringent protections exist at state law. Many states have laws and common law obligations requiring consent before using a person’s image or likeness for commercial purposes, as well as specifics concerning what that consent should look like. Covered entities must ensure they’re meeting both HIPAA authorization requirements and any applicable state law consent requirements. They also should be sure to understand the technologies they are using, including whether they are inadvertently collecting biometric data.

Looking ahead, covered entities should be aware that several states have begun enacting or amending laws addressing how businesses can use digital replicas of individuals, particularly in the AI context. As healthcare organizations increasingly adopt AI technologies, questions about using patient images or data to create or train AI systems will require careful analysis under both existing HIPAA rules and these emerging state laws.

The Bottom Line

The message for HIPAA covered entities is clear: think before you post, promote, or publicize the good work you do for your patients. Even when patients are willing participants in marketing efforts, formal HIPAA authorizations and state law consents may be required. The cost of non-compliance—including financial settlements, required corrective action plans, and reputational harm—far exceeds the investment in proper authorization processes. When in doubt about whether patient information can be used for a particular purpose, covered entities should consult with privacy counsel to ensure full compliance with both federal and state requirements.

For businesses subject to the California Consumer Privacy Act (CCPA), a compliance step often overlooked is the requirement to annually update the business’s online privacy policy. Under Cal. Civ. Code § 1798.130(a)(5), CCPA-covered businesses must, among other things, update their online privacy policies at least once every 12 months. Note that CCPA regulations establish content requirements for online privacy policies, one of which is that the policy must include “the date the privacy policy was last updated.” See 11 CCR § 7011(e)(4).
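For illustration only (not legal advice), the 12-month cadence can be tracked as a simple date comparison against the policy’s own “last updated” disclosure. The helper name and the 365-day approximation are assumptions for this sketch:

```python
from datetime import date, timedelta

def policy_update_due(last_updated: date, today: date = None) -> bool:
    """Return True if the policy's last update is more than 12 months old.

    Uses a 365-day approximation of the CCPA's "at least once every
    12 months" requirement; confirm the exact measure with counsel.
    """
    today = today or date.today()
    return today - last_updated > timedelta(days=365)

# A policy last updated in January 2024 is overdue by mid-2025.
print(policy_update_due(date(2024, 1, 15), today=date(2025, 6, 1)))  # True
```

A scheduled check like this can feed an internal compliance calendar, but the substantive review (what changed in data practices) still requires human judgment.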

As businesses continue to grow, evolve, adopt new technologies, or otherwise make online and offline changes in their business, practices, and/or operations, CCPA required privacy policies may no longer accurately or completely reflect the collection and processing of personal information. Consider, for example, the adoption of emerging technologies, such as so-called “artificial intelligence” tools. These tools may be collecting, inferring, or processing personal information in ways that were not contemplated when preparing the organization’s last privacy policy update.

The business also may have service providers that collect and process personal information on behalf of the business in ways that are different than they did when they began providing services to the business.

Simply put: If your business (or its service providers) has adopted any new technologies or otherwise changed how it collects or processes personal information, your privacy policy may need an update.

Practical Action Items for Businesses

Here are some steps businesses can take to comply with the annual privacy policy review and update requirement under the CCPA:

  • Inventory Personal Information
Reassess what categories of personal information your organization collects, processes, sells, and shares. Consider whether new categories—such as biometric, geolocation, or video data—have been added.
  • Review Data Use Practices
    Confirm whether your uses of personal information have changed since the last policy update. This includes whether you are profiling, targeting, or automating decisions based on the data.
  • Assess Adoption of New Technologies, Such as AI Tools
    Has your business adopted any new technologies or systems, such as AI applications? Examples may include:
    • AI notetakers, transcription, or summarization tools for use in meetings (e.g., Otter, Fireflies)
    • AI used for chatbots, personalized recommendations, or hiring assessments
  • Evaluate Third Parties and Service Providers
    Are you sharing or selling information to new third parties? Has your use of service providers changed, or have service providers changed their practices around the collection or processing of personal information?
  • Review Your Consumer Rights Mechanisms
    Are the methods for consumers to submit access, deletion, correction, or opt-out requests clearly stated and functioning properly?

These are only a few of the potential recent developments that may drive changes in an existing privacy policy. Additional considerations may apply to businesses in certain industries, and to particular departments within those businesses. Here are a few examples:

Retail Businesses

  • Loyalty programs collecting purchase history and predictive analytics data.
  • More advanced in-store cameras and mobile apps collecting biometric or geolocation information.
  • AI-driven customer service bots that gather interaction data.

Law Firms

  • Use of AI notetakers or transcription tools during client calls.
  • Remote collaboration tools that collect device or location data.
  • Marketing platforms that profile client interests based on website use.

HR Departments (Across All Industries)

  • AI tools used for resume screening and candidate profiling.
  • Digital onboarding platforms collecting sensitive identity data.
  • Employee productivity and monitoring software that tracks usage, productivity, or location.

The online privacy policy is not just a static compliance document—it’s a dynamic reflection of your organization’s data privacy practices. As technologies evolve and regulations expand, taking time once a year to reassess and update your privacy disclosures is not only a legal obligation in California but a strategic risk management step. And, while we have focused on the CCPA in this article, inaccurate or incomplete online privacy policies can elevate compliance and litigation risks under other laws, including the Federal Trade Commission Act and state protections against deceptive and unfair business practices.

Montana recently amended its privacy law through Senate Bill 297, effective October 1, 2025, strengthening consumer protections and requiring businesses to revisit their privacy policies that apply to residents of Montana. Importantly, it lowered the threshold for applicability to persons and businesses that control or process the personal data of 25,000 or more consumers (previously 50,000), unless the controller uses that data solely for completing payments. For those who derive more than 25% of gross revenue from the sale of personal data, the threshold is now 15,000 or more consumers (previously 25,000).
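The amended thresholds reduce to a short conditional check. The sketch below is a simplification for illustration only—the statute contains additional nuances and exemptions, and the function name and parameters are assumptions of this example:

```python
def montana_law_applies(consumers: int,
                        sale_revenue_share: float,
                        payments_only: bool = False) -> bool:
    """Simplified sketch of SB 297's amended applicability thresholds.

    - 25,000+ consumers, unless the data is used solely to complete
      payments; or
    - 15,000+ consumers if more than 25% of gross revenue comes from
      selling personal data.
    """
    if sale_revenue_share > 0.25 and consumers >= 15_000:
        return True
    if consumers >= 25_000 and not payments_only:
        return True
    return False

print(montana_law_applies(30_000, 0.0))   # True: over the general threshold
print(montana_law_applies(16_000, 0.30))  # True: data-sale prong applies
```

Businesses near either threshold should confirm their consumer counts and revenue composition annually, since crossing a line mid-year changes their obligations.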

Under the amendments, nonprofits are no longer exempt unless they are established to detect and prevent insurance fraud. Insurers, by contrast, are now exempt.

When a consumer requests confirmation that a controller is processing their data, the controller may no longer disclose the data itself but instead must identify whether it possesses: (1) social security numbers, (2) ID numbers, (3) financial account numbers, (4) health insurance or medical identification numbers, (5) passwords, security questions, or answers, or (6) biometric data.

Privacy notices must now include: (1) personal data categories, (2) the controller’s purpose in possessing personal data, (3) categories the controller sells or shares with third parties, (4) categories of third parties, (5) contact information for the controller, (6) an explanation of consumer rights and how to exercise them, and (7) the date the privacy notice was last updated. Privacy notices must be accessible to and usable by people with disabilities and available in each language in which the controller provides a product or service. Any material changes to the controller’s privacy notice or practices require notice to affected consumers and the opportunity to withdraw consent. Notices need not be Montana-specific, but controllers must conspicuously post them on websites, in mobile applications, or through whatever medium the controller uses to interact with customers.

The amendments further clarified information the attorney general must publicly provide, including an online mechanism for consumers to file complaints. Further, the attorney general may now issue civil investigative demands and need not issue any notice of violation or provide a 60-day period for the controller to correct the violation.

In today’s hybrid and remote work environment, organizations are increasingly turning to digital employee management platforms that promise productivity insights, compliance enforcement, and even behavioral analytics. These tools—offered by a growing number of vendors—can monitor everything from application usage and website visits to keystrokes, idle time, and screen recordings. Some go further, offering video capture, geolocation tracking, AI-driven risk scoring, sentiment analysis, and predictive indicators of turnover or burnout.

While powerful, these platforms also carry real legal and operational risks if not assessed, configured, and governed carefully.

Capabilities That Go Beyond Traditional Monitoring

Modern employee management tools have expanded far beyond “punching in,” reviewing emails, and tracking websites visited. Depending on the features selected and how the platform is configured, employers may have access to:

  • Real-time screen capture and video recording
  • Automated time tracking and productivity scoring
  • Application and website usage monitoring
  • Keyword or behavior-based alerts (e.g., data exfiltration risks)
  • Behavioral biometrics or mouse/keyboard pattern analysis
  • AI-based sentiment or emotion detection
  • Geolocation or IP-based presence tracking
  • Surveys and wellness monitoring tools

Not all of these tools are deployed in every instance, and many vendors allow companies to configure what they monitor. Important questions arise, such as who at the company decides how to configure the tool, what data is collected, whether that collection is permissible, who has access to the data, how decisions are made using that data, and what safeguards are in place to protect it. But even limited use can present privacy and employment-related risks if not governed effectively.

Legal and Compliance Risks

While employers generally have some leeway to monitor their employees on company systems, existing and emerging law (particularly concerning AI), along with best practices, employee relations, and other factors, should inform the development of monitoring guidelines.

  • Privacy Laws: State and international privacy laws (like the California Consumer Privacy Act, GDPR, and others) may require notice, consent, data minimization, and purpose limitation. Even in the U.S., where workplace privacy expectations are often lower, secretive or overly broad monitoring can trigger complaints or litigation.
  • Labor and Employment Laws: Monitoring tools that disproportionately affect certain groups or are applied inconsistently may prompt discrimination or retaliation claims. Excessive monitoring activities could trigger bargaining obligations and claims concerning protected concerted activity.
  • AI-Driven Features: Platforms that employ AI or automated decision-making—such as behavioral scoring or predictive analytics—may be subject to emerging AI-specific laws and guidance, such as New York City’s Local Law 144, Colorado’s AI Act, and AI regulations recently approved by the California Civil Rights Department under the Fair Employment and Housing Act (FEHA) concerning the use of automated decision-making systems.
  • Data Security and Retention: These platforms collect sensitive behavioral data. If poorly secured or over-retained, that data could become a liability in the event of a breach or internal misuse.

Governance Must Extend Beyond IT

Too often, these tools are procured and managed primarily, sometimes exclusively, by IT or security teams without broader organizational involvement. Given the nature of data these tools collect and analyze, as well as their potential impact on members of a workforce, a cross-functional approach is a best practice.

Involving stakeholders from HR, legal, compliance, data privacy, etc., can have significant benefits not only at the procurement and implementation stages, but also throughout the lifecycle of these tools. This includes regular reviews of feature configurations, access rights, data use, decision making, and staying abreast of emerging legal requirements.

Governance considerations should include:

  • Purpose Limitation and Transparency: Clear internal documentation and employee notices should explain what is being monitored, why, and how the information will be used.
  • Access Controls and Role-Based Permissions: Not everyone needs full access to dashboards or raw monitoring data. Access should be limited to what’s necessary and tied to a specific function.
  • Training and Oversight: Employees who interact with the monitoring dashboards must understand the scope of permitted use. Misuse of the data—whether out of personal curiosity, for retaliation, or otherwise outside policy—should be addressed appropriately.
  • Data Minimization and Retention Policies: Avoid “just in case” data collection. Align retention schedules with actual business need and regulatory requirements.
  • Ongoing Review of Vendor Practices: Some vendors continuously add or enable new features that may shift the risk profile. Governance teams should review vendor updates and periodically reevaluate what’s enabled and why.

A Tool, Not a Silver Bullet

Used thoughtfully, employee management platforms can be a valuable part of a company’s compliance and productivity strategy. But they are not “set it and forget it” solutions. The insights they provide can only be trusted—and legally defensible—if there is strong governance around their use.

Organizations must manage not only their employees, but also the people and tools managing their employees. That means recognizing that tools like these sit at the intersection of privacy, ethics, security, and human resources—and must be treated accordingly.

The Oklahoma State Legislature recently enacted Senate Bill 626, amending its Security Breach Notification Act, effective January 1, 2026, to address gaps in the state’s current cybersecurity framework (the “Amendment”). The Amendment includes new definitions, mandates reporting to the state Attorney General, clarifies compliance with similar laws, and provides revised penalty provisions, including affirmative defenses.

Definitions

The Amendment provides clearer definitions related to security breaches, specifying what constitutes “personal information” and “reasonable safeguards.”

  • Personal Information:  The existing definition for “Personal Information” was expanded to also include (1) a unique electronic identifier or routing code in combination with any required security code, access code, or password that would permit access to an individual’s financial account and (2) unique biometric data such as a fingerprint, retina or iris image, or other unique physical or digital representation of biometric data to authenticate a specific individual.
  • Reasonable Safeguards:  The Amendment provides an affirmative defense in a civil action under the law for individuals or entities that have “Reasonable safeguards” in place, which are defined as “policies and practices that ensure personal information is secure, taking into consideration an entity’s size and the type and amount of personal information. The term includes, but is not limited to, conducting risk assessments, implementing technical and physical layered defenses, employee training on handling personal information, and establishing an incident response plan”.

Mandated Reporting and Exceptions

In the new year, entities required to provide notice to impacted individuals under the law in case of a breach will also be required to notify the Attorney General. The notification must include specific details including, but not limited to, the type of personal information impacted, the nature of the breach, the number of impacted individuals, the estimated monetary impact of the breach to the extent it can be determined, and any reasonable safeguards the entity employs. The notification to the Attorney General must occur no more than 60 days after notifying affected residents.

However, breaches affecting fewer than 500 residents, or fewer than 1,000 residents in the case of credit bureaus, are exempt from the requirement to notify the Attorney General.
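Under stated assumptions, the Attorney General notification trigger can be sketched as a small threshold check. This is an illustrative simplification; the function name and parameters are inventions of this example, and individual notice obligations apply regardless of the outcome:

```python
def ag_notice_required(affected_residents: int,
                       is_credit_bureau: bool = False) -> bool:
    """Sketch of when the Oklahoma AG must be notified under the Amendment.

    Breaches affecting fewer than 500 residents (or fewer than 1,000
    residents for credit bureaus) are exempt from AG notification.
    """
    threshold = 1_000 if is_credit_bureau else 500
    return affected_residents >= threshold

print(ag_notice_required(499))               # False: below the exemption line
print(ag_notice_required(1_000, True))       # True: credit bureau threshold met
```

Entities tracking breach scope should count affected Oklahoma residents specifically, since the thresholds are resident-based rather than tied to total individuals affected.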

In addition, an exception from individual notification is provided for entities that comply with notification requirements under the Oklahoma Hospital Cybersecurity Protection Act of 2023 or the Health Insurance Portability and Accountability Act of 1996 (HIPAA) if such entities provide the requisite notice to the Attorney General.

What Entities Should Do Now

  1. Inventory data. Conduct an inventory to determine what personal information is collected given the newly covered data elements.
  2. Review and update policies and practices. Reevaluate and update current information security policies and procedures to ensure proper reasonable safeguards are in place. Moreover, to ensure that an entity’s policies and procedures remain reasonably designed, they should be periodically reviewed and updated.

If you have any questions about the revisions to Oklahoma’s Security Breach Notification Act or related issues, contact a Jackson Lewis attorney to discuss.