Whether it is facial recognition technology being used in connection with COVID-19 screening tools and in law enforcement, continued use of fingerprint-based time management systems, or the use of various biometric identifiers for physical security and access management, applications involving biometric identifiers and information in the public and private sectors continue to grow. Concerns about the privacy and security of that information continue to grow as well. Several states have laws protecting biometric information in one form or another, chief among them Illinois, but the desire for federal legislation remains.

Modeled after Illinois’s Biometric Information Privacy Act (BIPA), the National Biometric Information Privacy Act (Act), proposed by Sens. Jeff Merkley and Bernie Sanders, contains three key provisions:

  • A requirement to obtain consent from individuals prior to collecting and disclosing their biometric identifiers and information.
  • A private right of action against entities covered by the Act that violate its protections, entitling aggrieved individuals to recover, among other things, the greater of (i) $1,000 in liquidated damages or (ii) actual damages for negligent violations of the protections granted under the law.
  • An obligation to safeguard biometric identifiers and biometric information in a manner similar to how the organization safeguards other confidential and sensitive information, such as Social Security numbers.

The Act would apply to “private entities,” generally including a business of any size in possession of biometric identifiers or biometric information of any individual. Federal, state, and local government agencies and academic institutions are excluded from the Act.

Under the Act, private entities would be required to:

  • Develop and make available to the public a written policy establishing a retention schedule and guidelines for permanently destroying biometric identifiers and biometric information. That schedule may not extend more than one year beyond an individual’s last interaction with the entity, but destruction could be required earlier;
  • Collect biometric identifiers or biometric information only when needed to provide a service to the individuals or have another valid business reason;
  • Inform individuals that their biometric identifiers or biometric information is being collected or stored, along with the purpose and length of the collection, storage, or use, and obtain a written release from individuals that may not be combined with other consents, including an employment agreement;
  • Obtain a written release immediately prior to the disclosure of any biometric identifier or biometric information that includes the data to be disclosed, the reason for the disclosure, and the recipients of the data; and
  • Maintain the information using a reasonable standard of care.

Readers familiar with the BIPA in Illinois will recognize these requirements. Readers familiar with the California Consumer Privacy Act (CCPA) will recognize the following “Right to Know” as well. The Act would grant individuals the right to request certain information about biometric identifiers or biometric information collected by a covered entity within the preceding 12-month period. This information includes “specific pieces of personal information” and “the categories of third parties with whom the business shares the personal information.” The Act uses “personal information” but does not define it, leaving it unclear whether the term pertains only to biometric identifiers and biometric information.

Most troubling is the private right of action provision referenced above. The Act uses language similar to that in the BIPA, which has led to a flood of class action litigation, including a decision by the Illinois Supreme Court finding that plaintiffs need not show actual harm to recover under the law. The legislative process likely will result in some modification to the bill, assuming it even survives, a fate privacy laws tend to meet at the federal level. Nonetheless, we will continue to monitor the progress of this and similar legislation.

Pending legislation could create new consumer privacy rights in Massachusetts. Earlier this year, Senator Cynthia Creem presented An Act Relative to Consumer Data Privacy in the Massachusetts Senate. This Consumer Privacy Bill, SD.341, combines key aspects of the California Consumer Privacy Act (CCPA) and Illinois’s Biometric Information Privacy Act (BIPA). The bill would give Massachusetts consumers a private right of action if their personal information or biometric information (referred to separately in the bill) is improperly collected.

The Consumer Privacy Bill defines “biometric information” as an individual’s physiological, biological or behavioral characteristics, including an individual’s DNA, that can be used, singly or in combination with each other or with other identifying data, to establish individual identity. Biometric information includes, but is not limited to, imagery of the iris, retina, fingerprint, face, hand, palm, vein patterns, and voice recordings, from which an identifier template, such as a faceprint, a minutiae template, or a voiceprint, can be extracted, and keystroke patterns or rhythms, gait patterns or rhythms, and sleep, health, or exercise data that contain identifying information.

The bill defines “personal information” as any information relating to an identified or identifiable consumer. “Personal information” means information that identifies, relates to, describes, is capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular consumer or the consumer’s device.

However, this definition does not include publicly available information or consumer information that is deidentified or aggregate consumer information. Moreover, the bill creates an exception for a business collecting or disclosing personal information of the business’s employees, so long as the business is collecting or disclosing such information within the scope of its role as an employer. Therefore, unlike California’s CCPA, where the application to employee data remains an open question, under the current text of the Massachusetts bill it is clear that the law would not apply to employee data as defined above. That said, it is still early in the legislative process and the bill could be revised to include employee data.

The pending legislation would require businesses collecting a Massachusetts consumer’s personal information to notify the consumer of the following rights before the point of collection:

(1) The categories of personal information it will collect about that consumer;

(2) The business purposes for which the categories of personal information shall be used;

(3) The categories of third parties with whom the business discloses personal information;

(4) The business purpose for third party disclosure; and

(5) The consumer’s rights to request:

                  (A) A copy of the consumer’s personal information;

                  (B) The deletion of the consumer’s personal information; and

                  (C) An opt-out of third party disclosure.

In addition to this notice requirement, the bill would give consumers a statutory right to request that businesses collecting their personal information disclose to the consumer:

(1) The specific pieces of personal information the business has collected about that consumer;

(2) The sources from which the consumer’s personal information was collected;

(3) The names of third parties to whom the business disclosed the consumer’s personal information; and

(4) The business purpose for third party disclosure.

Businesses would have to make available to consumers two or more designated methods for submitting verifiable consumer requests for personal information, including, if the business maintains a web site, a link on the home page of the web site. A business receiving a verifiable consumer request generally must provide the requested information within 45 days of receiving the request, but may extend that period once by an additional 45 days, so long as the request for the extension is provided within the first 45-day period. The proposed legislation also creates a consumer right to request that a business delete any personal information collected from the consumer, and the right to opt out of third party disclosure at any time.
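For businesses that automate request tracking, the bill’s response-timing rule reduces to a small date calculation. The sketch below is illustrative only; the function names are hypothetical and not drawn from the bill’s text.

```python
from datetime import date, timedelta

# Sketch of the bill's response-timing rule: respond within 45 days of a
# verifiable consumer request; one 45-day extension is allowed if notice
# of the extension is given within the first 45-day period.
RESPONSE_WINDOW = timedelta(days=45)

def response_deadline(request_date: date, extended: bool = False) -> date:
    """Latest date the business's response is due."""
    deadline = request_date + RESPONSE_WINDOW
    if extended:
        deadline += RESPONSE_WINDOW
    return deadline

def extension_notice_deadline(request_date: date) -> date:
    """The extension itself must be requested within the first 45 days."""
    return request_date + RESPONSE_WINDOW
```

For a request received January 1, the initial response would be due February 15, and a properly noticed extension would push the deadline to April 1.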

The legislation would be enforceable both through a private right of action and by the Massachusetts Attorney General. For any violation of the act, a consumer could recover: (1) damages in an amount not greater than $750 per consumer per incident or actual damages, whichever is greater; (2) injunctive or declaratory relief; and (3) reasonable attorney fees and costs. The Attorney General would be authorized to obtain a temporary restraining order or preliminary or permanent injunction against a violation of the Act. In addition, the Attorney General may seek a civil penalty of not more than $2,500 for each violation or $7,500 for each intentional violation.

This Consumer Privacy Bill would impose administrative burdens on businesses, including an obligation to train employees, and would create new exposure to damages and penalties. Given the litigation we are seeing under BIPA, businesses collecting Massachusetts consumers’ personal information should monitor the progress of this legislation to determine whether they should begin preparations for complying with yet another consumer privacy provision.


In 2018, Delta paved the way in airport terminal development by introducing the first biometric terminal at Hartsfield-Jackson Atlanta International Airport, where passengers can use facial recognition technology from curb to gate. Delta now allows members of its Sky Club airport lounges to enter using fingerprints rather than a membership card or boarding pass. Other airlines use biometric data to verify travelers during the boarding process with a photo-capture. The photograph is then matched through biometric facial recognition technology to photos that were previously taken of the passengers for their passports, visas, or other government documentation.

Though the use of a fingerprint or facial scan aims to streamline and expedite the travel process and strengthen the security of air travel, it also presents heightened security risks for biometric data on a larger scale. As the use of biometric data increases, the effects of a data breach become more expansive. While it’s possible to change a financial account number, a driver’s license number, or even your Social Security number, you can’t change your fingerprint or your face, easily anyway. Furthermore, in the past, facial recognition software had not been able to accurately identify people of color, raising concerns that individuals may be racially profiled.

Yet, many argue that biometric-based technologies can be used to help solve vexing security and logistics challenges concerning travel. For example, in 2016, Congress authorized up to $1 billion collected from certain visa fees to fund implementation of biometric-based exit technology. That was followed by President Trump’s executive order signed in March 2017 directing the Department of Homeland Security to expedite implementation of a biometric entry-exit tracking system for all travelers to the United States. As it stands, we are likely to see a rapid expansion of biometric technology used by airlines and other businesses in the travel industry, so prepare your picture perfect travel face!

Notably, the use of biometric data is growing across all industries and in a variety of different applications – e.g., premises security, time management, systems access management. But so is the number of state laws intended to protect that data. States such as Illinois, Texas, and Washington are leading the way, with others sure to follow. Regulations include notice and consent requirements, mandates to safeguard biometric information, and obligations to notify individuals in the event biometric information is breached. And litigation is increasing. The Illinois Supreme Court recently handed down a significant decision, for example, concerning the ability of individuals to bring suit under the Illinois Biometric Information Privacy Act (BIPA). In short, individuals need not allege actual injury or adverse effect beyond a violation of his/her rights under BIPA. The decision is likely to increase the already significant number of suits, including putative class actions, filed under the BIPA.

Companies, regardless of industry, should be reevaluating their biometric use practices, and taking steps to comply with a growing body of law surrounding this sensitive information.

Earlier today, the Illinois Supreme Court handed down a significant decision concerning the ability of individuals to bring suit under the Illinois Biometric Information Privacy Act (BIPA). In short, individuals need not allege actual injury or adverse effect, beyond a violation of his/her rights under BIPA, in order to qualify as an “aggrieved” person and be entitled to seek liquidated damages, attorneys’ fees and costs, and injunctive relief under the Act.  Potential damages are substantial as the BIPA provides for statutory damages of $1,000 per negligent violation or $5,000 per intentional or reckless violation of the Act.  To date, no Illinois court has interpreted the meaning of “per violation,” but the majority of BIPA suits have been brought as class actions seeking statutory damages on behalf of each individual affected.

If they have not already done so, companies should immediately take steps to comply with the statute. That is, they should review their time management, point of purchase, physical security, or other systems that obtain, use, or disclose biometric information (any information, regardless of how it is captured, converted, stored, or shared, based on an individual’s retina or iris scan, fingerprint, voiceprint, or scan of hand or face geometry used to identify an individual) against the requirements under the BIPA. In the event they find technical or procedural gaps in compliance – such as not providing written notice, obtaining a release from the subject of the biometric information, obtaining consent to provide biometric information to a third party, or maintaining a policy and guidelines for the retention and destruction of biometric information – they need to quickly remedy those gaps.  For additional information on complying with the BIPA, please see our BIPA FAQs.

Companies were hoping that the Illinois Supreme Court would ultimately conclude, consistent with the underlying appellate decision, that in order for a plaintiff to bring a claim under the BIPA (i.e. in order for the plaintiff to be considered “aggrieved”) the plaintiff would have to allege actual harm or injury, and not just a procedural or technical violation of the statute.  In reversing and remanding the case, the Illinois Supreme Court held:

The duties imposed on private entities by section 15 of the Act (740 ILCS 14/15) regarding the collection, retention, disclosure, and destruction of a person’s or customer’s biometric identifiers or biometric information define the contours of that statutory right. Accordingly, when a private entity fails to comply with one of section 15’s requirements, that violation constitutes an invasion, impairment, or denial of the statutory rights of any person or customer whose biometric identifier or biometric information is subject to the breach. Consistent with the authority cited above, such a person or customer would clearly be “aggrieved” within the meaning of section 20 of the Act (740 ILCS 14/20) and entitled to seek recovery under that provision. No additional consequences need be pleaded or proved. The violation, in itself, is sufficient to support the individual’s or customer’s statutory cause of action.

The decision is likely to increase the already significant number of suits, including putative class actions, filed under the BIPA.  In the words of the Illinois Supreme Court, “[c]ompliance should not be difficult; whatever expenses a business might incur to meet the law’s requirements are likely to be insignificant compared to the substantial and irreversible harm that could result if biometric identifiers and information are not properly safeguarded; and the public welfare, security, and safety will be advanced.”

An Illinois nursing home is facing a putative class action lawsuit filed by a worker who argues that the facility’s required fingerprint scan for timekeeping poses a threat to workers’ privacy and violates Illinois’s Biometric Information Privacy Act (“BIPA”). From July 2017 to October 2017, at least 26 employment class actions based on the BIPA were filed in Illinois state court, and the filings show no sign of slowing.

Although some consider Illinois the leader in biometric data protection, other states have enacted laws similar to the BIPA, and still others are considering such legislation. Companies that want to implement technology that uses employee or customer biometric information (for timekeeping, physical security, validating transactions, or other purposes) need to be prepared. For more information on the nursing home case and advice on how to prepare when collecting biometric information, our comprehensive article is available here.

Below are additional resources to help navigate biometric information protection laws:

Not to be outdone by the recent attention to biometric information in Illinois, and the Prairie State’s Biometric Information Privacy Act (BIPA), Washington enacted a biometric data protection statute of its own, HB 1493, which became effective July 23, 2017.

What is notable about Washington’s new biometric information law?

  • It prohibits “persons” from “enrolling” “biometric identifiers” in a database for a “commercial purpose” without first providing notice, obtaining consent, or providing a mechanism to prevent the subsequent use of the biometric identifiers for a commercial purpose. Lots of definitions, more on that below.
  • The exact type of notice and consent should depend on the context, and notice must be given through a procedure reasonably designed to be readily available to affected individuals. Note that the law does not require notice and consent if the person collects, captures, or enrolls a biometric identifier and stores it in a biometric system, or otherwise, in furtherance of a security purpose.
  • In general, a person that has obtained a biometric identifier from an individual and enrolled that identifier may not sell, lease or otherwise disclose the identifier absent consent. There are, of course, some exceptions, such as the disclosure being necessary to provide a product requested by the individual. In addition, a person generally may not use or disclose a biometric identifier for a purpose that is materially inconsistent with the terms under which the identifier was originally provided.
  • Persons that possess biometric identifiers of individuals that have been enrolled for a commercial purpose must (i) have reasonable safeguards to protect against unauthorized access or acquisition to the identifiers, and (ii) not retain the identifiers for longer than is necessary to carry out certain functions, such as providing the product for which the identifier was acquired.
  • There is no private right of action under the new Washington law; it is to be enforced by the state’s Attorney General. Remember that Illinois’s BIPA does permit persons to sue for violations of that law.

To understand how the law applies, one needs to review the defined terms. For example, the term “biometric identifiers” means:

data generated by automatic measurements of an individual’s biological characteristics, such as a fingerprint, voiceprint, eye retinas, irises, or other unique biological patterns or characteristics that is used to identify a specific individual. “Biometric identifier” does not include a physical or digital photograph, video or audio recording or data generated therefrom, or information collected, used, or stored for health care treatment, payment, or operations under the federal health insurance portability and accountability act of 1996.

The law also defines “commercial purpose” to mean:

a purpose in furtherance of the sale or disclosure to a third party of a biometric identifier for the purpose of marketing of goods or services when such goods or services are unrelated to the initial transaction in which a person first gains possession of an individual’s biometric identifier.

And, the term “enroll” means

to capture a biometric identifier of an individual, convert it into a reference template that cannot be reconstructed into the original output image, and store it in a database that matches the biometric identifier to a specific individual.

The use of biometrics and biometric identifiers in commercial transactions and for other purposes is growing, and so is the number of state laws intended to protect that kind of data. Businesses that use or disclose biometrics in carrying out their business should carefully consider whether this new state law applies and, if so, what they need to do to comply.

Capturing the time employees work can be a difficult business. In addition to the complexity involved with accurately tracking arrival times, lunch breaks, overtime, etc. across a range of federal and state laws (check out our Wage and Hour colleagues who keep up on all of these issues), many employers worry about “buddy punching” or other situations when time entered into their time management system is entered by a person other than the employee to whom the time relates. To address that worry, some companies have implemented biometric tools to validate time entries. A simple scan of an individual’s fingerprint, for example, can validate that individual is the employee whose time is being entered. But that simple scan can come with some significant compliance obligations, as well as exposure to litigation as discussed in a recent Chicago Tribune article.

The use of biometric data still seems somewhat futuristic and high-tech, but the technology has been around for a while, and there are already a number of state laws addressing the collection, use, and safeguarding of biometric information. We’ve discussed some of those here, including the Illinois Biometric Information Privacy Act (BIPA), which is the subject of the litigation referenced above. Notably, the Illinois law permits individuals to sue for violations and, if successful, to recover liquidated damages of $1,000 or actual damages, whichever is greater, along with attorneys’ fees and expert witness fees. The liquidated damages amount increases to $5,000 if the violation is intentional or reckless.

For businesses that want to deploy this technology, whether for time management, physical security, validating transactions or other purposes, there are a number of things to be considered. Here are just a few:

  • Is the company really capturing biometric information as defined under the applicable law? New York Labor Law Section 201-a generally prohibits the fingerprinting of employees by private employers. However, a biometric time management system may not actually be capturing a “fingerprint.” According to an opinion letter issued by the State’s Department of Labor on April 22, 2010, a device that measures the geometry of the hand is permissible as long as it does not scan the surface details of the hand and fingers in a manner similar or comparable to a fingerprint. But, under BIPA, this distinction may not work in some cases. “Biometric information” means any information, regardless of how it is captured, converted, stored, or shared, based on an individual’s biometric identifier used to identify an individual, such as a fingerprint. As a federal district court explained: The affirmative definition of “biometric information” does important work for [BIPA]; without it, private entities could evade (or at least arguably could evade) [BIPA]’s restrictions by converting a person’s biometric identifier into some other piece of information, like a mathematical representation or, even simpler, a unique number assigned to a person’s biometric identifier. So whatever a private entity does in manipulating a biometric identifier into a piece of information, the resulting information is still covered by [BIPA] if that information can be used to identify the person.
  • How long should biometric information be retained? A good rule of thumb – avoid keeping personal information for longer than is needed. The Illinois statute referenced above codifies this rule. Under that law, biometric identifiers and biometric information must be permanently destroyed when the initial purpose for collecting or obtaining such identifiers or information has been satisfied or within 3 years of the individual’s last interaction with the entity collecting it, whichever occurs first.
  • How should biometric information be accessed, stored and safeguarded? Before collecting biometric data, companies may need to provide notice and obtain written consent from the individual. This is the case in Illinois. As with other personal data, if it is accessible to or stored by a third party services provider, the company should obtain written assurances from its vendors concerning such things as minimum safeguards, record retention, and breach response.
  • Is the company ready to handle a breach of biometric data? Currently, 48 states have passed laws requiring notification of a breach of “personal information.” Under those laws, the definitions of personal information vary, and the definitions are not limited to Social Security numbers. A number of them include biometric information, such as Connecticut, Illinois, Iowa and Nebraska. Accordingly, companies should include biometric data as part of their written incident response plans.
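For companies building the retention guidelines noted above, BIPA’s destruction rule (destroy when the initial purpose is satisfied, or within 3 years of the individual’s last interaction, whichever occurs first) reduces to a simple date comparison. This is an illustrative sketch only; the helper name is hypothetical.

```python
from datetime import date, timedelta

# Sketch of BIPA's retention rule: biometric identifiers and information
# must be permanently destroyed when the initial collection purpose is
# satisfied OR within 3 years of the individual's last interaction with
# the entity, whichever occurs first.
THREE_YEARS = timedelta(days=3 * 365)

def destruction_deadline(purpose_satisfied: date, last_interaction: date) -> date:
    """Earlier of the purpose-satisfied date and the 3-year outer limit."""
    return min(purpose_satisfied, last_interaction + THREE_YEARS)
```

In practice the purpose-satisfied date usually controls; the 3-year limit acts as an outer backstop when the purpose is never formally closed out.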

The use of biometrics is no longer something seen only in science fiction movies or police dramas on television. It is entering the mainstream, including the workplace and the marketplace. Businesses need to be prepared.

Fingerprints, voice prints and vein patterns in a person’s palm are three examples of biometrics that may be “moving into the consumer mainstream to unlock laptops and smartphones, or as a supplement to passwords at banks, hospitals and libraries,” reports Anne Eisenberg at the New York Times. Of course, these technologies, aimed at increasing security and, to a lesser degree, convenience, raise data privacy concerns and other risks. However effective, convenient, and efficient these technologies may be, companies need to think through carefully their adoption and implementation, particularly in the workplace.

Below are just a few of the kinds of questions companies should be asking before implementing technologies that involve capturing biometric information.  It is likely that such technologies will go mainstream and, if so, spawn new laws regulating the use of biometric information. Thus, companies using such technologies will need to continue to monitor the legal landscape to manage their risks.

Can we collect this information? In some cases, the answer may be no. For example, in New York, Labor Law Section 201-a prohibits the fingerprinting of employees by private employers, unless required by law. However, according to an opinion letter issued by the State’s Department of Labor on April 22, 2010, a device that measures the geometry of the hand is permissible as long as it does not scan the surface details of the hand and fingers in a manner similar or comparable to a fingerprint. Other states may permit the collection of biometric information provided certain steps are taken. The Illinois Biometric Information Privacy Act, for instance, prohibits private entities from obtaining a person’s or customer’s biometric identifier or biometric information unless the person is informed in writing and consents in writing.

If we can collect it, do we have to safeguard it? Regardless of whether a statute requires a business to safeguard such information, we believe it is good practice to do so. In any event, states such as Illinois (see above) already require a reasonable standard of care when storing, transmitting, or disclosing biometric information.

Is there a notification obligation if unauthorized persons get access to biometric information? In some states the answer is yes.  The breach notification statutes in states such as Michigan include biometric data in the definition of personal information. MCLS § 445.72

Are there any requirements for disposing of this information? Yes, a number of states (e.g., Colorado and Massachusetts) require that certain entities meet minimum standards for properly disposing records containing biometric information.

Can employees claim this technology amounts to some form of discrimination? In addition to securing devices and accounts, biometric technologies also are being used to track employee time and attendance in order to enhance workforce management. These different applications can form the basis of discrimination claims. For example, earlier in 2013, the U.S. Equal Employment Opportunity Commission (EEOC) claimed an employer’s use of a biometric hand scanner to track employee time and attendance violated federal law by failing to accommodate certain religious beliefs which opposed the use of such devices.

Retinal scan technology is another biometric technology that can be used for identification/security purposes.  However, as explained in a recent Biometric.com article, “examining the eyes using retinal scanning can aid in diagnosing chronic health conditions such as congestive heart failure and atherosclerosis…[as well as] diseases such as AIDS, syphilis, malaria, chicken pox and Lyme disease [and] hereditary diseases, such as leukemia, lymphoma, and sickle cell anemia.” Thus, the data captured by such scans can inform employers about the health conditions of their employees, raising a range of medical privacy, medical inquiry and discrimination issues under federal and state laws, such as the Americans with Disabilities Act. 

As we have discussed in prior posts, AI-enabled smart glasses are rapidly evolving from niche wearables into powerful tools with broad workplace appeal — but their innovative capabilities bring equally significant legal and privacy concerns.

  • In Part 1, we addressed compliance issues that arise when these wearables collect biometric information.
  • In Part 2, we covered all-party consent requirements and AI notetaking technologies.
  • In Part 3, we considered broader privacy and surveillance issues, including from a labor law perspective.

In this Part 4, we consider the potentially vast amount of personal and other confidential data that may be collected, visually and audibly, through everyday use of this technology. Cybersecurity, and data security risk more broadly, poses another major and often underestimated exposure from this technology.

The Risk

AI smart glasses collect, analyze, and transmit enormous volumes of sensitive data—often continuously, and typically transmitting it to cloud-based servers operated by third parties. This creates a perfect storm of cybersecurity risk, regulatory exposure, and breach notification obligations under laws in all 50 states, as well as the CCPA, GDPR, and numerous sector-specific regulations, such as HIPAA for the healthcare industry.

Unlike traditional cameras or recording devices, AI glasses are designed to collect and process data in real time. Even when users believe they are not “recording,” the devices may still be capturing visual, audio, and contextual information for AI analysis, transcription, translation, or object recognition. That data is frequently transmitted to third-party AI providers with unclear security controls, retention practices, and secondary-use restrictions.

Many AI glasses explicitly rely on third-party AI services. For example, Brilliant Labs’ Frame glasses use ChatGPT to power their AI assistant, Noa, and disclose that multiple large language models may be involved in processing. In practice, this means sensitive business conversations, images, and metadata may leave the organization entirely—often without IT, security, or legal teams fully understanding where the data goes or how it is protected.

Use Cases at Risk

  • Hospital workers going on rounds with their teams equipped with AI glasses that access, capture, view, and record patients, charts, wounds, and family members in electronic format, triggering the HIPAA Security Rule and state law obligations
  • Financial services employees wearing AI glasses that capture customer financial data, account numbers, or investment information
  • Any workplace use involving personally identifiable information (PII), such as Social Security numbers, credit card data, or medical information, as well as confidential business information of the company and/or its customers
  • Attorneys and legal professionals using AI glasses during privileged communications, potentially risking waiver of attorney-client privilege
  • Employees connecting AI glasses to unsecured or public Wi-Fi networks, creating man-in-the-middle attack risks
  • Lost or stolen AI glasses that store unencrypted audio, video, or contextual data

Why It Matters

Data breaches involving biometric data, health information, or financial data carry outsized legal and financial consequences. As a practical matter, a breach involving AI glasses is less likely to affect hundreds of thousands or millions of people. However, the exposure of sensitive patient images, discussions, or other data captured with AI glasses could be just as harmful, if not more so, to the reputation of a health system, for example, than a large-scale attack by a criminal threat actor. Beyond reputational harm, incident response costs, litigation, and regulatory penalties remain significant risks.

Shadow AI (the unauthorized use of artificial intelligence tools by employees in the workplace) also poses potential data security, breach, and third-party risks. Many devices sync automatically to consumer cloud accounts with security practices that employers neither control nor audit. When an employee uses personal AI glasses for work, fundamental questions often go unanswered: Where is the data stored? Is it encrypted? Who has access? How long is it retained? Is it used to train AI models?

Finally, the use of AI glasses can undermine a powerful data security tool: data minimization. Businesses will need to grapple with the question of whether constant, ambient data collection and recording aligns with data minimization, a principle woven into data privacy laws such as the California Consumer Privacy Act.

Practical Compliance Considerations

  • Implement clear policies: Be deliberate about whether to permit these wearables in the workplace. And, if so, establish policies limiting when and where they may be used, and what recording features can be activated and under what circumstances.
  • Perform an assessment: Conduct security and privacy assessments of specific AI glasses models before deployment
  • Understand third-party service provider risks: Review security documentation, including encryption practices, access controls, and incident response commitments
  • Understand obligations to customers: Review services agreements concerning the collection, processing, and security obligations for handling customer personal and confidential business information
  • Update incident response plans: Factor in wearable device compromises
  • For HIPAA Covered Entities and Business Associates: Confirm that AI glasses meet HIPAA requirements
  • Evaluate cyber insurance coverage: Assess whether your policy (assuming you have a cyber policy!) covers breaches involving wearable technology and AI-related risks

Conclusion

AI smart glasses may feel futuristic and convenient, but from a data security and compliance perspective, they dramatically expand an organization’s attack surface. Without careful controls, these devices can quietly introduce breach risks, third-party data sharing, and regulatory exposure that outweigh their perceived benefits.

The key is to approach the deployment of AI glasses (and deployment of similar technologies) with eyes wide open—understanding both the capabilities of the technology and the complex legal frameworks that govern their use. With thoughtful policies, robust technical controls, ongoing compliance monitoring, and respect for privacy rights, organizations can harness the benefits of AI glasses while managing the risks.

As we have discussed in prior posts, AI-enabled smart glasses are rapidly evolving from niche wearables into powerful tools with broad workplace appeal — but their innovative capabilities bring equally significant legal and privacy concerns. In Part 1, we addressed compliance issues that arise when these wearables collect biometric information. In Part 2, we covered all-party consent requirements and AI notetaking technologies.

In this Part 3, we consider broader privacy and surveillance issues, including from a labor law perspective. Left uncontrolled, the nature and capabilities of AI smart glasses open the door to a range of circumstances in which legal requirements, as well as societal norms, could be violated, even inadvertently. At the same time, a pervasive surveillance environment fueled by technologies such as AI smart glasses may spur arguments by some employees that their right to engage in protected concerted activity has been infringed.

The Risk

When employers provide AI glasses to employees or permit their use in the workplace, they can potentially create continuous and/or intrusive surveillance conditions that may violate the privacy rights of individuals they encounter, including employees, customers, and others. Various state statutes and common law doctrines limit surveillance, and new laws are emerging that target workplace surveillance technologies. For example, California Assembly Bill 1331, introduced in early 2025, sought to limit employer surveillance and enhance employee privacy. The bill would have banned monitoring in private off-duty spaces (like bathrooms and lactation rooms) and prohibited surveillance of homes or personal vehicles. California Governor Newsom vetoed the bill in October 2025.

However, other California law, notably the California Consumer Privacy Act (CCPA), seeks to regulate surveillance involving certain personal information. Under the CCPA, continuous surveillance may trigger a risk assessment obligation. See more about that here. The CCPA and the comprehensive privacy laws adopted by several other states require covered entities to communicate about the personal information they collect from residents of those states. Covered entities that permit employees to use these devices in the course of their employment may need to better understand the type of personal information those employees’ glasses are collecting.

The National Labor Relations Act (NLRA) generally establishes a right of employees to act with co-workers to address work-related issues. Widespread surveillance and recording could chill protected concerted activity – employees might be less likely to engage with other employees about working conditions under such circumstances. Moreover, introducing AI glasses in the workplace may trigger an obligation to bargain under the NLRA.

Relevant Use Cases

  • Warehouse workers using AI glasses for inventory management that also track movement patterns, productivity metrics, and conversations of coworkers
  • School employees who use AI glasses while interacting with minor students in a range of circumstances
  • Field service technicians wearing glasses that record all customer interactions as well as communications with coworkers
  • Office workers using AI glasses with note-taking features during internal meetings, capturing discussions among employees
  • Healthcare workers in a variety of settings, purposefully or inadvertently, capturing images or data of patients and their families
  • Manufacturing employees whose glasses document work processes while also recording conversations with coworkers

Why It Matters:

Connecticut, Delaware, and New York require employers to notify employees of certain electronic monitoring. California’s CCPA gives employees specific rights over their personal information, including the right to know what is collected and the right to deletion. These protections were strengthened in recently updated regulations under the California Privacy Rights Act, which created, among other things, an obligation to conduct and report on risk assessments performed in connection with certain surveillance activities.

Union environments face additional scrutiny. Surveillance may constitute an unfair labor practice requiring collective bargaining. The NLRB has issued guidance limiting employers’ ability to ban workplace recordings because such bans can interfere with protected rights. However, continuous AI-powered surveillance could still create a chilling effect that violates labor law.

Practical Compliance Considerations:

  • Implement clear policies: Be deliberate about whether to permit these wearables in the workplace. And, if so, establish policies limiting when and where they may be used, and what recording features can be activated and under what circumstances.
  • Provide notice: Provide written notice about AI glasses’ capabilities, including what data is collected, how it is processed, and how it may be used.
  • Perform an assessment: Conduct privacy impact/risk assessments before deploying AI glasses in the workplace, including when interacting with customers.
  • Consider bargaining obligations, protected concerted activity rights: If deploying AI glasses in union environments, engage in collective bargaining about their use and assess protected concerted activity rights.
  • Establish technical limits and safeguards: Consider implementing technical controls like automatic disabling of recording in break rooms, bathrooms, and areas designated for private conversations.

Conclusion

AI glasses represent transformative technology with genuine business value, from hands-free information access to enhanced productivity and innovative customer experiences. The 210% growth in smart glasses shipments in 2024 demonstrates their appeal. But the legal risks are real and growing.

The key is to approach the deployment of AI glasses (and deployment of similar technologies) with eyes wide open—understanding both the capabilities of the technology and the complex legal frameworks that govern its use. With thoughtful policies, robust technical controls, ongoing compliance monitoring, and respect for privacy rights, organizations can harness the benefits of AI glasses while managing the risks.