In 2018, Delta paved the way in airport terminal development by introducing the first biometric terminal at Hartsfield-Jackson Atlanta International Airport, where passengers can use facial recognition technology from curb to gate. Delta now offers members of its Sky Club airport lounges the option to enter using fingerprints rather than a membership card or boarding pass. Other airlines use biometric data to verify travelers during the boarding process with a photo capture. The photograph is then matched, using biometric facial recognition technology, to photos previously taken of the passengers for their passports, visas, or other government documentation.

Though the use of a fingerprint or facial scan aims to streamline and expedite the travel process and strengthen the security of air travel, it also presents heightened security risks for biometric data on a larger scale. As the use of biometric data increases, so does the potential impact of a data breach. While it is possible to change a financial account number, a driver’s license number, or even a Social Security number, you cannot change your fingerprint or your face, at least not easily. Furthermore, facial recognition software has in the past failed to accurately identify people of color, raising concerns that individuals may be racially profiled.

Yet, many argue that biometric-based technologies can be used to help solve vexing security and logistics challenges concerning travel. For example, in 2016, Congress authorized up to $1 billion collected from certain visa fees to fund implementation of biometric-based exit technology. That was followed by President Trump’s executive order signed in March 2017 directing the Department of Homeland Security to expedite implementation of a biometric entry-exit tracking system for all travelers to the United States. As it stands, we are likely to see a rapid expansion of biometric technology used by airlines and other businesses in the travel industry, so prepare your picture-perfect travel face!

Notably, the use of biometric data is growing across all industries and in a variety of different applications – e.g., premises security, time management, systems access management. But, so is the number of state laws intending to protect that data. States such as Illinois, Texas, and Washington are leading the way, with others sure to follow. Regulations include notice and consent requirements, mandates to safeguard biometric information, and obligations to notify individuals in the event biometric information is breached. And litigation is increasing. The Illinois Supreme Court recently handed down a significant decision, for example, concerning the ability of individuals to bring suit under the Illinois Biometric Information Privacy Act (BIPA). In short, individuals need not allege actual injury or adverse effect beyond a violation of their rights under the BIPA. The decision is likely to increase the already significant number of suits, including putative class actions, filed under the BIPA.

Companies, regardless of industry, should be reevaluating their biometric use practices, and taking steps to comply with a growing body of law surrounding this sensitive information.

Earlier today, the Illinois Supreme Court handed down a significant decision concerning the ability of individuals to bring suit under the Illinois Biometric Information Privacy Act (BIPA). In short, individuals need not allege actual injury or adverse effect, beyond a violation of his/her rights under BIPA, in order to qualify as an “aggrieved” person and be entitled to seek liquidated damages, attorneys’ fees and costs, and injunctive relief under the Act.  Potential damages are substantial as the BIPA provides for statutory damages of $1,000 per negligent violation or $5,000 per intentional or reckless violation of the Act.  To date, no Illinois court has interpreted the meaning of “per violation,” but the majority of BIPA suits have been brought as class actions seeking statutory damages on behalf of each individual affected.
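The damages math described above can be illustrated with a short sketch. This is rough, hypothetical arithmetic only, assuming the plaintiff-friendly reading that each affected individual counts as one violation; the class size is invented, and no Illinois court had interpreted "per violation" at the time of writing.

```python
# Statutory damages figures from the BIPA (740 ILCS 14/20).
NEGLIGENT = 1_000    # per negligent violation
INTENTIONAL = 5_000  # per intentional or reckless violation

def bipa_exposure(class_size: int, intentional: bool = False) -> int:
    """Rough aggregate statutory damages, assuming one violation per
    affected class member (one possible reading of 'per violation')."""
    per_person = INTENTIONAL if intentional else NEGLIGENT
    return class_size * per_person

# Hypothetical putative class of 2,000 employees:
print(bipa_exposure(2_000))        # negligent theory -> 2000000
print(bipa_exposure(2_000, True))  # intentional/reckless theory -> 10000000
```

Even under the lower negligence figure, exposure scales linearly with the number of individuals affected, which is why class treatment drives settlement values in these cases.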

If they have not already done so, companies should immediately take steps to comply with the statute. That is, they should review their time management, point of purchase, physical security, or other systems that obtain, use, or disclose biometric information (any information, regardless of how it is captured, converted, stored, or shared, based on an individual’s retina or iris scan, fingerprint, voiceprint, or scan of hand or face geometry used to identify an individual) against the requirements under the BIPA. In the event they find technical or procedural gaps in compliance – such as not providing written notice, obtaining a release from the subject of the biometric information, obtaining consent to provide biometric information to a third party, or maintaining a policy and guidelines for the retention and destruction of biometric information – they need to quickly remedy those gaps.  For additional information on complying with the BIPA, please see our BIPA FAQs.

Companies were hoping that the Illinois Supreme Court would ultimately conclude, consistent with the underlying appellate decision, that in order for a plaintiff to bring a claim under the BIPA (i.e. in order for the plaintiff to be considered “aggrieved”) the plaintiff would have to allege actual harm or injury, and not just a procedural or technical violation of the statute.  In reversing and remanding the case, the Illinois Supreme Court held:

The duties imposed on private entities by section 15 of the Act (740 ILCS 14/15) regarding the collection, retention, disclosure, and destruction of a person’s or customer’s biometric identifiers or biometric information define the contours of that statutory right. Accordingly, when a private entity fails to comply with one of section 15’s requirements, that violation constitutes an invasion, impairment, or denial of the statutory rights of any person or customer whose biometric identifier or biometric information is subject to the breach. Consistent with the authority cited above, such a person or customer would clearly be “aggrieved” within the meaning of section 20 of the Act (740 ILCS 14/20) and entitled to seek recovery under that provision. No additional consequences need be pleaded or proved. The violation, in itself, is sufficient to support the individual’s or customer’s statutory cause of action.

The decision is likely to increase the already significant number of suits, including putative class actions, filed under the BIPA.  In the words of the Illinois Supreme Court, “[c]ompliance should not be difficult; whatever expenses a business might incur to meet the law’s requirements are likely to be insignificant compared to the substantial and irreversible harm that could result if biometric identifiers and information are not properly safeguarded; and the public welfare, security, and safety will be advanced.”

An Illinois nursing home is facing a putative class action lawsuit filed by a worker who argues that the facility’s required fingerprint scan for timekeeping poses a threat to their privacy and violates Illinois’s Biometric Information Privacy Act (“BIPA”). From July 2017 to October 2017, at least 26 employment class actions based on the BIPA were filed in Illinois state court, and the filings show no sign of slowing.

Although some consider Illinois the leader in biometric data protection, other states have enacted laws similar to the BIPA, and still others are considering such legislation. Companies that want to implement technology that uses employee or customer biometric information (for timekeeping, physical security, validating transactions, or other purposes) need to be prepared. For more information on the nursing home case and advice on how to prepare when collecting biometric information, our comprehensive article is available here.

Below are additional resources to help navigate biometric information protection laws:

Not to be outdone by the recent attention to biometric information in Illinois, and the Prairie State’s Biometric Information Privacy Act (BIPA), Washington enacted a biometric data protection statute of its own, HB 1493, which became effective July 23, 2017.

What is notable about Washington’s new biometric information law?

  • It prohibits “persons” from “enrolling” “biometric identifiers” in a database for a “commercial purpose” without first providing notice, obtaining consent, or providing a mechanism to prevent the subsequent use of the biometric identifiers for a commercial purpose. Lots of definitions, more on that below.
  • The exact type of notice and consent should depend on the context, and notice must be given through a procedure reasonably designed to be readily available to affected individuals. Note that the law does not require notice and consent if the person collects, captures, or enrolls a biometric identifier and stores it in a biometric system, or otherwise, in furtherance of a security purpose.
  • In general, a person that has obtained a biometric identifier from an individual and enrolled that identifier may not sell, lease or otherwise disclose the identifier absent consent. There are, of course, some exceptions, such as the disclosure being necessary to provide a product requested by the individual. In addition, a person generally may not use or disclose a biometric identifier for a purpose that is materially inconsistent with the terms under which the identifier was originally provided.
  • Persons that possess biometric identifiers of individuals that have been enrolled for a commercial purpose must (i) have reasonable safeguards to protect against unauthorized access or acquisition to the identifiers, and (ii) not retain the identifiers for longer than is necessary to carry out certain functions, such as providing the product for which the identifier was acquired.
  • There is no private right of action under the new Washington law. It is to be enforced by the state’s Attorney General. Remember that Illinois’ BIPA does permit persons to sue for violations of that law.

To understand how the law applies, one needs to review the defined terms. For example, the term “biometric identifiers” means:

data generated by automatic measurements of an individual’s biological characteristics, such as a fingerprint, voiceprint, eye retinas, irises, or other unique biological patterns or characteristics that is used to identify a specific individual. “Biometric identifier” does not include a physical or digital photograph, video or audio recording or data generated therefrom, or information collected, used, or stored for health care treatment, payment, or operations under the federal health insurance portability and accountability act of 1996.

The law also defines “commercial purpose” to mean:

a purpose in furtherance of the sale or disclosure to a third party of a biometric identifier for the purpose of marketing of goods or services when such goods or services are unrelated to the initial transaction in which a person first gains possession of an individual’s biometric identifier.

And, the term “enroll” means

to capture a biometric identifier of an individual, convert it into a reference template that cannot be reconstructed into the original output image, and store it in a database that matches the biometric identifier to a specific individual.

The use of biometrics and biometric identifiers in commercial transactions and for other purposes is growing, and so is the number of state laws intending to protect that kind of data. Businesses that use or disclose biometrics in carrying out their business should carefully consider whether this new state law applies and, if so, what they need to do to comply.

Capturing the time employees work can be a difficult business. In addition to the complexity involved with accurately tracking arrival times, lunch breaks, overtime, etc. across a range of federal and state laws (check out our Wage and Hour colleagues who keep up on all of these issues), many employers worry about “buddy punching” or other situations when time entered into their time management system is entered by a person other than the employee to whom the time relates. To address that worry, some companies have implemented biometric tools to validate time entries. A simple scan of an individual’s fingerprint, for example, can validate that the individual is the employee whose time is being entered. But that simple scan can come with some significant compliance obligations, as well as exposure to litigation as discussed in a recent Chicago Tribune article.

The use of biometric data still seems somewhat futuristic and high-tech, but the technology has been around for a while, and there are already a number of state laws addressing the collection, use, and safeguarding of biometric information. We’ve discussed some of those here, including the Illinois Biometric Information Privacy Act (BIPA), which is the subject of the litigation referenced above. Notably, the Illinois law permits individuals to sue for violations and, if successful, to recover liquidated damages of $1,000 or actual damages, whichever is greater, along with attorneys’ fees and expert witness fees. The liquidated damages amount increases to $5,000 if the violation is intentional or reckless.

For businesses that want to deploy this technology, whether for time management, physical security, validating transactions or other purposes, there are a number of things to be considered. Here are just a few:

  • Is the company really capturing biometric information as defined under the applicable law? New York Labor Law Section 201-a generally prohibits the fingerprinting of employees by private employers. However, a biometric time management system may not actually be capturing a “fingerprint.” According to an opinion letter issued by the State’s Department of Labor on April 22, 2010, a device that measures the geometry of the hand is permissible as long as it does not scan the surface details of the hand and fingers in a manner similar or comparable to a fingerprint. But, under the BIPA, this distinction may not work in some cases. “Biometric information” means any information, regardless of how it is captured, converted, stored, or shared, based on an individual’s biometric identifier used to identify an individual, such as a fingerprint. As a federal district court explained: “The affirmative definition of ‘biometric information’ does important work for [BIPA]; without it, private entities could evade (or at least arguably could evade) [BIPA]’s restrictions by converting a person’s biometric identifier into some other piece of information, like a mathematical representation or, even simpler, a unique number assigned to a person’s biometric identifier. So whatever a private entity does in manipulating a biometric identifier into a piece of information, the resulting information is still covered by [BIPA] if that information can be used to identify the person.”
  • How long should biometric information be retained? A good rule of thumb – avoid keeping personal information for longer than is needed. The Illinois statute referenced above codifies this rule. Under that law, biometric identifiers and biometric information must be permanently destroyed when the initial purpose for collecting or obtaining such identifiers or information has been satisfied or within 3 years of the individual’s last interaction with the entity collecting it, whichever occurs first.
  • How should biometric information be accessed, stored and safeguarded? Before collecting biometric data, companies may need to provide notice and obtain written consent from the individual. This is the case in Illinois. As with other personal data, if it is accessible to or stored by a third party services provider, the company should obtain written assurances from its vendors concerning such things as minimum safeguards, record retention, and breach response.
  • Is the company ready to handle a breach of biometric data? Currently, 48 states have passed laws requiring notification of a breach of “personal information.” Under those laws, the definitions of personal information vary, and the definitions are not limited to Social Security numbers. A number of them include biometric information, such as Connecticut, Illinois, Iowa and Nebraska. Accordingly, companies should include biometric data as part of their written incident response plans.
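The retention rule discussed above (destroy biometric data when the initial purpose is satisfied or within three years of the individual’s last interaction, whichever occurs first) can be sketched as a simple date calculation. The dates are hypothetical, and the three-year computation uses a naive calendar-year replacement for illustration; this is a sketch, not legal advice.

```python
from datetime import date

def bipa_destruction_deadline(purpose_satisfied: date,
                              last_interaction: date) -> date:
    """Earlier of: (a) the date the collection purpose was satisfied, or
    (b) three years after the individual's last interaction.
    Assumes a simple same-day, year+3 calendar convention."""
    three_years_after = last_interaction.replace(
        year=last_interaction.year + 3)
    return min(purpose_satisfied, three_years_after)

# Hypothetical employee: last clocked in 2020-06-30; the timekeeping
# purpose ended when employment terminated on 2021-01-15.
print(bipa_destruction_deadline(date(2021, 1, 15), date(2020, 6, 30)))
# -> 2021-01-15 (purpose satisfied before the three-year mark)
```

The point of the "whichever occurs first" structure is that data cannot sit in a system indefinitely just because the original purpose arguably remains open.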

The use of biometrics is no longer something only seen in science fiction movies or police dramas on television. It is entering mainstream, including the workplace and the marketplace. Businesses need to be prepared.

Fingerprints, voice prints and vein patterns in a person’s palm are three examples of biometrics that may be “moving into the consumer mainstream to unlock laptops and smartphones, or as a supplement to passwords at banks, hospitals and libraries,” reports Anne Eisenberg at the New York Times. Of course, these technologies, aimed at increasing security and, to a lesser degree, convenience, raise data privacy concerns and other risks. However effective, convenient, and efficient these technologies may be, companies need to think through carefully their adoption and implementation, particularly in the workplace.

Below are just a few of the kinds of questions companies should be asking before implementing technologies that involve capturing biometric information.  It is likely that such technologies will go mainstream and, if so, spawn new laws regulating the use of biometric information. Thus, companies using such technologies will need to continue to monitor the legal landscape to manage their risks.

Can we collect this information? In some cases, the answer may be no. For example, in New York, Labor Law Section 201-a prohibits the fingerprinting of employees by private employers, unless required by law. However, according to an opinion letter issued by the State’s Department of Labor on April 22, 2010, a device that measures the geometry of the hand is permissible as long as it does not scan the surface details of the hand and fingers in a manner similar or comparable to a fingerprint. Other states may permit the collection of biometric information provided certain steps are taken. The Illinois Biometric Information Privacy Act, for instance, prohibits private entities from obtaining a person’s or customer’s biometric identifier or biometric information unless the person is informed in writing and consents in writing.

If we can collect it, do we have to safeguard it? Regardless of whether a statute requires a business to safeguard such information, we believe it is good practice to do so. In any event, states such as Illinois (see above) already require a reasonable standard of care when storing, transmitting, or disclosing biometric information.

Is there a notification obligation if unauthorized persons get access to biometric information? In some states the answer is yes.  The breach notification statutes in states such as Michigan include biometric data in the definition of personal information. MCLS § 445.72

Are there any requirements for disposing of this information? Yes, a number of states (e.g., Colorado and Massachusetts) require that certain entities meet minimum standards for properly disposing records containing biometric information.

Can employees claim this technology amounts to some form of discrimination? In addition to securing devices and accounts, biometric technologies also are being used to track employee time and attendance in order to enhance workforce management. These different applications can form the basis of discrimination claims. For example, earlier in 2013, the U.S. Equal Employment Opportunity Commission (EEOC) claimed an employer’s use of a biometric hand scanner to track employee time and attendance violated federal law by failing to accommodate certain religious beliefs which opposed the use of such devices.

Retinal scan technology is another biometric technology that can be used for identification/security purposes.  However, as explained in a recent article, “examining the eyes using retinal scanning can aid in diagnosing chronic health conditions such as congestive heart failure and atherosclerosis…[as well as] diseases such as AIDS, syphilis, malaria, chicken pox and Lyme disease [and] hereditary diseases, such as leukemia, lymphoma, and sickle cell anemia.” Thus, the data captured by such scans can inform employers about the health conditions of their employees, raising a range of medical privacy, medical inquiry and discrimination issues under federal and state laws, such as the Americans with Disabilities Act. 

The Cyber Safety Review Board (Board) issued a report entitled Review of the Attacks Associated with Lapsus$ and Related Threat Groups (Report), released by the Department of Homeland Security on August 10, 2023. The Report begins with a message from the Board’s Chair and Vice Chair discussing WarGames, a movie with interesting parallels to the present day – the leveraging of AI and large language models into systems (see Joshua/WOPR) and teenagers compromising sophisticated systems (Matthew Broderick as a high school student hacking into the Dept. of Defense). The Report looks at “Lapsus$,” described as a loosely organized group of threat actors that in some cases included juveniles and that gained significant attention after providing a window into its inner workings.

“Lapsus$ made clear just how easy it was for its members (juveniles, in some instances) to infiltrate well-defended organizations.”

Established under President Biden’s Executive Order (EO) 14028 on ‘Improving the Nation’s Cybersecurity’, the role of the Board is to review major cyber events and make concrete recommendations that would drive improvements. The Report does not disappoint in terms of its description of the targeting and nature of attacks by Lapsus$ and similar groups, as well as the Board’s recommendations, one being to move toward a “passwordless” world.

While we cannot cover all of the critical and helpful information in the 59-page Report, here are a few highlights.

Multi-factor Authentication Implementations Used Broadly Today are Insufficient.

A reliable joke at any data security conference is how “password” or “123456” continue to be the most popular passwords. Another weakness is the use of the same account credentials across multiple accounts. Multi-factor authentication (MFA) was designed to address these practices by going beyond the password to require one or more additional authenticators before access is permitted. MFA often comes highly recommended to help protect against one of the most financially damaging online crimes, business email compromise (BEC).

Perhaps a bit unsettling for the many organizations that have implemented MFA thinking it is the answer to system access vulnerabilities, the Report explains:

the Board saw a collective failure to sufficiently account for and mitigate the risks associated with using Short Message Service (SMS) and voice calls for MFA. In several instances, attackers gained initial access to targeted organizations through Subscriber Identity Module (SIM) swapping attacks, which allowed them to intercept one-time passcodes and push notifications sent via SMS, effectively defeating this widely used MFA control. A lucrative SIM swap criminal market further enabled this pay-for-access to a target’s mobile phone services. Despite these factors, adopting more advanced MFA capabilities remains a challenge for many organizations and individual consumers due to workflow and usability issues.

As expected, however, some methods of MFA are better than others. The Report observed that application or token-based MFA methods, for example, were more resilient.

If you are not familiar with SIM swaps, the process goes something like this, as detailed in the Report:

  1. Attacker collects data on victim through social media, phishing, etc.
  2. Attacker uses victim’s credentials to request SIM swap from telecommunications provider.
  3. Telecommunications provider approves the attacker’s fraudulent SIM swap.
  4. With full account takeover, the attacker can navigate MFA and access the victim’s personal accounts, including their employer’s systems.

“Lapsus$ took over online accounts via sign-in and account recovery workflows that sent one-time links or MFA passcodes via SMS or voice calls”

Insider Recruitment

Many organizations might not realize or want to believe it, but employees are vulnerable to monetary incentives to assist attackers with system access. The Report notes that in some cases these incentives could be as high as $20,000 per week. Compromised employees might hand over access credentials, approve upstream MFA requests, conduct SIM swaps, and perform other actions to help the attackers gain access to the organization’s systems.

Supply chain attacks

Lapsus$ and similar groups do not just attack organizations directly; they also go after targets that provide access to many organizations – third-party service providers and business process outsourcers (BPOs). This strategy is evident in the recent attacks on secure file transfer services, such as Accellion and the GoAnywhere service offered by Fortra. By gaining access to these services, the attackers have entrée to files uploaded to the services by their many customers.

Per the report:

In January 2022, a threat actor studied for this report gained access to privileged internal tools of a third-party service provider by compromising the computer of a customer support contractor from one of its BPOs. The real target of this attack was not the third-party service provider, nor the BPO, but rather the downstream customers of the service provider itself. This is a remarkable example of a creative three-stage supply chain attack used by this class of threat actors.


The Board outlines several recommendations, some of which are more within an organization’s power to act on than others. The recommendations fall into four main categories:

  • strengthening identity and access management (IAM);
  • mitigating telecommunications and reseller vulnerabilities;
  • building resiliency across multi-party systems with a focus on business process outsourcers (BPOs); and
  • addressing law enforcement challenges and juvenile cybercrime.

As noted above, one of the strongest suggestions for enhancing IAM is moving away from passwords. The Board encourages increased use of Fast IDentity Online 2 (FIDO2)-compliant, hardware-backed solutions. In short, FIDO authentication would permit users to sign in with passkeys, usually a biometric or security key. Of course, biometrics raise other compliance risks, but the Board observes this technology avoids the vulnerabilities and suboptimal practices that have developed around passwords.

Another recommendation is to develop and test cyber incident response plans. As we have discussed on this blog several times (e.g., here and here), no system of safeguards is perfect. So, as an organization works to prevent an attack, it also must plan to respond should one be successful. Among other things, these plans should:

  • identify critical data, systems, and assets that should be prioritized during an attack,
  • outline a tested process for recovering from back-ups,
  • have an internal communications plan,
  • involve BPOs and third-party service providers in developing and practicing the plan,
  • identify and maintain contact information for internal and external individuals and groups that are critical to the response process – key employees, DFIR firms, law enforcement, outside counsel, insurance carriers, etc.

The Report is a great read for anyone involved in some way in addressing data risk to an organization. A critical takeaway for anyone reading this report is that threats are evolving and come in many forms. A control implemented in year 1 may become a significant vulnerability in year 2. Forty years later, the movie WarGames continues to be relevant, even if only to show that some of the most secure systems can be compromised by a handful of curious teenagers.


On July 18, 2023, Oregon’s Governor signed Senate Bill 619 which enacts Oregon’s comprehensive consumer data privacy statute. Oregon joins California, Colorado, Connecticut, Indiana, Iowa, Montana, Tennessee, Texas, Utah, and Virginia in enacting a comprehensive consumer privacy law. Most of the sections of the law are scheduled to take effect on July 1, 2024, with a delayed effective date of July 1, 2025, for non-profit organizations.

When does the law apply?

The statute applies to any person that conducts business in the State of Oregon, or that provides products or services to residents of the state, and that during a calendar year controls or processes:

  • The personal data of 100,000 or more consumers, other than personal data controlled or processed solely for the purpose of completing a payment transaction; or,
  • The personal data of 25,000 or more consumers, while deriving 25 percent or more of the person’s annual gross revenue from selling personal data.
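The two applicability thresholds above can be sketched as a short check. The field names and the simplified handling of the payment-transaction carve-out are illustrative assumptions; actual applicability turns on the statute’s text and facts about the business.

```python
from dataclasses import dataclass

@dataclass
class OregonBusiness:
    # Consumers whose data is processed, excluding data processed solely
    # to complete a payment transaction (simplifying assumption here).
    consumers_processed: int
    consumers_whose_data_processed: int  # for the second prong
    pct_revenue_from_selling_data: float  # e.g., 0.25 for 25 percent

def oregon_law_applies(b: OregonBusiness) -> bool:
    """True if either statutory threshold is met in a calendar year."""
    return (b.consumers_processed >= 100_000 or
            (b.consumers_whose_data_processed >= 25_000 and
             b.pct_revenue_from_selling_data >= 0.25))

print(oregon_law_applies(OregonBusiness(120_000, 0, 0.0)))        # True
print(oregon_law_applies(OregonBusiness(30_000, 30_000, 0.10)))   # False
```

Note that the second prong requires both conditions together: a data broker processing only 30,000 consumers’ data is still covered if selling that data accounts for 25 percent or more of gross revenue.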

The following are some of the types of businesses that are exempted from the statute:  

  • A public corporation
  • Covered entities or business associates processing protected health information under the Health Insurance Portability and Accountability Act (HIPAA)
  • Organizations subject to the Gramm-Leach-Bliley Act.

Who is protected by the law?

The law protects consumers, defined as natural persons who reside in the State of Oregon and act in any capacity other than a commercial or employment context.

What data is protected by the law?

Personal data that is protected under the statute is defined as “data, derived data or any unique identifier that is linked to or is reasonably linkable to a consumer or to a device that identifies, is linked to or is reasonably linkable to one or more consumers in a household.”

It does not include:

  • Deidentified data
  • Data that is lawfully available through federal, state, or local government records or through widely distributed media
  • Data the controller reasonably understood to have been lawfully made available to the public by the consumer.

The statute also includes biometric data under personal data. Under the legislation, biometric data is defined as personal data generated by automatic measurements of a consumer’s biological characteristics, such as the consumer’s fingerprint, voice print, iris pattern, gait, or other unique biological characteristics that allow or confirm the unique identification of a consumer.

What are the rights of consumers?

Under the new legislation, consumers have the right to:

  • Confirm whether a controller is processing the consumer’s personal data and to access the personal data;
  • Correct inaccuracies in the consumer’s personal data;
  • Delete personal data provided by or obtained about the consumer;
  • Obtain a digital copy of the data the consumer previously provided, if available; and
  • Opt out of the processing of personal data for purposes of targeted advertising, the sale of personal data, or profiling in furtherance of a decision that produces a legal or similarly significant effect concerning the consumer.
  • Obtain a list of “specific third parties” to whom a controller discloses personal data.

What obligations do businesses have?

The legislation requires that businesses post a privacy policy that describes the categories of personal information it collects, the purpose of the collection, the categories of third parties with whom the personal information is shared, and an explanation of the consumer’s rights.

Covered businesses must also include a “clear and conspicuous” description of any processing done for the purpose of targeted advertising.

Eventually, covered businesses will be required to recognize universal opt-out mechanisms, though that portion of the statute does not take effect until January 1, 2026.

How is the law enforced?

The State Attorney General has exclusive authority to enforce the statute; it does not provide a private right of action.

If you have questions about Oregon’s privacy law or related issues, please reach out to a member of our Privacy, Data, and Cybersecurity practice group to discuss.

On June 18, 2023, Texas’ Governor signed House Bill (HB) 4, which enacts the Texas Data Privacy and Security Act. Texas joins California, Colorado, Connecticut, Indiana, Iowa, Montana, Tennessee, Utah, and Virginia in enacting a comprehensive consumer privacy law. Most sections of the law are scheduled to take effect July 1, 2024.

When does the law apply?

In general, the law applies to businesses (referred to as “controllers”) that:

  • Conduct business in the state of Texas or produce a product or service consumed by Texas residents; and
  • Process or engage in the sale of personal data.

The law does not apply to small businesses (as defined by the Small Business Administration). In addition to several categories of personal data that are excluded from coverage under the law, the following entities are specifically exempted:

  • State agencies or political subdivisions;
  • Financial institutions subject to Title V of the Gramm-Leach-Bliley Act;
  • Covered entities or business associates governed by the Health Insurance Portability and Accountability Act (HIPAA);
  • Non-profit organizations;
  • Institutions of higher education; and
  • Electric utilities.

Who is protected by the law?

A consumer protected under the law is defined as an individual who is a resident of the state of Texas acting only in an individual or household context. A consumer does not include an individual acting in a commercial or employment context.

What data is protected by the law?

Personal data protected under the legislation is defined as any information, including sensitive data, that is linked or reasonably linkable to an identified or identifiable individual; it does not include de-identified data or publicly available information.

Under the law, sensitive data includes any data revealing a consumer’s racial or ethnic origin, religious beliefs, mental or physical health diagnosis, sexuality, citizenship or immigration status, as well as any genetic or biometric data used for identifying an individual, any personal data collected from a known child, or any precise geolocation data.

What are the rights of consumers?

Under the new legislation, consumers have the right to:

  • Confirm whether a controller is processing the consumer’s personal data and to access the personal data;
  • Correct inaccuracies in the consumer’s personal data;
  • Delete personal data provided by or obtained about the consumer;
  • Obtain a digital copy of the data the consumer previously provided, if available; and
  • Opt out of the processing of personal data for purposes of targeted advertising, the sale of personal data, or profiling in furtherance of a decision that produces a legal or similarly significant effect concerning the consumer.

What obligations do businesses have?

Limitations on Collection

Covered controllers must limit the collection of personal data to only what is adequate, relevant, and reasonably necessary for the purpose for which the personal data is being processed and disclosed to the consumer. They must also implement “reasonable” security practices to protect the confidentiality and integrity of the data.

In addition, controllers must obtain a consumer’s consent before (1) processing personal data for any purpose other than what was disclosed or (2) processing a consumer’s sensitive data. Controllers are barred from using the data to discriminate against consumers.

Notice to Consumers

Controllers must also provide consumers with a reasonably accessible and clear privacy notice that includes:

  • The categories of personal data processed by the controller;
  • The purpose of processing personal data;
  • How consumers may exercise their rights;
  • If applicable, the categories of personal data shared with third parties;
  • If applicable, the categories of third parties with whom the controller shares personal data; and
  • A description of the methods through which consumers can submit requests to exercise rights.

In addition, controllers who engage in the sale of sensitive data or biometric personal data must give specific notices (posted in the same location and manner as the privacy notice):

  • “NOTICE: We may sell your sensitive personal data.”
  • “NOTICE: We may sell your biometric personal data.”

Data protection assessments

Whenever a controller processes any sensitive data or processes personal data for targeted advertising, the sale of personal data, specific forms of profiling, or any activity that presents a heightened risk of harm to consumers, the controller is required to prepare a detailed data protection assessment.

Consumer Rights

Controllers must also make available two or more secure and reliable methods for consumers to submit requests to exercise their rights under the legislation, as well as establish an appeal process that is “conspicuously available” and similar to the process established for initially exercising those rights. When a consumer seeks to exercise their rights, the controller must respond to the request without undue delay, and no later than 45 days after receipt of the request (though it may, in some circumstances, extend the response deadline once by an additional 45 days). If the controller declines the consumer’s request, it must provide justification for its decision and instructions on how to appeal. If the controller denies the appeal, it must provide the consumer with information about the online mechanism through which the consumer may submit a complaint to the Attorney General.

How is the law enforced?

Under the law, there is no private cause of action for consumers. Instead, the Attorney General has exclusive authority to enforce the new restrictions and must establish an online mechanism through which a consumer may submit a complaint.

If the Attorney General has “reasonable cause” to believe someone has violated the law, it may issue a civil investigative demand and require a controller to disclose any relevant data protection assessment to facilitate its investigation. If the Attorney General identifies violations of the law, it must send a notice of violation to the controller at least 30 days before bringing the action and allow the controller an opportunity to cure. If the controller cures the violation within the 30-day period, the Attorney General may not bring an action against the controller.

If the Attorney General brings such an action, it may seek civil penalties and injunctive relief, and may recover attorney’s fees and expenses incurred during both the initial investigation and any subsequent legal action.

Texas’ new consumer privacy law is comprehensive, and the summary above reflects only the highlights of the new obligations and risks for businesses operating in Texas. For more information, or for guidance on bringing your operations into compliance with the new law, please reach out to a member of our Privacy, Data, and Cybersecurity practice group.

On June 16, 2023, Nevada’s Governor signed Senate Bill (SB) 370, which enacts certain protections for consumer health data.

The law is similar to Washington’s My Health, My Data Act, which was passed in April. The Future of Privacy Forum prepared a useful chart comparing the Washington and Nevada laws.

Nevada’s law becomes operative on March 31, 2024.

To what entities does the law apply?

SB 370 applies to any person that:

  • Conducts business in Nevada or produces or provides products or services that are targeted at consumers in Nevada; and
  • Alone or with other persons, determines the purpose and means of processing, sharing, or selling consumer health data.

The law includes a long list of exceptions, including exclusions for:

  • any person or entity subject to the Health Insurance Portability and Accountability Act (HIPAA), and
  • a financial institution or affiliate that is subject to the provisions of the Gramm-Leach-Bliley Act.

Who is protected by the law?

SB 370 protects “consumers” – natural persons who have requested a product or service from a regulated business and who reside in the state of Nevada or whose health information is collected in Nevada. The law does not extend to natural persons acting in an employment context or as an agent of a governmental entity.

What data is protected by the law?

Consumer health data is protected under the law. It is defined as personal information that is linked or reasonably capable of being linked to a consumer and that the covered business uses to identify the past, present, or future health status of the consumer. Consumer health data includes:

  • Any health condition or status, disease, or diagnosis
  • Social, psychological, behavioral, or medical intervention
  • Surgeries or health-related procedures
  • The use or acquisition of medication
  • Bodily functions, vital signs, or symptoms
  • Reproductive or sexual health care
  • Gender-affirming care
  • Biometric or genetic data

The law does not cover information used for certain research, public health, or health data shared pursuant to federal or state law.

What are the rights of consumers?

Similar to the California Consumer Privacy Act and the growing array of consumer privacy laws enacted in several states, consumers have certain rights under SB 370 concerning their consumer health information, such as:

  • The right to confirm whether a covered business is collecting, sharing, or selling their health data.
  • The right to access a list of all third parties with whom the business has shared or sold the consumer’s health data.
  • The right to request the business stop collection, sharing, or selling of the consumer’s health data.
  • The right to delete their health data.

What obligations do businesses have?

Below is a non-exhaustive list of obligations covered businesses have under SB 370.

Covered businesses must obtain affirmative voluntary consent when collecting and sharing consumer health data, except to the extent it is necessary to provide a product or service that the consumer has requested from the business. The covered business also may share consumer health information without consent when required by law.

Covered businesses shall upon request by a consumer:

  • Confirm whether the regulated entity is collecting, sharing, or selling the consumer’s health data.
  • Provide the consumer with a list of all third parties with whom the business has shared or sold the consumer’s health data.
  • Cease collection, sharing, or selling of the consumer’s health data.
  • Delete the consumer’s health data.

Responses to requests must be made without undue delay but no later than 45 days after the business authenticates the request. Note that under some other laws, such as Washington’s My Health, My Data Act, and the CCPA, the 45-day clock starts to run from the date the request is received, not when it is authenticated.

Covered businesses also are required to develop and maintain a policy concerning the privacy of consumer health data that clearly and conspicuously establishes:

  • The categories of consumer health data being collected and the manner in which it will be used;
  • The categories of sources from which the health data is collected;
  • The categories of third parties and affiliates with whom the covered business shares health data;
  • The manner in which health data will be processed;
  • The procedure for submitting a request;
  • The process by which a consumer can review and request changes to their health data;
  • The way the business will notify consumers of changes to its privacy policy;
  • Whether a third party may collect health data from the business; and
  • The effective date of the privacy policy.

The business must conspicuously post a link to its policy on its main internet website or otherwise provide the policy to consumers in a manner that is clear and conspicuous. These website policy requirements across several states and countries are adding significant complexity to the compliance obligations of covered businesses.

Employees and processors of the covered business may be permitted to access consumer health information only where reasonably necessary (i) to further the purpose for which the consumer consented to the collection or sharing of the information, or (ii) to provide a product or service that the consumer requested.

Covered businesses also are required to establish, implement and maintain policies and practices for the administrative, technical, and physical security of consumer health data.

In addition, covered businesses may not establish a geofence within 1,750 feet of any medical facility for the purposes of identifying or tracking consumers seeking in-person health care, collecting health data, and sending notifications. 

How is the law enforced?

The new law provides for enforcement by the Nevada Attorney General. There is no private right of action.

For additional information on Nevada’s new privacy statute and other data privacy laws and regulations, please reach out to a member of our Privacy, Data, and Cybersecurity practice group.