When Colorado enacted the Colorado Privacy Act (CPA), it included within the definition of sensitive data “biometric data that may be processed for the purpose of uniquely identifying an individual.” However, the CPA as originally drafted did not cover the personal data of individuals acting in a commercial or employment context. Last week, Colorado amended the CPA to broaden the protections for biometric data when Gov. Jared Polis signed HB-1130 into law.

Application of the CPA Biometric Amendment. Importantly, HB-1130 alters the scope of the CPA’s application. Recall that under the CPA, a controller is subject to the CPA if it:

(i) determines the purposes and means of processing personal data, (ii) conducts business in Colorado or produces or delivers commercial products or services intentionally targeted to residents of the state, and (iii) either:  (a) controls or processes the personal data of more than 100,000 Colorado residents per year or (b) derives revenue from selling the personal data of more than 25,000 Colorado residents.

HB-1130 adds that a controller can be subject to the CPA without meeting the thresholds above: a controller that controls or processes any amount of biometric identifiers or biometric data is subject to the CPA, but solely to the extent of that biometric processing.
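For readers who like to see the applicability test laid out step by step, below is a minimal sketch of one way to read the thresholds and the HB-1130 biometric carve-in. The function and parameter names are purely illustrative, the conditions are paraphrased from the summary above, and the statutory text, not this simplification, controls.

```python
# Minimal, illustrative sketch only -- names and logic paraphrase the blog's
# summary of the CPA thresholds and HB-1130; the statutory text controls.

def cpa_applies(conducts_business_or_targets_co_residents: bool,
                co_residents_data_processed_per_year: int,
                co_residents_whose_data_is_sold_for_revenue: int,
                processes_any_biometric_identifiers_or_data: bool) -> bool:
    """Rough approximation of whether a controller falls within the CPA."""
    if not conducts_business_or_targets_co_residents:
        return False

    meets_general_thresholds = (
        co_residents_data_processed_per_year > 100_000
        or co_residents_whose_data_is_sold_for_revenue > 25_000
    )

    # HB-1130: controlling or processing any amount of biometric identifiers or
    # biometric data brings a controller within the CPA, but only as to that
    # biometric processing.
    return meets_general_thresholds or processes_any_biometric_identifiers_or_data
```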

Key Definitions. The amendment adds language expressly applicable to employers, including defining employees to include not only individuals employed on a full- or part-time basis, but also individuals who are “on-call” or hired as a “contractor, subcontractor, intern, or fellow.” The amendment also adds definitions for “biometric data” and “biometric identifier”:

“Biometric data” means one or more biometric identifiers that are used or intended to be used, singly or in combination with each other or with other personal data, for identification purposes. “Biometric data” does not include the following unless the biometric data is used for identification purposes: (i) a digital or physical photograph; (ii) an audio or voice recording; or (iii) any data generated from a digital or physical photograph or an audio or video recording.

“Biometric identifier” means data generated by the technological processing, measurement, or analysis of a consumer’s biological, physical, or behavioral characteristics, which data can be processed for the purpose of uniquely identifying an individual. “Biometric identifier” includes: (a) a fingerprint; (b) a voiceprint; (c) a scan or record of an eye retina or iris; (d) a facial map, facial geometry, or facial template; or (e) other unique biological, physical, or behavioral patterns or characteristics.

While these definitions bear some similarities to the corresponding definitions in the well-known Illinois Biometric Information Privacy Act (BIPA), there are some significant differences. One is that a biometric identifier under the BIPA is defined as a “retina or iris scan, fingerprint, voiceprint, or scan of hand or face geometry.” The Illinois law does not make reference to “other unique biological, physical, or behavioral patterns or characteristics.” Another is that, unlike the BIPA, the CPA amendment does not provide a private right of action.

Requirements. HB-1130 establishes several requirements for controllers that control or process one or more biometric identifiers. These requirements include:

  • Obtaining consent from the consumer (including the employee) before collecting the consumer’s biometric data.
  • Adopting a written policy that:
    • Establishes a retention schedule for biometric identifiers and biometric information;
    • Includes a process for responding to a data security incident that would compromise the security of biometric identifiers or biometric information, including the process for notifying consumers under the state’s existing data breach notification law; and
    • Establishes guidelines addressing the deletion of biometric identifiers within certain time frames.
  • Subject to certain exceptions, making the written policy available to the public. One exception is for a policy applying only to current employees of the controller.
  • Providing a reasonably accessible, clear, and meaningful privacy notice satisfying specific content requirements, including the purposes for processing.
  • Honoring certain rights the consumer may have with respect to their biometric data, including the right to access.

HB-1130 also prohibits controllers from certain activities concerning biometric identifiers such as:

  • Selling, leasing, or trading such information;
  • Disclosing biometric identifiers, subject to limited exceptions including consent and compliance with federal or state law; and
  • Refusing to provide a good or service to a consumer based on the consumer’s refusal to consent to the controller’s collection, use, disclosure, etc. of a biometric identifier, unless the biometric identifier is necessary to provide the good or service.

Controllers and processors also must use a reasonable standard of care when storing, transmitting, and protecting biometric identifiers from disclosure.

Employment provisions. HB-1130 includes certain specific provisions for employers. While the law provides that employers may require current or prospective employees to allow the employer to collect and process their biometric identifiers, they may do so only to:

  • Permit access to secure physical locations and secure electronic hardware and software applications (but not to obtain consent to retain such data for current employee location tracking or for tracking time using a hardware or software application),
  • Record the commencement and conclusion of the employee’s full workday, including meal breaks and rest breaks in excess of 30 minutes,
  • Improve or monitor workplace safety or security or ensure the safety or security of employees,
  • Improve or monitor the safety or security of the public in the event of an emergency or crisis situation.

Collecting or processing biometric identifiers for other purposes will require consent that satisfies the applicable CPA requirements. However, employers will be able to collect and process biometric identifiers where the anticipated uses are “aligned with the reasonable expectations” of an employee based on the employee’s job description or role, or of a prospective employee based on reasonable background check, application, or identification requirements.

Organizations that collect and process information that could be considered biometric identifiers or biometric data in various jurisdictions around the country will need to do a detailed analysis of the growing privacy and cybersecurity obligations, including incident response requirements. For assistance with that, please see our biometric law map.

In 2021, the Department of Labor (DOL) issued cybersecurity guidance for ERISA-covered retirement plans. The guidance expands the duties retirement plan fiduciaries have when selecting service providers. Specifically, the DOL makes clear that when selecting retirement plan service providers, plan fiduciaries must prudently assess the cybersecurity of those providers.  

On May 15, 2024, the Securities and Exchange Commission (SEC) adopted amendments to Regulation S-P which governs the treatment of nonpublic personal information about consumers by certain financial institutions, many of which are commonly vendors and service providers to retirement plans. For example, the amendments reach broker-dealers, investment companies, registered investment advisers, and transfer agents. Importantly, the amendments establish specific cybersecurity requirements for these entities, requirements that retirement plan fiduciaries should be aware of.

Some of the key requirements include:

  • Incident Response Program:
    • Covered institutions must develop, implement, and maintain written policies and procedures for an incident response program.
    • The program should be reasonably designed to detect, respond to, and recover from unauthorized access to or use of customer information.
  • Notice Requirements:
    • Covered institutions must provide notice to individuals whose sensitive customer information was accessed or used without authorization.
    • The notice must include details about the incident, breached data, and steps affected individuals can take to protect themselves.
    • Notice must be provided as soon as practicable, but not later than 30 days after becoming aware of the incident.
  • Service Provider Oversight:
    • Covered institutions must establish, maintain, and enforce written policies and procedures reasonably designed to require oversight, including through due diligence and monitoring, of service providers.

The amendments also set forth requirements for maintaining written records documenting compliance with the requirements. There are different retention periods depending on the type of covered institution, but the minimum is at least two years.

The amendments become effective 60 days after publication in the Federal Register. Larger entities will have 18 months after the date of publication to comply with the amendments, and smaller entities will have 24 months.

When assessing the cybersecurity of a retirement plan service provider that is a financial institution, plan fiduciaries may want to be aware of these requirements as part of their assessment process. For example, the changes to the SEC requirements for incident reporting may be useful to retirement plan sponsors as they consider their own incident response plans, should a data breach experienced by a 401(k) plan involve the data of their current and former employees.  

If you have questions about steps plan fiduciaries should be thinking about when assessing service providers to their plans, including the potential impact of the SEC’s amendments to Regulation S-P, contact a member of Jackson Lewis’ Privacy, Data, and Cybersecurity practice group to discuss.

Last year, the White House weighed in on the use of artificial intelligence (AI) in businesses with an executive order.

Since the executive order, several government entities including the Department of Labor have released guidance on the use of AI.

And now the White House published principles to protect workers when AI is used in the workplace.

The principles apply to both the development and deployment of AI systems. These principles include:

  • Awareness – Workers should be informed of and have input in the design, development, testing, training, and use of AI systems in the workplace.
  • Ethical development – AI systems should be designed, developed, and trained in a way to protect workers.
  • Governance and Oversight – Organizations should have clear governance systems and oversight for AI systems.
  • Transparency – Employers should be transparent with workers and job seekers about AI systems being used.
  • Compliance with existing workplace laws – AI systems should not violate or undermine workers’ rights, including the right to organize, health and safety rights, and other worker protections.
  • Enabling – AI systems should assist and improve workers’ job quality.
  • Supportive during transition – Employers should support workers during job transitions related to AI.
  • Privacy and Security of Data – Workers’ data collected, used, or created by AI systems should be limited in scope and used to support legitimate business aims.

If you have questions about the federal government’s guidance pertaining to the use of AI in the workplace or related issues, contact a Jackson Lewis attorney to discuss.

On May 1, 2024, amendments to Utah’s cybersecurity and data breach notification law took effect.

The state’s cybersecurity and data breach notification law requires an organization that conducts business in the State of Utah to prevent the unlawful use or disclosure of personal information collected by the organization.

Under the requirements, if an organization that owns or maintains the personal information of a Utah resident becomes aware of a breach of system security the organization must investigate to determine if the personal information has been or will be misused. If misuse has occurred or is likely to occur, the organization must notify every affected Utah resident. And if 500 or more Utah residents are affected the organization must notify the Utah Attorney General’s Office and the Utah Cyber Center. The Utah Cyber Center coordinates efforts between state, local, and federal resources to support security and defend against cyber-attacks.
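As a simple way to visualize the notification triggers described above, here is a minimal, illustrative sketch; the function and parameter names are hypothetical, the logic paraphrases this summary, and the statute, not this simplification, controls.

```python
# Minimal, illustrative sketch only -- parameter names are hypothetical and the
# logic paraphrases the blog's summary; the statute controls.

def utah_breach_notification_steps(misuse_occurred_or_likely: bool,
                                   affected_utah_residents: int) -> list[str]:
    """Rough outline of the notification triggers described above."""
    steps = ["Investigate whether personal information has been or will be misused."]
    if misuse_occurred_or_likely:
        steps.append("Notify every affected Utah resident.")
        if affected_utah_residents >= 500:
            steps.append("Notify the Utah Attorney General's Office and the Utah Cyber Center.")
    return steps
```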

The recent amendments revise the definition of “personal data” to be information that “is linked or can be reasonably linked” to an identified individual or identifiable individual.

Concerning nongovernmental entities, the amendments add a definition for the term “data breach,” which is now defined as the “unauthorized access, acquisition, disclosure, loss of access, or destruction of” the personal data of 500 or more individuals; or of data that “compromises security, confidentiality, availability, or integrity of the computer system in use or information maintained by a governmental entity.”

The amendments reiterate that the disclosure of a breach may be confidential and classified as a protected record.

The amendments require reporting entities to include additional information in breach notifications including:

  • the date the breach of system security occurred;
  • the date the breach was discovered;
  • the total number of people impacted by the breach, with a breakout of the total number of Utah residents;
  • the type of personal information involved in the breach; and
  • a short description of the breach that occurred.

Utah also revised reporting requirements for governmental entities that discover a data breach. Governmental entities must include all of the above-referenced items when reporting to the Cyber Center and also:

  • The path or means by which access was gained to the system, computer, or network, if known;
  • The individual or entity who perpetrated the data breach, if known; and
  • Any other details requested by the Cyber Center.

If you have questions about Utah’s breach notification requirements or related issues, please reach out to a member of our Privacy, Data, and Cybersecurity practice group to discuss.

As reported by CNN, a high school principal in Pikesville, Maryland, found his life and career turned upside down when in January a recording suggesting the principal made racially insensitive and antisemitic remarks went viral. The school faced a flood of calls from concerned persons in the district, security was tightened, and the principal was placed on administrative leave. No doubt, a challenging situation for any human resources executive, one made far more difficult because of AI.

An investigation ensued; all the while, the school principal maintained that he did not make the statements in the recording – it was not his voice, he claimed. Of course, the “recording” was good enough to put the school district on edge.

It was not until months later, in late April, that a Baltimore County Police Department investigation concluded that the recording was a fake, a “deepfake,” generated by artificial intelligence (AI) technology. As reported by CNN, Baltimore’s County Executive, Johnny Olszewski, observed:

“Today, we are relieved to have some closure on the origins of this audio…However, it is clear that we are also entering a new, deeply concerning frontier.”

Deepfake AI is a type of artificial intelligence used to create convincing images, audio and video hoaxes. Although deepfakes might have some utility, such as for entertainment purposes, they blur the lines between reality and fiction, making it increasingly difficult to discern truth from falsehood. As in the case of the Baltimore school principal, misuse raises significant concerns, particularly in the workplace. It turns out that the deepfake recording may have arisen from an employment dispute that the principal was having with the high school’s athletic director.

The US Department of Homeland Security and other agencies have recognized the threat deepfakes present. At the same time, the technology is getting easier and easier to use and harder to identify. In this case, it took three months for the Baltimore County Police Department to investigate and make a determination about the recording.

The World Economic Forum’s “4 ways to future-proof against deepfakes in 2024 and beyond” offers a sobering suggestion for dealing with deepfakes – zero-trust.

This mindset aligns with mindfulness practices that encourage individuals to pause before reacting to emotionally triggering content and engage with digital content intentionally and thoughtfully.

This may not be the mindset most HR professionals prefer to have at or near the top of their lists. But in this context, when presented with electronic material or even a photograph from an unknown source, despite how real it might appear, intentionality and thoughtfulness should prevail.

Consider being presented, as here, with a video, a recording, or some other image, photograph, or transcribed conversation containing insensitive remarks purportedly made by an employee about another’s race, religion, gender, etc. An organization might not have a police department willing and able to assist, yet it might face just as much pressure in the workplace from persons reacting to the content, believing it is authentic when it may be nothing more than a fake. Having an internal plan outlining a process for investigation, and resources (internal or external) lined up to evaluate the material, would help to ensure that intentionality and thoughtfulness. Such a plan might also guide decision-making around the various employment decisions to be made along the way, including internal and external communications, should the investigation carry on.

This is only the beginning.

On April 22, 2024, the federal Department of Health and Human Services’ Office for Civil Rights (OCR) announced a final rule enhancing privacy protections relating to reproductive health care. Specifically, the final rule amends the Privacy Rule under the Health Insurance Portability and Accountability Act (HIPAA) to, among other things, establish new limits on the use or disclosure of protected health information (PHI) relating to reproductive health care. Citing the Supreme Court decision in Dobbs v. Jackson Women’s Health Organization and its far-reaching implications for reproductive health care, the OCR asserts that the rule change is necessary in order to ensure, among other things, that individuals are not afraid to seek reproductive health care.

Under HIPAA, the Privacy Rule is one of several rules, collectively known as the HIPAA Rules, that protect the privacy and security of individuals’ protected health information (PHI). The OCR administers and enforces the Privacy Rule, which requires most health care providers, health plans, health care clearinghouses, and business associates (collectively, “regulated entities”) to safeguard the privacy of PHI and sets limits and conditions on the uses and disclosures of such information.  

PHI generally refers to individually identifiable health information transmitted by or maintained in electronic media or any other form or medium. A basic requirement of the Privacy Rule is that PHI may not be used or disclosed except as permitted under HIPAA, which can be further limited by contrary, more stringent state law. Disclosures of PHI are required only in limited circumstances, such as when required by the Secretary of Health and Human Services to investigate a covered entity’s compliance with the Privacy Rule, or to the individual pursuant to the individual’s right of access. In other limited cases, uses and disclosures of PHI may be made (they are permitted, not required) without the authorization of the individual, such as for treatment, payment, or healthcare operations.

Even with these protections, the OCR observed several concerns relating to the use and disclosure of certain PHI related to reproductive healthcare. These include potential harm caused by disclosing such information for non-health care purposes, such as to conduct an investigation against, or to impose liability upon, an individual or another person who receives or delivers reproductive healthcare. According to the OCR, these situations may chill an individual’s willingness to seek lawful healthcare treatment or to provide full information to their health care providers when obtaining that treatment. They also may hamper the willingness of health care providers to provide such care.

OCR received almost 30,000 public comments on the proposed rule. After considering those comments, the OCR’s final rule:

  • Prohibits the use or disclosure of PHI when it is sought to investigate or impose liability on individuals, health care providers, or others who seek, obtain, provide, or facilitate reproductive health care that is lawful under the circumstances in which such health care is provided, or to identify persons for such activities.
  • Requires a regulated health care provider, health plan, clearinghouse, or their business associates, to obtain a signed attestation that certain requests for PHI potentially related to reproductive health care are not for these prohibited purposes.
  • Requires regulated health care providers, health plans, and clearinghouses to modify their Notice of Privacy Practices to support reproductive health care privacy.

The final rule is effective 60 days after publication in the Federal Register, and regulated entities will have 180 days after that to comply. However, the OCR extended the compliance date for required updates to Notices of Privacy Practices (NPP). The agency considered additional changes that are required to NPPs under the 2024 Confidentiality of Substance Use Disorder Patient Records Final Rule (rules seeking to better harmonize HIPAA with rules pertaining to certain federally funded substance abuse treatment programs under 42 CFR Part 2). The compliance date for those changes is February 16, 2026. The OCR adopted the same deadline for these changes.

The final rule will have several other implications. For example, some commenters questioned how the rule would affect their current business associate agreements. The OCR noted that the final rule may require regulated entities to revise existing business associate agreements where such agreements permit regulated entities to engage in activities that are no longer permitted under the revised Privacy Rule. Another concern commenters raised is whether minors and legal adults have the same protections under the Privacy Rule and whether this rule would alter existing protections. The OCR assured the commenters that the final rule does not change how the Privacy Rule applies to adults and minors – the protections provided to PHI by this final rule apply equally to adults and minors. For example, under this final rule, a regulated entity is prohibited from using or disclosing a minor’s PHI for the purposes prohibited under the final rule.  

The final rule includes conforming and clarifying changes to the HIPAA Rules, such as:

  • clarifying the definition of “person”;
  • adopting new definitions of “public health” surveillance, investigation, or intervention, and “reproductive health care”;
  • adding a new category of prohibited uses and disclosures;
  • clarifying that a regulated entity may not decline to recognize a person as a personal representative for the purposes of the Privacy Rule because they provide or facilitate reproductive health care for an individual;
  • imposing a new requirement that, in certain circumstances, regulated entities must first obtain an attestation that a requested use or disclosure is not for a prohibited purpose; and
  • requiring modifications to covered entities’ NPPs to inform individuals that their PHI may not be used or disclosed for a purpose prohibited under this final rule.

Regulated entities will need not only to review and update their written policies and procedures, but also to ensure that established practices by workforce members are retooled to conform to the new requirements. Training, therefore, will be helpful in ensuring compliance with the new requirements.

“Cybersecurity” has emerged as one of the top risks facing organizations. Considering the steady stream of massive data breaches affecting millions (sometimes billions), the debilitating effects of ransomware on an organization’s information systems, the intrigue of international threat actors, and the mobilization and collaboration of national law enforcement to thwart these attacks, it’s no wonder. Notions of privacy have long underpinned critical principles and rights in our legal system, yet privacy risks typically do not involve actors with names like LockBit or Black Basta using applications called Cobalt Strike, and [yawn] may not trigger concerns as seemingly compelling as cybersecurity. But that may be changing, at least in the minds of insurance underwriters and persons focused on compliance.

As a recent DarkReading article points out, there is a growing sense that the “mishandling [of] protected personally identifiable information (PII) could rival the cost of ransomware attacks.” The article discusses several reasons driving this view, citing, among other things, the recent uptick in pixel litigation. That is, litigation concerning the handling of website users’ personal information obtained from tracking technologies on websites without consent.

However, the article also alludes to the vast patchwork of nuanced privacy laws across numerous jurisdictions as support for an increasing number of insurance professionals viewing privacy as the “top insurance concern.” In addition to the onslaught of litigation over the use of website tracking technologies, the challenges of navigating the ever expanding and deepening maze of privacy law seem to present much greater compliance and litigation risks for organizations.

An Insurance Journal article, “The Cyber Risk Pendulum,” echoed these sentiments earlier this month and observed:

In 2024, there is a greater focus [by carriers] on controls related to “wrongful collection” coverage – the collection of data in a manner that could run afoul of privacy regulations – whether it be on a state or federal level.

This makes sense considering the emergence of state comprehensive privacy laws, most notably the California Consumer Privacy Act (CCPA). Consider that the first “Enforcement Advisory” issued by the California Privacy Protection Agency, the agency charged with enforcing the CCPA, focuses on “data minimization” – a requirement that includes assessing the collection, use, retention, and sharing of personal information from the perspective of minimizing the personal information processed for the intended purpose(s).   

For many organizations, different privacy laws can apply depending on a range of factors, including without limitation: industry, business location, categories of customers, types of equipment used, specific services provided, methods of marketing and promotion, the categories of information collected, and employment practices.

Consider a health care organization:

  • Industry: Of course, most if not all have at least heard of the Health Insurance Portability and Accountability Act (HIPAA). Covered entities and business associates (defined terms under HIPAA generally including healthcare providers and service providers to those entities) must comply with a comprehensive set of privacy regulations regulating the use and disclosure of all protected health information, regardless of format.
  • Where it does business: All states have long-standing health laws regulating the use and disclosure of patient medical information. Indeed, HIPAA provides that covered entities and business associates have to comply with more stringent state laws that conflict with HIPAA, a particular challenge for multi-state organizations. In addition to state health laws affecting the use and disclosure of patient information, common law privacy rights and obligations also need to be considered.
  • Types of customers: A healthcare provider might provide services to or on behalf of government entities, in which case it may have to comply with certain contractor mandates. Or, it may focus its health services on minors rather than adults, requiring it to understand, for example, the specific rules around consent for medical information pertaining to minors. Mental healthcare providers may have an additional layer of privacy obligations concerning their patients.
  • Equipment it uses: Whether dealing with medical devices, GPS tracking of vehicles, biometric devices used to verify access to certain drugs, or smart cameras for facility surveillance, healthcare organizations must consider the privacy issues related to the different types of equipment used in the delivery of care and operations. The increasing use of biometrics, as one example, has become a major risk in and beyond the healthcare industry, particularly in Illinois. By some counts, alleged violations of the Illinois Biometric Information Privacy Act (BIPA) have led to nearly 2,000 putative class action cases. The BIPA, a privacy statute, creates a remedy for, among other things, failing to obtain a consent or written release in connection with collecting a biometric identifier or biometric information.
  • Types of services:
    • University hospitals, for example, also have compliance obligations under the Family Educational Rights and Privacy Act (FERPA).
    • Providers running certain federally assisted programs involving substance use services must comply with the substance abuse confidentiality regulations issued by the Substance Abuse and Mental Health Services Administration. See 42 CFR Part 2 (although recent regulations finalized in February strive to align these two privacy frameworks).
    • When treating certain highly contagious diseases, providers also must consider laws regulating the use and disclosure of information related to those diseases, which often provide stronger protections and limitations on disclosure.
    • A healthcare provider that performs genetic testing services must consider the applicable genetic information privacy laws, which exist in just about all 50 states. One such law is the Illinois Genetic Information Privacy Act (GIPA) passed in 1998. This law may become the next significant privacy target for the Illinois plaintiffs’ bar. Arguably more nuanced than its sister statute, the BIPA, the GIPA has been the subject of an increasing number of case filings in the past year. Compliance can be challenging. For example, the GIPA incorporates some familiar laws – GINA, ADA, Title VII, FMLA, OSHA, and others – requiring that certain entities, including employers, treat genetic testing and genetic information (including certain family medical history information) in a manner consistent with such laws. So, it is not just the GIPA that organizations need to worry about in order to comply with the GIPA.
  • Marketing its services: In addition to the use of tracking technologies referenced above, other means of collecting and sharing personal information to promote the organization’s business may have significant privacy consequences under federal and state consumer protection laws. Examples include emailing and texting, use of employee and patient images and likeness in advertisements, and sharing personal information with third parties in connection with marketing and promotion activities.
  • Categories of personal information: Not all “personal information” is the same. The post at the link just scratches the surface on the various definitions of data that may drive different compliance obligations, including for healthcare organizations.
  • Employment practices: The processing of personal information pertaining to employees, applicants, contractors, etc. creates an additional layer of privacy obligations that touch on many of the items noted above. Areas of particular concern include the increasing use of AI in hiring and promotion, workplace surveillance, methods of identity verification, managing employee medical information, and maintaining employee benefit plans. Each of these areas raises particular issues under federal and/or state law, shaped by the categories of information at issue.

Attempting to track, never mind become compliant with, the various privacy laws affecting each of these facets of the business is no easy task. We have not even considered the broader, more detailed, and more comprehensive privacy frameworks established internationally, such as the EU General Data Protection Regulation (GDPR). And, of course, it is not just healthcare providers that face these privacy challenges at various levels of their operations. Keeping information secure from cyberattacks is one thing, and it too is quite challenging, but there are established frameworks for doing so that share many common threads. In the case of privacy, there seem to be many more subtle considerations that are critical for compliance.

For instance, in most cases establishing a password policy under a cybersecurity law to protect personal information is solving for one issue – requiring persons to develop a relatively strong password that will make it difficult for an unauthorized person to gain access to the protected system. This may be oversimplifying, but the point is that a good password policy might suffice under many different cybersecurity laws, regardless of state, type of business, category of data, etc. Complying with a privacy law regulating the disclosure of health information, on the other hand, likely will require several factors to be considered: the type of entity, where it does business, the specific type of data, the individual’s age or medical condition, the reason for the disclosure, the intended recipient, etc.

Regulatory compliance is not the end of the story for privacy. For example, organizations can cause self-inflicted wounds when they make assertions about the handling and safeguarding of the personal information they collect, and fail to meet those assertions. A good example is the privacy policy on an organization’s website. Stating in such a policy that the organization will “never” disclose the personal information collected on the site may create a binding obligation on the organization, even if there is not a law that requires such a rule concerning disclosure. Check out the Federal Trade Commission’s enforcement of these kinds of issues in its recently issued 2023 Privacy and Data Security Update.

Is privacy a bigger risk than cyber? Maybe. Regardless, trying to keep track of and comply with the wide range of privacy laws is no easy task, particularly considering that so much of the application of those laws is determined by many factors. For this reason, it is not hard to see why underwriters may view privacy as their top concern, and why organizations need trusted and experienced partners to help navigate the maze.

On April 4, 2024, Kentucky’s Governor signed House Bill 15, which establishes a consumer data privacy law for the state. The state joins New Hampshire and New Jersey in passing comprehensive consumer privacy laws in 2024. Kentucky’s law takes effect January 1, 2026.

To whom does the law apply?

The law applies to persons, hereafter referred to as controllers, that conduct business in Kentucky or produce products or services that are targeted to residents of Kentucky and during a calendar year control or process personal data of at least:

  • 100,000 consumers; or
  • 25,000 consumers and derive over 50% of gross revenue from the sale of personal data.

Who is protected by the law?

A consumer protected under the new legislation is defined as a natural person who is a resident of Kentucky, acting in an individual context. A consumer does not include a person acting in a commercial or employment context.  

What data is protected by the law?

The legislation protects personal data defined as information that is linked or reasonably linkable to an identified or identifiable natural person.

Sensitive data is defined under the law as personal data indicating racial or ethnic origin, religious beliefs, mental or physical health diagnosis, sexual orientation, or citizenship or immigration status. It also includes genetic or biometric data that is processed to uniquely identify a specific natural person, the personal data of a minor, and precise geolocation data.

What are the rights of consumers?

Under the law, consumers have the following rights:

  • To confirm whether a controller is processing their personal data
  • To correct inaccurate personal data
  • To delete personal data maintained by the controller
  • To opt out of the processing of personal data for targeted advertising, sale, or certain profiling

What obligations do controllers have?

Under the legislation, controllers must:

  • Establish, implement, and maintain reasonable administrative, technical, and physical data security practices;
  • Limit the collection of personal data to what is adequate, relevant, and reasonably necessary in relation to the purposes for processing; and
  • Obtain consent from consumers before processing sensitive data concerning the consumer.

How is the law enforced?

The Attorney General has exclusive authority to enforce violations of the legislation. The law does provide for a 30-day right to cure violations by controllers and processors of data.

If you have questions about Kentucky’s privacy law or related issues, please reach out to a member of our Privacy, Data, and Cybersecurity practice group to discuss.

Some are suggesting that the draft legislation being called the American Privacy Rights Act (Act) could be the one! For many years, Congress has been unable to come together to craft a national privacy law. There have been several snags, including whether to preempt state privacy laws and whether to provide a private right of action. However, it looks like House Energy and Commerce Chair Cathy McMorris Rodgers (R-Wash.) and Senate Commerce Chair Maria Cantwell (D-Wash.) may have come to terms on such a law.

As reported in Bloomberg, the two lawmakers noted:

“This bipartisan, bicameral draft legislation is the best opportunity we’ve had in decades to establish a national data privacy and security standard that gives people the right to control their personal information,” Rodgers and Cantwell said in a statement on Sunday. “Americans deserve the right to control their data and we’re hopeful that our colleagues in the House and Senate will join us in getting this legislation signed into law.”

Following the enactment of the California Consumer Privacy Act, many states have followed California’s lead including, most recently, New Jersey, New Hampshire, and Kentucky. The state laws are quite similar in structure – a broad definition of personal information, duties for businesses/controllers and service providers/processors (e.g., notice, policy, safeguards, data minimization), and greater rights and transparency for consumers concerning their personal information (e.g., opt out of sale, deletion, correction, etc.). However, there are differences state to state.   

Still at the early stages, the Act attempts to push through the challenges of prior failed efforts and remedy the patchwork of state privacy laws. The draft legislation includes a private right of action and a state law preemption provision, while also including many of the same rights for consumers now enjoyed by residents of some states. A section-by-section summary provides more details.

We will be following this legislation closely!

A manager’s text to one of his drivers, who had covered the truck’s inward-facing camera while stopping for lunch – “you can’t cover the camera it’s against company rules” – is not unlawful under the National Labor Relations Act (NLRA), according to a recent decision by the D.C. Circuit Court of Appeals.

A practice that has a reasonable tendency to coerce employees in the exercise of their rights under the NLRA is unlawful, according to National Labor Relations Board (NLRB) precedent. An employer’s creating an impression that it is surveilling employees while exercising their rights under the NLRA may constitute such coercion, according to the NLRB. In Stern Produce Co., Inc. v. NLRB, the Board argued the manager’s texting created such an impression. The D.C. Circuit Court of Appeals disagreed.

Like many companies managing a fleet of vehicles, in this case delivery trucks, Stern Produce Co. equips its trucks with dash-cams and telematics technologies. These systems can serve important functions for businesses – helping to ensure safe driving, protecting drivers and the businesses from liability for accidents for which they are not at fault, improving efficiencies through tracking location, etc. They also raise significant privacy issues, not the least of which stem from inward-facing cameras.

Stern required drivers to keep truck dash-cams on at all times, unless authorized to turn them off. While driving a truck for Stern, Ruiz parked for a lunch break and covered the truck’s inward facing camera. Hours later, Ruiz’s manager sent him a text: “Got the uniform guy for sizing bud, and you can’t cover the camera it’s against company rules.”

Perhaps in a move to further the positions outlined in a November 2022 memorandum concerning workplace surveillance, the Board’s General Counsel issued a complaint, alleging that the text created an impression of surveillance of organizing activities by making Ruiz aware that he was being watched. According to the Administrative Law Judge, the text did not create an impression of surveillance, but amounted to “mere observation” which was consistent with “longstanding company policies” about truck cameras. Those policies included Stern’s handbook which reserved for Stern the right to “monitor, intercept, and/or review” any data in its systems and to inspect company property at any time without notice. The handbook instructed drivers that they “should have no expectation of privacy” in any information stored or recorded on company systems, including “[c]losed-circuit television” systems, or in any company property, including vehicles. The company also maintained a manual for drivers that addressed the telematics and dash-cam technologies in their trucks. Specifically, the manual states that “[a]ll vehicle safety systems, telematics, and dash-cams must remain on at all times unless specifically authorized to turn them off or disconnect.”

The Board disagreed. Ruiz was a known supporter of a union organizing drive and had previously been subjected to unfair labor practices. Due in part to this history, the Board held the surveillance was “out of the ordinary” and reasoned that the manager had no justification for reviewing the camera, as he had done so in the past only in connection with safety concerns.

Stern’s handbook and driver manual proved to be important to the D.C. Circuit’s analysis. The court noted that drivers were aware of the potential monitoring through the dash-cams and that those cameras must remain on at all times. The Board’s position that there was no evidence that Ruiz knew these policies when he covered the camera was “nonsense,” according to the court. Beyond the policies, the court reasoned that a driver would not have a basis to believe he was being monitored for organizing activities when (i) the driver knew he could be monitored in the vehicle at all times, and (ii) there was no evidence of union activity going on in the small cab of a delivery truck.

It is worth noting that the court recognized that elevated or abnormal scrutiny of pro-union employees can support a finding of impressions of surveillance. That was not the case here, even with Ruiz being a known supporter of union organizing efforts. The manager’s one-time, brief text was, according to the court, consistent with company policy, and did not suggest Ruiz was singled out for union activity. The Board did not satisfy the coercion element.

Takeaways from this case

The ubiquity and sophistication of dash-cams and similar monitoring and surveillance technologies raise a host of legal, compliance, and other issues, both in and outside of a labor-management context. While this case focused on potential violations of a worker’s rights under the NLRA, there are several key takeaways beyond labor relations.

  • Understand the technology. This case considered a relatively mundane feature of today’s dash-cams – video cameras. However, current dash-cam technology increasingly leverages more sophisticated technologies, such as AI and biometrics. Decisions to adopt and deploy devices so equipped should be considered carefully.
  • Assess legal and compliance requirements. According to the court in this case, the policies adopted and communicated by the employer were adequate to apprise employees of the vehicle monitoring and mandatory video surveillance in the vehicle. However, depending on the circumstances, more may have been needed. The particular technology at issue and applicable state laws are examples of factors that could trigger additional legal requirements. Such requirements could include (i) notice and policy obligations under the California Consumer Privacy Act, (ii) notice requirements for GPS tracking in New Jersey, (iii) potential consent requirements for audio recording, and (iv) consent requirements for collection of biometrics.
  • Develop and communicate clear policies addressing expectations of privacy. Whether employees are working in the office, remotely from home, or in a vehicle, having clear policies concerning the nature and scope of permissible workplace monitoring is essential. The court in Stern relied significantly on the employer’s policies in finding that it had not violated the NLRA.
  • Provide guidance to managers. Maintaining the kinds of written policies discussed above may not be enough. The enforcement of such policies, particularly in the labor context, also could create liability for employers. In this case, more aggressive actions by the manager directed only at Ruiz could have created an impression of surveillance that coerced the employee in the exercise of his rights. Accordingly, training for managers, and even an internal policy for managers, may be useful in avoiding and/or defending against such claims, as well as other claims relating to discrimination, invasion of privacy, harassment, etc.