A manager's text to one of his drivers, who had covered his truck's inward-facing camera while stopped for lunch – "you can't cover the camera it's against company rules" – did not violate the National Labor Relations Act (NLRA), according to a recent decision by the D.C. Circuit Court of Appeals.

Under National Labor Relations Board (NLRB) precedent, a practice that has a reasonable tendency to coerce employees in the exercise of their rights under the NLRA is unlawful. An employer that creates the impression it is surveilling employees as they exercise those rights may engage in such coercion, according to the NLRB. In Stern Produce Co., Inc. v. NLRB, the Board argued the manager's text created such an impression. The D.C. Circuit Court of Appeals disagreed.

Like many companies managing a fleet of vehicles, in this case delivery trucks, Stern Produce Co. equips its trucks with dash-cams and telematics technologies. These systems can serve important functions for businesses – helping to ensure safe driving, protecting drivers and the business from liability for accidents for which they are not at fault, improving efficiency through location tracking, etc. They also raise significant privacy issues, not least because of inward-facing cameras.

Stern required drivers to keep truck dash-cams on at all times, unless authorized to turn them off. One of Stern's drivers, Ruiz, parked for a lunch break and covered the truck's inward-facing camera. Hours later, Ruiz's manager sent him a text: "Got the uniform guy for sizing bud, and you can't cover the camera it's against company rules."

Perhaps in a move to further the positions outlined in a November 2022 memorandum concerning workplace surveillance, the Board’s General Counsel issued a complaint, alleging that the text created an impression of surveillance of organizing activities by making Ruiz aware that he was being watched. According to the Administrative Law Judge, the text did not create an impression of surveillance, but amounted to “mere observation” which was consistent with “longstanding company policies” about truck cameras. Those policies included Stern’s handbook which reserved for Stern the right to “monitor, intercept, and/or review” any data in its systems and to inspect company property at any time without notice. The handbook instructed drivers that they “should have no expectation of privacy” in any information stored or recorded on company systems, including “[c]losed-circuit television” systems, or in any company property, including vehicles. The company also maintained a manual for drivers that addressed the telematics and dash-cam technologies in their trucks. Specifically, the manual states that “[a]ll vehicle safety systems, telematics, and dash-cams must remain on at all times unless specifically authorized to turn them off or disconnect.”

The Board disagreed. Ruiz was a known supporter of a union organizing drive and had previously been subjected to unfair labor practices. Due in part to this history, the Board held the surveillance was "out of the ordinary" and reasoned that the manager had no justification for reviewing the camera footage, as he had done so in the past only in connection with safety concerns.

Stern’s handbook and driver manual proved to be important to the D.C. Circuit’s analysis. The court noted that drivers were aware of the potential monitoring through the dash-cams and that those cameras must remain on at all times. The Board’s position that there was no evidence that Ruiz knew these policies when he covered the camera was “nonsense,” according to the court. Beyond the policies, the court reasoned that a driver would not have a basis to believe he was being monitored for organizing activities when (i) the driver knew he could be monitored in the vehicle at all times, and (ii) there was no evidence of union activity going on in the small cab of a delivery truck.

It is worth noting that the court recognized that elevated or abnormal scrutiny of pro-union employees can support a finding of impressions of surveillance. That was not the case here, even with Ruiz being a known supporter of union organizing efforts. The manager’s one-time, brief text was, according to the court, consistent with company policy, and did not suggest Ruiz was singled out for union activity. The Board did not satisfy the coercion element.

Takeaways from this case

The ubiquity and sophistication of dash-cams and similar monitoring and surveillance technologies raise a host of legal, compliance, and other issues, both in and outside of a labor-management context. While this case focused on potential violations of a worker's rights under the NLRA, it offers several key takeaways beyond labor relations.

  • Understand the technology. This case considered a relatively mundane feature of today’s dash-cams – video cameras. However, current dash-cam technology increasingly leverages more sophisticated technologies, such as AI and biometrics. Decisions to adopt and deploy devices so equipped should be considered carefully.
  • Assess legal and compliance requirements. According to the court in this case, the policies adopted and communicated by the employer were adequate to apprise employees of the vehicle monitoring and mandatory video surveillance in the vehicle. However, depending on the circumstances, more may have been needed. The particular technology at issue and applicable state laws are examples of factors that could trigger additional legal requirements. Such requirements could include (i) notice and policy obligations under the California Consumer Privacy Act, (ii) notice requirements for GPS tracking in New Jersey, (iii) potential consent requirements for audio recording, and (iv) consent requirements for collection of biometrics.
  • Develop and communicate clear policies addressing expectation of privacy. Whether employees are working in the office, remotely from home, or in a vehicle, having clear policies concerning the nature and scope of permissible workplace monitoring is essential. The court in Stern relied significantly on the employer's policies in finding that it had not violated the NLRA.
  • Provide guidance to managers. Maintaining the kinds of written policies discussed above may not be enough. The enforcement of such policies, particularly in the labor context, also could create liability for employers. In this case, more aggressive actions by the manager directed only at Ruiz could have created an impression of surveillance that coerced the employee in the exercise of his rights. Accordingly, training for managers, and even an internal policy for managers, may be useful in avoiding and/or defending against such claims, as well as other claims relating to discrimination, invasion of privacy, harassment, etc.

The California Privacy Protection Agency (CPPA) issued its first enforcement advisory concerning the California Consumer Privacy Act (CCPA). In Enforcement Advisory No. 2024-01, the CPPA tackles a foundational principle – data minimization. Much of the attention surrounding the CCPA seems to focus on website privacy policies, notices at collection, and consumer rights requests. With its inaugural advisory directed at data minimization, the CPPA may be reminding covered businesses, service providers, and others that CCPA compliance requires a deeper review of an organization's practices concerning the collection, use, retention, and sharing of personal information.

First, a word on CPPA "Enforcement Advisories." Because this advisory is the first of its kind for the CCPA, we thought it would make sense to convey what the agency noted about these advisories:

Enforcement Advisories address select provisions of the California Consumer Privacy Act and its implementing regulations. Advisories do not cover all potentially applicable laws or enforcement circumstances; the Enforcement Division will make case-by-case enforcement determinations. Advisories do not implement, interpret, or make specific the law enforced or administered by the California Privacy Protection Agency, establish substantive policy or rights, constitute legal advice, or reflect the views of the Agency’s Board.

Based on this language, while it appears that an enforcement advisory will not provide a compliance safe harbor, there are valuable insights to be gained concerning the potential application of the CCPA.

For any organization concerned about data risk, data minimization is certainly one way to mitigate that risk. Most organizations work diligently to design and build information systems that prevent unauthorized access to those systems. But, when that unauthorized access happens, and it does, the data is compromised. If there is less of that data in the compromised system, risk has been mitigated, even if not eliminated.

The concept of data minimization did not originate with the CCPA. For example, under HIPAA, covered entities and business associates must comply with the minimum necessary rule. According to the CPPA:

Data minimization serves important functions. For example, data minimization reduces the risk that unintended persons or entities will access personal information, such as through data breaches. Data minimization likewise supports good data governance, including through potentially faster responses to consumers’ requests to exercise their CCPA rights. Businesses reduce their exposure to these risks and improve their data governance by periodically assessing their collection, use, retention, and sharing of personal information from the perspective of data minimization.  

The process of achieving data minimization can be challenging as it does not lend itself to a one-size-fits-all approach. Under the CCPA, businesses must apply the data minimization principle "to each purpose for which they collect, use, retain, and share consumers' personal information—including information that businesses collect when processing consumers' CCPA requests." As noted in the Enforcement Advisory, there are many obligations under the CCPA for which data minimization must be considered and applied, such as requests to opt-out of the sale or sharing of personal information, or requests to limit the use and disclosure of sensitive personal information. Of course, even the collection of personal information by a business must be "reasonably necessary and proportionate to achieve the purposes for which the personal information was collected or processed."

Applying this foundational principle, according to the Enforcement Advisory, essentially amounts to asking questions about the particular collection, use, retention, and sharing of personal information. In one example, the Advisory discusses how to apply data minimization to the process of verifying a consumer’s identity to process a request to delete personal information. It offers the following questions as examples of what a business might ask itself:

  • What is the minimum personal information that is necessary to achieve this purpose (i.e., identity verification)?
  • We already have certain personal information from this consumer. Do we need to ask for more personal information than we already have?
  • What are the possible negative impacts posed if we collect or use the personal information in this manner?
  • Are there additional safeguards we could put in place to address the possible negative impacts?

Considering the CCPA’s rules for verification and the needs of the business for that personal information, the business should make decisions for the verification process with minimization in mind. Further, minimization is something that should be periodically assessed.
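To make the exercise concrete, below is a minimal, hypothetical sketch of how a verification workflow might put the Advisory's questions into practice. The data elements and function names are our illustrative assumptions, not anything prescribed by the CCPA or the Advisory.

```python
# Hypothetical sketch: request only the personal information still needed
# to verify a consumer's deletion request, rather than re-collecting data
# the business already holds. Element names are illustrative assumptions.
MINIMUM_FOR_VERIFICATION = {"name", "email"}  # assumed minimal set for this purpose

def elements_to_request(already_on_file: set[str]) -> set[str]:
    """Return only the verification elements not already held for the consumer."""
    return MINIMUM_FOR_VERIFICATION - already_on_file

# Example: if the business already has the consumer's name and email,
# no additional personal information should be requested for verification.
print(elements_to_request({"name", "email", "phone"}))  # -> set()
print(elements_to_request({"name"}))                    # -> {'email'}
```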

The need to apply the principle of data minimization makes clear that CCPA compliance is more than posting a privacy policy on the business’s website. It requires, among other things, that businesses think carefully about what categories of personal information they are collecting, the sensitivity of those categories of personal information, the purpose(s) of that collection, and whether the information collected is minimized while still serving the applicable purposes.

As organizations continue to take steps to prevent cyberattacks, a near-universal recommendation is that they should implement multi-factor authentication (MFA), and for good reason. Organizations subject to the updated FTC Safeguards Rule, for example, are required to implement MFA. The Cybersecurity & Infrastructure Security Agency (CISA) includes MFA as a best practice. And for the insurance industry, “MFA has quickly become a minimum standard requirement for companies to be considered for cyber insurance coverage.”

However, according to a recent HIPAA Journal article, bad actors figured out a way around MFA (no April Fool’s joke here!):

The Los Angeles County Department of Mental Health has recently notified the California Attorney General about a breach of an employee’s email account. The email account had multi-factor authentication (MFA) in place; however, MFA was bypassed. The cyber threat actors bypassed MFA using a technique known as push notification spamming, where a user is sent multiple MFA push notifications to their mobile device in the hope that they will eventually respond. The employee did respond, resulting in their email account being compromised.

This is not to say that MFA is not a critical safeguard for securing an organization's systems, nor is this the first instance of MFA being bypassed. Rather, the incident described above should be a reminder that no means of system security is perfect. Organizations need to continue to make reasonable efforts to identify vulnerabilities and address them. They should not be overconfident in the security that MFA provides.

There are several ways to strengthen MFA, including using hardware-based MFA, limiting login attempts, training users to recognize push-spamming attacks, etc. Deciding which measures to adopt should be part of an ongoing process of continually monitoring the organization's systems and assessing information risk, including in light of the enhanced capabilities and creativity of bad actors, increasingly aided by AI. Doing so will not only help to protect the organization, but it also will improve its defensible position in a litigation or compliance review and reduce the likelihood of a data breach.
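As an illustration of the "limiting login attempts" point, the sketch below shows one simple way an identity team might throttle MFA push prompts to blunt push notification spamming. It is a minimal, hypothetical example: the limits, names, and escalation step are our assumptions, not any vendor's implementation.

```python
# Minimal, hypothetical sketch of throttling MFA push prompts to blunt
# "push notification spamming." The limits and escalation step below are
# illustrative assumptions, not any vendor's actual implementation.
import time
from collections import defaultdict, deque

PUSH_LIMIT = 3        # assumed: max push prompts per user per window
WINDOW_SECONDS = 300  # assumed: 5-minute window

_recent_pushes: dict[str, deque] = defaultdict(deque)

def allow_mfa_push(username: str) -> bool:
    """Return True if another MFA push prompt may be sent for this user."""
    now = time.time()
    window = _recent_pushes[username]
    # Drop prompt timestamps that have aged out of the window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= PUSH_LIMIT:
        # Too many prompts in a short period; require a stronger,
        # phishing-resistant factor (e.g., a hardware key) instead.
        return False
    window.append(now)
    return True
```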

On Wednesday, March 13, 2024, Members of European Parliament endorsed the Artificial Intelligence Act ("AI Act"), with 523 votes in favor, 46 against, and 49 abstentions. This is the world's first comprehensive AI law and is likely to have significant influence on the rapid development of AI regulation in other jurisdictions, including the United States.

Article 1 of the AI Act explains its purpose:

to improve the functioning of the internal market and promoting the uptake of human centric and trustworthy artificial intelligence, while ensuring a high level of protection of health, safety, fundamental rights enshrined in the Charter, including democracy, rule of law and environmental protection against harmful effects of artificial intelligence systems in the Union and supporting innovation

More specifically, in addition to harmonizing rules for developing, implementing, and using AI, the AI Act aims to (1) protect EU citizens’ fundamental rights, including from certain “high risk” AI; and (2) foster, rather than hinder, technological innovation and Europe’s AI leadership.

The Act categorizes AI into 4 levels of risk: unacceptable risk, high risk, limited risk, and low risk. Based on the risk level, individuals and entities within the scope of the Act, such as providers, deployers, importers, and distributors of AI systems (see “Definitions” in Article 3) are required to meet specific requirements. For example, AI with unacceptable risk is simply banned because it violates basic human and civil rights, manipulates human behaviors, or exploits human vulnerabilities.

Use of AI in employment is considered high-risk AI, categorized as such due to the significant threat it poses to civil rights. For employers utilizing high-risk AI, compliance will require a number of steps, including keeping accurate use logs, being transparent about the AI use, maintaining "human oversight," and other efforts to reduce risks.

Individuals are able to submit complaints about high-risk AI systems and are entitled to explanations about decisions made, such as employment decisions, based on the high-risk AI system.

According to Brando Benifei, the Internal Market Committee co-rapporteur from Italy, a goal of the AI Act is "to reduce risks, create opportunities, combat discrimination, and bring transparency. Thanks to Parliament, unacceptable AI practices will be banned in Europe and the rights of workers and citizens will be protected. The AI Office will now be set up to support companies to start complying with the rules before they enter into force. We ensured that human beings and European values are at the very centre of AI's development."

What's next? The AI Act is still subject to lawyer-linguist verification and must be endorsed by the Council. However, it is expected to be adopted before the end of the legislature and will enter into force 20 days after it is published in the Official Journal. It will be fully applicable two years after its entry into force, with some exceptions.

Jackson Lewis attorneys are closely monitoring the EU AI Act as well as U.S. AI regulation. Additional targeted updates regarding the EU AI Act will be posted as we near the effective date, by attorneys from both Jackson Lewis and the firm-led L&E Global alliance.    

The explosion of generative AI has spawned a wide range of personal and professional tools and applications. One noteworthy (no pun intended) example is notetakers that can capture, transcribe, and organize the content discussed at meetings (virtual or otherwise), enabling participants to engage more meaningfully in the discussion. They can even allow an individual to skip the meeting altogether without missing out! Of course, like any new AI or other technology, it is important to consider the risks along with the benefits.

There are already many AI notetakers on the market. Summaries like this can help potential users evaluate the different features, options, ratings, etc. In addition, potential users might consider the following questions when selecting and implementing an AI notetaker for their organization.

  • Does the tool record the conversation/meeting from which it develops the notes or transcript? If so, you will need to think about several issues, a few of which are discussed here.
    • One is whether you have complied with the applicable consent requirements. For example, some states, known as all-party or two-party consent states, require consent of all persons to a call before it can be recorded. Some AI notetakers can attend and record a meeting on behalf of the user. In some cases, the default setting may not alert others on a call that the AI notetaker is dialed in and recording the call. Organizations should alert employees of this possibility and address it accordingly. The organization also will need to consider whether it has provided appropriate notice of the collection of personal information from persons participating in the meeting. Businesses subject to the California Consumer Privacy Act (CCPA), for example, generally are required to provide a notice at collection to California residents concerning, among other things, the categories of personal information the business collects from them. This includes the business's employees. Accordingly, such businesses will need to evaluate notetakers along with other means for collecting personal information from such individuals.
    • Another issue is how a recording is handled once created – should it be encrypted, who is permitted to access it, how long should it be maintained, etc. Such recordings could become the subject of a litigation hold, or a data subject access request. For example, an individual whose personal information is covered by the CCPA or a similar law, might request access to that information or deletion of it.
  • Is your data used to train the notetaking tool? Some notetaking tools will use the transcriptions generated by customers to help improve the accuracy of the product. Of course, the organization using the tool will need to consider the confidentiality, privacy, and security of the information it permits its notetaking vendor to acquire for this purpose, and whether this practice raises regulatory or contractual issues. The tool might provide an opt out from this use and the organization will want to make sure to train employees to opt out, as needed.
  • What kind of confidential and personal information do you anticipate will be captured by the tool? As with many AI applications, it is critical to understand the use cases that you anticipate being served by the technology. The use cases can be wide-ranging and will be shaped by, among other things, the type of business and activities engaged in, which departments/employees in the organization are using the tool, and other factors. For example, in a law firm environment, using a notetaker likely will raise attorney-client privilege issues. In a healthcare environment, it is likely that a notetaker could capture protected health information (PHI) of patients. However, if a health system’s marketing department is using a notetaker, capturing PHI might be less likely, but still possible. So, when thinking about how your organization will use a notetaker, it is important to consider not only your organization’s regulatory environment, but also who in the organization will be permitted to use the technology and for what purpose(s), what representations have been made about disclosures of confidential and personal information, etc. See policy development below.
  • If the product promotes deidentification, what standard for deidentification applies? Depending on the use cases that an organization anticipates when using notetakers, deidentification may not be a critical issue. Businesses in the construction industry, for example, might find it unlikely that the organization's use of a notetaker would involve individually identifiable personal information. But where that is the case, and where the organization desires or needs to protect that information and/or minimize the creation of it, some notetakers offer deidentification functionality. In those cases, however, it will be important to understand the product's deidentification process. Healthcare entities subject to HIPAA, for example, must satisfy a specific regulatory standard for deidentification. See 45 CFR 164.514.
  • How do we address others outside the organization who are using these tools? Customers, applicants, business partners, vendors, and other third parties also may be using these tools during meetings with persons at the organization. In the process, they may be creating a recording or transcript of the discussion, perhaps capturing confidential business or privileged information. The organization will need to evaluate how it will approach different situations, e.g., a vendor versus a job applicant. However, making the organization’s employees sensitive to this possibility is a starting point.
  • Do we need a policy? New technologies like generative AI and their various iterations often raise many questions concerning use in organizations. Indeed, many organizations have adopted policies to guide employees when using another popular application of generative AI technology – ChatGPT and similar tools. Policies can be helpful to establish guiding principles and requirements for employees, such as:
    • which notetaker(s) have been vetted by the organization and are approved for use in the course of employment,
    • which employees are permitted to use the notetaker and for what purposes,
    • guidelines for providing notice, consent, etc.,
    • what safeguards should be followed for securing transcriptions with confidential and personal information,
    • guidelines for limiting access to transcriptions,
    • record retention and litigation hold requirements, and
    • how to handle meetings intended to be privileged.

Policies will help the organization take into account regulatory concerns and client preferences, among other things. For what it is worth, we asked ChatGPT whether to have a policy, and it responded, "Implementing a policy to govern how your organization's employees use a generative AI note-taker is a prudent decision."

Even if your organization has not formally adopted an AI notetaker, some of your employees may already be using the technology. As noted above, there are several considerations that should prompt additional analysis concerning the nature and scope of the use of such tools.

On March 6, 2024, New Hampshire's Governor signed Senate Bill 255, which establishes a consumer data privacy law for the state. The Granite State joins the growing list of states with consumer data privacy laws and is the second state in 2024 to pass one, following New Jersey. The law takes effect January 1, 2025.

To whom does the law apply?

The law applies to persons who conduct business in the state or persons who produce products or services targeted to residents of the state and that, during a one-year period:

  • Controlled or processed the personal data of not less than 35,000 unique consumers, excluding personal data controlled or processed solely for the purpose of completing a payment transaction; or,
  • Controlled or processed the personal data of not less than 10,000 unique consumers and derived more than 25 percent of their gross revenue from the sale of personal data.

The law excludes certain entities such as non-profit organizations, entities subject to the Gramm-Leach-Bliley Act, and covered entities and business associates under HIPAA.
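Setting aside the entity-level exemptions, the applicability thresholds amount to a simple two-prong test. The sketch below is our illustration only; the variable names and structure are assumptions, and the statute itself controls.

```python
# Illustrative sketch (not statutory text) of New Hampshire's two
# applicability thresholds for a business operating in or targeting the state.
def nh_privacy_law_applies(unique_consumers: int,
                           payment_only_consumers: int,
                           pct_revenue_from_data_sales: float) -> bool:
    # Prong 1: 35,000+ unique consumers, excluding personal data processed
    # solely to complete a payment transaction.
    prong_one = (unique_consumers - payment_only_consumers) >= 35_000
    # Prong 2: 10,000+ unique consumers and more than 25% of gross revenue
    # derived from the sale of personal data.
    prong_two = unique_consumers >= 10_000 and pct_revenue_from_data_sales > 25
    return prong_one or prong_two
```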

Who is protected by the law?

The law protects consumers, defined as residents of New Hampshire. The definition does not, however, include an individual acting in a commercial or employment context.

What data is protected by the law?

The law protects personal data, defined as any information linked or reasonably linkable to an identified or identifiable individual. Personal data does not include de-identified data or publicly available information. Other exempt categories of data include, without limitation, personal data collected under the Family Educational Rights and Privacy Act (FERPA), protected health information under HIPAA, and several other categories of health information.

What are the rights of consumers?

Consumers have the right under the law to:

  • Confirm whether or not a controller is processing the consumer's personal data and access such personal data
  • Correct inaccuracies in the consumer’s personal data
  • Delete personal data provided by, or obtained about, the consumer
  • Obtain a copy of the consumer’s personal data processed by the controller
  • Opt-out of the processing of the personal data for purposes of targeted advertising, the sale of personal data, or profiling in furtherance of solely automated decisions that produce legal or similarly significant effects. Although subject to some exceptions, a "sale" of personal data under the New Hampshire law includes the exchange of personal data for monetary or other valuable consideration by the controller to a third party, language similar to the California Consumer Privacy Act (CCPA).

When consumers seek to exercise these rights, controllers shall respond without undue delay, but no later than 45 days after receipt of the request. The controller may extend the response period by 45 additional days when reasonably necessary. A controller must establish a process for a consumer to appeal the controller’s refusal to take action on a request within a reasonable period of the decision. As with the CCPA, controllers generally may authenticate a request to exercise these rights and are not required to comply with the request if they cannot authenticate, provided they notify the requesting party.

What obligations do controllers have?

Controllers have several obligations under the New Hampshire law. A significant obligation is the requirement to provide a “reasonably accessible, clear and meaningful privacy notice” that meets standards established by the secretary of state and that includes the following content:

  • The categories of personal data processed by the controller;
  • The purpose for processing personal data;
  • How consumers may exercise their consumer rights, including how a consumer may appeal a controller’s decision with regard to the consumer’s request;
  • The categories of personal data that the controller shares with third parties, if any;
  • The categories of third parties, if any, with which the controller shares personal data; and
  • An active electronic mail address or other online mechanism that the consumer may use to contact the controller.

This means that the controller needs to do some due diligence in advance of preparing the notice to understand the nature of the personal information it collects, processes, and maintains.

Controllers also must:

  • Limit the collection of personal data to what is adequate, relevant, and reasonably necessary in relation to the purposes for which such data is processed, as disclosed to the consumer. As with other state data privacy laws, this means that controllers must give some thought to what they are collecting and whether they need to collect it;
  • Not process personal data for purposes that are neither reasonably necessary to, nor compatible with, the disclosed purposes for which such personal data is processed, as disclosed to the consumer unless the controller obtains the consumer’s consent;
  • Establish, implement, and maintain reasonable administrative, technical, and physical data security practices to protect the confidentiality, integrity, and accessibility of personal data appropriate to the volume and nature of the personal data at issue. What is interesting about this requirement, which exists in several other privacy laws, is that this security requirement applies beyond more sensitive personal information, such as social security numbers, financial account numbers, health information, etc.;
  • Not process sensitive data concerning a consumer without obtaining the consumer’s consent, or, in the case of the processing of sensitive data concerning a known child, without processing such data in accordance with COPPA. Sensitive data means personal data that includes data revealing racial or ethnic origin, religious beliefs, mental or physical health condition or diagnosis, sex life, sexual orientation, or citizenship or immigration status; the processing of genetic or biometric data for the purpose of uniquely identifying an individual; personal data collected from a known child; or, precise geolocation data;
  • Not process personal data in violation of the laws of this state and federal laws that prohibit unlawful discrimination against consumers;
  • Provide an effective mechanism for a consumer to revoke the consumer's consent that is at least as easy as the mechanism by which the consumer provided the consumer's consent and, upon revocation of such consent, cease to process the data as soon as practicable, but not later than fifteen days after the receipt of such request;
  • Not process the personal data of a consumer for purposes of targeted advertising, or sell the consumer's personal data without the consumer's consent, under circumstances where a controller has actual knowledge, and willfully disregards, that the consumer is at least thirteen years of age but younger than sixteen years of age; and
  • Not discriminate against a consumer for exercising any of the consumer rights contained in the New Hampshire law, including denying goods or services, charging different prices or rates for goods or services, or providing a different level of quality of goods or services to the consumer.

In some cases, such as when a controller processes sensitive personal information as discussed above or for purposes of profiling, it must conduct and document a data protection assessment for those activities. Such assessments are required for the processing of data that presents a heightened risk of harm to a consumer.  

Are controllers required to have agreements with processors?

As with the CCPA and other comprehensive data privacy laws, the law appears to require that a contract between a controller and a processor govern the processor’s data processing procedures with respect to processing performed on behalf of the controller. 

Among other things, the contract must require that the processor:

  • Ensure that each person processing personal data is subject to a duty of confidentiality with respect to the data;
  • At the controller's direction, delete or return all personal data to the controller as requested at the end of the provision of services, unless retention of the personal data is required by law;
  • Upon the reasonable request of the controller, make available to the controller all information in its possession necessary to demonstrate the processor’s compliance with the obligations in this chapter;
  • After providing the controller an opportunity to object, engage any subcontractor pursuant to a written contract that requires the subcontractor to meet the obligations of the processor with respect to the personal data; and
  • Allow, and cooperate with, reasonable assessments by the controller or the controller’s designated assessor, or the processor may arrange for a qualified and independent assessor to conduct an assessment of the processor’s policies and technical and organizational measures in support of the obligations under the law, using an appropriate and accepted control standard or framework and assessment procedure for such assessments.  The processor shall provide a report of such assessment to the controller upon request.

Other provisions might be appropriate in an agreement between a controller and a processor, such as terms addressing responsibility in the event of a data breach and specific record retention obligations.

How is the law enforced?

The attorney general shall have sole and exclusive authority to enforce a violation of the statute.

If you have questions about New Hampshire's privacy law or related issues, please reach out to a member of our Privacy, Data, and Cybersecurity practice group to discuss.

The California Invasion of Privacy Act (CIPA) has become a focal point in recent legal battles, particularly within the retail industry. As retailers increasingly adopt technologies like session replay and chatbots to enhance customer experiences, they inadvertently tread into murky legal waters. These technologies, while valuable for optimizing websites and addressing customer inquiries, have exposed retailers to a barrage of lawsuits and litigation threats. Claimants argue that using these tools without obtaining customer consent amounts to wiretapping or using a "pen register."

Session-replay software records specific customer interactions on websites, aiding in bug fixes, issue investigation, and market optimization. However, these tools may fall under so-called "two-party consent" statutes. For instance, California Penal Code § 631(a) requires consent from all parties involved in a communication. Retailers across various industries—clothing, finance, jewelry, and more—have found themselves in the crosshairs of these lawsuits.

At least 40 lawsuits originating in California involving CIPA have been filed since May 31, 2022. May 2022 was when the U.S. Court of Appeals for the 9th Circuit ruled in Javier v. Assurance IQ that, under CIPA, all parties to a "communication" must consent to that communication. The court essentially found that if a website does not request consent before a consumer engages with the site, any recording occurs without valid consent.

As such, retailers with an online presence should review their use of technologies such as session replay and chatbots and implement a mechanism for obtaining consumer consent prior to interaction, to ensure compliance with CIPA and other statutes that require two-party consent when recording communications.

If you have questions about CIPA compliance or related issues, contact a Jackson Lewis attorney to discuss.

On February 28, 2024, President Biden issued an Executive Order (EO) seeking to protect the sensitive personal data of Americans from potential exploitation by particular countries. The EO acknowledges that access to Americans’ “bulk sensitive personal data” and United States Government-related data by countries of concern can, among other things:

…fuel the creation and refinement of AI and other advanced technologies, thereby improving their ability to exploit the underlying data and exacerbating the national security and foreign policy threats.  In addition, access to some categories of sensitive personal data linked to populations and locations associated with the Federal Government — including the military — regardless of volume, can be used to reveal insights about those populations and locations that threaten national security.  The growing exploitation of Americans’ sensitive personal data threatens the development of an international technology ecosystem that protects our security, privacy, and human rights.

The EO also acknowledges that, due to advances in technology combined with access by countries of concern to large data sets, data that is anonymized, pseudonymized, or de-identified is increasingly able to be re-identified or de-anonymized. This prospect is particularly concerning for health information, warranting additional steps to protect health data and human genomic data from these threats.

The EO does not specifically define "bulk sensitive personal data" or "countries of concern"; it leaves those definitions to the Attorney General and implementing regulations. However, under the EO, "sensitive personal data" generally refers to elements of data such as covered personal identifiers, geolocation and related sensor data, biometric identifiers, personal health data, personal financial data, or any combination thereof.

Significantly, the EO does not broadly prohibit:

United States persons from conducting commercial transactions, including exchanging financial and other data as part of the sale of commercial goods and services, with entities and individuals located in or subject to the control, direction, or jurisdiction of countries of concern, or impose measures aimed at a broader decoupling of the substantial consumer, economic, scientific, and trade relationships that the United States has with other countries. 

Instead, building on previous executive actions, such as Executive Order 13694 of April 1, 2015 (Blocking the Property of Certain Persons Engaging in Significant Malicious Cyber-Enabled Activities), the EO intends to establish “specific, carefully calibrated actions to minimize the risks associated with access to bulk sensitive personal data and United States Government-related data by countries of concern while minimizing disruption to commercial activity.”

In short, some of what the EO does includes the following:

  • Directs the Attorney General, in coordination with the Department of Homeland Security (DHS), to issue regulations that prohibit or otherwise restrict United States persons from engaging in certain transactions involving bulk sensitive personal data or United States Government-related data, including transactions that pose an unacceptable risk to the national security. Such proposed regulations, to be issued within 180 days of the EO, would identify the prohibited transactions, countries of concern, and covered persons.  
  • Directs the Secretary of Defense, the Secretary of Health and Human Services, the Secretary of Veterans Affairs, and the Director of the National Science Foundation to consider steps, including issuing regulations, guidance, etc. to prohibit the provision of assistance that enables access by countries of concern or covered persons to United States persons’ bulk sensitive personal data, including personal health data and human genomic data.  

At this point, it remains to be seen how this EO might impact certain sensitive personal information or transactions involving the same.

Jackson Lewis will continue to track developments regarding the EO and related issues in data privacy. If you have questions about the Executive Order or related issues contact a Jackson Lewis attorney to discuss.

Artificial intelligence tools are fundamentally changing how people work. Tasks that used to be painstaking and time-consuming can now be completed in real time with the assistance of AI.

Many organizations have sought to leverage the benefits of AI in various ways. An organization, for instance, can use AI to screen resumes and identify which candidates are likely to be the most qualified. The organization can also use AI to predict which employees are likely to leave the organization so retention efforts can be implemented.

One AI use that is quickly gaining popularity is performance management of employees. An organization could use AI to summarize internal data and feedback on employees to create performance summaries for managers to review. By constantly collecting this data, the AI tool can help ensure that work achievements or issues are captured in real-time and presented effectively on demand. This can also help facilitate more frequent touchpoints for employee feedback—with less administrative burden—so that organizations can focus more on having meaningful conversations with employees about the feedback they receive and recommended areas of improvement.

While the benefits of using AI have been well publicized, its potential pitfalls have attracted just as much publicity. The use of AI tools in performance management can expose organizations to significant privacy and security risks, which need to be managed through comprehensive policies and procedures.

Potential Risks

  1. Accuracy of information. AI tools have been known to create outputs that are nonsensical or simply inaccurate, commonly referred to as “AI hallucinations.” Rather than solely relying on the outputs provided by an AI tool, an organization should ensure it independently verifies the accuracy of the outputs provided by the AI tool. Inaccurate statements in an employee’s performance evaluation, for instance, could expose the organization to significant liability.
  2. Bias and discrimination. AI tools are trained using historical data from various sources, which can inadvertently perpetuate biases existing in that data. In a joint statement issued by several federal agencies, the agencies highlighted that the datasets used to train AI tools could be unrepresentative, incorporate historical bias, or correlate data with protected classes, which could lead to a discriminatory outcome. A recent experiment conducted with ChatGPT illustrated how these embedded biases can manifest in the performance management context.
  3. Compliance with legal obligations. In recent years, legislatures at the federal, state, and local levels have prioritized AI regulation in order to protect individuals’ privacy and secure data. Last year, New York City’s AI law took effect requiring employers to conduct bias audits before using AI tools in employment decisions. Other jurisdictions—including California, New Jersey, New York, and Washington D.C.—have proposed similar bias audit legislation. In addition, Vermont introduced legislation that would prohibit employers from relying solely on information from AI tools when making employment-related decisions. As more jurisdictions become active with AI regulation, organizations should remain mindful of their obligations under applicable laws.

Mitigation Strategies

  1. Conduct employee training. Organizations should ensure all employees are trained on the use of AI tools in accordance with organization policy. This training should include information on the potential benefits and risks associated with AI tools, organization policies concerning these tools, and the operation and use of approved AI tools.
  2. Examine issues related to bias. To help minimize risks related to bias in AI tools, organizations should carefully review the data and algorithms used in their performance management platforms. Organizations should also explore what steps, if any, the AI-tool vendor took to successfully reduce bias in employment decisions.  
  3. Develop policies and procedures to govern AI use. To comply with applicable data privacy and security laws, an organization should ensure that it has policies and procedures in place to regulate how AI is used in the organization, who has access to the outputs, with whom the outputs are shared, where the outputs are stored, and how long the outputs are kept. Each of these important considerations will vary across organizations, so it is critical that the organization develops a deeper understanding of the AI tools sought to be implemented.

For organizations seeking to use AI for performance management of employees, it is important to be mindful of the risks associated with AI use. Most of these risks can be mitigated, but it will require organizations to be proactive in managing their data privacy and security risks.  

On February 13, 2024, Nebraska's Governor signed Legislative Bill 308, which enacts genetic information privacy protections for consumers in the state. It is similar to the genetic information privacy law Montana passed last year.

The law takes effect July 17, 2024 (90 days after the legislature adjourns on April 18, 2024).  
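For those who want to check the math, 90 days from the April 18 adjournment lands on July 17, as a quick date calculation (a simple sketch using Python's standard library) confirms:

```python
# Quick check of the effective date: 90 days after the legislature's
# April 18, 2024 adjournment falls on July 17, 2024.
from datetime import date, timedelta

print(date(2024, 4, 18) + timedelta(days=90))  # 2024-07-17
```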

Covered Businesses

The law applies to direct-to-consumer genetic testing companies, defined as entities that:

  • Offers consumer genetic testing products or services directly to a consumer; or,
  • Collects, uses, or analyzes genetic data that resulted from a direct-to-consumer genetic testing product or service and was provided to the company by the consumer.

The law does not cover entities that are solely engaged in collecting, using, or analyzing genetic data or biological samples in the context of research under federal law.

Covered Consumers

The law applies to an individual who is a resident of the State of Nebraska.

Obligations Under the Law

Under the new law, covered businesses are required to:

  • Provide clear and complete information regarding the company policies and procedures for the collection, use, or disclosure of genetic data
  • Obtain a consumer’s consent for the collection, use, or disclosure of the consumer’s genetic data
  • Require a valid legal process before disclosing genetic data to any government agency, including law enforcement, without the consumer’s express written consent
  • Develop, implement, and maintain a comprehensive security program to protect a consumer's genetic data from unauthorized access, use, or disclosure

As under several comprehensive consumer privacy laws, the company must provide a consumer with:

  • Access to their genetic data
  • A process to delete an account and genetic data
  • A process to request and obtain written documentation verifying the destruction of the consumer’s biological sample

Enforcement

Under the new law, the Nebraska Attorney General may bring an action on behalf of a consumer to enforce rights under the law. There is no private right of action specified within the statute.

A violation of the act is subject to a civil penalty of $2,500 per violation, in addition to actual damages, costs, and reasonable attorney’s fees.

If you have questions about Nebraska's genetic privacy law or related issues, please reach out to a member of our Privacy, Data, and Cybersecurity practice group to discuss.