As the integration of technology in the workplace accelerates, so do the challenges related to privacy, cybersecurity, and the ethical use of artificial intelligence (AI). Human resource professionals and in-house counsel must navigate a rapidly evolving landscape of legal and regulatory requirements. This National Privacy Day, it’s crucial to spotlight emerging issues in workplace technology and the associated implications for data privacy, cybersecurity, and compliance.

We explore here practical use cases raising these issues, highlight key risks, and provide actionable insights for HR professionals and in-house counsel to manage these concerns effectively.

1. Wearables and the Intersection of Privacy, Security, and Disability Law

Wearable devices have a wide range of use cases including interactive training, performance monitoring, and navigation tracking. Wearables such as fitness trackers and smartwatches became more popular in HR and employee benefits departments when they were deployed in wellness programs to monitor employees’ health metrics, promote fitness, and provide a basis for doling out insurance premium incentives. While these tools offer benefits, they also collect sensitive health and other personal data, raising significant privacy and cybersecurity concerns under the Health Insurance Portability and Accountability Act (HIPAA), the Americans with Disabilities Act (ADA), and state privacy laws.

Earlier this year, the Equal Employment Opportunity Commission (EEOC) issued guidance emphasizing that data collected through wearables must align with ADA rules. More recently, the EEOC withdrew that guidance in response to an Executive Order issued by President Trump. Still, employers should evaluate whether their use of wearables raises ADA issues, such as ensuring that use of the devices is voluntary when collecting confidential medical information, avoiding impermissible disability-related inquiries, and using aggregated or anonymized data to prevent discrimination claims.

Beyond ADA compliance, cybersecurity is critical. Wearables often collect sensitive data and transmit it to third-party vendors. Employers must assess these vendors’ data protection practices, including encryption protocols and incident response measures, to mitigate the risk of breaches or unauthorized access.

Practical Tip: Implement robust contracts with third-party vendors, requiring adherence to privacy laws, breach notification, and security standards. Also, ensure clear communication with employees about how their data will be collected, used, and stored.

2. Performance Management Platforms and Employee Monitoring

Platforms like Insightful and similar performance management tools are increasingly being used to monitor employee productivity and/or compliance with applicable law and company policies. These platforms can capture a vast array of data, including screen activity, keystrokes, and time spent on tasks, raising significant privacy concerns.

While such tools may improve efficiency and accountability, they also risk crossing boundaries, particularly when employees are unaware of the extent of monitoring and/or where the employer doesn’t have effective data minimization controls in place. State laws like the California Consumer Privacy Act (CCPA) can place limits on these monitoring practices, particularly if employees have a reasonable expectation of privacy. They also can require additional layers of security safeguards and administration of employee rights with respect to data collected and processed using the platform.

Practical Tip: Before deploying such tools, assess the necessity of data collection, ensure transparency by notifying employees, and restrict data collection to what is strictly necessary for business purposes. Implement policies that balance business needs with employee rights to privacy.
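
To make the data minimization point concrete, monitoring settings can be captured in an explicit, reviewable configuration. The sketch below is purely illustrative; the field names and checks are assumptions, not any vendor’s actual settings or API.

```python
# Hypothetical monitoring configuration expressing a data minimization posture.
# Field names are illustrative assumptions, not any vendor's actual API.

MONITORING_CONFIG = {
    "capture_screen_activity": False,  # disabled: not needed for the stated purpose
    "capture_keystrokes": False,       # high-risk; enable only if strictly necessary
    "capture_active_app_time": True,   # coarse metric sufficient for the purpose
    "retention_days": 90,              # delete raw logs after the review window
    "employee_notice_given": True,     # employees notified in writing
    "purpose": "timekeeping and workload balancing",
}

def validate_minimization(config: dict) -> list[str]:
    """Flag settings that exceed the stated business purpose."""
    issues = []
    if config["capture_keystrokes"]:
        issues.append("Keystroke logging enabled: document necessity or disable.")
    if config["retention_days"] > 180:
        issues.append("Retention exceeds 180 days: justify or shorten.")
    if not config["employee_notice_given"]:
        issues.append("Employees have not been notified of monitoring.")
    return issues

print(validate_minimization(MONITORING_CONFIG))  # [] when the posture is clean
```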

3. AI-Powered Dash Cams in Fleet Management

AI-enabled dash cams, often used for fleet management, combine video, audio, GPS, telematics, and/or biometrics to monitor driver behavior and vehicle performance, among other things. While these tools enhance safety and efficiency, they also present significant privacy and legal risks.

State biometric privacy laws, such as Illinois’s Biometric Information Privacy Act (BIPA) and similar laws in California, Colorado, and Texas, impose stringent requirements on biometric data collection, including obtaining employee consent and implementing robust data security measures. Employers must also assess the cybersecurity vulnerabilities of dash cam providers, given the volume of biometric, location, and other data they may collect.

Practical Tip: Conduct a legal review of biometric data collection practices, train employees on the use of dash cams, and audit vendor security practices to ensure compliance and minimize risk.

4. Assessing Vendor Cybersecurity for Employee Benefits Plans

Third-party vendors play a crucial role in processing data for retirement plans, such as 401(k) plans, as well as health and welfare plans. The Department of Labor (DOL) has emphasized in recent guidance the importance of ERISA plan fiduciaries assessing the cybersecurity practices of such service providers.

The DOL’s guidance underscores the need to evaluate vendors’ security measures, incident response plans, and data breach notification practices. Given the sensitive nature of data processed as part of plan administration—such as Social Security numbers, health records, and financial information—failure to vet vendors properly can lead to breaches, lawsuits, and regulatory penalties, including claims for breach of fiduciary duty.

Practical Tip: Conduct regular risk assessments of vendors, incorporate cybersecurity provisions into contracts, and document the due diligence process to demonstrate compliance with fiduciary obligations.

5. Biometrics for Access, Time Management, and Identity Verification

Biometric technology, such as fingerprint or facial recognition systems, is widely used for identity verification, physical access, and timekeeping. While convenient, the collection of biometric data carries significant privacy and cybersecurity risks.

BIPA and similar state laws require employers to obtain written consent, provide clear notices about data usage, and adhere to stringent security protocols. Additionally, biometrics are uniquely sensitive because they cannot be changed if compromised in a breach.

Practical Tip: Minimize reliance on biometric data where possible, ensure compliance with consent and notification requirements, and invest in encryption and secure storage systems for biometric information. Check out our Biometrics White Paper.
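
On the encryption point, here is a minimal sketch of protecting a stored biometric template with symmetric encryption, using the Python cryptography library’s Fernet interface. Key management is deliberately omitted; a production system would source and rotate keys through a key management service.

```python
# Minimal sketch: encrypting a biometric template at rest.
# Requires the `cryptography` package; key handling here is illustrative only.
from cryptography.fernet import Fernet

key = Fernet.generate_key()            # in practice, fetch from a KMS; never hardcode
cipher = Fernet(key)

template = b"\x01\x02\x03\x04"         # raw biometric template bytes (illustrative)
ciphertext = cipher.encrypt(template)  # store only this ciphertext

# Decrypt only at the moment of matching, then discard the plaintext.
plaintext = cipher.decrypt(ciphertext)
assert plaintext == template
```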

6. HIPAA Updates Affecting Group Health Plan Compliance

Recent changes to the HIPAA Privacy Rule, including provisions related to reproductive healthcare, significantly impact group health plans. The proposed HIPAA Security Rule amendments also signal stricter requirements for risk assessments, access controls, and data breach responses.

Employers sponsoring group health plans must stay ahead of these changes by updating their HIPAA policies and Notice of Privacy Practices, training staff, and ensuring that business associate agreements (BAAs) reflect the new requirements.

Practical Tip: Regularly review HIPAA compliance practices and monitor upcoming changes to ensure your group health plan aligns with evolving regulations.

7. Data Breach Notification Laws and Incident Response Plans

Many states have updated their data breach notification laws, lowering notification thresholds, shortening notification timelines, and expanding the definition of personal information. Employers should revise their incident response plans (IRPs) to align with these changes.

Practical Tip: Ensure IRPs reflect updated laws, test them through simulated breach scenarios, and coordinate with legal counsel to prepare for reporting obligations in case of an incident.
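
Because notification deadlines vary by state and are amended frequently, some teams keep a simple deadline table that the IRP references during an incident. The values below are placeholders to be verified with counsel, not authoritative deadlines; several states use “without unreasonable delay” rather than a fixed number of days.

```python
from datetime import date, timedelta

# Placeholder deadlines in days; verify current values with counsel before relying on them.
NOTIFICATION_DEADLINES_DAYS = {
    "CO": 30,  # illustrative placeholder
    "FL": 30,  # illustrative placeholder
    "TX": 60,  # illustrative placeholder
}

def notification_due(state: str, discovery: date) -> date | None:
    """Latest permissible notification date for a state with a fixed deadline."""
    days = NOTIFICATION_DEADLINES_DAYS.get(state)
    return discovery + timedelta(days=days) if days is not None else None

print(notification_due("TX", date(2025, 1, 15)))  # 2025-03-16
```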

8. AI Deployment in Recruiting and Retention

AI tools are transforming HR functions, from recruiting to performance management and retention strategies. However, these tools require vast amounts of personal data to function effectively, increasing privacy and cybersecurity risks.

The EEOC and other regulatory bodies have cautioned against discriminatory impacts of AI, particularly regarding protected characteristics like disability, race, or gender. (As noted above, the EEOC recently withdrew its AI guidance under the ADA and Title VII following an Executive Order by the Trump Administration.) For example, the use of AI in hiring or promotions may trigger compliance obligations under the ADA, Title VII, and state laws.

Practical Tip: Conduct bias audits of AI systems, implement data minimization principles, and ensure compliance with applicable anti-discrimination laws.
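
One common starting point for a bias audit is the four-fifths (80%) rule of thumb from the EEOC’s Uniform Guidelines: a selection rate for any group below 80% of the highest group’s rate flags potential adverse impact for closer review. A minimal sketch with illustrative counts (a real audit requires statistical testing and legal review, not just this ratio):

```python
# Four-fifths (80%) rule screen for an AI hiring tool's outcomes.
# The applicant and selection counts are illustrative assumptions.

def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

rates = {
    "group_a": selection_rate(48, 100),  # 0.48
    "group_b": selection_rate(30, 100),  # 0.30
}

highest = max(rates.values())
for group, rate in rates.items():
    impact_ratio = rate / highest
    flag = "review" if impact_ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.2f}, impact ratio={impact_ratio:.2f} -> {flag}")
```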

9. Employee Use of AI Tools

Moving beyond the HR department, AI tools are fundamentally changing how people work.  Tasks that used to require time-intensive manual effort—creating meeting minutes, preparing emails, digesting lengthy documents, creating PowerPoint decks—can now be completed far more efficiently with assistance from AI.  The benefits of AI tools are undeniable, but so too are the associated risks.  Organizations that rush to implement these tools without thoughtful vetting processes, policies, and training will expose themselves to significant regulatory and litigation risk.     

Practical Tip: Not all AI tools are created equal—either in terms of the risks they pose or the utility they provide—so an important first step is developing criteria to assess, and then going through the process of assessing, which AI tools to permit employees to use. Equally important is establishing clear ground rules for how employees can use those tools. For instance: What company information are they permitted to use to prompt the tool? What are the processes for ensuring the tool’s output is accurate and consistent with company policies and objectives? And should employee use of AI tools be limited to internal functions, or should employees also be permitted to use these tools to generate work product for external audiences?
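
Those ground rules can also be made operational in software. The sketch below is hypothetical; the tool names, classification labels, and rules are assumptions. It simply illustrates a pre-flight check that applies an allowlist and data classification rules before a prompt leaves the company’s environment.

```python
# Hypothetical pre-flight check before an employee prompt reaches an AI tool.
# Tool names and classification labels are illustrative assumptions.

APPROVED_TOOLS = {"approved-internal-assistant"}  # vetted tools only
BLOCKED_CLASSIFICATIONS = {"trade_secret", "client_confidential", "pii"}

def may_submit(tool: str, data_classification: str, external_output: bool) -> bool:
    """Apply the company's AI ground rules to a proposed prompt."""
    if tool not in APPROVED_TOOLS:
        return False  # unvetted tool
    if data_classification in BLOCKED_CLASSIFICATIONS:
        return False  # sensitive inputs prohibited
    if external_output:
        return False  # external work product routed to a human-review process instead
    return True

print(may_submit("approved-internal-assistant", "public", external_output=False))        # True
print(may_submit("approved-internal-assistant", "trade_secret", external_output=False))  # False
```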

10. Data Minimization Across the Employee Lifecycle

At the core of many of the above issues is the principle of data minimization. The California Privacy Protection Agency (CPPA) has emphasized that organizations must collect only the data necessary for specific purposes and ensure its secure disposal when no longer needed.

From recruiting to offboarding, HR professionals must assess whether data collection practices align with the principle of data minimization. Overcollection not only heightens privacy risks but also increases exposure in the event of a breach.

Practical Tip: Develop a data inventory mapping employee information from collection to disposal. Regularly review and update policies to limit data retention and enforce secure deletion practices.
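
A data inventory can start small: one structured record per data element with a retention clock attached. A minimal sketch, with field names that are assumptions rather than any standard schema:

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Minimal data-inventory entry; the fields are illustrative, not a standard schema.
@dataclass
class InventoryEntry:
    element: str         # e.g., "I-9 supporting documents"
    purpose: str         # why it was collected
    system: str          # where it lives
    collected: date
    retention_days: int  # policy-driven; review with counsel

    def disposal_due(self) -> date:
        return self.collected + timedelta(days=self.retention_days)

entry = InventoryEntry("I-9 supporting documents", "employment eligibility",
                       "HRIS", date(2022, 3, 1), retention_days=3 * 365)
if date.today() > entry.disposal_due():
    print(f"Securely delete '{entry.element}' from {entry.system}")
```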

Conclusion

The rapid adoption of emerging technologies presents both opportunities and challenges for employers. HR professionals and in-house counsel play a critical role in navigating privacy, cybersecurity, and AI compliance risks while fostering innovation.

By implementing robust policies, conducting regular risk assessments, and prioritizing data minimization, organizations can mitigate legal exposure and build employee trust. This National Privacy Day, take proactive steps to address these issues and position your organization as a leader in privacy and cybersecurity.

Insider threats continue to present a significant challenge for organizations of all sizes. One particularly concerning scenario involves employees who leave an organization and impermissibly take or download sensitive company data. These situations can severely impact a business, especially when departing employees abscond with confidential business information or trade secrets. Focusing on how the theft of such information could cripple a business’s operations and competitive advantage is warranted. It is critical, however, not to overlook other legal and regulatory implications stemming from the theft of certain data, including potential data breach notification obligations.

The Importance of Safeguarding Trade Secrets

Trade secrets generally refer to information that has commercial value because it’s kept secret. Examples include formulas, patterns, programs, devices, methods, and other valuable business data. Such data are often the lifeblood of a company’s competitive edge. These secrets must be safeguarded to retain their value and legal protections under the Uniform Trade Secrets Act (UTSA), which has been adopted by most states. To invoke those protections, businesses will need to demonstrate that they took reasonable measures to protect their trade secrets.

Reasonable safeguards under the UTSA can include:

  • Implementing access controls to restrict employees’ ability to download or share sensitive information.
  • Requiring employees to sign confidentiality agreements and restrictive covenants.
  • Regularly training employees on the importance of data security and confidentiality.
  • Using monitoring tools to detect unusual access or downloads of sensitive data (see the sketch below).
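
On the monitoring point, detection logic need not be elaborate to be useful. Below is a minimal sketch that flags an employee’s download activity when it spikes far above that employee’s own recent baseline; the sample data and the three-standard-deviation threshold are illustrative assumptions to tune against real activity.

```python
import statistics

# Daily file-download counts for one employee (illustrative data).
baseline = [12, 9, 15, 11, 10, 13, 8, 14]  # trailing normal activity
today = 240                                # day under review

mean = statistics.mean(baseline)
stdev = statistics.pstdev(baseline)

# Flag if today's volume is more than 3 standard deviations above the mean.
if stdev and (today - mean) / stdev > 3:
    print(f"ALERT: {today} downloads today vs. baseline mean of {mean:.1f}")
```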

Failing to adopt such safeguards can jeopardize a company’s ability to claim protection for trade secrets and pursue legal remedies if those secrets are stolen. Companies should consult with trusted IT and legal advisors to ensure they have adequate safeguards.

Beyond Trade Secrets: Data Breach Concerns

While the theft of confidential business and trade secret information rightly garners attention, focusing exclusively on this aspect may cause companies to miss another critical risk: the theft of personal information. As part of their efforts to remove company information, departing employees may inadvertently or intentionally take personal information, such as employee or customer data, which could trigger significant legal obligations, particularly if accessed or acquired without authorization.

Contrary to common assumptions, data breach notification laws do not solely apply to stolen Social Security numbers. Most state data breach laws define “personal information” broadly to include elements such as:

  • Financial account information, including debit or credit card numbers.
  • Driver’s license or state identification numbers.
  • Health insurance and medical information.
  • Dates of birth.
  • Online account credentials, such as usernames and passwords.
  • Biometric data, such as fingerprints or facial recognition profiles.

The unauthorized access or acquisition of these data elements together with the individual’s name can constitute a data breach, requiring timely notification to affected individuals and, in some cases, regulatory authorities.

Broader Regulatory and Contractual Implications

In addition to state breach notification laws that seek to protect personal information, companies must consider other regulatory and contractual obligations when sensitive data is stolen. For example:

  • Publicly traded companies: Theft of critical business information by a departing employee may require disclosure under U.S. Securities and Exchange Commission (SEC) regulations if the theft is deemed material. If a company determines the materiality threshold has been reached, it has four business days to report to the public.
  • Critical infrastructure businesses: Companies providing services in regulated industries, such as energy or healthcare, may have reporting obligations to regulatory authorities if sensitive confidential business data is compromised.
  • Contractual obligations: Many businesses enter into agreements with business customers that require notification if confidential business information or personal data is compromised.

Ignoring these obligations could expose organizations to fines, lawsuits, and reputational harm, compounding the difficulties already created by the theft of an organization’s confidential business information.

Taking a Comprehensive Approach to Data Theft

The theft of confidential business information by a departing employee can be devastating for a business. However, focusing solely on restrictive covenants, trade secrets, or business information risks overlooking the full scope of legal and regulatory obligations. To effectively respond to such incidents, companies should:

  1. Identify the nature of the stolen data: Assess whether the data includes personal information, trade secrets, or other sensitive information that could trigger specific legal obligations.
  2. Evaluate legal and regulatory obligations: Determine whether notification is required under state breach laws, SEC or other regulations (if applicable), industry-specific rules, or contractual agreements.
  3. Leverage restrictive covenant agreements: Assess appropriate legal or contractual remedies, including under restrictive covenant, confidentiality, and other agreements, as part of a broader strategy to address the theft.
  4. Implement safeguards: Strengthen data protection measures to mitigate the risk of future incidents, including employee training, enhanced monitoring, and robust exit procedures.

While dealing with insider threats is undoubtedly challenging, taking a comprehensive and proactive approach can help businesses protect their interests and minimize legal exposure. In today’s interconnected and highly regulated world, understanding the full scope of risks and obligations tied to data theft is essential for any business.

Ask any chief information security officer (CISO), cyber underwriter, risk manager, or cybersecurity attorney what controls are critical for protecting an organization’s information systems, and you’ll likely find multifactor authentication (MFA) at or near the top of every list. Government agencies responsible for helping to protect the U.S. and its information systems and assets (e.g., CISA, FBI, Secret Service) send the same message. But that message may be evolving a bit as criminal threat actors have started to exploit weaknesses in MFA.

According to a recent report in Forbes, for example, threat actors are harnessing AI to break through multifactor authentication strategies designed to prevent new account fraud. “Know Your Customer” procedures are critical for validating the identity of customers in certain industries, such as financial services and telecommunications. Employers increasingly face similar issues in recruiting, when they find, after making the hiring decision, that the person doing the work may not be the person interviewed for the position.

Threat actors have leveraged a new AI deepfake tool, available on the dark web, to bypass the biometric systems that have been used to stop new account fraud. According to the Forbes article, the process goes something like this:

1. Bad actors use one of the many generative AI websites to create and download a fake image of a person.

2. Next, they use the tool to synthesize a fake passport or a government-issued ID by inserting the fake photograph…

3. Malicious actors then generate a deepfake video (using the same photo) where the synthetic identity pans their head from left to right. This movement is specifically designed to match the requirements of facial recognition systems. If you pay close attention, you can certainly spot some defects. However, these are likely ignored by facial recognition because videos are prone to have distortions due to internet latency issues, buffering or just poor video conditions.

4. Threat actors then initiate a new account fraud attack where they connect a cryptocurrency exchange and proceed to upload the forged document. The account verification system then asks to perform facial recognition where the tool enables attackers to connect the video to the camera’s input.

5. Following these steps, the verification process is completed, and the attackers are notified that their account has been verified.

Sophisticated AI tools are not the only MFA vulnerability. In December 2024, the Cybersecurity & Infrastructure Security Agency (CISA) issued best practices for mobile communications. Among its recommendations, CISA advised mobile phone users, in particular highly targeted individuals:

Do not use SMS as a second factor for authentication. SMS messages are not encrypted—a threat actor with access to a telecommunication provider’s network who intercepts these messages can read them. SMS MFA is not phishing-resistant and is therefore not strong authentication for accounts of highly targeted individuals.

In its 2023 Internet Crime Report, the FBI reported more than 1,000 “SIM swapping” investigations. A SIM swap is another technique by which threat actors engage in the “use of unsophisticated social engineering techniques against mobile service providers to transfer a victim’s phone service to a mobile device in the criminal’s possession.”

In December, Infosecurity Magazine reported on another vulnerability in MFA. In fact, there are many reports about various vulnerabilities with MFA.

Are we recommending against the use of MFA? Certainly not. Our point is simply to offer a reminder that there are no silver bullets for securing information systems and that AI is not used only by the good guys. An information security program, preferably a written one (a WISP), requires continuous vigilance, and not just from the IT department, as new technologies are leveraged to bypass older technologies.
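
For organizations moving highly targeted users off SMS codes, app-based one-time passwords are a common intermediate step, though phishing-resistant options such as FIDO2 hardware keys are stronger still. A minimal sketch using the pyotp library (enrollment flow and secret storage are out of scope here):

```python
# App-based TOTP as a second factor instead of SMS, sketched with `pyotp`.
import pyotp

secret = pyotp.random_base32()  # store server-side per user, encrypted at rest
totp = pyotp.TOTP(secret)

# URI the user scans into an authenticator app during enrollment.
print(totp.provisioning_uri(name="user@example.com", issuer_name="ExampleCorp"))

code = totp.now()               # in practice, supplied by the user's authenticator app
print(totp.verify(code))        # True within the current time window
```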

In 2024, Israel became the latest jurisdiction to enact comprehensive privacy legislation, largely inspired by the EU’s General Data Protection Regulation (“GDPR”). On August 5, 2024, Israel’s parliament, the Knesset, voted to approve the enactment of Amendment No. 13 (“the Amendment”) to the Israel Privacy Protection Law (“IPPL”). The Amendment, which will take effect on August 15, 2025, is considered an overhaul of the IPPL, which had been left largely untouched since the law’s enactment in 1996.

Key Features of the Amendment include:

  • Expansion of key definitions in the law
    • Personal Information – Expanded to include any “data related to an identified or identifiable person”.
    • Highly Sensitive Information – Replaces the IPPL’s current definition of “sensitive information” and is similar in kind to the GDPR’s Special Categories of Data. Types of information that qualify as highly sensitive information under the Amendment include biometric data, genetic data, location and traffic data, criminal records, and assessments of personality type.
    • Data Processing – The Amendment broadens the definition of processing to include any operation on information, including receipt, collection, storage, copying, review, disclosure, exposure, transfer, conveyance, or granting access.
    • Database Controller – The IPPL previously used the term “database owner”; akin to the GDPR, the Amendment changes the term to database controller, defined as the person or entity that determines the purposes of processing personal information in the database.
    • Database Holder – Similar to the GDPR’s “processor”, the Amendment includes the term database holder which is defined as an entity “external to the data controller that processes information on behalf of the data controller”, which due to the broad definition of data processing, captures a broad set of third-party service providers.
  • Mandatory Appointment of a Privacy Protection Officer & Data Security Officer
    • Equivalent to the GDPR’s Data Protection Officer (DPO) role, an entity that meets certain criteria based on size and industry (inclusive of both data controllers and processors) will be required to implement a new role in the organization, the Privacy Protection Officer, tasked with ensuring compliance with the IPPL and promoting data security and privacy protection initiatives. Likewise, the obligation to appoint a Data Security Officer, which applied to certain organizations prior to the Amendment, has been expanded to a broader set of entities.
  • Expansion of Enforcement Authority
    • The Privacy Protection Authority (“PPA”), Israel’s privacy regulator, has been given broader enforcement authority, including a significant increase in financial penalties based on the number of data subjects impacted by a violation, the type of violation, and the violating entity’s financial turnover. Financial penalties are capped at 5% of annual turnover for larger organizations, which could reach millions of dollars (e.g., a data processor that processes data without the controller’s permission in a database of 1,000,000 data subjects (8 ILS per data subject) can be fined 8,000,000 ILS (approx. $2.5 million USD)). Small and micro businesses are capped at penalties of 140,000 ILS ($45,000 USD) per year. Other enhancements to the PPA’s authority include expansive investigative and supervisory powers as well as increased authority for the Head of the PPA to issue warnings and injunctions.

Additional updates in the Amendment include expansion of the notice obligation in the case of a data breach, increased rights of data subjects, extension of the statute of limitations, and exemplary damages. In future segments on the IPPL leading up to the August 2025 effective date, we will dive deeper into some of the key features of the Amendment, which are certain to have an impact on entities with customers and/or employees in Israel.

Data privacy and security regulation is growing rapidly around the world, including in Israel. This legislative activity, combined with the growing public awareness of data privacy rights and concerns, makes the development of a meaningful data protection program an essential component of business operations.

On June 25, 2024, Rhode Island became the 20th state to enact a comprehensive consumer data protection law, the Rhode Island Data Transparency and Privacy Protection Act (“RIDTPPA”). The state joins Kentucky, Maryland, Minnesota, Nebraska, New Hampshire, and New Jersey in passing consumer data privacy laws this year.

The RIDTPPA takes effect on January 1, 2026.

To whom does the law apply?

The law applies to two types of organizations, defined as “controllers”:

1. For-profit  entities that conduct business in the state of Rhode Island or that produce products or services that are targeted to residents of the state and that during the preceding calendar year did any of the following:

  • Controlled or processed the personal data of not less than thirty-five thousand (35,000) customers, excluding personal data controlled or processed solely for the purpose of completing a payment transaction, or
  • Controlled or processed the personal data of not less than ten thousand (10,000) customers and derived more than twenty percent (20%) of their gross revenue from the sale of personal data.

2. A commercial website or internet service provider conducting business in Rhode Island, or with customers in Rhode Island, or that is otherwise subject to Rhode Island jurisdiction, and that collects, stores, and sells customers’ personally identifiable information.
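
Expressed as logic, the first category’s threshold test looks roughly like the sketch below. It is a simplification for illustration only and omits the statute’s definitions and exclusions.

```python
# Rough sketch of the RIDTPPA threshold test for category 1 controllers.
# Simplified for illustration; the 35,000 figure excludes personal data
# processed solely to complete a payment transaction.

def ridtppa_category1_applies(customers_processed: int,
                              revenue_share_from_data_sales: float) -> bool:
    if customers_processed >= 35_000:
        return True
    return customers_processed >= 10_000 and revenue_share_from_data_sales > 0.20

print(ridtppa_category1_applies(40_000, 0.00))  # True: volume threshold met
print(ridtppa_category1_applies(12_000, 0.25))  # True: volume plus data-sale revenue
print(ridtppa_category1_applies(12_000, 0.05))  # False
```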

Who is protected by the law?

Customer means an individual residing in Rhode Island who is acting in an individual or household context. The definition of customer does not include an individual acting in a commercial or employment context.

What data is protected by the law?

The law protects personal data, which is defined as any information that is linked or reasonably linkable to an identified or identifiable individual and does not include de-identified data or publicly available information.

RIDTPPA contains numerous exceptions for specific types of data, including data that meets the definition of protected health information under HIPAA, personal data collected, processed, sold, or disclosed pursuant to the federal Gramm-Leach-Bliley Act, and personal data regulated by the federal Family Educational Rights and Privacy Act.

The law also provides heightened protection for sensitive data, which means personal data revealing racial or ethnic origin, religious beliefs, mental or physical health condition or diagnosis, sex life, sexual orientation, or citizenship or immigration status; the processing of genetic or biometric data for the purpose of uniquely identifying an individual; the personal data of a known child; or precise geolocation data.

What are the rights of customers?

Under the law, customers have the following rights with respect to data collected by for-profit  entities that conduct business in the state or produce products or services targeted to residents of the state and meet one of the relevant thresholds:

  • Confirm whether a controller is processing their personal data and access that data.
  • Correct inaccuracies in the data a controller is processing.
  • Have personal data deleted unless the retention of the personal data is permitted or required by law.
  • Port personal data.
  • Opt out of the processing of personal data for targeted advertising, the sale of personal data, or profiling in furtherance of automated decisions that produce legal or similarly significant effects concerning the customer.

Under the law, customers also have a right to receive notice from commercial websites or internet service providers of their data collection activities.

What obligations do controllers have?

Both categories of controllers under Rhode Island’s law are required to provide a notice of data collection activities. Controllers that are for-profit  entities conducting business in the state or producing products or services targeted to residents of the state and that meet one of the relevant thresholds have the following additional obligations:

  • Limit collection of personal data to what is adequate, relevant, and reasonably necessary in relation to the purposes for which the data are processed.
  • Establish, implement, and maintain reasonable administrative, technical, and physical data security practices to protect the confidentiality, integrity, and accessibility of personal data.
  • Obtain consent prior to processing a customer’s sensitive personal data.
  • Conduct and document a data privacy and protection assessment for processing activities that represent heightened risk.
  • Contractually obligate any processors who will process personal data on behalf of the organization to adhere to specific data protection obligations including ensuring the security of the processing.

How is the law enforced?

The statute will be enforced by the Rhode Island Attorney General and does not provide for a right to cure. The statute does not create a private right of action.

If you have questions about Rhode Island’s privacy law or related issues please reach out to a member of our Privacy, Data, and Cybersecurity practice group to discuss.

On May 24, 2024, Minnesota’s governor signed an omnibus bill, HF4757, which included the new Consumer Data Privacy Act. The state joins Kentucky, Nebraska, New Hampshire, New Jersey, and Rhode Island in passing consumer data privacy laws this year.

Minnesota’s law takes effect July 31, 2025, except that postsecondary institutions and nonprofit corporations governed by Minnesota Statutes, chapter 317A, are not required to comply until July 31, 2029.

To whom does the law apply?

The law applies to legal entities that conduct business in the state of Minnesota or that provide products or services that are targeted to residents of the state and that satisfy one or more of the following thresholds:

  • Controls or processes personal data of 100,000 consumers or more, excluding personal data controlled or processed solely for the purpose of completing a payment transaction, or
  • Derives over 25 percent of gross revenue from the sale of personal data and processes or controls personal data of 25,000 consumers or more.

Companies that are deemed a “small business” as defined by the United States Small Business Administration under the Code of Federal Regulations, title 13, part 121, are exempt from compliance with the exception that they must not sell a consumer’s sensitive data without the consumer’s prior consent.

Who is protected by the law?

Consumer means an individual who is a resident of the State of Minnesota. The definition of consumer does not include an individual acting in a commercial or employment context.

What data is protected by the law?

The law protects personal data, which is defined as any information that is linked or reasonably linkable to an identified or identifiable individual. Personal data excludes de-identified data and publicly available information.

The Consumer Data Privacy Act contains numerous exceptions for specific types of data, including data that meets the definition of protected health information under HIPAA, personal data collected, processed, sold, or disclosed pursuant to the federal Gramm-Leach-Bliley Act, and personal data regulated by the federal Family Educational Rights and Privacy Act.

The law also provides heightened protection for sensitive data, which means personal data revealing racial or ethnic origin, religious beliefs, mental or physical health condition or diagnosis, sexual orientation, or citizenship or immigration status; the processing of biometric data or genetic information for the purpose of uniquely identifying an individual; the personal data of a known child; or specific geolocation data.

What are the rights of consumers?

Under the law, consumers have the following rights:

  • Confirm whether a controller is processing their personal data.
  • Access personal data a controller is processing.
  • Correct inaccuracies in data a controller is processing.
  • Have personal data deleted unless the retention of the personal data is required by law.
  • Obtain a list of the categories of third parties to which the controller discloses personal data.
  • Port personal data.
  • Opt out of the processing of personal data for targeted advertising, the sale of personal data, or profiling in furtherance of automated decisions that produce legal effects concerning a consumer or similarly significant effects concerning a consumer.

What obligations do controllers have?

Controllers under Minnesota’s law have the following obligations:

  • Provide consumers with a reasonably accessible, clear, and meaningful privacy notice.
  • Limit the collection of personal data to what is adequate, relevant, and reasonably necessary in relation to the purposes for which the data are processed.
  • Establish, implement, and maintain reasonable administrative, technical, and physical data security practices to protect the confidentiality, integrity, and accessibility of personal data.
  • Document and maintain a description of the policies and procedures to comply with the law.
  • Conduct and document a data privacy and protection assessment for high-risk processing activities.
  • Contractually obligate service providers who will process personal data on behalf of the organization to adhere to specific data protection obligations including ensuring the security of the processing.

How is the law enforced?

The statute will be enforced by Minnesota’s attorney general. Prior to filing an enforcement action, the attorney general must provide the controller or processor with a warning letter identifying the specific provisions alleged to have been violated. If, 30 days after issuance of the letter, the attorney general believes the violation has not been cured, an enforcement action may be filed. The right to cure sunsets on January 31, 2026.

The statute specifies that it does not create a private right of action.

If you have questions about Minnesota’s privacy law or related issues please reach out to a member of our Privacy, Data, and Cybersecurity practice group to discuss.

On August 2, 2024, Governor Pritzker signed Senate Bill (SB) 2979, which amends the Illinois Biometric Information Privacy Act, 740 ILCS 14/1, et seq. (BIPA). The bill, which passed both the Illinois House and Senate by an overwhelming majority, confirms that a private entity that more than once collects or discloses the same biometric identifier or biometric information from the same person via the same method of collection in violation of the Act has committed a single violation for which an aggrieved person is entitled to, at most, one recovery. SB 2979 adds the following clarifying language into Section 20 of the BIPA, which is the section of the statute that identifies the damages a prevailing party may recover under the Act:

(b) For purposes of subsection (b) of Section 15, a private entity that, in more than one instance, collects, captures, purchases, receives through trade, or otherwise obtains the same biometric identifier or biometric information from the same person using the same method of collection in violation of subsection (b) of Section 15 has committed a single violation of subsection (b) of Section 15 for which the aggrieved person is entitled to, at most, one recovery under this Section.

(c) For purposes of subsection (d) of Section 15, a private entity that, in more than one instance, discloses, rediscloses, or otherwise disseminates the same biometric identifier or biometric information from the same person to the same recipient using the same method of collection in violation of subsection (d) of Section 15 has committed a single violation of subsection (d) of Section 15 for which the aggrieved person is entitled to, at most, one recovery under this Section regardless of the number of times the private entity disclosed, redisclosed, or otherwise disseminated the same biometric identifier or biometric information of the same person to the same recipient.

The amendment takes effect immediately.

Background

In Cothron v. White Castle System, Inc., 2023 IL 128004, the Illinois Supreme Court held that claims under Sections 15(b) and (d) of the BIPA accrue “with every scan or transmission” of alleged biometric identifiers or biometric information.  Yet, the Illinois Supreme Court, in deciding the issue of claim accrual under Sections 15(b) and (d) of the BIPA, acknowledged that there was some ambiguity about how its holding should be construed in connection with Section 20 of the BIPA, which outlines the damages that a prevailing party may recover. Notably, the Illinois Supreme Court acknowledged, “there is no language in the Act suggesting legislative intent to authorize a damages award that would result in the financial destruction of a business,” which would be the result if the legislature intended to award statutory damages on a “per-scan” basis. The Court went on to say that “policy-based concerns about potentially excessive damage awards under the Act are best addressed by the legislature” and expressly “suggest[ed] that the legislature review these policy concerns and make clear its intent regarding the assessment of damages under the Act.”
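
To see why per-scan accrual raised the specter of financial destruction, consider a purely hypothetical illustration. The workforce size and scan frequency below are assumptions chosen only to show the arithmetic; BIPA provides liquidated damages of $1,000 per negligent violation and $5,000 per intentional or reckless violation.

```python
# Hypothetical arithmetic only: per-scan accrual versus per-person accrual.
employees = 1_000      # assumed workforce size
scans_per_day = 4      # assumed clock-ins, clock-outs, and breaks
work_days = 250        # assumed work days per year
per_violation = 1_000  # BIPA liquidated damages for a negligent violation

per_scan_exposure = employees * scans_per_day * work_days * per_violation
per_person_exposure = employees * per_violation

print(f"Per-scan:   ${per_scan_exposure:,}")    # $1,000,000,000
print(f"Per-person: ${per_person_exposure:,}")  # $1,000,000
```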

SB 2979 was introduced in the Illinois Senate on January 31, 2024, in response to the invitation from the Illinois Supreme Court and clarifies the General Assembly’s intention regarding the assessment of damages under the BIPA.

Electronic Signatures

In addition, the bill also adds “electronic signature” to the definition of written release, clarifying that an electronic signature constitutes a valid written release under Section 15(b)(3) of the BIPA. An electronic signature is defined in SB 2979 as “an electronic sound, symbol, or process attached to or logically associated with a record and executed or adopted by a person with the intent to sign a record.”

If you have questions about SB 2979 or related issues, please contact a member of our Privacy, Data, and Cybersecurity group.

Virtually all organizations have an obligation to safeguard the personal data they maintain against unauthorized access or use and, in some instances, to notify affected individuals in the event such access or use occurs. Those obligations are, in some instances, relatively nebulous, and organizations—for better or worse—have flexibility to determine what pre-incident safeguards and post-incident responsive actions are “reasonable” under the circumstances.

The SEC, in its recent amendments to Regulation S-P (the Amendments), takes a different approach. The Amendments impose detailed and specific obligations on covered institutions—including broker-dealers, investment companies, registered investment advisers, and transfer agents—to (1) develop and maintain written incident response programs and (2) provide notification to affected individuals in the event their sensitive customer information is subject to unauthorized access or use (a Data Breach).

Incident Response Program

The Amendments require covered institutions to develop and maintain written incident response programs. The function of these programs is to enable covered institutions to better detect and respond to Data Breaches, including by facilitating their:

  • assessment of the nature and scope of these incidents, including identification of the internal systems containing customer information and the types of customer information that may have been accessed or used without authorization. The Amendments indicate that covered institutions, when assessing an incident, should consider the type and extent of the unauthorized access, the impact on operations, and whether information has been exfiltrated or is no longer accessible;
  • containment and control of the incident to prevent further unauthorized access to or use of customer information.  The Amendments acknowledge that the appropriate steps for containing and controlling an incident will vary based on its nature, but identify the following as potential key action items: isolation of affected systems, enhancement of system monitoring, identifying additional compromised systems, forcing password resets, and changing or disabling default user accounts; and
  • notification to individuals whose “sensitive customer information” (defined below) was, or is reasonably likely to have been, accessed or used without authorization.

Notably, while the foregoing incident response program requirements apply to all consumer “nonpublic personal information”—a broad category encompassing all personally identifiable financial information a financial institution collects about an individual in connection with providing a financial product or service—the notification obligations discussed below are limited to incidents impacting “sensitive customer information.”

Notification to Affected Individuals

Covered institutions must provide notice to each affected individual whose sensitive customer information was, or is reasonably likely to have been, subject to a Data Breach.  “Sensitive customer information” includes:

  • information uniquely identified with an individual, such that it can reasonably be used to authenticate the individual’s identity;
  • government-issued identification numbers, including a social security number, driver’s license number, alien registration number, passport number, or employer or taxpayer identification number;
  • a biometric record;
  • a unique electronic identification number, address, or routing code;
  • telecommunication identifying information or access device; or
  • information identifying an individual or an individual’s account, including an account number, name, or online username, in combination with other authenticating information that could be used to gain access to an individual’s account.

In the event of a Data Breach, the Amendments require covered institutions to provide clear and conspicuous notice “as soon as practicable,” but not later than 30 days after their discovery of the breach.  Notice to affected individuals must include the following:

  • a general description of the incident and type of sensitive customer information affected;
  • the date (or estimated date/date range) of the incident;
  • contact information notice recipients can utilize to obtain more information about the incident; and
  • steps affected individuals can take to protect their information, including how they can obtain free credit reports, place fraud alerts on their accounts, and review their account statements for suspicious activity.

Under the Amendments, unauthorized access to or use of sensitive customer information does not always trigger the obligation to notify. Notice is not required if, after a reasonable investigation of relevant facts and circumstances, the covered institution determines that the sensitive customer information in question has not been, and is not reasonably likely to be, used in a manner resulting in substantial harm or inconvenience (e.g., because it was protected by encryption). The Amendments indicate that, if a covered institution reasonably determines that a specific individual’s sensitive customer information was not accessed or used without authorization, it does not need to notify that individual. However, if the covered institution is unable to identify which specific individuals’ sensitive customer information has been accessed or used, it must notify all individuals whose information resided on the impacted information system.
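
The notification logic described above can be summarized as a short decision tree. The sketch below merely restates that logic for illustration; the inputs would come from the covered institution’s reasonable investigation, and it is no substitute for legal analysis.

```python
from datetime import date, timedelta

# Sketch of the Reg S-P individual-notification decision described above.
def notification_path(sensitive_info_involved: bool,
                      substantial_harm_reasonably_likely: bool,
                      affected_individuals_identifiable: bool) -> str:
    if not sensitive_info_involved:
        return "No individual notice obligation under the Amendments"
    if not substantial_harm_reasonably_likely:
        return "No notice (documented reasonable determination, e.g., encrypted data)"
    if affected_individuals_identifiable:
        return "Notify the affected individuals"
    return "Notify everyone whose information resided on the impacted system"

discovery = date(2026, 1, 5)
print(notification_path(True, True, False))
print("Outside date:", discovery + timedelta(days=30))  # not later than 30 days after discovery
```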

Implementation

The Amendments will take effect in early August 2024, but covered entities—depending on their size—will have an 18- or 24-month grace period to come into compliance.  Larger entities, which are defined below, will need to come into compliance by December 2025, while smaller entities will have until June 2026.    

Larger entities are defined as follows:

  • Investment companies (together with other investment companies in the same group of related investment companies): net assets of $1 billion or more as of the end of the most recent fiscal year.
  • Registered investment advisers: $1.5 billion or more in assets under management.
  • Broker-dealers: all broker-dealers that are not small entities under the Securities Exchange Act for purposes of the Regulatory Flexibility Act.
  • Transfer agents: all transfer agents that are not small entities under the Securities Exchange Act for purposes of the Regulatory Flexibility Act.

Takeaways

Though the grace periods will likely lull some entities into near-term complacency—believing they have plenty of time to get their houses in order—prudent entities will place compliance with the Amendments high on their task lists. 

For entities that haven’t already made a significant investment in their incident response programs, development of the robust program the Amendments require will be a heavy lift.  Compliance with the assessment component, for instance, may require entities to conduct extensive data mapping to better understand what data they have, where it’s stored, how it’s safeguarded, and how long it’s retained. 

They may also need to take a close look at their current controls to detect and rapidly investigate and respond to potential Data Breaches, including those that enable the isolation of affected systems, the identification and eradication of ongoing malicious activity, and the restoration of business operations, including potential data recovery from backups. 

Covered entities will also need to prepare to analyze their notification obligations and timely provide requisite notices. 

To many, the above requirements will sound familiar, as they overlap to a degree with obligations imposed by state reasonable safeguard and breach notification laws.  The Amendments’ incident response plan prescriptions, however, are more detailed and onerous than the requirements imposed by most state laws, and their definition of “sensitive customer information” is broader than the definition of “personally identifiable information” (or the comparable term) in most states.  Accordingly, even entities that have mature incident response programs in place would benefit from giving those programs a fresh look to ensure they meet the Amendments’ lofty requirements. 

Jackson Lewis’ Financial Services and Privacy, Data, and Cybersecurity groups will continue to track this development.  Please contact a Jackson Lewis attorney with any questions.

With the Texas Data Privacy and Security Act (TDPSA) on the verge of taking effect on July 1, 2024, the State’s Attorney General, Ken Paxton, recently launched an initiative for “aggressive enforcement of Texas privacy laws.”  As part of the initiative, Paxton has established a team that will focus on the enforcement of Texas’ privacy protection laws, including the TDPSA, along with federal laws like the Children’s Online Privacy Protection Act (COPPA). 

Unlike most of the 15-plus states with comprehensive privacy laws, which exclude from their scope organizations that do not meet significant data volume thresholds (e.g., processing data related to at least 100,000 state residents), the TDPSA, with limited exceptions, applies to any organization that conducts business in the state of Texas or produces a product or service consumed by Texas residents. In contrast to the California Consumer Privacy Act (CCPA), the TDPSA excludes Human Resources and business-to-business data. But aside from this exclusion, if an organization processes the personal data of consumers residing in Texas, there is a good chance it will be in scope.

Organizations that have programs in place to comply with the CCPA will have a head start toward compliance with the TDPSA.  That said, there are aspects of the TDPSA that differ from or go beyond the CCPA.  For instance, the TDPSA requires:

  • the inclusion of specific privacy policy disclosures related to the sale of biometric or sensitive personal data;
  • the collection of consent before processing personal data for previously undisclosed purposes or processing sensitive personal data;
  • data protection assessments in connection with processing sensitive personal data, selling personal data, or using it for targeted advertising;
  • the inclusion of specific provisions in vendor agreements; and
  • a mechanism for consumers to appeal the denial of their requests to exercise their TDPSA rights.   

For assistance bringing your organization into compliance with the TDPSA, please contact a member of our Privacy, Data, and Cybersecurity group.

“Cybersecurity” has emerged as one of the top risks facing organizations. Considering the steady stream of massive data breaches affecting millions (sometimes billions), the debilitating effects of ransomware on an organization’s information systems, the intrigue of international threat actors, and the mobilization and collaboration of national law enforcement to thwart these attacks, it’s no wonder. Notions of privacy have long underpinned critical principles and rights in our legal system, yet actors in the privacy space typically do not have names like LockBit or Black Basta or use applications called Cobalt Strike, and [yawn] privacy may not trigger concerns as seemingly compelling as cybersecurity. But that may be changing, at least in the minds of insurance underwriters and persons focused on compliance.

As a recent DarkReading article points out, there is a growing sense that the “mishandling [of] protected personally identifiable information (PII) could rival the cost of ransomware attacks.” The article discusses several reasons driving this view, citing, among other things, the recent uptick in pixel litigation; that is, litigation concerning the handling of website users’ personal information obtained from tracking technologies on websites without consent.

However, the article also alludes to the vast patchwork of nuanced privacy laws across numerous jurisdictions as support for an increasing number of insurance professionals viewing privacy as the “top insurance concern.” In addition to the onslaught of litigation over the use of website tracking technologies, the challenges of navigating the ever-expanding and deepening maze of privacy law seem to present much greater compliance and litigation risks for organizations.

An Insurance Journal article, “The Cyber Risk Pendulum,” echoed these sentiments earlier this month and observed:

In 2024, there is a greater focus [by carriers] on controls related to “wrongful collection” coverage – the collection of data in a manner that could run afoul of privacy regulations – whether it be on a state or federal level.

This makes sense considering the emergence of state comprehensive privacy laws, most notably the California Consumer Privacy Act (CCPA). Consider that the first “Enforcement Advisory” issued by the California Privacy Protection Agency, the agency charged with enforcing the CCPA, focuses on “data minimization” – a requirement that includes assessing the collection, use, retention, and sharing of personal information from the perspective of minimizing the personal information processed for the intended purpose(s).   

For many organizations, different privacy laws can apply depending on a range of factors, including without limitation: industry, business location, categories of customers, types of equipment used, specific services provided, methods of marketing and promotion, the categories of information collected, and employment practices.

Consider a health care organization:

  • Industry: Of course, most if not all healthcare organizations have at least heard of the Health Insurance Portability and Accountability Act (HIPAA). Covered entities and business associates (defined terms under HIPAA generally encompassing healthcare providers and service providers to those entities) must comply with a comprehensive set of privacy regulations governing the use and disclosure of all protected health information, regardless of format.
  • Where it does business: All states have long-standing health laws regulating the use and disclosure of patient medical information. Indeed, HIPAA provides that covered entities and business associates must comply with state laws that are more stringent than HIPAA, a particular challenge for multi-state organizations. In addition to state health laws affecting the use and disclosure of patient information, common law privacy rights and obligations also need to be considered.
  • Types of customers: A healthcare provider might provide services to or on behalf of government entities, in which case it may have to comply with certain contractor mandates. Or it may focus its health services on minors rather than adults, requiring it to understand, for example, the specific consent rules for minors’ medical information. Mental healthcare providers may have an additional layer of privacy obligations concerning their patients.
  • Equipment it uses: Whether dealing with medical devices, GPS tracking of vehicles, biometric devices used to verify access to certain drugs, or smart cameras for facility surveillance, healthcare organizations must consider the privacy issues related to the different types of equipment used in the delivery of care and operations. The increasing use of biometrics, as one example, has become a major risk in and beyond the healthcare industry, particularly in Illinois. By some counts, alleged violations of the Illinois Biometric Information Privacy Act (BIPA) have led to nearly 2,000 putative class action cases. The BIPA, a privacy statute, creates a remedy for, among other things, failing to obtain consent or a written release in connection with collecting a biometric identifier or biometric information.
  • Types of services:
    • University hospitals, for example, also have compliance obligations under the Family Educational Rights and Privacy Act (FERPA).
    • Providers running certain federally assisted programs involving substance use services must comply with the substance abuse confidentiality regulations issued by the Substance Abuse and Mental Health Services Administration. See 42 CFR Part 2 (although recent regulations finalized in February strive to align these two privacy frameworks).
    • When treating certain highly contagious diseases, providers also must consider laws regulating the use and disclosure of information related to those diseases, which often provide stronger protections and limitations on disclosure.
    • A healthcare provider that performs genetic testing services must consider the applicable genetic information privacy laws, which exist in just about all 50 states. One such law is the Illinois Genetic Information Privacy Act (GIPA), passed in 1998. This law may become the next significant privacy target for the Illinois plaintiffs’ bar. Arguably more nuanced than its sister statute, the BIPA, the GIPA has been the subject of an increasing number of case filings in the past year. Compliance can be challenging. For example, the GIPA incorporates some familiar laws – GINA, ADA, Title VII, FMLA, OSHA, and others – requiring that certain entities, including employers, treat genetic testing and genetic information (including certain family medical history information) in a manner consistent with such laws. So, it is not just the GIPA that organizations need to worry about in order to comply with the GIPA.
  • Marketing its services: In addition to the use of tracking technologies referenced above, other means of collecting and sharing personal information to promote the organization’s business may have significant privacy consequences under federal and state consumer protection laws. Examples include emailing and texting, use of employee and patient images and likeness in advertisements, and sharing personal information with third parties in connection with marketing and promotion activities.
  • Categories of personal information: Not all “personal information” is the same. The post at the link just scratches the surface on the various definitions of data that may drive different compliance obligations, including for healthcare organizations.
  • Employment practices: The processing of personal information pertaining to employees, applicants, contractors, etc. creates an additional layer of privacy obligations that touch on many of the items noted above. Areas of particular concern include the increasing use of AI in hiring and promotion, workplace surveillance, methods of identity verification, managing employee medical information, and maintaining employee benefit plans. Each of these areas raises particular issues under federal and/or state law, shaped by the categories of information at issue.

Attempting to track, never mind comply with, the various privacy laws affecting each of these facets of the business is no easy task. We have not even considered the broader, more detailed, and comprehensive privacy frameworks established internationally, such as the EU General Data Protection Regulation (GDPR). And, of course, it is not just healthcare providers that face these privacy challenges at various levels of their operations. Keeping information secure from cyberattacks is one thing, and it too is quite challenging, but there are established frameworks for doing so that share many common threads. In the case of privacy, there seem to be many more subtle considerations that are critical for compliance.

For instance, in most cases establishing a password policy under a cybersecurity law to protect personal information is solving for one issue: requiring persons to develop a relatively strong password that will make it difficult for an unauthorized person to gain access to the protected system. This may be oversimplifying, but the point is a good password policy might suffice under many different cybersecurity laws, regardless of state, type of business, category of data, etc. Complying with a privacy law regulating the disclosure of health information, on the other hand, likely will require that several factors be considered: the type of entity, where it does business, the specific type of data, the individual’s age or medical condition, the reason for the disclosure, the intended recipient, etc.
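
To make the contrast concrete, the password-policy side of that comparison really can be reduced to a few portable checks. The parameters below are illustrative assumptions, not requirements drawn from any particular statute.

```python
import re

# Illustrative password policy check; the parameters are assumptions,
# not the requirements of any particular law.
def password_acceptable(pw: str) -> bool:
    return (
        len(pw) >= 12
        and re.search(r"[A-Z]", pw) is not None
        and re.search(r"[a-z]", pw) is not None
        and re.search(r"\d", pw) is not None
    )

print(password_acceptable("correct-Horse-9-battery"))  # True
print(password_acceptable("password"))                 # False
```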

Regulatory compliance is not the end of the story for privacy. For example, organizations can cause self-inflicted wounds when they make assertions about the handling and safeguarding of the personal information they collect and then fail to meet those assertions. A good example is the privacy policy on an organization’s website. Stating in such a policy that the organization will “never” disclose the personal information collected on the site may create a binding obligation on the organization, even if no law requires such a rule concerning disclosure. Check out the Federal Trade Commission’s enforcement of these kinds of issues in its recently issued 2023 Privacy and Data Security Update.

Is privacy a bigger risk than cyber? Maybe. Regardless, trying to keep track of and comply with the wide range of privacy law is no easy task, particularly considering so much of the application of those laws are determined by many factors. For this reason, it is not hard to see why underwriters may view privacy as their top concern, and why organizations need trusted and experienced partners to help navigate the maze.