On Friday, the U.S. Department of Health and Human Services (HHS), Office for Civil Rights (OCR) announced the fifth enforcement action under its Risk Analysis Initiative. In this case, OCR reached a settlement with Health Fitness Corporation (Health Fitness), a wellness vendor providing services to employer-sponsored group health plans.

This announcement is noteworthy for several reasons: it furthers OCR's Risk Analysis Initiative, it reminds business associates of their HIPAA compliance obligations, and it points to a significant development under ERISA for plan fiduciaries and their plans' service providers.

The OCR Risk Analysis Initiative

Anyone who reviews prior OCR enforcement actions will notice several trends. One of those trends relates to enforcement actions following a data breach. In those cases, the OCR frequently alleges the target of the action failed to satisfy the risk analysis standard under the Security Rule. This standard is fundamental: it involves assessing the threats and vulnerabilities to electronic protected health information (ePHI), a process that helps to shape the covered entity's or business associate's approach to the other standards, and goes beyond a simple gap analysis.

“Conducting an accurate and thorough risk analysis is not only required but is also the first step to prevent or mitigate breaches of electronic protected health information,” said OCR Acting Director Anthony Archeval.  “Effective cybersecurity includes knowing who has access to electronic health information and ensuring that it is secure.”

For those wondering how committed the OCR is to its enforcement initiatives, you need not look further than its Right to Access Initiative. On March 6, 2025, the agency announced its 53rd enforcement action. According to that announcement, it involved a $200,000 civil monetary penalty imposed against a public academic health center and research university for violating an individual’s right to timely access her medical records through a personal representative.

The DOL Cybersecurity Rule

Businesses that sponsor a group health plan or other ERISA employee benefit plans might want to review the OCR’s announcement and resolution agreement concerning Health Fitness a little more carefully. In 2024, the DOL’s Employee Benefits Security Administration (EBSA) issued Compliance Assistance Release No. 2024-01. That release makes clear that the fiduciary obligation to assess the cybersecurity of plan service providers applies to all ERISA-covered employee benefit plans, including wellness programs for group health plans.

OCR commenced its investigation of Health Fitness after receiving four reports from Health Fitness, over a three-month period (October 15, 2018, to January 25, 2019), of breaches of PHI. According to the OCR, "Health Fitness reported that beginning approximately in August 2015, ePHI became discoverable on the internet and was exposed to automated search devices (web crawlers) resulting from a software misconfiguration on the server housing the ePHI." Despite these breaches, according to the OCR, Health Fitness failed to conduct an accurate and thorough risk analysis until January 19, 2024.

Health Fitness agreed to implement a corrective action plan, which OCR will monitor for two years, and paid $227,816 to OCR. For ERISA plan fiduciaries, an important question is what they need to do to assess the cybersecurity of plan service providers like Health Fitness during the procurement process and beyond.

We provide some thoughts in our earlier article and want to emphasize that plan fiduciaries need to be involved in the process. Cybersecurity is often a risk left to the IT department. However, delegating the risk entirely may leave even the most ardent IT professional ill-equipped or insufficiently informed about the threats and vulnerabilities of a particular service provider. When it comes to ERISA plans, this means properly assessing the threats and vulnerabilities as they relate to the aspects of plan administration being handled by the service provider.

Third-party plan service providers and plan fiduciaries should begin taking reasonable and prudent steps to implement safeguards that will adequately protect plan data. EBSA's guidance should help the responsible parties get there, along with plan fiduciaries' and plan sponsors' trusted counsel and other advisors.

In February, a coalition of healthcare organizations sent a letter to President Donald J. Trump and the U.S. Department of Health and Human Services (HHS) (the Letter), urging the immediate rescission of a proposed update to the Security Rule under HIPAA. The update is aimed at strengthening safeguards for securing electronic protected health information.

According to The HIPAA Journal, the data breach trend in the healthcare industry over the past 14 years is up, not down. This is the case despite the HIPAA Security Rule having been in effect since 2005.

The HIPAA Journal goes on to provide some sobering statistics:

Between October 21, 2009, when OCR first started publishing summaries of data breach reports on its "Wall of Shame," and December 31, 2023, 5,887 large healthcare data breaches have been reported. On January 22, 2024, the breach portal listed 857 data breaches as still under investigation. This time last year there were 882 breaches listed as under investigation, which shows OCR has made little progress in clearing its backlog of investigations – something that is unlikely to change given the chronic lack of funding for the department.

There have been notable changes over the years in the main causes of breaches. The loss/theft of healthcare records and electronic protected health information dominated the breach reports between 2009 and 2015. The move to digital record keeping, more accurate tracking of electronic devices, and more widespread adoption of data encryption have been key in reducing these data breaches. There has also been a downward trend in improper disposal incidents and unauthorized access/disclosure incidents, but data breaches continue to increase due to a massive increase in hacking incidents and ransomware attacks. In 2023, OCR reported a 239% increase in hacking-related data breaches between January 1, 2018, and September 30, 2023, and a 278% increase in ransomware attacks over the same period. In 2019, hacking accounted for 49% of all reported breaches. In 2023, 79.7% of data breaches were due to hacking incidents.

The Letter, signed by numerous healthcare organizations, outlines several key concerns regarding the proposed HIPAA Security Rule update, including:

  1. Financial and Operational Burdens: The Letter argues that the proposed regulation would impose significant financial and operational burdens on healthcare providers, particularly those in rural areas. The unfunded mandates associated with the new requirements could strain the resources of hospitals and healthcare systems, leading to higher healthcare costs for patients and reduced investment in other critical areas.
  2. Conflict with Existing Law: The Letter points to an amendment to the Health Information Technology for Economic and Clinical Health (HITECH) Act, arguing the proposed enhancements to the Security Rule conflict with the HITECH Act amendment. However, the HITECH Act amendment sought to incentivize covered entities to adopt “recognized security practices” that might minimize (not necessarily eliminate) remedies for HIPAA Security Rule violations and the length and extent of audits and investigations.
  3. Timeline and Feasibility: The Letter highlights concerns about the timeline for implementing the proposed requirements. The depth and breadth of the new mandates, combined with an unreasonable timeline, present significant challenges for healthcare providers.

No doubt, the Trump Administration is intent on reducing regulation on business. However, it will be interesting to see whether it softens or even eliminates the proposed rule in response to the Letter, despite the clear trend of more numerous and damaging data breaches in the healthcare sector, and an increasing threat landscape facing all U.S. businesses.

According to one survey, Florida is fourth on the list of states with the most reported data breaches. No doubt, data breaches continue to be a significant risk for all businesses, large and small, across the U.S., including in the Sunshine State. Perhaps more troubling is that class action litigation is more likely to follow a data breach. A common claim in those cases is that the business did not do enough to safeguard personal information from the attack. So, Florida businesses need to know about the Florida Information Protection Act (FIPA), which mandates that certain entities implement reasonable measures to protect electronic data containing personal information.

According to a Law.com article:

The monthly average of 2023 data breach class actions was 44.5 through the end of August, up from 20.6 in 2022.

While a business may not be able to completely prevent a data breach, adopting reasonable safeguards can minimize the risk of one occurring, as well as the severity of an attack. Additionally, maintaining reasonable safeguards to protect personal information strengthens the business's defensible position should it face a government agency investigation or lawsuit after an attack.

Entities Subject to FIPA

FIPA applies to a broad range of organizations, including:

   •    Covered Entities: This encompasses any sole proprietorship, partnership, corporation, or other legal entity that acquires, maintains, stores, or uses personal information…so, just about any business in the state. There are no exceptions for small businesses.

   •    Governmental Entities: Any state department, division, bureau, commission, regional planning agency, board, district, authority, agency, or other instrumentality that handles personal information.

   •    Third-Party Agents: Entities contracted to maintain, store, or process personal information on behalf of a covered entity or governmental entity. This means that just about any vendor or third party service provider that maintains, stores, or processes personal information for a covered entity is also covered by FIPA.

Defining “Reasonable Measures” in Florida

FIPA requires:

Each covered entity, governmental entity, or third-party agent shall take reasonable measures to protect and secure data in electronic form containing personal information.

While FIPA mandates the implementation of “reasonable measures” to protect personal information, it does not provide a specific definition, leaving room for interpretation. However, guidance can be drawn from various sources:

  •    Regulatory Guidance: More heavily regulated businesses, such as healthcare entities, can look to the federal and state frameworks that apply to them, such as the Health Insurance Portability and Accountability Act (HIPAA). Entities in the financial sector may be subject to both federal regulations, like the Gramm-Leach-Bliley Act, and state-imposed data protection requirements. The Florida Attorney General’s office may offer insights or recommendations on what constitutes reasonable measures. Here is one example, albeit not comprehensive.
  •   Standards in Other States: Several other states have outlined more specific requirements for protecting personal information. Examples include New York and Massachusetts.

Best Practices for Implementing Reasonable Safeguards

Very often, various data security frameworks have several overlapping provisions. With that in mind, covered businesses might consider the following nonexhaustive list of best practices toward FIPA compliance. Many of the items on this list will seem obvious, even basic. But in many cases, these measures either simply have not been implemented or are not covered in written policies and procedures.

  • Conduct Regular Risk Assessments: Identify and evaluate potential vulnerabilities within your information systems to address emerging threats proactively.
  • Implement Access Controls: Restrict access to personal information to authorized personnel only, ensuring that employees have access solely to the data necessary for their roles.
  • Encrypt Sensitive Data: Utilize strong encryption methods for personal information both at rest and during transmission to prevent unauthorized access.
  • Develop and Enforce Written Data Security Policies, and Create Awareness: Establish comprehensive data protection policies and maintain them in writing. Once completed, information about relevant policies and procedures needs to be shared with employees, along with creating awareness about the changing risk landscape.
  • Maintain and Practice Incident Response Plans: Prepare and regularly update a response plan to address potential data breaches promptly and effectively, minimizing potential damages. Letting this plan sit on the shelf will have minimal impact on preparedness when facing a real data breach. It is critical to conduct tabletop and similar exercises with key members of leadership.
  • Regularly Update and Patch Systems: Keep all software and systems current with the latest security patches to protect against known vulnerabilities.
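The access-control item above can be illustrated with a short sketch. This is a hypothetical example of role-based, least-privilege access to records containing personal information; the role names, fields, and permission sets are illustrative assumptions, not categories drawn from FIPA or any regulator's guidance.

```python
# Hypothetical sketch: least-privilege access to records containing
# personal information. Roles and fields are illustrative assumptions.
from dataclasses import dataclass

# Map each role to the minimum set of fields it needs for its job.
ROLE_PERMISSIONS = {
    "benefits_admin": {"name", "ssn", "health_plan_id"},
    "help_desk": {"name", "email"},
}

@dataclass
class EmployeeRecord:
    name: str
    email: str
    ssn: str
    health_plan_id: str

def read_fields(record: EmployeeRecord, role: str, fields: set) -> dict:
    """Return only the requested fields the role is permitted to see."""
    allowed = ROLE_PERMISSIONS.get(role, set())
    denied = fields - allowed
    if denied:
        raise PermissionError(f"role {role!r} may not access: {sorted(denied)}")
    return {f: getattr(record, f) for f in fields}

record = EmployeeRecord("Ana", "ana@example.com", "123-45-6789", "HP-42")
print(read_fields(record, "help_desk", {"name", "email"}))
```

The point of the sketch is the default-deny posture: a role sees nothing unless it is expressly granted, which mirrors the "access solely to the data necessary for their roles" standard described above.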

By diligently implementing these practices, entities can better protect personal information, comply with Florida’s legal requirements, and minimize risk.

Businesses that track the geolocation of individuals—whether for fleet management, sales and promotion, logistics, risk mitigation, or other reasons—should closely monitor the progress of California Assembly Bill 1355 (AB 1355), also known as the California Location Privacy Act. If passed, this bill would impose significant restrictions on the collection and use of geolocation data, requiring many businesses to overhaul their location tracking policies and procedures.

California has long been at the forefront of data privacy regulation, particularly in the area of location tracking. Section 637.7 of the California Penal Code, for example, provides that no person or entity in California may use an electronic tracking device to determine the location or movement of a person. Notably, the law does not apply when the registered owner, lessor, or lessee of a vehicle has consented to the use of such a device with respect to that vehicle.

More recently, the California Consumer Privacy Act of 2018 (CCPA) established a comprehensive privacy and security framework for personal information of California consumers, which includes granting consumers rights over their personal information. Under the CCPA, consumers have the right, subject to some exceptions, to limit the use of their “sensitive personal information,” a defined term which includes geolocation data. The California Privacy Rights Act of 2020 (CPRA) amended the CCPA, further strengthening these protections by enhancing consumer rights and enforcement mechanisms.

Importantly, employees and contractors are considered “consumers” under the CCPA.

Key Provisions of AB 1355

If enacted, AB 1355 would place strict limits on how businesses collect, use, and retain location information. Here are the major takeaways for businesses that track geolocation data.

Who Does the Law Apply To?  The law would apply to any business (referred to as a “covered entity”) that collects or uses location data from individuals in California, although there is an exception for the location information of patients if the information is protected by HIPAA or similar laws. Government agencies are not considered covered entities but are prohibited from monetizing location information.

The bill defines “individual” as a “natural person located within the State of California.” So, it looks like the individual need not be a California resident. In addition, the collection or use of location data must be necessary to provide goods or services requested by that individual. It is unclear how this provision would apply in the employment context.

Express Opt-In Requirement. Individuals would be required to expressly opt in before their location data could be collected; businesses would not be permitted to infer consent or use pre-checked boxes.

Prohibited Actions. Businesses would not be permitted to:

  • Collect more precise location data than is necessary.
  • Retain location data longer than necessary.
  • Sell, rent, trade, or lease location data to third parties.
  • Infer additional data from collected location information beyond what is necessary.
  • Disclose location data to government agencies without a valid court order issued by a California court.
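The express opt-in requirement described above can be sketched in code. This is a hypothetical illustration of a consent gate under a bill like AB 1355: consent defaults to "no" (the equivalent of an unchecked box), and collection is refused absent an affirmative, recorded opt-in. The record structure and function names are the author's assumptions, not anything specified by the bill.

```python
# Hypothetical express opt-in gate for location data. Consent defaults
# to False (no pre-checked boxes, no inferred consent); collection is
# refused unless the individual affirmatively opted in.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    individual_id: str
    opted_in: bool = False       # never default to consent
    recorded_at: datetime = None  # timestamp of the opt-in action

def record_opt_in(consent: ConsentRecord) -> ConsentRecord:
    """Record an affirmative, timestamped opt-in action by the individual."""
    consent.opted_in = True
    consent.recorded_at = datetime.now(timezone.utc)
    return consent

def collect_location(consent: ConsentRecord, lat: float, lon: float) -> dict:
    """Collect a location point only if an express opt-in is on file."""
    if not consent.opted_in:
        raise PermissionError("no express opt-in on file; collection refused")
    return {"individual": consent.individual_id, "lat": lat, "lon": lon}
```

For example, calling `collect_location` before `record_opt_in` raises an error rather than silently collecting, which is the behavior an express opt-in regime demands.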

Notice and Policy Requirement. Under AB 1355, businesses would be required to provide clear, prominent notice at the point where location data is collected. The notice would need to include the name of the covered entity and service provider collecting the information, and a phone number and an internet website where the individual can obtain more information. Companies also would need to maintain a location privacy policy detailing, among other things:

  • What location data is collected.
  • The retention and deletion policies.
  • Whether the data is used for targeted advertising.
  • The identities of third parties or service providers with access to the data.

Any changes to this policy would require at least 20 days’ notice and renewed consent.

Enforcement and Legal Remedies. If enacted, AB 1355 would permit the California Attorney General, district attorneys, and other public prosecutors to bring lawsuits against non-compliant businesses. Remedies could include all of the following:

  • Actual damages suffered by affected individuals.
  • A civil penalty of $25,000.
  • Court-ordered injunctions and attorney’s fees for prevailing plaintiffs.

Implications for Businesses Engaged in Location Tracking

This bill represents a major shift in how businesses must approach location tracking. If enacted, businesses relying on geolocation data for purposes such as monitoring employees, connecting with customers, improving logistics, or managing risk must:

  • Implement new opt-in procedures before collecting location data.
  • Reevaluate their data retention policies to ensure compliance.
  • Review agreements with third-party vendors that process location data.
  • Update their privacy policies and internal procedures to align with the new legal requirements.
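The retention-policy item above lends itself to a simple sketch: a periodic sweep that deletes location records held longer than the documented retention window. The 30-day window below is an illustrative policy choice, not a figure taken from AB 1355 or any other law.

```python
# Hypothetical retention sweep: keep only location records still inside
# the documented retention window. The 30-day window is an assumption.
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=30)

def purge_stale(records: list, now: datetime) -> list:
    """Return only the records collected within the retention window."""
    cutoff = now - RETENTION
    return [r for r in records if r["collected_at"] >= cutoff]

now = datetime.now(timezone.utc)
records = [
    {"id": 1, "collected_at": now - timedelta(days=5)},
    {"id": 2, "collected_at": now - timedelta(days=90)},  # stale
]
print([r["id"] for r in purge_stale(records, now)])  # → [1]
```

Automating the sweep (rather than relying on ad hoc deletion) also produces a defensible record that the stated retention policy is actually followed.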

In addition to monitoring the path of this legislation, businesses also should consider revisiting their current electronic monitoring and tracking activities. Data privacy and security laws have expanded in recent years, with geolocation data being one of the more sensitive categories of personal information protected.

Employee security awareness training is a best practice and a “reasonable safeguard” for protecting the privacy and security of an organization’s sensitive data.  The list of data privacy and cybersecurity laws mandating employee data protection training continues to grow and now includes the EU AI Act.  The following list is a high-level sample of employee training obligations. 

EU AI Act. Effective February 2, 2025, Article 4 of the Act requires that all providers and deployers of AI models or systems must ensure their workforce is “AI literate”.  This means training workforce members to achieve a sufficient level of AI literacy considering various factors such as the intended use of the AI system. Training should incorporate privacy and security awareness given the potential risks. Notably, the Act applies broadly and has extraterritorial reach. As a result, this training obligation may apply to organizations including but not limited to:

  • providers placing on the market or putting into service AI systems or placing on the market general-purpose AI models in the Union, irrespective of whether those providers are established or located within the Union or in a third country (e.g., U.S.);
  • deployers of AI systems that have their place of establishment or are located within the Union; and
  • providers and deployers of AI systems that have their place of establishment or are located in a third country (e.g., U.S.), where the output produced by the AI system is used in the Union.

California Consumer Privacy Act, as amended (CCPA). Cal. Code Regs. Tit. 11 sec. 7100 requires that all individuals responsible for the business’s compliance with the CCPA, or involved in handling consumer inquiries about the business’s information practices, must be informed of all of the requirements in the CCPA, including how to direct consumers to exercise their rights under the CCPA. Under the CCPA, “consumer” means a California resident and includes employees, job applicants, and individuals whose personal data is collected in the business-to-business context.

HIPAA. Under HIPAA, a covered entity or business associate must provide HIPAA privacy training as well as security awareness training to all workforce members. Note that this training requirement may apply to employers in their role as a plan sponsor of a self-insured health plan.

Massachusetts WISP law (201 CMR 17.03). Organizations that own or license personal information about a resident of the Commonwealth are subject to a duty to protect that information. This duty includes implementing a written information security program that addresses ongoing employee training.

23 NYCRR 500. The New York Department of Financial Services’ cybersecurity requirement for financial services companies requires that covered entities provide cybersecurity personnel with cybersecurity updates and sufficient training to address relevant cybersecurity risks. 

Gramm-Leach-Bliley Act and the Safeguards Rule. The Safeguards Rule requires covered financial institutions to implement a written information security program to safeguard non-public information. The program must include employee security awareness training. In 2023, the FTC expanded the definition of financial institutions to include additional industries such as automotive dealerships and retailers that process financial transactions. 

EU General Data Protection Regulation (“EU GDPR”). Under Art. 39 of the EU GDPR, the tasks of a Data Protection Officer include training staff involved in the organization’s data processing activities.

In addition to the above, there are express or implied security awareness training obligations in numerous other laws and regulations including certain Department of Homeland Security contractors, licensees under state insurance laws modelled on the NAIC Insurance Data Security Model Law, and organizations that process payments via credit cards in accordance with PCI DSS.

Whether mandated by law or implemented as a best practice, ongoing employee privacy and security training plays a key role in safeguarding an organization’s sensitive data. Responsibility for protecting data is no longer the sole province of IT professionals. All workforce members with access to the organization’s sensitive data and information systems share that responsibility. And various stakeholders, including HR professionals, play a vital role in supporting that training.  

For more information on developing employee training, check out our prior posts.

A California federal district court recently granted class certification in a lawsuit against a financial services company. The case involves allegations that the company’s website used third-party technology to track users’ activities without their consent, violating the California Invasion of Privacy Act (CIPA). Specifically, the plaintiffs allege that the company, along with its third-party marketing software platform, intercepted and recorded visitors’ interactions with the website, creating “session replays,” which are effectively video recordings of the users’ real-time interaction with the website forms. The technology at issue in the suit is routinely utilized by website operators to provide a record of a user’s interactions with a website, in particular web forms and marketing consents.

The plaintiffs sought class certification for individuals who visited the company’s website, provided personal information, and for whom a certificate associated with their website visit was generated within a roughly one-year time frame. The company argued that users’ consent must be determined on an individual, not class-wide, basis. The company asserted that implied consent could have come from multiple different sources, including its privacy policies and third-party materials that provided notice of data interception and thus, it argued, should be viewed as consent. Some of the sources the company pointed to as notice included third-party articles on the issue.

The district court found those arguments insufficient and held that common questions of law and fact predominated as to all users. Specifically, the court found whether any of the sources provided notice of the challenged conduct in the first place to be a common issue. Further, the court found that it could later refine the class definition to the extent a user might have viewed a particular source that provided sufficient notice. The court also determined plaintiffs would be able to identify class members utilizing the company’s database, including cross-referencing contact and location information provided by users.

While class certification is not a decision on the merits and does not determine whether the company failed to provide notice or otherwise violated CIPA, it is a significant step in the litigation process. If certification is denied, the potential damages and settlement value are significantly lower. However, if plaintiffs make it over the class certification hurdle, the potential damages and settlement value of the case increase substantially.

This case is a reminder to businesses to review their current website practices and implement updates or changes to address issues such as notice (regarding tracking technologies in use) and consent (whether express or implied) before collecting user data. When using third-party tracking technologies, it is also important to audit whether vendors comply with privacy laws and have data protection measures in place.

If you have questions about website tracking technology and privacy compliance, contact a Jackson Lewis attorney to discuss.

President Trump recently fired the three Democrats on the Privacy and Civil Liberties Oversight Board (PCLOB). Since these firings bring the Board to a sub-quorum level, they have the potential to significantly disrupt transatlantic transfers of employee and other personal data from the EU to the US under the EU-US Data Privacy Framework (DPF).

The PCLOB is an independent board tasked with oversight of the US intelligence community. It is a bipartisan board consisting of five members, three of whom represent the president’s political party and two represent the opposing party. The PCLOB’s oversight role was a significant element in the Trans-Atlantic Data Privacy Framework (TADPF) negotiations, helping the US demonstrate its ability to provide an essentially equivalent level of data protection to data transferred from the EU. Without this key element, it is highly likely there will be challenges in the EU to the legality of the TADPF. If the European Court of Justice invalidates the TADPF or the EU Commission annuls it, organizations that certify to the EU-US Data Privacy Framework will be without a mechanism to facilitate transatlantic transfers of personal data to the US. This could potentially impact transfers from the UK and Switzerland as well.

Organizations that rely on their DPF certification for transatlantic data transfers should consider developing a contingency plan to prevent potential disruption to the transfer of essential personal data. Steps to prepare for this possibility include reviewing existing agreements to identify what essential personal data is subject to ongoing transfers and the purpose(s), determining whether EU Standard Contractual Clauses would be an appropriate alternative and, if so, conducting a transfer impact assessment to ensure the transferred data will be subject to reasonable and appropriate safeguards.

As the integration of technology in the workplace accelerates, so do the challenges related to privacy, cybersecurity, and the ethical use of artificial intelligence (AI). Human resource professionals and in-house counsel must navigate a rapidly evolving landscape of legal and regulatory requirements. This National Privacy Day, it’s crucial to spotlight emerging issues in workplace technology and the associated implications for data privacy, cybersecurity, and compliance.

We explore here practical use cases raising these issues, highlight key risks, and provide actionable insights for HR professionals and in-house counsel to manage these concerns effectively.

1. Wearables and the Intersection of Privacy, Security, and Disability Law

Wearable devices have a wide range of use cases including interactive training, performance monitoring, and navigation tracking. Wearables such as fitness trackers and smartwatches became more popular in HR and employee benefits departments when they were deployed in wellness programs to monitor employees’ health metrics, promote fitness, and provide a basis for doling out insurance premium incentives. While these tools offer benefits, they also collect sensitive health and other personal data, raising significant privacy and cybersecurity concerns under the Health Insurance Portability and Accountability Act (HIPAA), the Americans with Disabilities Act (ADA), and state privacy laws.

Earlier this year, the Equal Employment Opportunity Commission (EEOC) issued guidance emphasizing that data collected through wearables must align with ADA rules. More recently, the EEOC withdrew that guidance in response to an Executive Order issued by President Trump. Still, employers should evaluate their use of wearables and whether they raise ADA issues, such as ensuring use of the devices is voluntary, handling confidential medical information appropriately, avoiding impermissible disability-related inquiries, and using aggregated or anonymized data to prevent discrimination claims.

Beyond ADA compliance, cybersecurity is critical. Wearables often collect sensitive data and transmit it to third-party vendors. Employers must assess these vendors’ data protection practices, including encryption protocols and incident response measures, to mitigate the risk of breaches or unauthorized access.

Practical Tip: Implement robust contracts with third-party vendors, requiring adherence to privacy laws, breach notification, and security standards. Also, ensure clear communication with employees about how their data will be collected, used, and stored.

2. Performance Management Platforms and Employee Monitoring

Platforms like Insightful and similar performance management tools are increasingly being used to monitor employee productivity and/or compliance with applicable law and company policies. These platforms can capture a vast array of data, including screen activity, keystrokes, and time spent on tasks, raising significant privacy concerns.

While such tools may improve efficiency and accountability, they also risk crossing boundaries, particularly when employees are unaware of the extent of monitoring and/or where the employer doesn’t have effective data minimization controls in place. State laws like the California Consumer Privacy Act (CCPA) can place limits on these monitoring practices, particularly if employees have a reasonable expectation of privacy. They also can require additional layers of security safeguards and administration of employee rights with respect to data collected and processed using the platform.

Practical Tip: Before deploying such tools, assess the necessity of data collection, ensure transparency by notifying employees, and restrict data collection to what is strictly necessary for business purposes. Implement policies that balance business needs with employee rights to privacy.

3. AI-Powered Dash Cams in Fleet Management

AI-enabled dash cams, often used for fleet management, combine video, audio, GPS, telematics, and/or biometrics to monitor driver behavior and vehicle performance, among other things. While these tools enhance safety and efficiency, they also present significant privacy and legal risks.

State biometric privacy laws, such as Illinois’s Biometric Information Privacy Act (BIPA) and similar laws in California, Colorado, and Texas, impose stringent requirements on biometric data collection, including obtaining employee consent and implementing robust data security measures. Employers must also assess the cybersecurity vulnerabilities of dash cam providers, given the volume of biometric, location, and other data they may collect.

Practical Tip: Conduct a legal review of biometric data collection practices, train employees on the use of dash cams, and audit vendor security practices to ensure compliance and minimize risk.

4. Assessing Vendor Cybersecurity for Employee Benefits Plans

Third-party vendors play a crucial role in processing data for retirement plans, such as 401(k) plans, as well as health and welfare plans. The Department of Labor (DOL) emphasized in recent guidance the importance of ERISA plan fiduciaries’ role in assessing the cybersecurity practices of such service providers.

The DOL’s guidance underscores the need to evaluate vendors’ security measures, incident response plans, and data breach notification practices. Given the sensitive nature of data processed as part of plan administration—such as Social Security numbers, health records, and financial information—failure to vet vendors properly can lead to breaches, lawsuits, and regulatory penalties, including claims for breach of fiduciary duty.

Practical Tip: Conduct regular risk assessments of vendors, incorporate cybersecurity provisions into contracts, and document the due diligence process to demonstrate compliance with fiduciary obligations.

5. Biometrics for Access, Time Management, and Identity Verification

Biometric technology, such as fingerprint or facial recognition systems, is widely used for identity verification, physical access, and timekeeping. While convenient, the collection of biometric data carries significant privacy and cybersecurity risks.

BIPA and similar state laws require employers to obtain written consent, provide clear notices about data usage, and adhere to stringent security protocols. Additionally, biometrics are uniquely sensitive because they cannot be changed if compromised in a breach.

Practical Tip: Minimize reliance on biometric data where possible, ensure compliance with consent and notification requirements, and invest in encryption and secure storage systems for biometric information. Check out our Biometrics White Paper.

6. HIPAA Updates Affecting Group Health Plan Compliance

Recent changes to the HIPAA Privacy Rule, including provisions related to reproductive healthcare, significantly impact group health plans. The proposed HIPAA Security Rule amendments also signal stricter requirements for risk assessments, access controls, and data breach responses.

Employers sponsoring group health plans must stay ahead of these changes by updating their HIPAA policies and Notice of Privacy Practices, training staff, and ensuring that business associate agreements (BAAs) reflect the new requirements.

Practical Tip: Regularly review HIPAA compliance practices and monitor upcoming changes to ensure your group health plan aligns with evolving regulations.

7. Data Breach Notification Laws and Incident Response Plans

Many states have updated their data breach notification laws, lowering notification thresholds, shortening notification timelines, and expanding the definition of personal information. Employers should revise their incident response plans (IRPs) to align with these changes.

Practical Tip: Ensure IRPs reflect updated laws, test them through simulated breach scenarios, and coordinate with legal counsel to prepare for reporting obligations in case of an incident.

8. AI Deployment in Recruiting and Retention

AI tools are transforming HR functions, from recruiting to performance management and retention strategies. However, these tools require vast amounts of personal data to function effectively, increasing privacy and cybersecurity risks.

The EEOC and other regulatory bodies have cautioned against discriminatory impacts of AI, particularly regarding protected characteristics like disability, race, or gender. (As noted above, the EEOC recently withdrew its AI guidance under the ADA and Title VII following an Executive Order by the Trump Administration.) For example, the use of AI in hiring or promotions may trigger compliance obligations under the ADA, Title VII, and state laws.

Practical Tip: Conduct bias audits of AI systems, implement data minimization principles, and ensure compliance with applicable anti-discrimination laws.

9. Employee Use of AI Tools

Moving beyond the HR department, AI tools are fundamentally changing how people work.  Tasks that used to require time-intensive manual effort—creating meeting minutes, preparing emails, digesting lengthy documents, creating PowerPoint decks—can now be completed far more efficiently with assistance from AI.  The benefits of AI tools are undeniable, but so too are the associated risks.  Organizations that rush to implement these tools without thoughtful vetting processes, policies, and training will expose themselves to significant regulatory and litigation risk.     

Practical Tip: Not all AI tools are created equal—either in terms of the risks they pose or the utility they provide—so an important first step is developing criteria to assess, and then going through the process of assessing, which AI tools to permit employees to use. Equally important is establishing clear ground rules for how employees can use those tools. For instance: What company information are they permitted to use to prompt the tool? What are the processes for ensuring the tool’s output is accurate and consistent with company policies and objectives? And should employee use of AI tools be limited to internal functions, or should employees also be permitted to use these tools to generate work product for external audiences?

10. Data Minimization Across the Employee Lifecycle

At the core of many of the above issues is the principle of data minimization. The California Privacy Protection Agency (CPPA) has emphasized that organizations must collect only the data necessary for specific purposes and ensure its secure disposal when no longer needed.

From recruiting to offboarding, HR professionals must assess whether data collection practices align with the principle of data minimization. Overcollection not only heightens privacy risks but also increases exposure in the event of a breach.

Practical Tip: Develop a data inventory mapping employee information from collection to disposal. Regularly review and update policies to limit data retention and enforce secure deletion practices.
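To make the tip above concrete, a data inventory can be as simple as a structured record of each data element, its purpose, and its retention period, which can then be checked programmatically for overdue disposal. The sketch below is purely illustrative — the inventory entries, element names, and retention periods are hypothetical, not drawn from any particular legal requirement:

```python
from datetime import date, timedelta

# Hypothetical inventory entries: what is held, why, when it was
# collected, and how long the organization's policy says to keep it.
inventory = [
    {"element": "resume", "purpose": "recruiting",
     "collected": date(2019, 3, 1), "retention_days": 365 * 2},
    {"element": "payroll_record", "purpose": "payroll",
     "collected": date(2024, 6, 1), "retention_days": 365 * 7},
]

def past_retention(inventory, today):
    """List elements held beyond their documented retention period."""
    return [e["element"] for e in inventory
            if today - e["collected"] > timedelta(days=e["retention_days"])]

print(past_retention(inventory, date(2025, 1, 27)))  # ['resume']
```

A periodic review of the output would prompt secure deletion of anything held past its documented purpose, supporting the data minimization principle discussed above.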

Conclusion

The rapid adoption of emerging technologies presents both opportunities and challenges for employers. HR professionals and in-house counsel play a critical role in navigating privacy, cybersecurity, and AI compliance risks while fostering innovation.

By implementing robust policies, conducting regular risk assessments, and prioritizing data minimization, organizations can mitigate legal exposure and build employee trust. This National Privacy Day, take proactive steps to address these issues and position your organization as a leader in privacy and cybersecurity.

Insider threats continue to present a significant challenge for organizations of all sizes. One particularly concerning scenario involves employees who leave an organization and impermissibly take or download sensitive company data. These situations can severely impact a business, especially when departing employees abscond with confidential business information or trade secrets. Focusing on how the theft of such information could cripple a business’s operations and competitive advantage is certainly warranted. It is critical not to overlook, however, other legal and regulatory implications stemming from the theft of certain data, including potential data breach notification obligations.

The Importance of Safeguarding Trade Secrets

Trade secrets generally refer to information that has commercial value because it is kept secret. Examples include formulas, patterns, programs, devices, methods, and other valuable business data. Such data are often the lifeblood of a company’s competitive edge. These secrets must be safeguarded to retain their value and legal protections under the Uniform Trade Secrets Act (UTSA), which has been adopted by most states. To claim those protections, businesses must be able to demonstrate that they took reasonable measures to protect their trade secrets.

Reasonable safeguards under the UTSA can include:

  • Implementing access controls to restrict employees’ ability to download or share sensitive information.
  • Requiring employees to sign confidentiality agreements and restrictive covenants.
  • Regularly training employees on the importance of data security and confidentiality.
  • Using monitoring tools to detect unusual access or downloads of sensitive data.

Failing to adopt such safeguards can jeopardize a company’s ability to claim protection for trade secrets and pursue legal remedies if those secrets are stolen. Companies should consult with trusted IT and legal advisors to ensure they have adequate safeguards.
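One of the safeguards listed above — monitoring for unusual access or downloads — can be illustrated with a simple statistical check: compare each employee's activity today against their own recent history and flag large deviations. This is a minimal sketch with hypothetical employees, counts, and a threshold; real monitoring tools use far richer signals:

```python
from statistics import mean, stdev

# Hypothetical daily file-download counts per employee over the past week.
baseline = {"alice": [12, 9, 14, 11, 10], "bob": [8, 7, 9, 8, 10]}
today = {"alice": 13, "bob": 240}  # bob's spike may warrant review

def flag_unusual_downloads(baseline, today, z_threshold=3.0):
    """Flag employees whose download volume today far exceeds their history."""
    flagged = []
    for user, history in baseline.items():
        mu, sigma = mean(history), stdev(history)
        # Guard against zero variance; treat any increase as notable then.
        z = (today[user] - mu) / sigma if sigma else float("inf")
        if z > z_threshold:
            flagged.append(user)
    return flagged

print(flag_unusual_downloads(baseline, today))  # ['bob']
```

A flag like this would not itself establish wrongdoing; it simply surfaces activity for human review, such as a large download shortly before a resignation.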

Beyond Trade Secrets: Data Breach Concerns

While the theft of confidential business and trade secret information rightly garners attention, focusing exclusively on this aspect may cause companies to miss another critical risk: the theft of personal information. As part of their efforts to remove company information, departing employees may inadvertently or intentionally take personal information, such as employee or customer data, which could trigger significant legal obligations, particularly if accessed or acquired without authorization.

Contrary to common assumptions, data breach notification laws do not solely apply to stolen Social Security numbers. Most state data breach laws define “personal information” broadly to include elements such as:

  • Financial account information, including debit or credit card numbers.
  • Driver’s license or state identification numbers.
  • Health insurance and medical information.
  • Dates of birth.
  • Online account credentials, such as usernames and passwords.
  • Biometric data, such as fingerprints or facial recognition profiles.

The unauthorized access or acquisition of these data elements together with the individual’s name can constitute a data breach, requiring timely notification to affected individuals and, in some cases, regulatory authorities.
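The "name plus covered data element" trigger described above can be expressed as a simple first-pass triage check. The sketch below is a deliberate oversimplification for illustration — the element names are hypothetical, the covered elements vary by state, and whether notification is actually required always depends on the specific statute and facts:

```python
# Hypothetical labels for data elements commonly covered by state breach
# laws; actual definitions of "personal information" vary by state.
TRIGGER_ELEMENTS = {
    "ssn", "financial_account", "drivers_license", "state_id",
    "medical_info", "health_insurance_id", "date_of_birth",
    "online_credentials", "biometric_data",
}

def may_trigger_notification(record: dict) -> bool:
    """First-pass check: does a stolen record pair a name with a covered element?"""
    has_name = bool(record.get("name"))
    has_element = any(record.get(e) for e in TRIGGER_ELEMENTS)
    return has_name and has_element

print(may_trigger_notification(
    {"name": "J. Doe", "drivers_license": "D1234567"}))  # True
print(may_trigger_notification(
    {"name": "J. Doe", "job_title": "Analyst"}))         # False
```

A check like this can help an incident response team quickly sort stolen records into those needing legal review for notification obligations and those that likely do not.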

Broader Regulatory and Contractual Implications

In addition to state breach notification laws that seek to protect personal information, companies must consider other regulatory and contractual obligations when sensitive data is stolen. For example:

  • Publicly traded companies: Theft of critical business information by a departing employee may require disclosure under U.S. Securities and Exchange Commission (SEC) regulations if the theft is deemed material. If a company determines the materiality threshold has been reached, it generally has four business days to disclose the incident publicly.
  • Critical infrastructure businesses: Companies providing services in regulated industries, such as energy or healthcare, may have reporting obligations to regulatory authorities if sensitive confidential business data is compromised.
  • Contractual obligations: Many businesses enter into agreements with business customers that require notification if confidential business information or personal data is compromised.

Ignoring these obligations could expose organizations to fines, lawsuits, and reputational harm, compounding the difficulties already created by the theft of an organization’s confidential business information.

Taking a Comprehensive Approach to Data Theft

The theft of confidential business information by a departing employee can be devastating for a business. However, focusing solely on restrictive covenants, trade secrets, or business information risks overlooking the full scope of legal and regulatory obligations. To effectively respond to such incidents, companies should:

  1. Identify the nature of the stolen data: Assess whether the data includes personal information, trade secrets, or other sensitive information that could trigger specific legal obligations.
  2. Evaluate legal and regulatory obligations: Determine whether notification is required under state breach laws, SEC or other regulations (if applicable), industry-specific rules, or contractual agreements.
  3. Leverage restrictive covenant agreements: Assess appropriate legal or contractual remedies, including under restrictive covenant, confidentiality, and other agreements, as part of a broader strategy to address the theft.
  4. Implement safeguards: Strengthen data protection measures to mitigate the risk of future incidents, including employee training, enhanced monitoring, and robust exit procedures.

While dealing with insider threats is undoubtedly challenging, taking a comprehensive and proactive approach can help businesses protect their interests and minimize legal exposure. In today’s interconnected and highly regulated world, understanding the full scope of risks and obligations tied to data theft is essential for any business.

If you are looking for a high-level summary of California laws regulating artificial intelligence (AI), check out the two legal advisories issued by California Attorney General Rob Bonta. The first advisory is directed at consumers and entities about their rights and obligations under the state’s consumer protection, civil rights, competition, and data privacy laws. The second advisory focuses on healthcare entities.

“AI might be changing, innovating, and evolving quickly, but the fifth largest economy in the world is not the wild west; existing California laws apply to both the development and use of AI.” Attorney General Bonta

The advisories summarize existing California laws that may apply to entities who develop, sell, or use AI. They also address several new California AI laws that went into effect on January 1, 2025.

The first advisory points to several existing laws, such as California’s Unfair Competition Law and Civil Rights Laws, designed to protect consumers from unfair and fraudulent business practices, anticompetitive harm, discrimination and bias, and abuse of their data.

California’s Unfair Competition Law, for example, protects the state’s residents against unlawful, unfair, or fraudulent business acts or practices. The advisory notes that “AI provides new tools for businesses and consumers alike, and also creates new opportunity to deceive Californians.” Under a similar federal law, the Federal Trade Commission (FTC) recently ordered an online marketer to pay $1 million resulting from allegations concerning deceptive claims that the company’s AI product could make websites compliant with accessibility guidelines. Considering the explosive growth of AI products and services, organizations should be revisiting their procurement and vendor assessment practices to be sure they are appropriately vetting vendors of AI systems.

Additionally, the California Fair Employment and Housing Act (FEHA) protects Californians from harassment or discrimination in employment or housing based on a number of protected characteristics, including sex, race, disability, age, criminal history, and veteran or military status. These FEHA protections extend to uses of AI systems when developed for and used in the workplace. Expect new regulations soon as the California Civil Rights Council continues to mull proposed AI regulations under the FEHA.

Recognizing that “data is the bedrock underlying the massive growth in AI,” the advisory points to the state’s constitutional right to privacy, applicable to both government and private entities, as well as to the California Consumer Privacy Act (CCPA). Of course, California has several other privacy laws that may need to be considered when developing and deploying AI systems – the California Invasion of Privacy Act (CIPA), the Student Online Personal Information Protection Act (SOPIPA), and the Confidentiality of Medical Information Act (CMIA).

Beyond these existing laws, the advisory also summarizes new laws in California directed at AI, including:

  • Disclosure Requirements for Businesses
  • Unauthorized Use of Likeness
  • Use of AI in Election and Campaign Materials
  • Prohibition and Reporting of Exploitative Uses of AI

The second advisory recounts many of the same risks and concerns about AI as relevant to the healthcare sector. Consumer protection, anti-discrimination, patient privacy, and other concerns all are challenges entities in the healthcare sector face when developing or deploying AI. The advisory provides examples of applications of AI systems in healthcare that may be unlawful; here are a couple:

  • Denying health insurance claims using AI or other automated decisionmaking systems in a manner that overrides doctors’ views about necessary treatment.
  • Using generative AI or other automated decisionmaking tools to draft patient notes, communications, or medical orders that include erroneous or misleading information, including information based on stereotypes relating to race or other protected classifications.

The advisory also addresses data privacy, reminding readers that the state’s CMIA may be more protective in some respects than the popular federal healthcare privacy law, HIPAA. It also discusses recent changes to the CMIA that require that providers, electronic health records (EHR) companies, and digital health companies enable patients to keep their reproductive and sexual health information confidential and separate from the rest of their medical records. These and other requirements need to be taken into account when incorporating AI into EHRs and related applications.

In both advisories, the Attorney General makes clear that in addition to the laws referenced above, other California laws—including tort, public nuisance, environmental and business regulation, and criminal law—apply to AI. In short:  

Conduct that is illegal if engaged in without the involvement of AI is equally unlawful if AI is involved, and the fact that AI is involved is not a defense to liability under any law.

Both advisories provide a helpful summary of laws potentially applicable to AI systems, and can be useful resources when building policies and procedures around the development and/or deployment of AI systems.