The U.S. Senate voted early Tuesday to remove a proposed moratorium from the federal budget bill. This outcome marks a pivotal moment in the ongoing debate over artificial intelligence regulation in the United States.

The AI moratorium, initially proposed as part of the One Big Beautiful Bill Act, proposed a 10-year moratorium on the enforcement of AI-related legislation by states or other entities. Specifically, it was designed to restrict the regulation of AI models, systems, or automated decision systems involved in interstate commerce.

The provision faced strong bipartisan opposition, with critics warning that it would leave consumers vulnerable to AI-related harm and undermine state-level consumer protection laws.

After extensive negotiations and public outcry, an amendment to strip the provision from the budget bill was introduced. The Senate voted overwhelmingly in favor of the amendment, thus ending the moratorium effort.

Support for the removal was partially based on states having the ability to enforce their own AI regulations, particularly in areas such as robocalls, deepfakes, and autonomous vehicles. Currently, several state and local jurisdictions have AI protections planned or already in place.

These developments also reflect a growing recognition of the complexities and potential risks associated with AI technologies. Both the federal government and states will likely be grappling with balancing regulation with innovation when it comes to AI for the foreseeable future.

Jackson Lewis will continue to monitor legislative developments related to AI and related technologies.

The Senate recently voting 99-1 to remove a 10-year moratorium on state regulation of AI says something about the impact of AI, but also its challenges.

A new MIT study, presented at the ACM Conference on Fairness, Accountability and Transparency, demonstrates that large language models (LLMs) used in healthcare can be surprisingly “brittle.” As discussed in Medical Economies, the researchers evaluated more than 6,700 clinical scenarios and introduced nine types of minor stylistic variations—typos, extra spaces, informal tone, slang, dramatic language, or even removing gender markers. Across all variants, these small changes altered AI treatment recommendations in clinically significant ways, with a 7‑9% increase in the AI advising patients to self‑manage rather than seek care—even when medical content remained identical.

“These models are often trained and tested on medical exam questions but then used in tasks that are pretty far from that, like evaluating the severity of a clinical case. There is still so much about LLMs that we don’t know,” Abinitha Gourabathina, Lead author of the study

Notably, these misinterpretations disproportionately impacted women and other vulnerable groups—even when gender cues were stripped from the input. In contrast, according to the study’s findings, human clinicians remained unaffected by such stylistic changes in deciding whether care was needed.

Why This Matters For Healthcare and Beyond

Healthcare Providers

  • Patient safety risk: If patients’ informal language or typos unintentionally trigger incorrect triage outcomes, patients may be more likely to engage in self-care when in-person care is recommended, and serious conditions may go unflagged.
  • Health equity concerns: The disproportionately poorer outcomes linked to female or vulnerable‐group messages highlight amplified bias through sensitivity to style, not substance.

Professional Services

  • Faulty legal, compliance, or regulatory advice: The same kind of “brittleness” that the MIT study suggests could compromise patient care, may also lead to varied and inaccurate and/or incomplete legal or compliance recommendations.

Human Resources

  • Compliance and bias risks: There are many use cases for LLMs in the employment arena which also rely on prompts from employees. And, there are lots of concerns about bias. Without sufficient training, auditing, and governance, they too may fall prey to the same kind of limitations witnessed in the MIT study.  

Governance, Testing & Accountability

Healthcare surely is not the only industry grappling with the new challenges LLMs present. Establishing a governance framework is critical, and organizations might consider the following measures as part of that framework:

  1. Pre‑deployment auditing: LLMs should undergo rigorous testing across demographic subgroups and linguistic variants—not just idealized text—but also informal, error‐ridden inputs.
  2. Prompt perturbation testing: Simulate typos, tone shifts, missing or added markers—even swapping gender pronouns—to check for output stability.
  3. Human validation oversight: At minimum, have subject matter experts (SMEs) review AI outputs, especially where high‐stakes decisioning is involved.
  4. Developer scrutiny and certification: Organizations deploying LLMs should understand and assess to the extent necessary and appropriate efforts made by their developers to address these and other issues before adoption and implementation.

The MIT research reveals that small changes in input—like a typo or a slangy phrase—can meaningfully skew outputs in potentially high‑stakes contexts. For users of LLMs, including healthcare providers, lawyers, and HR professionals, that “brittleness” may heighten risks to safety, accuracy, fairness, and compliance. Strong governance is needed. And with the moratorium on state regulation of AI removed from the Big Beautiful Bill, should its removal hold, organizations are like to see more attention given to governance, risk, and compliance requirements as legislation develops.  

Explained in more detail below, under the recent vacatur of most of the HIPAA Privacy Rule to Support Reproductive Health Care Privacy (the “Reproductive Health Rule”):

  • The broad prohibitions on disclosing protected health information (“PHI”) relating to reproductive health for law enforcement or investigatory purposes are vacated nationally.
  • The attestation requirement that was included as part of the Reproductive Health Rule no longer applies to requests for such information.

Note, however, the more recent U.S. Supreme Court decision in Trump v. CASA, Inc. (S. Ct. 2025) may limit the scope of the Texas District Court’s decision.

What This Means for HIPAA-Covered Entities

  • Providers and plans should:
    • Revisit their policies, notices, and related materials to determine what  changes should be made. They also should be looking at the activities of business associates acting on their behalf (and consider any recent changes to business associate agreements).
    • Audit previous attestation workflows, disclosures, and programming.
    • Communicate any implemented changes.
    • Re-train staff who handle PHI or receive subpoenas to ensure workflows align with and policy revisions.
  • HIPAA’s core Privacy Rule remains unchanged, so protected uses/disclosures (e.g., treatment, payment, health oversight or law enforcement under the usual standards) still apply.
  • Compliance with substance use disorder (“SUD”)-related Notice of Privacy Practices updates, per the CARES Act, must continue.

State Law Dynamics

  • Remember that more stringent protections for health information under state law that conflicts with HIPAA’s Privacy Rule are not preempted by HIPAA.
  • For example, many states have enacted their own enhanced protections, like limits on geofencing or disclosure of reproductive health data.
  • As a result, providers and plans must make sure that their policies comply not only with HIPAA’s Privacy Rule, but with applicable state law requirements.

2024 Reproductive Health Rule

In 2024, HIPAA covered entities, including healthcare providers and health plans, began taking steps to comply with the Department of Health and Human Services 2024 Reproductive Health Rule under the HIPAA Privacy Rule. See our prior articles on these regulations:  New HIPAA Final Rule Imposes Added Protections for Reproductive Health Care Privacy | Workplace Privacy, Data Management & Security Report and HIPAA Final Rule For Reproductive Health Care Privacy with December 23, 2024, Compliance Deadline | Benefits Law Advisor

Fast forward to June 18, 2025, when U.S. District Judge Matthew Kacsmaryk (N.D. Tex.), issued a nationwide injunction in Purl v. HHS, vacating most of the Reproductive Health Rule. Key holdings include:

  • HHS exceeded its statutory authority, overstepped via the “major‑questions doctrine,” and unlawfully redefined terms like “person” and “public health.”
  • The agency impermissibly intruded on state authority to enforce laws, including child abuse reporting.

The essence of the ruling is that it blocks most of the protections in the Reproductive Health Rule, including the attestation requirement. However, unrelated SUD-related Notice of Privacy Practices provisions are not affected and remain in force.

Although the court determined that vacatur is the default remedy for unlawful agency actions challenged under the Administrative Procedures Act, it also noted that the Supreme Court would need to address vacatur and universal injunctions. On June 27, 2025, the Supreme Court did just that in Trump v. CASA, Inc., when it held that universal injunctions likely exceed the equitable authority that Congress gave to federal courts. In Casa, the Court granted the government’s applications for a partial stay of the injunctions at issue, but only to the extent that the injunctions were broader than necessary to provide complete relief to each plaintiff with standing to sue. Stay tuned for further developments on how Casa will impact the nation-wide vacatur discussed in this post.

Update as of July 1, 2025.

The federal budget bill titled One Big Beautiful Bill aims to unharness artificial intelligence (AI) development in the U.S. The current draft of the bill, which has passed the House, proposes a 10-year moratorium on the enforcement of AI-related legislation by states or other entities. Specifically, it restricts the regulation of AI models, systems, or automated decision systems involved in interstate commerce.

Supporters believe the moratorium will encourage innovation and prevent a fragmented landscape of state AI laws, which are already emerging. However, opponents express concerns about the potential impact on existing state laws that regulate issues such as deepfakes and discrimination in automated hiring processes.

The ultimate outcome of this provision of the federal budget bill remains uncertain. However, if the budget bill passes with the moratorium, then it will take effect upon enactment.

Meanwhile, states continue to propose their own legislation to regulate AI in the workplace and other areas. Jackson Lewis will continue to monitor this and other related legislative developments.

For businesses subject to the California Consumer Privacy Act (CCPA), a compliance step often overlooked is the requirement to annually update the businesses online privacy policy. Under Cal. Civ. Code § 1798.130(a)(5), CCPA-covered businesses must among other things update their online privacy policies at least once every 12 months. Note that CCPA regulations establish content requirements for online privacy policies, one of which is that the policy must include “the date the privacy policy was last updated.” See 11 CCR § 7011(e)(4).

As businesses continue to grow, evolve, adopt new technologies, or otherwise make online and offline changes in their business, practices, and/or operations, CCPA required privacy policies may no longer accurately or completely reflect the collection and processing of personal information. Consider, for example, the adoption of emerging technologies, such as so-called “artificial intelligence” tools. These tools may be collecting, inferring, or processing personal information in ways that were not contemplated when preparing the organization’s last privacy policy update.

The business also may have service providers that collect and process personal information on behalf of the business in ways that are different than they did when they began providing services to the business.

Simply put: If your business (or its service providers) has adopted any new technologies or otherwise changed how it collects or processes personal information, your privacy policy may need an update.

Practical Action Items for Businesses

Here are some steps businesses can take to comply with the annual privacy policy review and update requirement under the CCPA:

  • Inventory Personal Information
    Reassess what categories of personal information your organization collects, processes, sells, and shares. Consider whether new categories—such as biometric, geolocation, or video —have been added.
  • Review Data Use Practices
    Confirm whether your uses of personal information have changed since the last policy update. This includes whether you are profiling, targeting, or automating decisions based on the data.
  • Assess adoption of new technologies, such as AI and New Tech Tools
    Has your business adopted any new technologies or systems, such as AI applications? Examples may include:
    • AI notetakers, transcription, or summarization tools for use in meetings (e.g., Otter, Fireflies)
    • AI used for chatbots, personalized recommendations, or hiring assessments
  • Evaluate Third Parties and Service Providers
    Are you sharing or selling information to new third parties? Has your use of service providers changed, or have service providers changed their practices around the collection or processing of personal information?
  • Review Your Consumer Rights Mechanisms
    Are the methods for consumers to submit access, deletion, correction, or opt-out requests clearly stated and functioning properly?

These are only a few of the potential recent developments that may drive changes in an existing privacy policy. There may be additional considerations for businesses in certain industries and departments within those businesses that should be considered as well. Here are a few examples:

Retail Businesses

  • Loyalty programs collecting purchase history and predictive analytics data.
  • More advanced in-store cameras and mobile apps collecting biometric or geolocation information.
  • AI-driven customer service bots that gather interaction data.

Law Firms

  • Use of AI notetakers or transcription tools during client calls.
  • Remote collaboration tools that collect device or location data.
  • Marketing platforms that profile client interests based on website use.

HR Departments (Across All Industries)

  • AI tools used for resume screening and candidate profiling.
  • Digital onboarding platforms collecting sensitive identity data.
  • Employee productivity and monitoring software that tracks usage, productivity, or location.

The online privacy policy is not just a static compliance document—it’s a dynamic reflection of your organization’s data privacy practices. As technologies evolve and regulations expand, taking time once a year to reassess and update your privacy disclosures is not only a legal obligation in California but a strategic risk management step. And, while we have focused on the CCPA in this article, inaccurate or incomplete online privacy policies can elevate compliance and litigation risks under other laws, including the Federal Trade Commission Act and state protections against deceptive and unfair business practices.

On June 20, 2025, Texas Governor Greg Abbott signed SB 2610 into law, joining a growing number of states that aim to incentivize sound cybersecurity practices through legislative safe harbors. Modeled on laws in states like Ohio and Utah, the new Texas statute provides that certain businesses that “demonstrate[] that at the time of the breach the entity implemented and maintained a cybersecurity program” meeting the requirements in the new law may be shielded from exemplary (punitive) damages in the event of a data breach lawsuit.

This development comes amid a clear uptick in data breach class action litigation across the country. Notably, plaintiffs’ attorneys are no longer just targeting large organizations following breaches that expose millions of records. Recent cases have been filed against small and midsize businesses, even when the breach affected relatively few individuals.

What the Texas Law Does

SB 2610 erects a shield from liability to protect certain businesses (those under 250 employees) from exemplary damages in a tort action resulting from a data breach. That shield applies only if the business demonstrates that at the time of the breach the entity implemented and maintained a cybersecurity program that meets certain requirements, which may include compliance with a recognized framework (e.g., NIST, ISO/IEC 27001). This is not immunity from all liability—it applies only to punitive damages—but it can be a significant limitation on financial exposure.

This is a carrot, not a stick. The law does not impose new cybersecurity obligations or penalties. Instead, it encourages proactive investment in cybersecurity by offering meaningful protection when incidents occur despite those efforts.

Why the Size of the Entity Isn’t the Whole Story

A unique aspect of the Texas law is that it scales cybersecurity expectations in part based on business size. Specifically, for businesses with fewer than 20 employees, a “reasonable” cybersecurity program may mean something different than it does for one between 100 and 250 employees. But here’s the problem: Many businesses with small employee counts handle large volumes of sensitive data.

Consider:

  • A 10 employee law firm managing thousands of client files, including Social Security numbers and health records;
  • A small dental practice storing patient health histories and billing information;
  • A title or insurance agency processing mortgage, escrow, or policy documents for hundreds of customers each month.

These entities may employ fewer than 20 people but process exponentially more personal information than a 250-employee manufacturing plant. In this context, determining what qualifies as “reasonable” cybersecurity must focus on data risk, not just employee headcount.

Takeaways for Small and Midsize Organizations

  • Don’t assume you’re too small to be a target: Plaintiffs’ firms are increasingly focused on any breach with clear damages and weak safeguards—regardless of business size.
  • Adopt a framework: Implementing a recognized cybersecurity framework not only enhances your defense posture but could also help limit damages in litigation.
  • Document, document, document: The presumption under SB 2610 is available only if the business can demonstrate it created and followed a written cybersecurity program at the time of the breach.
  • Review annually: As threat landscapes evolve, your security program must adapt. Static programs are unlikely to satisfy the “reasonable conformity” standard over time.

Final Thought

Texas’s new law reinforces a growing national trend: states are rewarding—not just punishing—cybersecurity efforts. But the law also raises the bar for smaller businesses that may have historically viewed cybersecurity as a lower priority. If your organization handles personal data, no matter how many employees you have, it’s time to treat cybersecurity as a critical business function—and an essential legal shield.

Montana recently amended its privacy law through Senate Bill 297, effective October 1, 2025, strengthening consumer protections and requiring businesses to revisit their privacy policies that apply to citizens of Montana. Importantly, it lowered the threshold for applicability to persons and businesses who control or process the personal data of 25,000 or more consumers (previously 50,000), unless the controller uses that data solely for completing payments. For those who derive more than 25% of gross revenue from the sale of personal data, the threshold is now 15,000 or more consumers (previously 25,000).

With the amendments, nonprofits are no longer exempt unless they are set up to detect and prevent insurance fraud. Insurers are now similarly exempt.

When a consumer requests confirmation that a controller is processing their data, the controller can no longer disclose but must identify possession of: (1) social security numbers, (2) ID numbers, (3) financial account numbers, (4) health insurance or medical identification numbers, (5) passwords, security questions, or answers, or (6) biometric data.

Privacy notices must now include: (1) personal data categories, (2) controller’s purpose in possessing personal data, (3) categories controller sells or shares with third parties, (4) categories of third parties, (5) contact information for the controller, (6) explanation of rights and how to exercise them, and (7) the date privacy notice was last updated. Privacy notices must be accessible to and usable to people with disabilities and available in each language in which the controller provides a product or service. Any material changes to the controller’s privacy notice or practices require notices to affected consumers and the opportunity to withdraw consent. Notices need not be Montana-specific, but controllers must conspicuously post them on websites, in mobile applications, or through whatever medium the controller interacts with customers.

The amendments further clarified information the attorney general must publicly provide, including an online mechanism for consumers to file complaints. Further, the attorney general may now issue civil investigative demands and need not issue any notice of violation or provide a 60-day period for the controller to correct the violation.

Artificial Intelligence (AI) is transforming businesses—automating tasks, powering analytics, and reshaping customer interactions. But like any powerful tool, AI is a double-edged sword. While some adopt AI for protection, attackers are using it to scale and intensify cybercrime. Here’s a high-level discussion at emerging AI-powered cyber risks in 2025—and steps organizations can take to defend.

AI-Generated Phishing & Social Engineering

Cybercriminals now use generative AI to craft near-perfect phishing messages—complete with accurate tone, logos, and language—making them hard to distinguish from real communications . Voice cloning tools enable “deepfake” calls from executives, while deepfake video can simulate someone giving fraudulent instructions.

Thanks to AI, according to Tech Advisors, phishing attacks are skyrocketing—phishing surged 202% in late 2024, and over 80% of phishing emails now incorporate AI, with nearly 80% of recipients opening them. These messages are bypassing filters and fooling employees.

Adaptive AI-Malware & Autonomous Attacks

It is not just the threat actors but the AI itself that drives the attack. According to Cyber Defense Magazine reporting:  

Compared to the traditional process of cyber-attacks, the attacks driven by AI have the capability to automatically learn, adapt, and develop strategies with a minimum number of human interventions. These attacks proactively utilize the algorithms of machine learning, natural language processing, and deep learning models. They leverage these algorithms in the process of determining and analyzing issues or vulnerabilities, avoiding security and detection systems, and developing phishing campaigns that are believable.

As a result, attacks that once took days now unfold in minutes, and detection technology struggles to keep up, permitting faster, smarter strikes to slip through traditional defenses.

Attacks Against AI Models Themselves

Cyberattacks are not limited to business email compromises designed to effect fraudulent transfers or to demand a ransom payment in order to suppress sensitive and compromising personal information. Attackers are going after AI systems themselves. Techniques include:

  • Data poisoning – adding harmful or misleading data into AI training sets, leading to flawed outputs or missed threats.
  • Prompt injection – embedding malicious instructions in user inputs to hijack AI behavior.
  • Model theft/inversion – extracting proprietary data or reconstructing sensitive training datasets.

Compromised AI can lead to skipped fraud alerts, leaked sensitive data, or disclosure of confidential corporate information. Guidance from NIST, Adversarial Machine Learning A Taxonomy and Terminology of Attacks and Mitigations, digs into these quite a bit more, and outlines helpful mitigation measures.

Deepfakes & Identity Fraud

Deepfake audio and video are being used to mimic executives or trusted contacts, instructing staff to transfer funds, disclose passwords, or bypass security protocols.

Deepfakes have exploded—some reports indicate a 3,000% increase in deepfake fraud activity. These attacks can erode trust, fuel financial crime, and disrupt decision-making.

Supply Chain & Third-Party Attacks

AI accelerates supply chain attacks, enabling automated scanning and compromise of vendor infrastructures. Attackers can breach a small provider and rapidly move across interconnected systems. These ripple-effect attacks can disrupt entire industries and critical infrastructure, far beyond the initial target. We have seen these effects with more traditional supply chain cyberattacks. AI will only amplify these attacks.  

Enhancing Cyber Resilience, Including Against AI Risks

Here’s some suggestions for stepping up defenses and mitigating risk:

  1. Enhance Phishing Training for AI-level deception
    Employees should recognize not just misspellings, but hyper-realistic phishing, voice calls, and video
 impersonations. Simulations should evolve to reflect current AI tactics.
  2. Inventory, vet, and govern AI systems
    Know which AI platforms you use—especially third-party tools. Vet them for data protection, model integrity, and update protocols. Keep a detailed registry and check vendor security practices. Relying on a vendor’s SOC report simply may not be sufficient, particularly is not read carefully and in context.
  3. Validate AI inputs and monitor outputs
    Check training data for poisoning. Test and stress AI models to spot vulnerabilities. Use filters and anomaly detection to flag suspicious inputs or outputs.
  4. Use AI to defend against AI
    Deploy AI-driven defensive tools—like behavior-based detection, anomaly hunting, and automated response platforms—so you react in real time.
  5. Adopt zero trust and multi-factor authentication (MFA)
    Require authentication for every access, limit internal privileges, and verify every step—even when actions appear internal.
  6. Plan for AI-targeted incidents
    Update your incident response plan with scenarios like model poisoning, deepfake impersonation, or AI-driven malware. Include legal, communications, and other relevant stakeholders in your response teams.
  7. Share intelligence and collaborate
    Tap into threat intelligence communities, “Information Sharing and Analysis Centers” or “ISACs”, to share and receive knowledge of emerging AI threats.

Organizations that can adapt to a rapidly changing threat landscape will be better position to defend against these emerging attack vectors and mitigate harm.

A recent breach involving Indian fintech company Kirana Pro serves as a reminder to organizations worldwide: even the most sophisticated cybersecurity technology cannot make up for poor administrative data security hygiene.

According to a June 7 article in India Today, KiranaPro suffered a massive data wipe affecting critical business information and customer data. The company’s CEO believes the incident was likely the result of a disgruntled former employee, though he has not ruled out the possibility of an external hack, according to reporting. TechCrunch explained:

The company confirmed it did not remove the employee’s access to its data and GitHub account following his departure. “Employee offboarding was not being handled properly because there was no full-time HR,” KiranaPro’s chief technology officer, Saurav Kumar, confirmed to TechCrunch.

Unfortunately, this is not a uniquely Indian problem. Globally, organizations invest heavily in technical safeguards—firewalls, multi-factor authentication, encryption, endpoint detection, and more. These tools are essential, but not sufficient.

The Silent Risk of Inactive Accounts

One of the most common (and preventable) vectors for insider incidents or credential abuse is failure to promptly deactivate system access when an employee departs. Whether termination is amicable or not, if a former employee retains credentials to email, cloud storage, or enterprise software, the organization is vulnerable. These accounts may be exploited intentionally (as suspected in the KiranaPro case) or unintentionally if credentials are stolen or phished later.

Some organizations assume their IT department is handling these terminations automatically. Others rely on inconsistent handoffs between HR, legal, and IT teams. Either way, failure to follow a formal offboarding checklist—and verify deactivation—may be a systemic weakness, not a fluke.

It’s Not Just About Tech—It’s About Governance

This breach illustrates the point that information security is as much about governance and process as it is about technology. Managing who has access to what systems, when, and why is a core component of security frameworks such as NIST, ISO 27001, and the CIS Controls. In fact, user access management—including timely revocation of access upon employee separation—is a foundational expectation in every major cybersecurity risk assessment.

Organizations should implement the following best practices:

  1. Establish a formal offboarding procedure. Involve HR, IT, and Legal to ensure immediate deactivation of all accounts upon separation.
  2. Automate user provisioning and deprovisioning where possible, using identity and access management (IAM) tools.
  3. Maintain a system of record for all access rights. Periodically audit active accounts and reconcile them against current employees and vendors.
  4. Train supervisors and HR personnel to notify IT or security teams immediately upon termination or resignation. There also may be cases where monitoring an employee’s system activity in anticipation of termination may be prudent.

The Takeaway

Wherever your company does business and regardless of industry, the fundamentals are the same: a lapse in basic access control can cause as much damage as a ransomware attack. The KiranaPro incident is a timely cautionary tale. Organizations must view cybersecurity not only as a technical discipline but as an enterprise-wide responsibility.

In today’s hybrid and remote work environment, organizations are increasingly turning to digital employee management platforms that promise productivity insights, compliance enforcement, and even behavioral analytics. These tools—offered by a growing number of vendors—can monitor everything from application usage and website visits to keystrokes, idle time, and screen recordings. Some go further, offering video capture, geolocation tracking, AI-driven risk scoring, sentiment analysis, and predictive indicators of turnover or burnout.

While powerful, these platforms also carry real legal and operational risks if not assessed, configured, and governed carefully.

Capabilities That Go Beyond Traditional Monitoring

Modern employee management tools have expanded far beyond “punching in,” reviewing emails, and tracking websites visited. Depending on the features selected and how the platform is configured, employers may have access to:

  • Real-time screen capture and video recording
  • Automated time tracking and productivity scoring
  • Application and website usage monitoring
  • Keyword or behavior-based alerts (e.g., data exfiltration risks)
  • Behavioral biometrics or mouse/keyboard pattern analysis
  • AI-based sentiment or emotion detection
  • Geolocation or IP-based presence tracking
  • Surveys and wellness monitoring tools

Not all of these tools are deployed in every instance, and many vendors allow companies to configure what they monitor. Some important questions arise, such as who at the company is making the decisions on how to configure the tool, what data is collected, is the collection permissible, who has access , how are decisions made using that data, and what safeguards are in place to protect the data. But even limited use can present privacy and employment-related risks if not governed effectively.

Legal and Compliance Risks

While employers generally have some leeway to monitor their employees on company systems, existing and emerging law, particularly concerning AI, along with considering best practices, employee relations, and other factors should help with developing some guidelines.

  • Privacy Laws: State and international privacy laws (like the California Consumer Privacy Act, GDPR, and others) may require notice, consent, data minimization, and purpose limitation. Even in the U.S., where workplace privacy expectations are often lower, secretive or overly broad monitoring can trigger complaints or litigation.
  • Labor and Employment Laws: Monitoring tools that disproportionately affect certain groups or are applied inconsistently may prompt discrimination or retaliation claims. Excessive monitoring activities could trigger bargaining obligations and claims concerning protected concerted activity.
  • AI-Driven Features: Platforms that employ AI or automated decision-making—such as behavioral scoring or predictive analytics—may be subject to emerging AI-specific laws and guidance, such as New York City’s Local Law 144, Colorado’s AI Act, and AI regulations recently approved by the California Civil Rights Department under the Fair Employment and Housing Act (FEHA) concerning the use of automated decision-making systems.
  • Data Security and Retention: These platforms collect sensitive behavioral data. If poorly secured or over-retained, that data could become a liability in the event of a breach or internal misuse.

Governance Must Extend Beyond IT

Too often, these tools are procured and managed primarily, sometimes exclusively, by IT or security teams without broader organizational involvement. Given the nature of data these tools collect and analyze, as well as their potential impact on members of a workforce, a cross-functional approach is a best practice.

Involving stakeholders from HR, legal, compliance, data privacy, etc., can have significant benefits not only at the procurement and implementation stages, but also throughout the lifecycle of these tools. This includes regular reviews of feature configurations, access rights, data use, decision making, and staying abreast of emerging legal requirements.

Governance considerations should include:

  • Purpose Limitation and Transparency: Clear internal documentation and employee notices should explain what is being monitored, why, and how the information will be used.
  • Access Controls and Role-Based Permissions: Not everyone needs full access to dashboards or raw monitoring data. Access should be limited to what’s necessary and tied to a specific function.
  • Training and Oversight: Employees who interact with the monitoring dashboards must understand the scope of permitted use. Misuse of the data—whether for personal curiosity, retaliation, or outside policy—should be addressed appropriately.
  • Data Minimization and Retention Policies: Avoid “just in case” data collection. Align retention schedules with actual business need and regulatory requirements.
  • Ongoing Review of Vendor Practices: Some vendors continuously add or enable new features that may shift the risk profile. Governance teams should review vendor updates and periodically reevaluate what’s enabled and why.

A Tool, Not a Silver Bullet

Used thoughtfully, employee management platforms can be a valuable part of a company’s compliance and productivity strategy. But they are not “set it and forget it” solutions. The insights they provide can only be trusted—and legally defensible—if there is strong governance around their use.

Organizations must manage not only their employees, but also the people and tools managing their employees. That means recognizing that tools like these sit at the intersection of privacy, ethics, security, and human resources—and must be treated accordingly.