A recent Inc. article highlights an unsettling controversy involving Delve, a Y Combinator-backed compliance startup, and allegations that strike at the heart of how organizations rely on SOC 2 (System and Organization Controls) reports, which evaluate an organization’s internal controls over security, availability, and privacy.

According to the report, a whistleblower investigation alleges that Delve generated fraudulent audit reports, fabricated evidence of controls, and created the appearance of compliance for hundreds of customers. Delve has disputed aspects of these claims, and the situation is still unfolding. Regardless of the ultimate outcome, the incident offers an important—and uncomfortable—lesson for organizations that rely on SOC 2 reports as part of vendor due diligence.

Hopefully Not the Norm…

Let’s start with an important point: there is no way to know how widespread these practices are in the vendor management space. We suspect the alleged conduct is not the norm. The SOC framework, when properly executed, remains a widely trusted and valuable tool as part of the process for assessing controls.

But “not the norm” is not the same as “impossible,” and there may indeed be critical and material gaps that SOC 2 reports do not adequately address, whether by design or through inadvertence. When managing cybersecurity risks—particularly where third-party vendors are involved—low-probability events can still carry high-impact consequences.

What the Allegations Reveal About Systemic Risk

The Delve situation, at its core, is not just about one company. It exposes structural weaknesses in how SOC 2 reports are often consumed:

  • Organizations may accept reports without scrutinizing scope or methodology.
  • Procurement teams may prioritize speed and cost of certification over rigor, particularly when the vendor has a strong reputation or “must know what they are doing!”
  • Stakeholders may assume that a SOC 2 report equals real-time security assurance.

So, while organizations may have difficulty assessing a SOC 2 or similar report on its face, there are reasonable steps organizations can and should be taking to probe the representations in such reports. That effort, again, can and should correspond to the risk the vendor presents to the organization, a determination based on several factors, including the nature and volume of the data processed.

Key Questions Organizations Should Be Asking

Organizations need to shift from passive receipt to active evaluation of SOC 2 reports. These reports should trigger questions including:

  • What is actually in scope?
    Are the systems and services you depend on covered in the report—or carved out?
  • When did the testing occur?
    How stale is the observation period relative to current operations?
  • What has changed since the report was issued?
    New infrastructure, new security team, new vendors, new risks?
  • How independent was the audit?
    Who performed it—and did they have any evident conflicts of interest?
  • Do the findings make sense?
    “Zero incidents” across dozens (or hundreds) of organizations should invite scrutiny, or at least curiosity, not comfort.
  • What ongoing assurance exists?
    Is there continuous monitoring—or just a static document?
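
For teams that track vendor reviews in a structured way, the questions above can be translated into a handful of fields and simple checks. The sketch below is purely illustrative; the field names, the 12-month staleness threshold, and the idea of reducing the review to flags are our own assumptions, not an audit standard.

```python
# Illustrative sketch: turning the SOC 2 review questions above into simple flags.
# Field names and thresholds (e.g., 12 months for "stale") are assumptions, not a standard.

from datetime import date

def soc2_review_flags(report: dict, today: date) -> list[str]:
    """Return follow-up items for a SOC 2 report summarized as a dict."""
    flags = []
    if not report.get("covers_services_we_use", False):
        flags.append("Scope: the systems/services we rely on may be carved out")
    months_old = (today.year - report["period_end"].year) * 12 + (today.month - report["period_end"].month)
    if months_old > 12:
        flags.append(f"Staleness: observation period ended about {months_old} months ago")
    if report.get("auditor_conflict", False):
        flags.append("Independence: possible auditor conflict of interest")
    if report.get("reported_incidents", 0) == 0:
        flags.append("Findings: zero reported incidents; probe rather than take comfort")
    if not report.get("continuous_monitoring", False):
        flags.append("Assurance: static document only; no ongoing monitoring")
    return flags

example_report = {
    "covers_services_we_use": True,
    "period_end": date(2025, 6, 30),
    "auditor_conflict": False,
    "reported_incidents": 0,
    "continuous_monitoring": False,
}
print(soc2_review_flags(example_report, today=date(2026, 9, 1)))
```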

These are not theoretical concerns. As some observers have noted, if compliance attestations are flawed, liability may ultimately sit with the organizations that relied on them.  

We recently explored many of these themes on our We Get Privacy podcast – Moving Beyond Checkbox Diligence with SOC Reports – where we were joined by Eric Ratcliffe of 360 Advanced, an auditing firm that performs SOC 2 audits. One of the key takeaways: SOC 2 reports must be interpreted, not simply collected.

In a world of automation and AI-enabled compliance tooling, the temptation is to move faster—to treat certification as a milestone rather than a process. The Delve allegations suggest that mindset can create blind spots.

The ERISA Angle: Fiduciary Duty Still Applies

For ERISA plan fiduciaries, the implications are even more direct. The duty of prudence may require more than obtaining a SOC 2 report. Plan fiduciaries should be evaluating:

  • what the report actually covers,
  • whether controls align with plan risks,
  • gaps and inconsistencies, and
  • ongoing monitoring of risks to plan data, not one-time diligence.

Simply collecting a SOC 2 report—without evaluating its substance—may not satisfy that obligation of prudence.

The Bottom Line

SOC 2 reports remain an important tool. But they are just that—a tool.

The Delve incident is a reminder that:

  • A SOC 2 report is a point-in-time snapshot, not a guarantee
  • Not all reports are created equal
  • And most importantly, trust without verification is not risk management

Organizations should not abandon SOC 2 reports—but they should stop treating them as the finish line. Instead, they should be the beginning of a deeper conversation about risk, controls, and accountability.

When assisting businesses with the commercial aspects of the California Consumer Privacy Act (CCPA), we advise them that this law, despite having “consumer” in its name, also applies to data relating to job applicants, employees, contractors, and other California residents. Some are surprised, but we then get to work addressing the nuanced issues that arise because certain CCPA provisions do not neatly fit the employment relationship.

Fortunately, last month, the California Privacy Protection Agency (CPPA) issued an invitation for preliminary comments on potential updates to CCPA regulations addressing notices and disclosures and the handling of employee data. So, if you have questions or concerns about the CCPA’s application to employment information, you can submit that feedback by May 20.

The CPPA is considering whether to amend existing regulations or adopt new rules governing privacy notices (e.g., privacy policies, notices at collection, and rights notices) and their application to workforce data.  In short, the CPPA is seeking stakeholder input on both consumer-facing disclosures and employment lifecycle data practices, including hiring, active employment, and offboarding. Notably, the agency is offering this opportunity not only to businesses, but also to employees, applicants, and other consumers.

Key Areas for Consideration

Employee Notice Timing and Delivery: The CPPA asks when and how employees receive notices (e.g., at hiring, during employment, or at offboarding), highlighting uncertainty around optimal timing and format for workforce-specific disclosures.

Application of CCPA Rights in the Employment Context: The CPPA also is seeking input on a pain point for employers, namely managing the exercise of consumer rights under the CCPA. This includes questions about applicants’ and employees’ experiences exercising access, deletion, or correction rights, suggesting a need for clearer rules on scope, verification, and operational workflows for HR data. An example of one question:

Have you exercised your CCPA rights as a job applicant or employee?

a. Describe your experience exercising your rights.

b. Describe any challenges you experienced when exercising your rights.

c. Do you have any suggestions on how to improve the experience?

In some cases, employers face challenges with the nature, scope, and purpose of such consumer rights requests from applicants and employees (including former employees as well as independent contractors).

Oversight of Service Providers and Contractors: The CPPA is probing how businesses monitor vendors’ compliance (e.g., audits, testing), indicating potential future guidance on accountability frameworks and due diligence expectations in the employment data ecosystem.

As noted, the CPPA is accepting preliminary comments through May 20, 2026, and feedback at this stage may shape future proposed regulations. Contact us if you would like to discuss how these developments may impact your organization or are interested in submitting comments to help shape the regulatory process to address your business needs.

Every so often a law that was passed years ago quietly becomes a present-day compliance reality. Section 24220 of the 2021 Infrastructure Investment and Jobs Act is one of those laws. Tucked into an eleven-hundred-page infrastructure bill with little public debate, the “kill switch law,” as it has come to be known by some, awaits implementing regulations. The law has triggered efforts in Congress to defund it, as well as plenty of hand-wringing over the privacy and data governance questions that businesses, fleet operators, and their legal counsel are trying to answer before the technology becomes standard equipment in new vehicles.

What the Law Actually Requires

Section 24220 directs the National Highway Traffic Safety Administration (NHTSA) to require that all new passenger vehicles be equipped with what the statute calls “advanced drunk and impaired driving prevention technology.” In practical terms, the law contemplates two types of systems:

  • A passive performance-monitoring system that continuously observes a driver’s behavior and restricts or prevents vehicle operation if the system determines the driver may be impaired; or
  • A blood-alcohol detection system that prevents or limits operation when BAC meets or exceeds the legal limit of 0.08%.

Manufacturers can deploy either type, or a combination. The technology could involve cameras monitoring eye movement, sensors analyzing steering and braking patterns, or touch-based biometric readers built into the steering wheel or ignition surface. It also could leverage AI. NHTSA is still finalizing the technical standards — a detail that matters, because the specific data collection methods will drive (no pun intended) privacy and security compliance. Notably, many of these features and capabilities – often embedded in devices referred to as “dashcams” – have already become popular in fleet vehicles.

The January 2026 Vote — and What It Means That It Failed

Earlier this year, Representative Thomas Massie introduced an amendment to a budget bill that would have defunded Section 24220 entirely, blocking NHTSA from spending any funds on implementation or enforcement. The amendment failed 229–201, with 57 Republicans joining 211 Democrats in opposition. Repeal legislation (the No Kill Switches in Cars Act, H.R. 1137) remains stalled. Barring an unexpected reversal, the mandate goes forward.

Why Privacy Lawyers Are Paying Attention

Despite concerns about “Big Brother” and references to Orwell’s novel, 1984, the statute does not give the government a remote kill switch. No federal agency can log into your vehicle and disable it. The technology would operate through onboard software, and the decision to restrict operation is made by the vehicle’s own algorithms — not by a government operator.

That distinction is real and legally significant. But it does not exhaust the privacy concerns, not by a long shot. A decision is still being made other than by the driver to restrict operation of the vehicle.

Whether the system uses cameras, eye-tracking, biometrics, or driving pattern analysis, it is continuously collecting sensitive behavioral and physiological data about the driver. That data is generated, stored — somewhere — and potentially transmitted. To whom? Under what retention schedule? With what security controls? The statute is silent. NHTSA’s rules are not yet final. The answers will depend heavily on what manufacturers build and what their privacy policies and terms of service say.

Additionally, new vehicles are networked, able to connect to manufacturer cloud infrastructure, and many connect to insurers, fleet management platforms, and dealership service systems. An open question raised during the funding debate: could insurance companies or law enforcement access impairment event data without the driver’s knowledge or a warrant? The Fourth Amendment analysis in that context is genuinely unsettled.

Beyond privacy concerns, some have raised the potential for fleet-wide attacks:

Unlike traditional vehicle theft or individual hacks, networked kill switch systems create the potential for mass-casualty cyberattacks. Research from Georgia Tech has modeled scenarios where:

  • Simultaneously activating kill switches on millions of vehicles could shut down entire transportation networks
  • Supply chain disruptions from disabled commercial vehicles could affect food, fuel, and medical supply delivery
  • A Consumer Watchdog report estimated a fleet-wide hack could cause approximately 3,000 deaths from a single coordinated breach

The “kill switch jail” problem.

The statute contains no provision defining how a driver challenges or overrides a lockout once the system flags impairment. There is no appeal mechanism, no defined waiting period, no human review. A false positive — a sober driver whose steering pattern triggers the algorithm — could leave that person stranded with no clear recourse, raising significant liability, worker safety, and consumer protection concerns.

The fleet and employer liability problem.

Businesses that operate vehicle fleets — delivery companies, field services organizations, transportation providers — will have vehicles generating continuous data streams about their drivers, raising employment privacy considerations: What does the employer know? When do they know it? What state monitoring disclosure obligations apply? Will the technology trigger policy and consent obligations, such as in states with strong biometric privacy laws? Are risk assessments required?

What Businesses Can Be Doing Now

As the NHTSA continues its work on implementing regulations, here are a few action items worth considering:

  • If your organization currently leverages similar technology in vehicles used in the business, take a look at Dashcams: There’s More Risk To Manage Than You’d Expect.
  • Fleet operators should assess what data their vehicle management agreements and manufacturer privacy policies say about impairment event data — specifically who receives it, how long it is retained, and under what circumstances it is disclosed to third parties including law enforcement. Existing driver monitoring policies may need to be reviewed and updated.
  • HR and employment counsel should evaluate whether the passive monitoring and biometric data components of compliant vehicles trigger state-level employee monitoring notification laws (several states require advance notice before monitoring employees’ electronic activity) or biometric data statutes like Illinois BIPA. The analysis will vary by jurisdiction, but the risk of inaction is higher in states with private rights of action.
  • Privacy program managers should flag newly acquired vehicles as a data asset in enterprise data inventories. Vehicle-generated data — particularly behavioral and biometric data about identified individuals — may fall within the scope of state consumer privacy laws depending on how it is collected, processed, and shared.
  • Risk and compliance teams should watch NHTSA’s rulemaking closely. The final technical standards will determine which specific data elements are collected and by what methods.

The Broader Trend

Section 24220 is not an isolated development. It reflects a broader pattern of embedded sensors and passive monitoring becoming standard infrastructure in physical environments — vehicles, workplaces, commercial buildings — generating continuous data streams about individuals going about their ordinary daily activities. The challenge, which legislatures and regulators, and businesses, are only beginning to confront, is how to govern systems that never stop collecting.

The U.S. Department of Health and Human Services (HHS) Office for Civil Rights (OCR) recently announced a HIPAA enforcement action against an employer-sponsored group health plan. The action resulted in a payment to HHS of $245,000 and a two-year corrective action plan. While HIPAA enforcement is common in the healthcare sector, actions directly against employer-sponsored group health plans are not as common. This case, coupled with DOL guidance for ERISA fiduciaries concerning cybersecurity, underscores a growing regulatory focus not only on traditional healthcare entities, but also on the plans and ecosystems maintained by employers under ERISA.

Check out the full post in our Benefits Law Advisor.

In recent years, many organizations have installed dashcams in their vehicles to improve safety and compliance, reduce costs, and better understand what’s happening in the field.  Dashcams can be extremely useful for these purposes, giving organizations visibility into risky driver behaviors and misuse of company property.  They can also lower insurance costs and provide valuable evidence in litigation.  To provide these benefits, though, dashcams collect a lot of data—including data organizations didn’t intend to collect and/or that triggers legal obligations they didn’t intend to assume.

Why Organizations Are Using Dashcams

Dashcams serve a number of functions.  For example:

  1. Their use can lower insurance costs.  The video and audio recordings dashcams collect can help favorably resolve disputes, and their AI-powered driver behavior monitoring capabilities can help flag risky activity before it results in costly incidents.
  2. Dashcams can also help organizations monitor compliance with internal policies (e.g., no phone use while driving) and external requirements (e.g., hours-of-service rules in regulated industries).  They also create a record that can be useful in audits or investigations.
  3. When accidents occur, dashcam footage can help clarify fault, rebut inaccurate claims, and, in some cases, prevent litigation altogether or significantly reduce exposure.
  4. Many dashcams now incorporate AI tools that evaluate driver behavior and generate performance scores.  For some organizations, this information influences coaching, discipline, promotion, and compensation decisions.

The Risks Dashcams Pose

To deliver these benefits, dashcams collect and process significant volumes of data, the management of which can be challenging.  For instance:

  1. In certain jurisdictions, prior consent is required to record audio of communications.  Organizations that deploy dashcams without a clear process for obtaining and documenting consent may find themselves out of compliance.
  2. Some dashcams use facial recognition or similar technologies to identify drivers or monitor attentiveness.  Collection of this data can trigger notice and consent obligations—e.g., in California, Colorado, Illinois, and Texas—as well as obligations to maintain reasonable safeguards to protect the data from unauthorized access or acquisition.
  3. Dashcams capture extraneous information, such as employees’ discussions about medical conditions, religious beliefs, sexual orientation, or legal off-duty activities (like drinking or gambling), or the fact that, while using the vehicle, they visited their doctor or attended their AA meeting.  Collection of this information can complicate employment decisions—e.g., by imputing to an employer knowledge of an employee’s protected characteristics—and heighten the risk of invasion of privacy claims.
  4. Dashcams increasingly use AI to evaluate driver behavior or generate performance metrics.  In certain jurisdictions (e.g., California, Colorado, Illinois, New York City), the use of AI-generated performance data may trigger notification, risk assessment, and other compliance requirements.
  5. Dashcams are typically deployed and managed by third-party vendors, which means the data they collect is often processed outside the employer’s information systems.  Nevertheless, the employer remains responsible for the protection and proper handling of that data.  If the vendor experiences a breach, or misuses the data, impacted employees and/or regulators will likely seek to hold the employer—not just the vendor—accountable.

How To Manage Dashcam Risk

For many organizations, dashcams are a major value add.  And the good news is that the risks their use presents—though significant—are manageable, provided you have a solid program in place for managing them.

Below are some practical steps to consider:

Inventory Your Technology

  • Identify what dashcams are in use across the organization
  • Understand what features are enabled (e.g., video, audio, AI, facial recognition, geolocation tracking, etc.)
  • Confirm the approved use cases

Map the Data

  • What data is being collected?
  • Where is it stored (including vendor environments)?
  • Who has access to it, both internally and externally?
  • How long is it retained?
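
One lightweight way to capture the answers to those questions consistently is a simple inventory record per category of dashcam data. The structure below is a minimal sketch; the field names are our own suggestions and should be adapted to whatever inventory format your organization already uses.

```python
# Minimal, illustrative inventory record for dashcam-generated data.
# Field names are suggestions only; adapt to your existing data-mapping format.

from dataclasses import dataclass

@dataclass
class DashcamDataRecord:
    data_type: str                 # e.g., "cabin audio", "road-facing video", "AI driver score"
    storage_locations: list[str]   # include vendor and cloud environments, not just internal systems
    internal_access: list[str]     # roles or teams with access
    external_access: list[str]     # vendors, insurers, or other third parties
    retention_period_days: int     # how long the data is actually kept

inventory = [
    DashcamDataRecord(
        data_type="cabin audio",
        storage_locations=["dashcam vendor cloud"],
        internal_access=["fleet safety team"],
        external_access=["dashcam vendor"],
        retention_period_days=90,
    ),
]
print(inventory[0])
```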

Address Notice and Consent Requirements

  • Implement clear notice to drivers and passengers
  • Obtain consent where required (e.g., before recording audio or collecting biometric data)

Review AI Use

  • Determine whether AI is being used to evaluate employees
  • Assess whether applicable AI laws impose additional obligations
  • Confirm that outputs are being used appropriately in employment decisions

Update Policies and Training

  • Develop or revise policies addressing dashcam use
  • Train employees on what is being collected and why
  • Provide guidance on appropriate use of company vehicles and equipment

Minimize Data Collection and Retention

  • Disable unnecessary features (e.g., audio, facial recognition) where possible
  • Limit retention periods to what is actually needed
  • Avoid collecting data “just in case it’s useful at some point”

Manage Vendor Risk

  • Conduct diligence on dashcam vendors’ privacy and security practices
  • Confirm where and how data is stored, processed, and transmitted
  • Understand whether the vendor uses data for product improvement, AI training, or other secondary purposes
  • Put clear contractual restrictions in place governing data use, retention, disclosure, breach notification, and risk allocation
  • Require appropriate security controls (e.g., encryption, access controls, incident response obligations)
  • Periodically reassess vendors to confirm ongoing compliance

A putative class action filed in December 2025 in the U.S. District Court for the Central District of Illinois offers a reminder that AI meeting assistant and transcription tools can carry significant legal exposure when organizations deploy them without appropriate governance guardrails in place. It also underscores the importance of applying strong governance principles when evaluating and deploying these and similar technologies.

What the Fireflies.AI Complaint Alleges

The plaintiff in Cruz v. Fireflies.AI Corp., No. 3:25-cv-03399 (C.D. Ill.), alleges that she participated in a virtual meeting hosted by an Illinois nonprofit organization that had enabled Fireflies.ai — a popular AI meeting assistant that automatically joins Zoom, Microsoft Teams, and Google Meet sessions to record, transcribe, and analyze conversations. She alleges she never created a Fireflies account, never agreed to its Terms of Service, and never provided any written consent authorizing the collection of her biometric data.

Several states, most notably Illinois under its Biometric Information Privacy Act (“BIPA”), regulate the collection and processing of biometric identifiers and information, creating significant compliance and litigation risk. Check out our summary of that regulation.

The crux of the BIPA claims is straightforward: Fireflies’ “Speaker Recognition” feature, marketed as able to identify different speakers in meetings and audio files, necessarily generates voiceprints — biometric identifiers expressly covered by BIPA. The complaint alleges Fireflies violated BIPA in three distinct respects:

  1. failing to maintain and publicly publish a retention schedule and destruction policy for biometric data;
  2. failing to inform participants in writing that voiceprints were being collected, or of the purpose and duration of that collection; and
  3. collecting voiceprints without obtaining a written release from participants — including non-account holders who were simply present in recorded meetings.

The plaintiff seeks statutory damages of $1,000 per negligent violation and $5,000 per reckless or intentional violation, plus attorneys’ fees and injunctive relief.
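
To see how quickly per-violation statutory damages of this kind can add up for a meeting assistant that records many attendees across many meetings, here is a rough, back-of-the-envelope sketch. The dollar figures come from the complaint’s demand as described above; the meeting and attendee counts, and the assumption of one violation per attendee per meeting, are hypothetical and are not a prediction of how a court would count violations.

```python
# Hypothetical illustration of how per-violation BIPA exposure can compound.
# Dollar amounts reflect the complaint's demand ($1,000 negligent / $5,000 reckless);
# meeting counts, attendee counts, and the one-violation-per-attendee-per-meeting
# assumption are made up for illustration only.

NEGLIGENT_PER_VIOLATION = 1_000
RECKLESS_PER_VIOLATION = 5_000

def estimated_exposure(meetings: int, avg_attendees: int) -> dict:
    violations = meetings * avg_attendees
    return {
        "violations": violations,
        "negligent_total": violations * NEGLIGENT_PER_VIOLATION,
        "reckless_total": violations * RECKLESS_PER_VIOLATION,
    }

# Example: 200 recorded meetings in a year, 5 attendees each (hypothetical).
print(estimated_exposure(meetings=200, avg_attendees=5))
# {'violations': 1000, 'negligent_total': 1000000, 'reckless_total': 5000000}
```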

Why This Matters For and Well Beyond the Fireflies Litigation

In the case of AI meeting and transcription tools, consider the following use cases along with the potential legal and other risks if, in fact, the tools are capturing biometric information:

  • Trainings. When multiple employees use the same workstation or conference room to join trainings using an AI transcription tool, voiceprints of each participant may be captured. Unless each individual has provided written consent, exposure compounds with each meeting and each attendee.
  • Witness and investigation interviews. HR professionals and corporate investigators increasingly use AI transcription tools to document and summarize interviews.
  • Applicant interviews. Talent acquisition teams using AI notetakers during candidate interviews may be capturing the voiceprints of applicants who are unlikely to have been informed their biometric data is being processed.
  • Patient and client encounters. Healthcare providers and other licensed medical professionals using AI transcription in clinical or counseling settings face layered risk — HIPAA, state privacy laws, and where applicable, biometric information protections.

Even beyond meeting assistant and transcription tools, as similar technologies are embedded into a myriad of devices and applications, questions about the collection of biometric information arise. Examples include performance management platforms and AI glasses, both of which can capture and record audio and video.

Clearly, the allegations in Cruz highlight a risk that extends far beyond any one technology, one use case, or the law in one state. AI meeting and transcription tools, like many emerging technologies, can provide substantial productivity and other benefits to an organization; compelling evidence of this is their rapid implementation and deployment across a wide range of use cases. Whether Fireflies.AI actually collects biometric identifiers or information remains to be seen. What we do know is that rolling out technology without appropriate due diligence can expose an organization to significant compliance and litigation risk.

Governance Takeaways

  1. Adopt a team approach to due diligence. The data privacy and security challenges presented by complex and easily adaptable technologies cannot be solved by the IT department alone. Technology safeguards are critical, but they do not replace strong administrative, physical, and organizational controls, nor do they address the nuances certain applications bring. Organization executives, legal counsel, HR professionals, risk, and other key stakeholders should be at the table to ensure the right questions are being asked.
  2. Know your legal and contractual limitations, and the various ways they could apply. An organization’s compliance team need not and should not be comprised solely of lawyers. But it should maintain a keen awareness of the various legal and contractual limitations on the use of certain technologies, and their potential use cases.  
  3. Change management. It is increasingly common to vet and deploy a technology for one application, only to discover that it can easily address a number of other unrelated problems. Or, the vendor developing the technology significantly expands its functionality, which could be very beneficial to the organization beyond its current use. Will those pursuing the additional use cases or functionalities reopen the due diligence analysis? They should.
  4. Write it down. Even if the organization is checking off items 1-3 above, building some structure around the process can help to ensure it is ongoing and consistent.

On March 20, 2026, Oklahoma’s Governor signed Senate Bill (SB) 546, which establishes a consumer data privacy law for the state. Oklahoma’s law takes effect January 1, 2027.

To whom does the law apply?

The law applies to controllers (or processors) operating in the state and handling data for:

  • at least 100,000 consumers; or
  • at least 25,000 consumers, while earning over half of their revenue from selling personal data.
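
For readers who think in code, the two thresholds can be expressed as a simple check. This sketch reflects only the numeric thresholds described above; it ignores the exemptions discussed next and everything else that goes into a real applicability analysis.

```python
# Simple sketch of SB 546's numeric applicability thresholds as described above.
# Ignores the statutory exemptions and all other scoping questions; illustration only.

def sb546_thresholds_met(consumers: int, revenue_share_from_selling_data: float) -> bool:
    return (
        consumers >= 100_000
        or (consumers >= 25_000 and revenue_share_from_selling_data > 0.5)
    )

print(sb546_thresholds_met(consumers=120_000, revenue_share_from_selling_data=0.0))  # True
print(sb546_thresholds_met(consumers=30_000, revenue_share_from_selling_data=0.6))   # True
print(sb546_thresholds_met(consumers=30_000, revenue_share_from_selling_data=0.2))   # False
```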

There are certain exemptions for state agencies and their service providers, financial institutions covered by the Gramm-Leach-Bliley Act, entities covered by HIPAA/HITECH, non-profit organizations, and institutions of higher education.

Who is protected by the law?

A consumer protected under the legislation is defined as an individual who is a resident of Oklahoma, acting only in an individual or household capacity. A consumer does not include a person acting in a commercial or employment context.

What data is protected by the law?

The law protects “personal data,” which means any information, including sensitive data, which is linked or reasonably linkable to an identified or identifiable individual.

“Sensitive data” is given additional protection and includes the following:

  • Personal data revealing racial or ethnic origin
  • Religious beliefs
  • Mental or physical health diagnosis
  • Sexual orientation
  • Citizenship or immigration status
  • Genetic or biometric data for uniquely identifying an individual
  • Personal data collected from a known child
  • Precise geolocation data.

What are the rights of consumers?

Under the law, consumers have the following rights:

  • To confirm whether a controller is processing their personal data
  • To correct inaccurate personal data
  • To delete personal data maintained by the controller
  • For data available in a digital format, to obtain a copy of their personal data that the consumer previously provided to the controller in a portable and, to the extent technically feasible, readily usable format that allows the consumer to transmit the data to another controller without hindrance
  • To opt out of the processing of personal data for targeted advertising, sale, or certain profiling

Controllers must respond within 45 days to consumers’ requests under the law, with one additional 45-day extension when reasonably necessary. If declining to act, the controller must explain why and provide appeal instructions.
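
As a simple illustration of how those response windows stack, the sketch below computes an initial due date and a fully extended due date from the date a request is received. It assumes plain calendar days with no tolling or cure periods, which is our simplifying assumption rather than a reading of how the final rules will measure the period.

```python
# Sketch of the 45-day response window plus one 45-day extension described above.
# Assumes plain calendar days with no tolling; illustration only.

from datetime import date, timedelta

RESPONSE_DAYS = 45
EXTENSION_DAYS = 45

def response_deadlines(request_received: date) -> dict:
    initial_due = request_received + timedelta(days=RESPONSE_DAYS)
    extended_due = initial_due + timedelta(days=EXTENSION_DAYS)
    return {"initial_due": initial_due, "extended_due": extended_due}

print(response_deadlines(date(2027, 2, 1)))
# {'initial_due': datetime.date(2027, 3, 18), 'extended_due': datetime.date(2027, 5, 2)}
```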

What obligations do controllers have?

Similar to other state comprehensive privacy laws that have been enacted over the last several years, controllers in Oklahoma must, among other things:

  • Comply with data minimization principles, including limiting the collection of personal data to what is adequate, relevant, and reasonably necessary;
  • Perform data protection assessments relating to certain data processing activities, including processing sensitive data;
  • Provide a reasonably accessible and clear privacy notice to consumers;
  • Include certain provisions in agreements with processors concerning personal data;
  • Maintain reasonable administrative, technical, and physical security practices;
  • Avoid processing for incompatible purposes without consent;
  • Avoid unlawful discrimination and discriminating against consumers for exercising their rights; and
  • Obtain consent before processing sensitive data and comply with COPPA for known children.

How is the law enforced?

The Attorney General has exclusive authority to enforce violations of the legislation. Violators of the law may incur a fine of up to $7,500 per violation. The law makes clear that it shall not be construed as providing a basis for a private right of action for a violation of this law.

If you have questions about Oklahoma’s new privacy law or related issues, please reach out to a member of our Privacy, Data, and Cybersecurity practice group to discuss.

U.S. organizations have long focused on federal requirements governing international data transfers. But a growing wave of state enforcement—particularly in Florida and Texas—signals that regulators are increasingly scrutinizing how companies move sensitive data outside the United States, especially when foreign adversaries may be involved. Recent developments suggest organizations should reassess their data flows, vendor relationships, and ownership structures to understand where sensitive information may ultimately land.

Federal Rule Raises the Stakes on Cross-Border Data Transfers

The Department of Justice (DOJ) took a significant step in 2024 when it began implementing regulations restricting certain outbound transfers of sensitive U.S. personal data to entities linked to “countries of concern,” including China, Iran, and North Korea. The rule targets transfers of large volumes of sensitive data—such as precise location data, biometric identifiers, genomic data, and other categories—where access by foreign adversaries could pose national security risks.

As discussed in our earlier analysis of the rule, the framework focuses on transactions involving “covered data” and “covered persons,” and in some cases prohibits transfers outright or requires companies to implement security controls, diligence processes, and recordkeeping obligations. Organizations subject to the rule must examine their vendor relationships, data brokerage arrangements, and service provider agreements to determine whether the transfers fall within the regulation’s scope.

Yet while the DOJ rule represents a significant federal development, enforcement activity suggests that federal regulators are only part of the story.

States Filling the Enforcement Gap

States are increasingly stepping into what some see as a federal enforcement gap. According to recent reports, states have launched more than a dozen investigations or lawsuits related to U.S. consumer data transfers to China or other foreign actors. These actions have targeted companies across multiple sectors—not just traditional data brokers, but also firms handling consumer electronics, genetic data, and online marketplaces.

State regulators often lack explicit authority over national security concerns. As a result, they are using other tools, including consumer protection laws, unfair or deceptive practices statutes, and state privacy statutes, to investigate companies whose data practices may expose Americans’ information to foreign entities.

Texas has been among the most aggressive jurisdictions, filing actions against several companies, illustrating how states may combine allegations related to privacy practices with broader consumer protection claims. Florida, meanwhile, is emerging as another focal point for state enforcement.

Florida Launches Dedicated Unit Targeting Foreign Data Risks

In February 2026, Florida Attorney General James Uthmeier announced the creation of a new enforcement team dedicated to investigating foreign access to Americans’ data. The initiative—called the Consumer Harm from International and Nefarious Actors (CHINA) unit—will pursue both civil and criminal investigations involving foreign corporations’ data practices.

The new unit plans to focus heavily on companies that collect sensitive personal information, including biometric and demographic data. Health care organizations, in particular, may face heightened scrutiny given the sensitivity of the information they handle.

According to the attorney general’s office, the unit will ramp up subpoenas, investigations, and lawsuits under Florida consumer protection laws. The effort is designed not only to address potential risks within Florida but also to serve as a model for other states considering similar initiatives.

Florida’s Investigation Into Lorex Signals Broader Scrutiny

Florida has already begun investigating companies suspected of exposing consumer data to foreign surveillance risks. One notable example is Lorex Corp., a surveillance camera manufacturer that has faced investigations and litigation in several states over alleged connections to Chinese ownership.

As part of Florida’s inquiry, authorities reportedly compelled the company to produce extensive information about its corporate structure, contracts, and software architecture. The investigation highlights a growing focus on how foreign ownership structures or technological dependencies could create pathways for sensitive data to leave the United States.

For organizations, the Lorex matter underscores a key compliance issue: regulators are looking beyond privacy notices and security practices to evaluate who ultimately has access to data—including corporate affiliates, overseas vendors, and parent companies.

Florida’s Offshore Data Law Adds Another Layer

Florida has also enacted legislation restricting certain transfers of health data outside the United States, sometimes referred to as the state’s “Offshore Data” restrictions. The law prohibits the storage of personal health information by healthcare providers using certified electronic health record technology (CEHRT) outside the United States, its territories, or Canada.

When combined with the DOJ rule and the state’s new enforcement unit, these laws create a regulatory environment in which organizations operating in Florida—or handling data about Florida residents—may face multiple overlapping compliance obligations.

Practical Takeaways for Organizations

These developments highlight a critical shift in how regulators view cross-border data transfers. Organizations should consider taking several steps:

  • Map data flows. Companies should understand where sensitive data is stored, processed, and transmitted—including by vendors and subcontractors.
  • Assess vendor and ownership risks. Regulators are paying closer attention to foreign ownership interests, corporate affiliations, and data access rights.
  • Review contracts and technical controls. Agreements with service providers should address cross-border data transfers and incorporate appropriate safeguards.
  • Monitor state developments. State enforcement efforts are expanding rapidly and may reach companies that previously focused primarily on federal requirements.

The combined pressure from federal regulators and an increasingly active group of state attorneys general suggests that scrutiny of foreign data transfers is likely to intensify. As states continue to explore creative ways to regulate cross-border data flows, organizations may find that compliance requires not only understanding where their data goes—but also who ultimately controls it.

In May 2023, Florida enacted a significant change to its health data laws. Senate Bill 264 amended the Florida Electronic Health Records Exchange Act to restrict where certain patient data can be stored and accessed. Codified at Section 408.051(3), the change mandates that:

In addition to the requirements in 45 C.F.R. part 160 and subparts A and C of part 164, a health care provider that utilizes certified electronic health record technology must ensure that all patient information stored in an offsite physical or virtual environment, including through a third-party or subcontracted computing facility or an entity providing cloud computing services, is physically maintained in the continental United States or its territories or Canada. This subsection applies to all qualified electronic health records that are stored using any technology that can allow information to be electronically retrieved, accessed, or transmitted.

In other words, the law requires healthcare providers using certified electronic health record technology (CEHRT) to ensure that patient information stored outside their facilities—whether in a physical data center, virtual environment, or cloud service—is maintained only in the continental United States, its territories, or Canada.

Note this compliance requirement also comes with a statutory obligation (Section 408.810(14)) for any licensee under Chapter 408 of the Florida Public Health Law to sign an affidavit of compliance upon initial application and future renewals:

The licensee must sign an affidavit at the time of his or her initial application for a license and on any renewal applications thereafter that attests under penalty of perjury that he or she is in compliance with s. 408.051(3). The licensee must remain in compliance with s. 408.051(3) or the licensee shall be subject to disciplinary action by the agency.

Emphasis added.

This amendment makes clear its intent that the new rule go beyond the requirements of the well-known federal privacy and security rules for healthcare providers under the Health Insurance Portability and Accountability Act (HIPAA). HIPAA generally does not impose geographic restrictions on where protected health information (PHI) may be processed or stored, so long as appropriate safeguards and agreements are in place. Likely considered a more stringent protection for PHI, the Florida amendment would appear to survive HIPAA preemption.

The law applies broadly across the healthcare sector, including hospitals, clinics, ambulatory surgical centers, home health agencies, hospices, nursing homes, labs, pharmacies, and many individual licensed practitioners—from physicians and nurses to therapists and pharmacists.

And, this restriction does not stop with covered providers. It extends to vendors and subcontractors that support healthcare operations. Managed service providers, IT vendors, scheduling support services, and other contractors that store or access patient information must also ensure that the data remains within the permitted geographic boundaries.

The requirements in the law also are not limited to certain types of patient information, such as diagnoses or mental health status. The rule extends to all patient information.

For many covered entities, the operational challenge is real. Disaster recovery environments, backup systems, and globally distributed cloud infrastructure often rely on servers outside the United States. Architectures designed for redundancy or resilience may now create compliance issues under Florida’s law.
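
For providers and vendors using common cloud platforms, even a lightweight scripted check of where storage resources actually live can surface issues early. The sketch below assumes AWS S3 and the boto3 SDK; the allowlist of region prefixes and the bucket-by-bucket approach are our illustrative choices, and a real audit would also need to reach backups, disaster recovery replicas, and vendor-managed environments.

```python
# Minimal sketch: flag S3 buckets whose region falls outside a US/Canada allowlist.
# Assumes AWS and boto3; a real audit must also cover replicas, backups, and any
# vendor- or subcontractor-managed storage. Illustration only.

import boto3

ALLOWED_REGION_PREFIXES = ("us-", "ca-")  # continental US and Canada regions (illustrative)

def flag_out_of_region_buckets() -> list[str]:
    s3 = boto3.client("s3")
    flagged = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        # get_bucket_location returns None (LocationConstraint) for us-east-1
        region = s3.get_bucket_location(Bucket=name)["LocationConstraint"] or "us-east-1"
        if not region.startswith(ALLOWED_REGION_PREFIXES):
            flagged.append(f"{name} ({region})")
    return flagged

if __name__ == "__main__":
    for item in flag_out_of_region_buckets():
        print("Review storage location:", item)
```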

Example: Healthcare providers often rely on vendors when handling investigations, such as for security incidents, and responding to data breaches. In some cases, providers may need to perform substantial data mining efforts to identify patients impacted by a breach. Third-party data mining vendors often offer substantial discounts when that work is performed outside the U.S. Incident response plans of covered Florida providers should serve as a reminder of where patient information needs to be stored.

Practically speaking, that means covered healthcare providers should be, at a minimum:

  • auditing where patient data is actually stored
  • reviewing vendor and subcontractor arrangements
  • updating contracts, BAAs, and data processing agreements to reflect storage restrictions
  • performing diligence on data location when onboarding new vendors

Florida’s move is also part of a larger trend. Regulators and policymakers are increasingly focused on data sovereignty and foreign access to sensitive health information. This amendment is an indicator of where state and federal regulation appears to be headed.

Some years ago, I listened to Richard Susskind speak about the “Future of Professions” and, in his view, how systems like AI might replace them. Indeed, the disruption he predicted largely has materialized in recent years, as many assess what impact AI will have on professional services and other knowledge-based occupations, such as attorneys, accountants, and healthcare professionals. The jury is still out, but while most believe these professions will not be eliminated entirely, there most certainly will be some impact, driving the need for adaptation to market realities.

Artificial intelligence chatbots are increasingly being deployed across industries — from healthcare portals to legal tech platforms to financial services. As these tools take on more substantive roles, lawmakers are beginning to push back. New York Senate Bill S7263, introduced last year by Sen. Gonzalez, would impose meaningful liability on businesses that allow chatbots to stray into licensed professional territory.

What the Bill Would Do

S7263 would add a new section to New York’s General Business Law targeting “proprietors” — defined as any person, business, organization, institution, or government entity that owns, operates, or deploys a chatbot to interact with users. Notably, third-party developers who merely license their chatbot technology to a proprietor are explicitly excluded from this definition, though that distinction carries its own implications (more on that below).

The bill draws a hard line around two categories of regulated conduct:

Licensed professions. The bill lists a broad set of professional fields governed under New York’s Education Law — including medicine, dentistry, optometry, psychology, chiropractic, pharmacy, nursing, physical therapy, and others. A chatbot that provides substantive responses, information, or advice that would constitute unlicensed practice of any of these professions could expose its deployer to civil liability.

Legal practice. Chatbots would also be prohibited from providing responses that would amount to practicing law without admission to the New York bar — a significant concern given the explosive growth of AI-powered legal research and document tools.

The Disclosure Requirement

Beyond limiting what chatbots can say, S7263 would impose an affirmative disclosure obligation on all proprietors: users must receive clear, conspicuous, and explicit notice that they are interacting with an AI chatbot. The notice must appear in the same language the chatbot is using and in a font no smaller than the largest text elsewhere on the page. In other words, burying a disclosure in fine print or a terms-of-service page won’t cut it.

Liability and Enforcement

The bill would create a private right of action, allowing individuals to sue directly for actual damages. If a court finds the violation was willful, the proprietor faces actual damages plus attorneys’ fees and court costs — a provision that significantly raises the stakes for deliberate non-compliance.

Critically, the bill explicitly states that a disclaimer alone is not a defense. Simply telling users they are talking to a bot does not shield a proprietor from liability if that bot is providing advice that crosses into licensed professional practice.

What Steps Would Deployers Need to Consider

If S7263 becomes law, organizations deploying customer-facing AI tools in New York should take several steps:

  • Audit chatbot scope. Review what questions your chatbot answers and whether any responses could be characterized as medical, legal, dental, psychological, or other licensed-professional advice. Restrict or redirect sensitive queries accordingly.
  • Implement robust disclosures. Design chatbot interfaces with prominent, plain-language notices that satisfy the font and language requirements in the bill.
  • Review vendor contracts. Even though third-party developers are excluded from the definition of “proprietor,” deployers should ensure their vendor agreements clearly address responsibility for chatbot behavior and include indemnification provisions.
  • Establish escalation paths. Build in clear handoffs to licensed professionals when users raise topics that fall within the bill’s restricted categories.

What Developers Should Consider

While S7263 would not directly impose liability on technology vendors and developers who license their systems to others, the bill creates downstream pressure that developers cannot ignore. Deployers will increasingly demand contractual assurances — and may seek to shift liability — when chatbot behavior triggers a claim. Developers should consider building configurable guardrails into their products that allow deployers to restrict professional-domain responses, and they should be transparent about the limitations of their systems in licensing documentation and product design.

The Bottom Line

If enacted, the law would establish that deploying AI in contexts involving regulated professional advice carries real legal risk — regardless of disclaimers. More broadly, this and other measures like it signal an effort by the professions to push back on technology that is changing the landscape for access to such services. Where this will end up remains unclear.