On May 1, 2025, the California Privacy Protection Agency (CPPA) issued a Final Order in one of its first public enforcement actions under the California Consumer Privacy Act (CCPA), imposing a fine of nearly $350,000 on the business.

An important takeaway from the Final Order: simply posting a privacy policy is not enough. Businesses must actively monitor, test, and verify that the tools supporting consumer rights are working — even when those tools are managed by third parties.

What Went Wrong?

The CPPA found multiple violations of the CCPA and its implementing regulations. Here are the most notable failures:

1. Non-Functioning “Cookie Preferences Center” Link

Like many retailers, the business used third-party tracking technologies on its website, such as cookies and pixels, to share data about consumers' online behavior (a category of personal information) with third parties. The business shared this data for purposes such as analytics and cross-context behavioral advertising. While the business told consumers they could opt out of the sharing of their personal information, the technical infrastructure of its website did not actually process consumers' opt-out elections. In short, opt-out elections simply were not processed correctly for a period of approximately 40 days.

According to the CPPA, the business

"would have known that Consumers could not exercise their CCPA right if the company had been monitoring its Website, but [the company] instead deferred to third-party privacy management tools without knowing their limitations or validating their operation."

2. Failure to Properly Identify Verifiable Requests and Overcollection of Verification Information

The business offered a webform to enable consumers to exercise several of their CCPA rights, including the right to opt out of the selling or sharing of personal information. However, using the webform to exercise any of those rights required consumers to provide certain personal information, including a picture of the consumer holding an "identity document." This approach created two problems: (i) it resulted in the collection of sensitive personal information (e.g., a driver's license) to make the request, and (ii) it failed to distinguish requests to opt out of the sale or sharing of personal information, which are not verifiable consumer requests and should not be subject to verification. In short, according to the CPPA, the webform collected more personal information than necessary for verifiable consumer requests and failed to authenticate consumers in a compliant manner, ultimately leading to complaints from consumers.

Practical Takeaways

This case illustrates the kind of avoidable but costly missteps that any business could make. Conducting an annual review of CCPA compliance, as required under the law, is an obvious step to help ensure ongoing compliance. But here are some more specific items to consider as well:

  • Test your links and forms regularly across devices and browsers. Don't assume that what's written in your privacy policy functions properly. (A minimal automated check is sketched after this list.)
  • Review webforms and verification procedures to ensure they correctly identify, route, and respond to verifiable consumer requests without collecting unnecessary personal data. Also, assess whether backend processes and training support the procedures outlined in online privacy policies.
  • Vet and monitor third-party vendors responsible for CCPA compliance tools. Require written assurances of compliance and retain the right to audit their systems and processes, while also checking to ensure the services provided are compliant.
  • Document your due diligence and monitoring to illustrate a focus on compliance. Mistakes happen, but the business can mount a stronger defense to allegations of non-compliance when it can show an ongoing effort to achieve compliance.
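
To make the first item concrete, here is a minimal sketch in Python of an automated link check, assuming hypothetical URLs for a cookie preferences center, an opt-out page, and a privacy webform. A passing check confirms only that each page loads; it does not verify that opt-out elections are actually honored downstream.

```python
# Minimal scheduled check that the consumer-rights links promised in a
# privacy policy actually resolve. All URLs are hypothetical placeholders.
import requests

PRIVACY_LINKS = {
    "cookie_preferences": "https://www.example.com/cookie-preferences",
    "do_not_sell_or_share": "https://www.example.com/do-not-sell",
    "privacy_webform": "https://www.example.com/privacy-request",
}

def check_links(links: dict, timeout: int = 10) -> list:
    """Return a list of link names (with details) that failed to load."""
    failures = []
    for name, url in links.items():
        try:
            resp = requests.get(url, timeout=timeout, allow_redirects=True)
            if resp.status_code != 200:
                failures.append(f"{name}: HTTP {resp.status_code}")
        except requests.RequestException as exc:
            failures.append(f"{name}: {exc}")
    return failures

if __name__ == "__main__":
    broken = check_links(PRIVACY_LINKS)
    if broken:
        # In practice, route failures to whoever owns privacy compliance
        # (ticketing or alerting); printing is a stand-in here.
        print("Privacy link check FAILED:", broken)
    else:
        print("All privacy links responded with HTTP 200.")
```

A script like this catches only the most basic failure mode (a dead link); the CPPA's order suggests also validating the opt-out workflow end to end, including browser-level testing and review of the third-party tools involved.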

Rhode Island’s Governor recently signed the Rhode Island Judicial Security Act (H5892), which aims to bolster the privacy and security of current and former judicial officers and their families by introducing several measures to safeguard their personal information.

Definition of Protected Individuals

The Act defines “protected individuals” as current, retired, or recalled justices, judges, and magistrates of the Rhode Island unified judicial system, as well as federal judicial officers residing in Rhode Island.

Definition of Personal Information

Personal information is defined to mean the Social Security number, residence address, home phone number, mobile phone number, or personal email address of, and identifiable to, the protected individual or their immediate family member.

Restrictions on Public Posting

Protected individuals may file a written notice of their status as a protected individual, for themselves and immediate family, with any state, county, and municipal agencies, as well as with any person, data broker, business, or association.

Following receipt of this notice, these entities shall:

  • mark as confidential the protected individual's or immediate family member's personal information,
  • remove within 72 hours any publicly available personal information of the protected individual or immediate family member, and
  • obtain written permission from the protected individual prior to publicly posting or displaying the personal information of the protected individual or immediate family members.

After receiving a protected individual’s written request, a person, data broker, business, or association shall also:

  • ensure that the protected individual’s or the immediate family member’s personal information is not made available on any website or subsidiary website under their control, and
  • not transfer this information to any other person, business, or association through any medium.

The Act further prohibits data brokers from selling, licensing, trading, or otherwise making available for consideration the personal information of a protected individual or immediate family member.

Enforcement and Legal Recourse

Protected individuals or their immediate family members can seek injunctive or declaratory relief in court if their personal information is disclosed in violation of the Act. Violators may be required to pay the individual's costs and reasonable attorneys' fees.

The law will take effect January 1, 2026.

Rhode Island’s Judicial Security Act bears a striking resemblance to New Jersey’s Daniel’s Law. Daniel’s Law prohibits the disclosure of the residential addresses and unpublished phone numbers of judicial officers, prosecutors, and law enforcement officers on websites controlled by New Jersey state, county, and local government agencies.

Entities subject to the Act should promptly review and, where necessary, revise their data handling practices to ensure compliance with the Act’s restrictions on disclosing protected judicial information.

On July 23, 2025, the White House released America’s AI Action Plan, a comprehensive national strategy designed to strengthen the United States’ position in artificial intelligence through investment in innovation, infrastructure, and international diplomacy and security. The plan, issued in response to Executive Order 14179, reflects a pro-innovation approach to AI policy—one that aims to accelerate adoption while mitigating security and integrity risks through targeted government action, collaboration with the private sector, and modernization of key systems.

The plan does not introduce new laws or regulatory mandates. Instead, it focuses on leveraging existing authorities, enhancing voluntary standards, and enabling responsible AI development and deployment at scale.

Pillar 1: Driving AI Innovation

The first pillar emphasizes enabling cutting-edge research, workforce readiness, and private-sector growth. Federal agencies are directed to align funding, tax guidance, and educational programs to support AI upskilling and integration across industries.

Key actions include:

  • Removing “red tape” and onerous regulation, calling for suggestions to remove regulatory barriers to innovation, and for federal funding to be directed away from states with “burdensome AI regulations.”
  • Treasury guidance to allow tax-free reimbursement of AI training expenses under IRC §132.
  • Coordination among agencies like the Department of Labor, NSF, and Department of Education to embed AI literacy into training and credentialing programs.
  • Confronting the growing threat of synthetic media, including deepfakes and falsified evidence. Federal agencies—particularly the Department of Justice—are tasked with developing technologies to detect AI-generated content and preserve the integrity of judicial and administrative proceedings.
  • Launching a new AI Workforce Research Hub to study the impact of AI on economic productivity and labor markets.
  • The Department of Defense will create an AI and Autonomous Systems Virtual Proving Ground to simulate real-world scenarios and ensure readiness and safety.
  • Agencies will increase investment in quality datasets, standards, and measurement science to support reliable, scalable AI.

Notably, the plan does not invoke terms such as “discrimination” or “bias” in employment or algorithmic decision-making contexts—an omission that may reflect the administration’s focus on economic opportunity and innovation over regulatory constraint. However, bias is referenced in the context of safeguarding free speech and preventing censorship in AI-generated content.

Pillar 2: Building Infrastructure for the AI Age

This second pillar recognizes that AI requires new infrastructure—digital, physical, and institutional—to thrive safely and at scale. The plan outlines federal efforts to modernize government systems, support critical infrastructure security, and establish testing environments for AI tools.

Highlights include:

  • A commitment to “security by design” principles, encouraging developers to build cybersecurity, privacy, and safety into AI products from the ground up.
  • Ensuring the nation has the workforce ready to build, operate, and maintain the infrastructure that can support America's AI future – including jobs for electricians and advanced HVAC technicians.

These initiatives aim to reinforce public trust while enabling widespread AI adoption in sectors such as transportation, energy, defense, and public services.

Pillar 3: Advancing International Diplomacy and Security

The third pillar focuses on global leadership, international coordination, and national security. It underscores the need to shape global AI norms and standards in line with democratic values, while protecting U.S. interests against adversarial use of AI.

Strategic priorities include:

  • Strengthening cross-border partnerships to promote responsible AI development and interoperability.
  • Addressing threats from foreign actors who may use AI for disinformation, cyberattacks, or military advantage.
  • Encouraging export controls, intelligence coordination, and diplomatic engagement around emerging AI technologies.

This pillar reflects the administration’s intent to ensure that AI supports—not undermines—international stability, democratic resilience, and national defense.

Legal and Strategic Takeaways

  • Policy Through Enablement: The plan reflects a shift away from regulation and toward enabling frameworks—creating opportunities for private-sector leadership in shaping standards, tools, and data ecosystems.
  • Synthetic Media Enforcement: With federal agencies actively addressing deepfakes and AI-generated content, litigation and evidentiary practices are likely to evolve. Legal practitioners should monitor developments in forensic tools and admissibility standards.
  • Cybersecurity Imperatives: The emphasis on “security by design” may influence future procurement requirements, vendor due diligence, and contractual obligations—especially for organizations working with or for the government.

The AI Action Plan presents a clear vision of the United States as a global AI leader—by empowering innovators, modernizing infrastructure, and projecting democratic values abroad. While the plan avoids broad regulatory mandates, it signals rising expectations around safety, authenticity, and international coordination.

Earlier this year, North Dakota’s Governor signed HB 1127, which introduces new compliance obligations for financial corporations operating in North Dakota. This new law will take effect on August 1, 2025.

The law applies to certain “financial corporations.” Under the law, “financial corporation” means all entities regulated by the Department of Financial Institutions, excluding credit unions, as well as banks and similar institutions organized under North Dakota or U.S. law. Entities covered by the law include collection agencies, money brokers, money transmitters, mortgage loan originators, and trust companies.

Covered financial corporations must implement a written information security program (WISP). HB 1127 requires the implementation of a comprehensive WISP tailored to each organization’s size, complexity, and the sensitivity of the customer information it handles. The law mandates specific program elements, including risk assessments, designated security personnel, implementation of technical safeguards, regular testing, incident response planning, and prompt notification of security events to authorities, discussed further below.

The law defines “information security program” as “the administrative, technical, or physical safeguards a financial corporation uses to access, collect, distribute, process, protect, store, use, transmit, dispose of, or otherwise handle customer information.” 

HB 1127 also outlines several elements required for the programs, which include, among other things:

  • Designated Security Leadership: The information security program must designate a qualified individual responsible for implementing, overseeing, and enforcing the program.
  • Risk Assessment: Foundational to the information security program is a written risk assessment, which identifies reasonably foreseeable internal and external risks to the security, confidentiality, and integrity of customer information.
  • Safeguards: The corporation must design and implement safeguards to control and mitigate the risks identified through the risk assessment. This should include a periodic review of the corporation’s data retention policy.
  • Testing and Monitoring: The key controls, systems, and procedures underlying those safeguards must be regularly tested or otherwise monitored.
  • Incident Response Planning: The corporation must establish a written incident response plan designed to promptly respond to and recover from any security event materially affecting the confidentiality, integrity, or availability of customer information the corporation controls.
  • Notification Requirements: The corporation must notify the state’s Commissioner of Financial Institutions of a “notification event” – defined as “the acquisition of unencrypted customer information without the authorization of the individual to which the information pertains.” For notification events implicating five hundred or more consumers, the corporation must notify the Commissioner as soon as possible, but no later than forty-five days after the discovery of the event. (The deadline logic is illustrated in the sketch after this list.)
  • Oversight of Service Providers: The corporation must take reasonable steps to select and retain service providers capable of maintaining the safeguards of customer information. Moreover, the corporation must periodically assess the service providers based on the risk they present.
  • Annual Report to Board: The corporation must designate a qualified individual to report in writing, at least annually, to the corporation’s board of directors or a similar body on the overall status of the information security program and material matters related to the program, including the risk assessment.
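
To illustrate the notification trigger described above, here is a minimal Python sketch. The constant and function names are our own, and the sketch ignores any legal nuance beyond the five-hundred-consumer threshold and the forty-five-day outer deadline.

```python
# Hypothetical helper reflecting HB 1127's reporting trigger: a
# notification event affecting 500 or more consumers must be reported to
# the Commissioner of Financial Institutions as soon as possible, and no
# later than 45 days after discovery.
from datetime import date, timedelta
from typing import Optional

CONSUMER_THRESHOLD = 500
NOTIFICATION_WINDOW = timedelta(days=45)

def commissioner_deadline(discovered: date, consumers_affected: int) -> Optional[date]:
    """Return the latest permissible notification date, or None when the
    500-consumer threshold is not met."""
    if consumers_affected < CONSUMER_THRESHOLD:
        return None
    return discovered + NOTIFICATION_WINDOW

# Example: an event discovered August 15, 2025 affecting 1,200 consumers
print(commissioner_deadline(date(2025, 8, 15), 1_200))  # 2025-09-29
```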

If you have questions about compliance with these new requirements or related issues, contact a Jackson Lewis attorney to discuss.

The U.S. Senate voted early Tuesday to remove a proposed moratorium on state AI regulation from the federal budget bill. This outcome marks a pivotal moment in the ongoing debate over artificial intelligence regulation in the United States.

The AI moratorium, initially proposed as part of the One Big Beautiful Bill Act, would have imposed a 10-year ban on the enforcement of AI-related legislation by states or other entities. Specifically, it was designed to restrict the regulation of AI models, systems, or automated decision systems involved in interstate commerce.

The provision faced strong bipartisan opposition, with critics warning that it would leave consumers vulnerable to AI-related harm and undermine state-level consumer protection laws.

After extensive negotiations and public outcry, an amendment to strip the provision from the budget bill was introduced. The Senate voted overwhelmingly in favor of the amendment, thus ending the moratorium effort.

Support for the removal was partially based on states having the ability to enforce their own AI regulations, particularly in areas such as robocalls, deepfakes, and autonomous vehicles. Currently, several state and local jurisdictions have AI protections planned or already in place.

These developments also reflect a growing recognition of the complexities and potential risks associated with AI technologies. Both the federal government and states will likely be grappling with balancing regulation with innovation when it comes to AI for the foreseeable future.

Jackson Lewis will continue to monitor legislative developments related to AI and related technologies.

The Senate’s recent 99-1 vote to remove a 10-year moratorium on state regulation of AI says something about the impact of AI, but also about its challenges.

A new MIT study, presented at the ACM Conference on Fairness, Accountability and Transparency, demonstrates that large language models (LLMs) used in healthcare can be surprisingly “brittle.” As discussed in Medical Economics, the researchers evaluated more than 6,700 clinical scenarios and introduced nine types of minor stylistic variations—typos, extra spaces, informal tone, slang, dramatic language, or even removing gender markers. Across all variants, these small changes altered AI treatment recommendations in clinically significant ways, with a 7‑9% increase in the AI advising patients to self‑manage rather than seek care—even when the medical content remained identical.

“These models are often trained and tested on medical exam questions but then used in tasks that are pretty far from that, like evaluating the severity of a clinical case. There is still so much about LLMs that we don’t know,” said Abinitha Gourabathina, lead author of the study.

Notably, these misinterpretations disproportionately impacted women and other vulnerable groups—even when gender cues were stripped from the input. In contrast, according to the study’s findings, human clinicians remained unaffected by such stylistic changes in deciding whether care was needed.

Why This Matters For Healthcare and Beyond

Healthcare Providers

  • Patient safety risk: If patients’ informal language or typos unintentionally trigger incorrect triage outcomes, patients may be more likely to engage in self-care when in-person care is recommended, and serious conditions may go unflagged.
  • Health equity concerns: The disproportionately poorer outcomes linked to female or vulnerable‐group messages highlight amplified bias through sensitivity to style, not substance.

Professional Services

  • Faulty legal, compliance, or regulatory advice: The same kind of “brittleness” that the MIT study suggests could compromise patient care may also lead to inconsistent, inaccurate, or incomplete legal or compliance recommendations.

Human Resources

  • Compliance and bias risks: Many LLM use cases in the employment arena likewise rely on prompts from employees, and concerns about bias in these tools are widespread. Without sufficient training, auditing, and governance, they too may fall prey to the same kind of limitations witnessed in the MIT study.

Governance, Testing & Accountability

Healthcare surely is not the only industry grappling with the new challenges LLMs present. Establishing a governance framework is critical, and organizations might consider the following measures as part of that framework:

  1. Pre‑deployment auditing: LLMs should undergo rigorous testing across demographic subgroups and linguistic variants, not just idealized text but also informal, error-ridden inputs.
  2. Prompt perturbation testing: Simulate typos, tone shifts, missing or added markers—even swapping gender pronouns—to check for output stability. (A minimal sketch follows this list.)
  3. Human validation oversight: At minimum, have subject matter experts (SMEs) review AI outputs, especially where high-stakes decisioning is involved.
  4. Developer scrutiny and certification: Organizations deploying LLMs should understand and, to the extent necessary and appropriate, assess the efforts their developers have made to address these and other issues before adoption and implementation.
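
By way of illustration, here is a minimal Python sketch of the prompt perturbation testing described in item 2. The `query_model` parameter is a placeholder for whatever LLM interface an organization actually uses, and the perturbations shown are deliberately crude.

```python
# Generate stylistic variants of a prompt (typos, extra whitespace,
# informal tone) and collect the model's output for each, so reviewers
# can flag significant divergence across variants.
import random

def add_typos(text: str, rate: float = 0.03, seed: int = 0) -> str:
    """Randomly swap adjacent letters to simulate typos."""
    rng = random.Random(seed)
    chars = list(text)
    for i in range(len(chars) - 1):
        if chars[i].isalpha() and chars[i + 1].isalpha() and rng.random() < rate:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def add_extra_whitespace(text: str) -> str:
    """Double every space to mimic sloppy formatting."""
    return text.replace(" ", "  ")

def make_informal(text: str) -> str:
    """Crude informal-tone shift, for illustration only."""
    return "hey, quick question... " + text.lower()

PERTURBATIONS = [add_typos, add_extra_whitespace, make_informal]

def perturbation_report(prompt: str, query_model) -> dict:
    """Map each variant name to the model's response for side-by-side review."""
    outputs = {"original": query_model(prompt)}
    for perturb in PERTURBATIONS:
        outputs[perturb.__name__] = query_model(perturb(prompt))
    return outputs
```

A stable system should produce substantively equivalent outputs across all variants; the MIT findings suggest that, without testing of this kind, it is unsafe to assume it will.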

The MIT research reveals that small changes in input—like a typo or a slangy phrase—can meaningfully skew outputs in potentially high‑stakes contexts. For users of LLMs, including healthcare providers, lawyers, and HR professionals, that “brittleness” may heighten risks to safety, accuracy, fairness, and compliance. Strong governance is needed. And with the moratorium on state regulation of AI removed from the One Big Beautiful Bill Act, should its removal hold, organizations are likely to see more attention given to governance, risk, and compliance requirements as legislation develops.

As explained in more detail below, under the recent vacatur of most of the HIPAA Privacy Rule to Support Reproductive Health Care Privacy (the “Reproductive Health Rule”):

  • The broad prohibitions on disclosing protected health information (“PHI”) relating to reproductive health for law enforcement or investigatory purposes are vacated nationally.
  • The attestation requirement that was included as part of the Reproductive Health Rule no longer applies to requests for such information.

Note, however, the more recent U.S. Supreme Court decision in Trump v. CASA, Inc. (S. Ct. 2025) may limit the scope of the Texas District Court’s decision.

What This Means for HIPAA-Covered Entities

  • Providers and plans should:
    • Revisit their policies, notices, and related materials to determine what changes should be made. They should also review the activities of business associates acting on their behalf (and consider any recent changes to business associate agreements).
    • Audit previous attestation workflows, disclosures, and programming.
    • Communicate any implemented changes.
    • Re-train staff who handle PHI or receive subpoenas to ensure workflows align with any policy revisions.
  • HIPAA’s core Privacy Rule remains unchanged, so protected uses/disclosures (e.g., treatment, payment, health oversight or law enforcement under the usual standards) still apply.
  • Compliance with substance use disorder (“SUD”)-related Notice of Privacy Practices updates, per the CARES Act, must continue.

State Law Dynamics

  • Remember that more stringent protections for health information under state law are not preempted by HIPAA’s Privacy Rule.
  • For example, many states have enacted their own enhanced protections, like limits on geofencing or disclosure of reproductive health data.
  • As a result, providers and plans must make sure that their policies comply not only with HIPAA’s Privacy Rule, but with applicable state law requirements.

2024 Reproductive Health Rule

In 2024, HIPAA covered entities, including healthcare providers and health plans, began taking steps to comply with the Department of Health and Human Services’ 2024 Reproductive Health Rule under the HIPAA Privacy Rule. See our prior articles on these regulations: New HIPAA Final Rule Imposes Added Protections for Reproductive Health Care Privacy | Workplace Privacy, Data Management & Security Report and HIPAA Final Rule For Reproductive Health Care Privacy with December 23, 2024, Compliance Deadline | Benefits Law Advisor

Fast forward to June 18, 2025, when U.S. District Judge Matthew Kacsmaryk (N.D. Tex.) issued a decision in Purl v. HHS vacating most of the Reproductive Health Rule on a nationwide basis. Key holdings include:

  • HHS exceeded its statutory authority, overstepped via the “major‑questions doctrine,” and unlawfully redefined terms like “person” and “public health.”
  • The agency impermissibly intruded on state authority to enforce laws, including child abuse reporting.

The essence of the ruling is that it blocks most of the protections in the Reproductive Health Rule, including the attestation requirement. However, unrelated SUD-related Notice of Privacy Practices provisions are not affected and remain in force.

Although the court determined that vacatur is the default remedy for unlawful agency actions challenged under the Administrative Procedure Act, it also noted that the Supreme Court would need to address vacatur and universal injunctions. On June 27, 2025, the Supreme Court did just that in Trump v. CASA, Inc., when it held that universal injunctions likely exceed the equitable authority that Congress gave to federal courts. In CASA, the Court granted the government’s applications for a partial stay of the injunctions at issue, but only to the extent that the injunctions were broader than necessary to provide complete relief to each plaintiff with standing to sue. Stay tuned for further developments on how CASA will impact the nationwide vacatur discussed in this post.

Update as of July 1, 2025.

The federal budget bill titled One Big Beautiful Bill aims to unleash artificial intelligence (AI) development in the U.S. The current draft of the bill, which has passed the House, proposes a 10-year moratorium on the enforcement of AI-related legislation by states or other entities. Specifically, it restricts the regulation of AI models, systems, or automated decision systems involved in interstate commerce.

Supporters believe the moratorium will encourage innovation and prevent a fragmented landscape of state AI laws, which are already emerging. However, opponents express concerns about the potential impact on existing state laws that regulate issues such as deepfakes and discrimination in automated hiring processes.

The ultimate outcome of this provision of the federal budget bill remains uncertain. However, if the budget bill passes with the moratorium, then it will take effect upon enactment.

Meanwhile, states continue to propose their own legislation to regulate AI in the workplace and other areas. Jackson Lewis will continue to monitor this and other related legislative developments.

For businesses subject to the California Consumer Privacy Act (CCPA), a compliance step often overlooked is the requirement to annually update the business’s online privacy policy. Under Cal. Civ. Code § 1798.130(a)(5), CCPA-covered businesses must, among other things, update their online privacy policies at least once every 12 months. Note that CCPA regulations establish content requirements for online privacy policies, one of which is that the policy must include “the date the privacy policy was last updated.” See 11 CCR § 7011(e)(4).
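
As a simple technical backstop for this requirement, here is a minimal Python sketch of a staleness check keyed to the 12-month cycle. It assumes the “last updated” date required by 11 CCR § 7011(e)(4) has already been extracted from the policy; wiring the check into a calendar reminder or monitoring job is left to the reader.

```python
# Flag when a CCPA privacy policy's annual review is due, based on the
# "date the privacy policy was last updated" that 11 CCR 7011(e)(4)
# requires the policy to display.
from datetime import date, timedelta
from typing import Optional

REVIEW_INTERVAL = timedelta(days=365)

def review_is_due(last_updated: date, today: Optional[date] = None) -> bool:
    """True when 12 months or more have passed since the stated date."""
    today = today or date.today()
    return today - last_updated >= REVIEW_INTERVAL

# Example: a policy last updated July 1, 2024, checked on July 15, 2025
print(review_is_due(date(2024, 7, 1), today=date(2025, 7, 15)))  # True
```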

As businesses continue to grow, evolve, adopt new technologies, or otherwise make online and offline changes to their business, practices, and/or operations, CCPA-required privacy policies may no longer accurately or completely reflect the collection and processing of personal information. Consider, for example, the adoption of emerging technologies, such as so-called “artificial intelligence” tools. These tools may be collecting, inferring, or processing personal information in ways that were not contemplated when preparing the organization’s last privacy policy update.

The business also may have service providers that collect and process personal information on behalf of the business in ways that are different than they did when they began providing services to the business.

Simply put: If your business (or its service providers) has adopted any new technologies or otherwise changed how it collects or processes personal information, your privacy policy may need an update.

Practical Action Items for Businesses

Here are some steps businesses can take to comply with the annual privacy policy review and update requirement under the CCPA:

  • Inventory Personal Information
    Reassess what categories of personal information your organization collects, processes, sells, and shares. Consider whether new categories—such as biometric, geolocation, or video data—have been added.
  • Review Data Use Practices
    Confirm whether your uses of personal information have changed since the last policy update. This includes whether you are profiling, targeting, or automating decisions based on the data.
  • Assess Adoption of New Technologies, Such as AI Tools
    Has your business adopted any new technologies or systems, such as AI applications? Examples may include:
    • AI notetakers, transcription, or summarization tools for use in meetings (e.g., Otter, Fireflies)
    • AI used for chatbots, personalized recommendations, or hiring assessments
  • Evaluate Third Parties and Service Providers
    Are you sharing or selling information to new third parties? Has your use of service providers changed, or have service providers changed their practices around the collection or processing of personal information?
  • Review Your Consumer Rights Mechanisms
    Are the methods for consumers to submit access, deletion, correction, or opt-out requests clearly stated and functioning properly?

These are only a few of the potential recent developments that may drive changes in an existing privacy policy. Businesses in certain industries, and particular departments within those businesses, may face additional considerations as well. Here are a few examples:

Retail Businesses

  • Loyalty programs collecting purchase history and predictive analytics data.
  • More advanced in-store cameras and mobile apps collecting biometric or geolocation information.
  • AI-driven customer service bots that gather interaction data.

Law Firms

  • Use of AI notetakers or transcription tools during client calls.
  • Remote collaboration tools that collect device or location data.
  • Marketing platforms that profile client interests based on website use.

HR Departments (Across All Industries)

  • AI tools used for resume screening and candidate profiling.
  • Digital onboarding platforms collecting sensitive identity data.
  • Employee monitoring software that tracks usage, productivity, or location.

The online privacy policy is not just a static compliance document—it’s a dynamic reflection of your organization’s data privacy practices. As technologies evolve and regulations expand, taking time once a year to reassess and update your privacy disclosures is not only a legal obligation in California but a strategic risk management step. And, while we have focused on the CCPA in this article, inaccurate or incomplete online privacy policies can elevate compliance and litigation risks under other laws, including the Federal Trade Commission Act and state protections against deceptive and unfair business practices.