Following failed congressional attempts to limit state AI laws, on December 11, 2025, the President issued an Executive Order titled Ensuring a National Policy Framework for Artificial Intelligence. The Order represents federal intervention into the growing landscape of state-level AI regulation. According to the Administration, a patchwork of state laws has created inconsistent and burdensome compliance obligations, particularly for startups and organizations operating across multiple jurisdictions. The Order claims that certain current state AI laws not only restrict innovation but could also force AI developers to incorporate “ideological bias.”

The EO provides the following example:

a new Colorado law banning “algorithmic discrimination” may even force AI models to produce false results in order to avoid a “differential treatment or impact” on protected groups.

To address these concerns, the Executive Order establishes a new AI Litigation Task Force within the Department of Justice. This group is charged with challenging state AI laws that conflict with the federal policy of promoting minimally burdensome, innovation-focused AI governance.

The Administration anticipates litigation against states whose laws it believes unconstitutionally regulate interstate commerce, impose unlawfully compelled speech, or require model outputs to be modified in ways that conflict with federal law. Within 90 days, the Department of Commerce must also publish a public evaluation identifying specific state laws considered "onerous" or inconsistent with the national policy framework, including those imposing disclosure or reporting obligations the Administration argues may infringe the First Amendment. The Colorado AI Act and the California Consumer Privacy Act's ADMT Regulations will very likely make the list.

The Order further ties compliance with federal AI policy to federal funding. States that maintain AI laws deemed inconsistent with federal objectives may become ineligible for certain Broadband Equity, Access, and Deployment (BEAD) funds, and federal agencies are directed to explore conditioning other discretionary grants on a state’s willingness to refrain from enforcing its AI regulations during funding periods. This introduces a significant financial dimension to federal-state tensions and may influence how aggressively states choose to regulate AI going forward.

In addition, the Order directs federal agencies to begin steps that lay the groundwork for federal preemption. The Federal Communications Commission must consider creating a national reporting and disclosure standard that would override conflicting state requirements, while the Federal Trade Commission is instructed to clarify that state laws compelling alterations to truthful AI outputs may be preempted under federal prohibitions on deceptive practices. These efforts suggest a shift toward a unified federal approach that could substantially reshape or displace existing state obligations.

The effects of the EO remain uncertain. Organizations have been grappling with a rapid proliferation of state AI laws governing automated decision-making in areas such as notice, transparency, nondiscrimination, fairness, safety, accuracy, and vendor management. For covered organizations, these AI developments also intersect with long-standing civil rights laws, like Title VII and similar state laws, and well-established guardrails to prevent employment discrimination, like the Uniform Guidelines on Employee Selection Procedures, which continue to shape how AI-enabled selection tools must be assessed for compliance.

If federal litigation succeeds or preemptive standards emerge, some existing obligations may shrink or change. At the same time, organizations should expect a period of regulatory instability as states and the federal government contest the limits of their respective authority. Organizations that have invested heavily in state-specific compliance frameworks may need to revisit or revise them, while AI developers could face shifting expectations around disclosure, output modification, and fairness-related requirements.

The Executive Order also directs federal advisors to prepare legislative recommendations for a uniform federal AI framework. Although the Administration proposes broad federal preemption, it indicates that certain topics—such as child safety protections and state AI procurement rules—should remain within state authority. This signals a coming debate in Congress over how much room states should retain to regulate AI-related issues.

Finally, the Order is almost certain to face legal challenges from states, which may argue that the Administration is exceeding its authority, infringing on state sovereignty, or coercively attaching conditions to federal funding. Litigation could take years to resolve, leaving covered organizations to navigate an evolving legal environment where both federal and state rules remain in flux. This underscores the importance of developing AI governance approaches that are flexible, regularly revisited, and attentive to how AI tools interact with existing obligations, such as employment discrimination laws and privacy requirements. The bottom line is that the Executive Order marks the beginning of an aggressive federal push to standardize AI regulation nationwide, with substantial consequences for compliance, risk management, and future governance. Covered organizations should monitor developments closely and prepare for a shifting regulatory landscape.

Eric J. Felsberg is a principal in the Long Island, New York office of Jackson Lewis P.C. Eric is the leader of the firm’s AI Governance and Bias Testing and Pre-Employment Assessments subgroups, as well as the Technology industry group. An early adopter, Eric has long understood the intersection of law and technology and the influence artificial intelligence has on employers today and will have on the workforce of the future.

Recognized as a leading voice in the industry, Eric monitors laws, regulations and trends, providing practical advice and answers to emerging workplace issues before his clients even know to ask the questions. He partners with clients to develop AI governance models, and provides advice and counsel on AI use policies, ethics and transparency issues related to AI products, systems and services. Eric leverages his considerable knowledge of the technology and AI industries to create meaningful partnerships with developers and distributors of AI models and tools and owners of content and data used to train AI applications for the benefit of his clients. He delivers user-friendly counsel and training to employers on everyday employment and compliance issues arising from federal, state and local regulations.

Joseph J. Lazzarotti is a principal in the Tampa, Florida, office of Jackson Lewis P.C. He founded and currently co-leads the firm’s Privacy, Data and Cybersecurity practice group, edits the firm’s Privacy Blog, and is a Certified Information Privacy Professional (CIPP) with the International Association of Privacy Professionals. Trained as an employee benefits lawyer, focused on compliance, Joe also is a member of the firm’s Employee Benefits practice group.

In short, his practice focuses on the matrix of laws governing the privacy, security, and management of data, as well as the impact and regulation of social media. He also counsels companies on compliance, fiduciary, taxation, and administrative matters with respect to employee benefit plans.

Chris Patrick is a Principal in the Denver, Colorado, office of Jackson Lewis P.C. and is a member of the Firm’s Affirmative Action Compliance and OFCCP Defense practice group and Pay Equity resource group.

Chris partners with employers on practical solutions to ensure equal employment opportunity (EEO), including counseling on affirmative action, pay equity and transparency, and diversity. In short, Chris develops actionable strategies under privilege that identify and eliminate unseen barriers to EEO in personnel practices—often informed by trends in employee data.

Damon W. Silver is a principal in the New York City, New York, office of Jackson Lewis P.C. and co-leader of the firm’s Privacy, AI & Cybersecurity practice group. He is a Certified Information Privacy Professional (CIPP/US).

Damon helps clients across various industries—with a focus on financial services, healthcare, and education—handle their data safely. He works with them to pragmatically navigate the challenges they face from cyberattacks, technological developments including AI, a fast-evolving data privacy and security legal compliance landscape, and an active and innovative plaintiffs’ bar.

Damon recognizes that needs vary from one client to the next. Large, mature organizations, for instance, may need assistance managing multi-jurisdictional and multi-faceted compliance obligations. Others may be in a stage of development where their greatest need is to triage what must be done now and what can more safely be left for later. Damon takes the time to understand each client’s circumstances and priorities and then works with it to develop tailored approaches to effectively managing risk without unnecessarily hindering business operations.