New York State’s 2025 legislative session marked a notable moment in the evolution of artificial intelligence (AI) and privacy regulation. Governor Kathy Hochul signed the Responsible AI Safety and Education (RAISE) Act, creating one of the first state-level frameworks aimed specifically at the most advanced AI systems, while vetoing the proposed New York Health Information Privacy Act (NYHIPA), a bill that would have significantly expanded health data protections beyond existing federal law. Together, these developments provide important signals for businesses operating in or touching New York.

The RAISE Act

The RAISE Act amends the General Business Law to impose transparency and risk-management obligations on developers of certain high-end AI systems. The law is narrowly focused on “frontier models,” defined by extraordinarily high computational thresholds: generally, models trained using more than 10²⁶ computational operations and over $100 million in compute costs.

For most businesses, this means the law will primarily affect developers and deployers of the most powerful AI systems rather than everyday enterprise automation tools.

Practical examples of AI technologies that could fall within scope include:

  • Large language models such as GPT-4-class, Claude-class, or Gemini-class systems trained at a massive scale;
  • Generative AI systems capable of producing highly realistic video or audio content, including synthetic voices or deepfake-quality media;
  • Advanced medical or scientific AI tools, such as models used to support diagnostics, drug discovery, or large-scale biological simulations that require substantial computational resources.

Covered “large developers” must implement and publish a safety and security protocol (with limited redactions), assess whether deployment poses an unreasonable risk of “critical harm,” and report certain safety incidents to the New York Attorney General within 72 hours. This reporting obligation is separate from the changes to New York’s data breach notification laws that took effect at the end of 2024.

While the law does not create a private right of action, enforcement authority rests with the Attorney General, who may seek significant civil penalties for violations.

The RAISE Act takes effect January 1, 2027.

For businesses that license or integrate frontier AI models from third parties, the RAISE Act is also relevant contractually. Vendors may pass through compliance obligations, audit rights, or usage restrictions as part of their efforts to meet statutory requirements.

Health Information Privacy Act Vetoed

Although NYHIPA was vetoed, its contents remain highly relevant, particularly for businesses in health, wellness, advertising, and AI-enabled consumer services. The bill would have applied broadly to any entity processing health-related information linked to a New York resident or someone physically present in the state, regardless of HIPAA status. This would have been a more expansive law than similar state health data laws in Washington and Nevada.

Key provisions included strict limits on processing health data without express authorization, detailed and standalone consent requirements, and explicit bans on consent practices that obscure or manipulate user decision-making. The bill would have excluded research, development, and marketing from “internal business operations,” meaning AI training or product improvement using health data could have required new authorization. Individuals would also have been granted robust access and deletion rights, including obligations to notify downstream service providers and third parties of deletion requests reaching back one year.

Takeaways for Businesses

Taken together, these developments reflect New York’s intent to play a leading role in AI and privacy governance. For businesses, the message is not one of immediate across-the-board compliance, but of strategic preparation.

Companies developing or deploying advanced AI should strengthen governance, documentation, and incident-response processes. Organizations handling health-adjacent data, especially data that falls outside of HIPAA, should continue monitoring legislative activity and assess whether existing consent flows, data uses, and vendor arrangements would withstand a future version of NYHIPA or similar state laws.

New York’s approach underscores a broader trend: even narrowly scoped laws can have a wide practical impact through contracts, product design, and risk management. Businesses that plan early will be best positioned as this regulatory landscape continues to evolve.

Eric J. Felsberg

Eric J. Felsberg is a principal in the Long Island, New York office of Jackson Lewis P.C. Eric is the leader of the firm’s AI Governance and Bias Testing and Pre-Employment Assessments subgroups, as well as the Technology industry group. An early adopter, Eric has long understood the intersection of law and technology and the influence artificial intelligence has on employers today and will have on the workforce of the future.

Recognized as a leading voice in the industry, Eric monitors laws, regulations and trends, providing practical advice and answers to emerging workplace issues before his clients even know to ask the questions. He partners with clients to develop AI governance models, and provides advice and counsel on AI use policies, ethics and transparency issues related to AI products, systems and services. Eric leverages his considerable knowledge of the technology and AI industries to create meaningful partnerships with developers and distributors of AI models and tools and owners of content and data used to train AI applications for the benefit of his clients. He delivers user-friendly counsel and training to employers on everyday employment and compliance issues arising from federal, state and local regulations.

Joseph J. Lazzarotti

Joseph J. Lazzarotti is a principal in the Tampa, Florida, office of Jackson Lewis P.C. He founded and currently co-leads the firm’s Privacy, Data and Cybersecurity practice group, edits the firm’s Privacy Blog, and is a Certified Information Privacy Professional (CIPP) with the International Association of Privacy Professionals. Trained as an employee benefits lawyer, focused on compliance, Joe also is a member of the firm’s Employee Benefits practice group.

In short, his practice focuses on the matrix of laws governing the privacy, security, and management of data, as well as the impact and regulation of social media. He also counsels companies on compliance, fiduciary, taxation, and administrative matters with respect to employee benefit plans.