Some years ago, I listened to Richard Susskind speak about the future of the professions and how, in his view, systems like AI might replace them. Much of the disruption he predicted has since materialized, and many are now assessing what impact AI will have on knowledge-based professional services such as law, accounting, and healthcare. The jury is still out: most believe these professions will not be eliminated entirely, but there will almost certainly be some impact, driving the need to adapt to new market realities.

Artificial intelligence chatbots are increasingly being deployed across industries — from healthcare portals to legal tech platforms to financial services. As these tools take on more substantive roles, lawmakers are beginning to push back. New York Senate Bill S7263, introduced last year by Sen. Gonzalez, would impose meaningful liability on businesses that allow chatbots to stray into licensed professional territory.

What the Bill Would Do

S7263 would add a new section to New York’s General Business Law targeting “proprietors” — defined as any person, business, organization, institution, or government entity that owns, operates, or deploys a chatbot to interact with users. Notably, third-party developers who merely license their chatbot technology to a proprietor are explicitly excluded from this definition, though that distinction carries its own implications (more on that below).

The bill draws a hard line around two categories of regulated conduct:

Licensed professions. The bill lists a broad set of professional fields governed under New York’s Education Law — including medicine, dentistry, optometry, psychology, chiropractic, pharmacy, nursing, physical therapy, and others. A chatbot that provides substantive responses, information, or advice that would constitute unlicensed practice of any of these professions could expose its deployer to civil liability.

Legal practice. Chatbots would also be prohibited from providing responses that would amount to practicing law without admission to the New York bar — a significant concern given the explosive growth of AI-powered legal research and document tools.

The Disclosure Requirement

Beyond limiting what chatbots can say, S7263 would impose an affirmative disclosure obligation on all proprietors: users must receive clear, conspicuous, and explicit notice that they are interacting with an AI chatbot. The notice must appear in the same language the chatbot is using and in a font no smaller than the largest text elsewhere on the page. In other words, burying a disclosure in fine print or a terms-of-service page won’t cut it.
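To make the mechanics concrete, the two disclosure rules described above (same language as the chatbot, font no smaller than the largest text on the page) could be expressed as a simple pre-deployment check. This is an illustrative sketch only; the function name, parameters, and pixel-based font comparison are assumptions for this example, not requirements drawn from the bill's text.

```python
def disclosure_is_compliant(disclosure_font_px: float,
                            page_font_sizes_px: list[float],
                            disclosure_lang: str,
                            chatbot_lang: str) -> bool:
    """Approximate the bill's two disclosure conditions:
    the notice must match the chatbot's language, and its font
    must be at least as large as the largest text on the page."""
    if disclosure_lang != chatbot_lang:
        return False
    return disclosure_font_px >= max(page_font_sizes_px)

# An 18px notice on a page whose largest text is 18px would pass;
# a 12px notice on the same page would not.
print(disclosure_is_compliant(18, [12, 14, 18], "en", "en"))  # True
print(disclosure_is_compliant(12, [12, 14, 18], "en", "en"))  # False
```

A check like this could run in a design-review pipeline, though whether a given notice is "clear, conspicuous, and explicit" would ultimately remain a legal judgment, not a programmatic one.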

Liability and Enforcement

The bill would create a private right of action, allowing individuals to sue directly for actual damages. If a court finds the violation was willful, the proprietor faces actual damages plus attorneys’ fees and court costs — a provision that significantly raises the stakes for deliberate non-compliance.

Critically, the bill explicitly states that a disclaimer alone is not a defense. Simply telling users they are talking to a bot does not shield a proprietor from liability if that bot is providing advice that crosses into licensed professional practice.

What Steps Deployers Would Need to Consider

If S7263 becomes law, organizations deploying customer-facing AI tools in New York should take several steps:

  • Audit chatbot scope. Review what questions your chatbot answers and whether any responses could be characterized as medical, legal, dental, psychological, or other licensed-professional advice. Restrict or redirect sensitive queries accordingly.
  • Implement robust disclosures. Design chatbot interfaces with prominent, plain-language notices that satisfy the font and language requirements in the bill.
  • Review vendor contracts. Even though third-party developers are excluded from the definition of “proprietor,” deployers should ensure their vendor agreements clearly address responsibility for chatbot behavior and include indemnification provisions.
  • Establish escalation paths. Build in clear handoffs to licensed professionals when users raise topics that fall within the bill’s restricted categories.
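The last two steps above — restricting sensitive queries and building escalation paths — can be sketched as a simple pre-filter that routes restricted topics to a human handoff instead of letting the chatbot answer. The category names, keyword lists, and handoff message below are hypothetical examples for illustration; a production system would need a far more robust classifier.

```python
# Hypothetical keyword-based pre-filter: queries touching restricted
# professional domains are redirected to a licensed-professional handoff.
RESTRICTED_TOPICS = {
    "legal": ["lawsuit", "contract dispute", "divorce", "liability"],
    "medical": ["diagnosis", "prescription", "symptom", "dosage"],
}

HANDOFF_MESSAGE = (
    "This question may call for advice from a licensed professional. "
    "We can connect you with one."
)

def route_query(query: str) -> str:
    """Return a handoff response for restricted topics,
    otherwise clear the query for the chatbot to answer."""
    q = query.lower()
    for category, keywords in RESTRICTED_TOPICS.items():
        if any(keyword in q for keyword in keywords):
            return f"[{category} handoff] {HANDOFF_MESSAGE}"
    return "[chatbot] OK to answer"

print(route_query("What dosage of ibuprofen should I take?"))
print(route_query("What are your store hours?"))
```

The design point is that the restriction happens before generation: the bot never produces a substantive answer in a restricted category, which is a cleaner compliance posture than filtering or disclaiming after the fact.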

What Developers Should Consider

While S7263 would not directly impose liability on technology vendors and developers who license their systems to others, the bill creates downstream pressure that developers cannot ignore. Deployers will increasingly demand contractual assurances — and may seek to shift liability — when chatbot behavior triggers a claim. Developers should consider building configurable guardrails into their products that allow deployers to restrict professional-domain responses, and they should be transparent about the limitations of their systems in licensing documentation and product design.
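The "configurable guardrails" idea can be illustrated with a small vendor-side settings object that lets each deployer choose which professional domains their instance refuses to discuss. Everything here — the class name, fields, and refusal text — is invented for this sketch and does not reflect any real product's API.

```python
from dataclasses import dataclass, field

@dataclass
class GuardrailConfig:
    """Hypothetical deployer-facing settings for a licensed chatbot product."""
    blocked_domains: set = field(default_factory=lambda: {"legal", "medical"})
    refusal_text: str = "I can't provide professional advice on that topic."

def respond(query_domain: str, draft_answer: str, config: GuardrailConfig) -> str:
    # Refuse rather than answer when the query's classified
    # domain is one the deployer has blocked.
    if query_domain in config.blocked_domains:
        return config.refusal_text
    return draft_answer

cfg = GuardrailConfig()
print(respond("medical", "Take two tablets daily.", cfg))  # refusal text
print(respond("retail", "We open at 9am.", cfg))           # normal answer
```

Exposing the restriction as configuration, rather than hard-coding it, lets the vendor keep a single product while each deployer tailors its own risk posture — and gives both sides something concrete to point to in the contractual allocation of responsibility discussed above.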

The Bottom Line

If enacted, the law would establish that deploying AI in contexts involving regulated professional advice carries real legal risk — regardless of disclaimers. More broadly, this and other measures like it signal an effort by the professions to push back on technology that is changing the landscape for access to their services. Where this will end up remains unclear.