Businesses across many industries naturally want to showcase their satisfied customers. Whether it’s a university featuring successful graduates, a retailer highlighting happy shoppers, or a healthcare facility showcasing thriving patients, these real-world testimonials can be powerful marketing tools. However, when it comes to healthcare providers subject to HIPAA, using patient images and information for promotional purposes requires careful navigation of both federal privacy rules and state law requirements.

In a recent case, the failure to comply with these requirements resulted in a $182,000 fine and a two-year compliance program for a Delaware nursing home, according to the resolution agreement.

The Office for Civil Rights (OCR), which enforces the HIPAA Privacy and Security Rules, recently announced an enforcement action that serves as an important reminder of these obligations. The case involved a nursing home that posted photographs of approximately 150 facility residents to its social media page over a period of time. These postings were part of a campaign to highlight the success residents were achieving at the nursing home. When a resident complained to OCR, the agency investigated and found the covered entity had not obtained the required HIPAA authorizations or complied with breach notification requirements. The enforcement action that followed underscores that even seemingly benign marketing practices can trigger significant compliance issues under HIPAA.

Understanding HIPAA’s Authorization Requirements

Under HIPAA, covered entities may generally use and disclose protected health information (PHI) for treatment, payment, and healthcare operations, and certain other purposes, without patient authorization. Marketing activities, however, fall outside these permissible uses. In the OCR investigation, the covered entity didn’t simply share photographs—it also disclosed information about residents’ care to tell “success stories” of patients at its facilities. This combination of visual identification and health information, according to OCR, constituted a use of PHI requiring express patient authorization under HIPAA.

The authorization requirement isn’t merely a technicality. HIPAA authorizations must meet specific regulatory standards, such as a clear description of the information to be disclosed, the purpose of the disclosure, and a date or event after which the authorization will cease to be valid. A patient’s informal agreement or willingness to participate doesn’t satisfy these requirements.

The Breach Notification Complication

The OCR investigation revealed another compliance failure: not providing the required breach notification. Under HIPAA’s Breach Notification Rule, a disclosure not permitted under the Privacy Rule can constitute a reportable breach requiring notification to affected individuals and potentially to OCR and the media. In other words, a marketing misstep can trigger obligations well beyond the missed authorization.

Lessons from Social Media Cases

This isn’t an isolated concern. Similar issues have arisen when healthcare providers, such as dentists and other practitioners, responded to patient complaints on platforms like Google and Yelp. Well-intentioned responses that acknowledge treating a patient or try to resolve the patient’s concerns can violate HIPAA. These cases make clear that covered entities must think carefully about any use or disclosure of patient information outside the core functions of treatment, payment, and healthcare operations, even when the patient may have disclosed the same information already.

State Law Adds Another Layer, Including Regulation of AI and Biometrics

HIPAA compliance alone may not be sufficient, particularly when more stringent protections exist under state law. Many states have statutory and common law obligations requiring consent before using a person’s image or likeness for commercial purposes, as well as specific requirements for what that consent must include. Covered entities must ensure they’re meeting both HIPAA authorization requirements and any applicable state law consent requirements. They should also understand the technologies they are using, including whether those technologies are inadvertently collecting biometric data.

Looking ahead, covered entities should be aware that several states have begun enacting or amending laws addressing how businesses can use digital replicas of individuals, particularly in the AI context. As healthcare organizations increasingly adopt AI technologies, questions about using patient images or data to create or train AI systems will require careful analysis under both existing HIPAA rules and these emerging state laws.

The Bottom Line

The message for HIPAA covered entities is clear: think before you post, promote, or publicize the good work you do for your patients. Even when patients are willing participants in marketing efforts, formal HIPAA authorizations and state law consents may be required. The cost of non-compliance—including financial settlements, required corrective action plans, and reputational harm—far exceeds the investment in proper authorization processes. When in doubt about whether patient information can be used for a particular purpose, covered entities should consult with privacy counsel to ensure full compliance with both federal and state requirements.

Recently, California’s Governor signed Assembly Bill (AB) 45, which builds on existing California laws, such as the Confidentiality of Medical Information Act, that protect individuals seeking certain healthcare services. AB 45 takes effect January 1, 2026.

Specifically, the law prohibits the collection, use, disclosure, sale, sharing, or retention of the personal information of a natural person who is located at, or within the precise geolocation of, a “family planning center” – in general, a clinic or center that provides reproductive health care services.

Some exceptions apply, such as to perform services or provide goods requested by the natural person, or as provided in a collective bargaining agreement. Also, the prohibition described above does not apply to covered entities and business associates as defined under HIPAA, although for the exception to apply to business associates, they must be contractually obligated to comply with all state and federal privacy laws.

Persons aggrieved by a violation of this prohibition have a private right of action, which permits treble damages and recovery of attorneys’ fees and costs.

AB 45 also makes it unlawful to geofence, directly or through a third party, an entity that provides certain in-person health care services (e.g., medical, surgical, psychiatric, mental health, behavioral health, preventative, rehabilitative, supportive, consultative, referral) for certain purposes. Those purposes include, but are not limited to, identifying the persons receiving such services or sending notifications or advertisements to such persons. Any person who violates this section can be subject to a $25,000 penalty per violation.

However, there are several exceptions, such as:

  • The owner of an in-person health care entity may geofence its own location to provide necessary health care services.
  • Geofencing is conducted solely for certain approved research purposes that comply with applicable federal regulations.
  • Geofencing either by (I) labor organizations if the geofencing does not result in the labor union’s collection of names or personal information without the express consent of an individual and is for activities concerning workplace conditions, worker or patient safety, labor disputes, or organizing, or (II) a third party vendor, including, but not limited to, a social media platform, that collects personal information from a labor organization solely to carry out the purposes in (I).

AB 45 also provides protections for personally identifiable research records developed for the kind of research described above. Those protections provide that such records may not be released in response to another state’s law enforcement activities, including subpoenas or requests, that would interfere with certain rights of a person, such as under California’s Reproductive Privacy Act.   

Federal and state laws, including under HIPAA, continue to expand protections for information related to health services, including whether or not a person is receiving services, as well as the types of services, such as reproductive health services. Persons or entities seeking to collect, process, or share this information need to be aware of this growing patchwork of law.

If you have questions about AB 45 or related issues, contact a Jackson Lewis attorney to discuss.

On September 17, 2025, the Florida Agency for Health Care Administration (AHCA) will hold its first public meeting to discuss proposed rules designed to enhance transparency and preparedness around health care information system breaches. AHCA is Florida’s agency responsible for the state’s Medicaid program, the licensure of the state’s health care facilities, and the sharing of health care data through the Florida Center for Health Information and Policy Analysis.

The proposed rules would apply broadly to a wide range of licensed health care providers and facilities under AHCA’s regulatory authority. This includes, among others, hospitals, nursing homes, assisted living facilities, ambulatory surgical centers, hospice providers, home health agencies, intermediate care facilities for individuals with developmental disabilities, clinical laboratories, rehabilitation centers, and health care clinics. In practice, nearly every licensed entity that delivers health care services in Florida or participates in Medicaid could be subject to the new obligations if approved.

Key Provisions

Mandatory Breach Reporting. Providers would be required to report “information technology incidents” to AHCA within 24 hours of having a reasonable belief that an incident may have occurred. For this purpose, an information technology incident means:   

an observable occurrence or data disruption or loss in an information technology system or network that permits or is caused by unauthorized access of data in electronic form. Good faith access by an authorized employee does not constitute an information technology incident, provided that the data is not used in an unauthorized manner or for an unauthorized purpose.

Notably, the reporting obligation is not limited to unauthorized access to or acquisition of protected health information. Also, reports would need to be submitted through the Agency’s adverse incident reporting system using a standardized form. This short timeframe signals the Agency’s intent to receive timely information about potential breaches that could affect patient care or compromise sensitive health information.

Written Continuity Plans. Providers covered by the rule would need to maintain a written “continuity plan.” This plan is defined as a detailed policy that sets out procedures to maintain critical operations and essential patient care services during any disruption of normal operations.

Importantly, according to the proposed rules, continuity plans must not only include a process for performing redundant on-site and off-site data backups, but also one that verifies the restorability of those backups. When facing a ransomware attack, for example, it is little help to have backed-up files if the organization cannot restore them.

Additionally, the continuity plan must include procedures for restoring critical systems and patient services, and securely restoring backed-up data.
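One way to operationalize the restorability requirement is to schedule an automated restore test. The sketch below is a minimal illustration, assuming backups are gzipped tar archives accompanied by a SHA-256 manifest; the paths and file names are hypothetical placeholders, and an actual continuity plan would go further, for example by restoring into a test environment and confirming that critical applications can read the data.

import hashlib
import tarfile
import tempfile
from pathlib import Path

BACKUP_ARCHIVE = Path("/backups/offsite/patient-db-latest.tar.gz")  # hypothetical path
MANIFEST = Path("/backups/offsite/patient-db-latest.sha256")        # hypothetical manifest (sha256sum format)

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file in 1 MB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_restorable(archive: Path, manifest: Path) -> bool:
    # Step 1 - integrity: the archive on disk matches the checksum recorded at backup time.
    recorded = manifest.read_text().split()[0]
    if sha256_of(archive) != recorded:
        print("FAIL: archive checksum does not match manifest")
        return False

    # Step 2 - restorability: the archive actually extracts into a scratch directory.
    with tempfile.TemporaryDirectory() as scratch:
        with tarfile.open(archive, "r:gz") as tar:
            tar.extractall(scratch)
        restored = [p for p in Path(scratch).rglob("*") if p.is_file()]
        if not restored:
            print("FAIL: archive extracted but contained no files")
            return False
        print(f"OK: restored {len(restored)} files to scratch location")
    return True

if __name__ == "__main__":
    verify_restorable(BACKUP_ARCHIVE, MANIFEST)

A check like this can be run on a schedule, with failures routed to the team responsible for the continuity plan, so that an unrestorable backup is discovered before an incident rather than during one.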

Post-Incident Documentation. Upon AHCA’s request, providers would be obligated to furnish documentation relating to an information technology incident. This could include police or forensic investigation reports, internal policies, details of the information disclosed, remedial measures taken, and the provider’s continuity plan. The rule is intended to ensure that providers not only respond to incidents but also demonstrate how they investigated, contained, and addressed them.

However, in many cases, some of these materials are prepared at the direction of counsel in anticipation of litigation and are subject to the attorney-client privilege. Providers concerned about the disclosure of such materials, which could include confidential business and proprietary information, as well as sensitive information about the organization’s IT infrastructure, should consult with counsel.

Next Steps

If adopted, the proposed rule would impose significant operational and compliance requirements on Florida’s licensed health care providers. Facilities and organizations subject to AHCA licensure should review their current cybersecurity incident response procedures, reporting mechanisms, and continuity planning to ensure they align with the proposed requirements.

The rapid adoption of AI notetaking and transcription tools has transformed how organizations (and individuals) capture, analyze, and share meeting and other content. But as these technologies expand, so too do the legal and compliance risks. A recent putative class action lawsuit filed in federal court in California against Otter.ai, a leading provider of AI transcription services, highlights the potential pitfalls for organizations relying on these tools.

The Complaint Against Otter.ai

Filed in August 2025, Brewer v. Otter.ai alleges that Otter’s “Otter Notetaker” and “OtterPilot” services recorded, accessed, and used the contents of private conversations without obtaining proper consent. According to the complaint, the AI-powered notetaker:

  • Joins Zoom, Google Meet, and Microsoft Teams meetings as a participant and transmits conversations to Otter in real time for transcription.
  • Records meeting participants’ conversations even if they are not Otter accountholders. The lead plaintiff in this case is not an Otter accountholder.
  • Uses those recordings to train Otter’s automatic speech recognition (ASR) and machine learning models.
  • Provides little or no notice to non-accountholders and shifts the burden of obtaining permissions onto its accountholders.

The lawsuit asserts a wide range of claims, including violations of:

  • Federal law: the Electronic Communications Privacy Act (ECPA) and the Computer Fraud and Abuse Act (CFAA).
  • California law: the California Invasion of Privacy Act (CIPA), the Comprehensive Computer Data Access and Fraud Act, common law intrusion upon seclusion and conversion, and the Unfair Competition Law (UCL).

The plaintiffs allege that Otter effectively acted as an unauthorized third-party eavesdropper, intercepting communications and repurposing them for product training without consent.

Key Legal Takeaways

The Otter.ai complaint underscores several important legal themes that organizations using AI notetakers should carefully consider:

  1. Consent Gaps Are a Liability
    Under California wiretap laws, recording or intercepting communications typically requires the consent of all parties. The complaint emphasizes that Otter sought permission only from meeting hosts (and sometimes not even them), but not from all participants. This “single-consent” model is risky in states like California that require all-party consent.
  2. Secondary Use of Data Raises Privacy Risks
    Beyond transcription, Otter allegedly used recorded conversations to train its AI models. Even if data is “de-identified,” the complaint notes that de-identification is imperfect, particularly with voice data and conversational context. Organizations allowing vendors to reuse data for training AI models should scrutinize whether proper disclosures and consents exist.
  3. Vendor Contracts and Shifting Responsibility
    Otter’s privacy policy placed responsibility on accountholders to obtain permissions from others before capturing or sharing data. Courts may find this approach insufficient, especially when the vendor is the party processing and monetizing the data.
  4. Unfair Business Practices
    Plaintiffs also claim that Otter’s conduct violated California’s Unfair Competition Law by depriving individuals of control over their data while enriching the company. This theory—loss of data value as a consumer injury—could gain traction in privacy-related class actions.

Broader Risks for Organizations Using AI Notetakers

Even if an organization is not the technology provider, using AI notetaking tools in the workplace creates real risk. Companies should consider:

  • Employee and Third-Party Notice: Are employees, clients, or customers clearly informed when AI notetakers are in use? Does the notice satisfy federal and state recording laws?
  • Consent Management: Is the organization obtaining and documenting consent where required? What about meetings that cross jurisdictions with differing consent rules?
  • Confidentiality and Privilege: If a meeting involves sensitive legal, HR, or business discussions, does the use of third-party AI notetakers risk waiving attorney-client privilege or exposing trade secrets?
  • Data Use, Security, and Retention: How does the vendor store, use, and share transcription data? Who has access to them? Do they contain personal information that must be safeguarded? Can recordings be deleted upon request? Are they used for training or product development?
  • Comparative Practices: Some vendors offer features that allow any participant to pause or prevent recording—an important safeguard. Organizations should evaluate whether their chosen tool provides these protections.

Practical Steps for Risk Mitigation

Organizations should take proactive measures when adopting AI notetakers:

  1. Conduct a Legal Review: Assess whether recording practices align with ECPA, state wiretap laws, and international requirements (such as GDPR).
  2. Update Policies: Ensure meeting and privacy policies address the use of AI notetakers, including requirements for notice and consent.
  3. Review Vendor Agreements: Negotiate contractual limits on data use, retention, and training.
  4. Consider Potential Use Cases: The nature and content of the discussion captured by the AI notetaker can trigger a range of other legal, compliance, and contractual obligations. Additionally, consider the organization’s position when third parties, such as customers or job applicants, use AI notetakers during a meeting.
  5. Enable Safeguards: Where possible, configure tools to require pre-meeting notices and allow participants to decline recording.
  6. Train Employees: Make sure staff understand when and how to use AI transcription tools appropriately, especially in sensitive contexts.

Conclusion

The Brewer v. Otter.ai complaint is a reminder that AI notetaking tools carry both benefits and significant risks. Organizations leveraging these technologies must balance efficiency with compliance—ensuring that recording, consent, and data-use practices align with evolving privacy and other laws.

On August 18, 2025, the Department of Health and Human Services’ Office for Civil Rights (OCR) announced a settlement with BST & Co. CPAs, LLP (BST). The announcement continues OCR’s escalating enforcement of the HIPAA Security Rule, particularly around ransomware and risk analysis inadequacies.

For the OCR, this is the agency’s 15th ransomware enforcement action and 10th enforcement action in OCR’s Risk Analysis Initiative. For BST, the settlement means the payment of a Resolution Amount of $175,000 and a two-year Corrective Action Plan.

What Happened?

The underlying facts outlined in the settlement are all too familiar. BST discovered a ransomware attack in December 2019 triggered by a phishing email. The business associate reported the attack to OCR in February 2020. The attack affected client PHI pertaining to 170,000 individuals.

BST is a New York–based accounting and business advisory firm that provides services—including tax preparation and forensic accounting—to covered entities. One of BST’s HIPAA covered healthcare provider clients provided BST with financial data that included protected health information (PHI).

The administrative services BST provided using that PHI caused BST to be a business associate under HIPAA. As a business associate, BST was directly subject to the HIPAA Security Rule—and certain provisions of the Privacy and Breach Notification Rules.

Business Associates: When thinking about HIPAA, it’s common to focus on healthcare providers. The reality is, however, that for each healthcare provider there are many business associates supporting that provider’s business and, in doing so, processing PHI. These businesses include accounting firms, medical billing firms, transcription services, law firms, practice management consultants, cloud storage providers, and the list goes on.  

OCR’s Risk Analysis Enforcement Initiative

“A HIPAA risk analysis is essential for identifying where ePHI is stored and what security measures are needed to protect it,” said OCR Director Paula M. Stannard.  “Completing an accurate and thorough risk analysis that informs a risk management plan is a foundational step to mitigate or prevent cyberattacks and breaches.”

Upon investigation, OCR determined that BST had failed to perform an accurate and thorough risk analysis under the HIPAA Security Rule (45 C.F.R. § 164.308(a)(1)(ii)(A)). That lapse, according to OCR, left BST ill-prepared to identify or mitigate vulnerabilities—something OCR has emphasized repeatedly in similar enforcement actions.

Terms of the Settlement

To resolve the investigation, BST entered into a resolution agreement with OCR that included:

  • Payment of $175,000.
  • A Corrective Action Plan (CAP), monitored by OCR for two years, which requires BST to:
    1. Conduct a comprehensive risk analysis.
    2. Develop and implement a risk management plan addressing the vulnerabilities identified.
    3. Draft, maintain, and periodically revise written policies and procedures to comply with HIPAA Privacy and Security Rules.
    4. Enhance its HIPAA/security training and deliver annual training to all relevant workforce members.

What This Means for Business Associates

This enforcement action is another reminder that business associates are bound by nearly all the same obligations as covered entities when it comes to protecting ePHI.

Today, data breaches are a near certainty for most organizations. The question is whether an organization is prepared to weather the incident and be strongly positioned to defend an enforcement action by federal or state agencies. In the case of a HIPAA business associate, that means being prepared for OCR and its focus on the risk analysis. To that end, while not an exhaustive list, business associates should be:

  • Conducting an accurate and thorough risk analysis to assess risks to the confidentiality, integrity, and availability of ePHI.
  • Implementing corresponding risk management plans to mitigate identified risks.
  • Maintaining and regularly updating written policies and procedures that align with the HIPAA Privacy, Security, and, when applicable, Breach Notification Rules.
  • Providing security awareness training tailored to their workforce.
  • Promptly notifying the covered entity (within 60 days) if a breach occurs, especially one affecting unsecured PHI, and supplying all necessary details for breach notifications.

HIPAA isn’t just about covered entities—it’s a shared responsibility.

On May 1, 2025, the California Privacy Protection Agency (CPPA) issued a Final Order in one of its first public enforcement actions under the California Consumer Privacy Act (CCPA), imposing a fine of nearly $350,000 on the business.

An important takeaway from the Final Order: simply posting a privacy policy is not enough. Businesses must actively monitor, test, and verify that the tools supporting consumer rights are working, even when those tools are managed by third parties.

What Went Wrong?

The CPPA found multiple violations of the CCPA and its implementing regulations. Here are the most notable failures:

1. Non-Functioning “Cookie Preferences Center” Link

Like many retailers, the business used third-party tracking software on its website, such as cookies and pixels, to share data about consumers’ online behavior (a category of personal information) with third parties. The business shared this data for purposes such as analytics and cross-context behavioral advertising. While the business told consumers they could opt out of the sharing of their personal information, the technical infrastructure of its website did not support consumers’ elections to do so. In short, opt-out elections simply were not processed correctly for a period of 40 days.

According to the CPPA, the business

“would have known that Consumers could not exercise their CCPA right if the company had been monitoring its Website, but [the company] instead deferred to third-party privacy management tools without knowing their limitations or validating their operation.”

2. Failure to Properly Identify Verifiable Requests and Overcollection of Verification Information

The business offered a webform to enable consumers to exercise several of their CCPA rights, including the right to opt out of the selling or sharing of personal information. However, using the webform to exercise any of those rights required consumers to provide certain personal information, including a picture of the consumer holding an “identity document.” This approach created two problems: (i) it resulted in the collection of sensitive personal information (e.g., a driver’s license) to make the request, and (ii) it failed to distinguish requests to opt out of the sale or sharing of personal information, which are not verifiable consumer requests. In short, according to the CPPA, the webform collected more personal information than necessary for verifiable consumer requests and failed to authenticate consumers in a compliant manner, ultimately leading to complaints from consumers.

Practical Takeaways

This case illustrates the kind of avoidable but costly missteps that any business could make. Conducting an annual review of CCPA compliance, as required under the law, is an obvious step to help ensure ongoing compliance. But here are some more specific items to consider as well:

  • Test your links and forms regularly across devices and browsers; a basic automated check is sketched after this list. Don’t assume that what’s written in your privacy policy functions properly.
  • Review webforms and verification procedures to ensure they correctly identify, route, and respond to verifiable consumer requests without collecting unnecessary personal data. Also, assess whether backend processes and training support procedures outlined in online privacy policies.
  • Vet and monitor third-party vendors responsible for CCPA compliance tools. Require written assurances of compliance and retain the right to audit their systems and processes, while also checking to ensure the services provided are compliant.
  • Document your due diligence and monitoring to illustrate a focus on compliance. Mistakes happen, but the business can mount a stronger defense to allegations of non-compliance when it can show an ongoing effort to achieve compliance.
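As a starting point for that kind of routine testing, here is a minimal sketch that checks whether the consumer-facing privacy links on a homepage are present and resolve. The site URL and link text are hypothetical placeholders, and a real monitoring program would also need to exercise the opt-out workflow end to end, for example confirming that a Global Privacy Control signal or a cookie preference election is actually honored by downstream tracking tools.

import re
import sys
from urllib.parse import urljoin

import requests

SITE_URL = "https://www.example.com"  # hypothetical site
EXPECTED_LINKS = [                    # hypothetical link text to look for
    "Cookie Preferences",
    "Do Not Sell or Share My Personal Information",
]

def check_privacy_links(site_url: str) -> list:
    """Return a list of problems found; an empty list means all checks passed."""
    problems = []
    resp = requests.get(site_url, timeout=10)
    if resp.status_code != 200:
        return [f"Homepage returned HTTP {resp.status_code}"]

    # Rough extraction of anchor tags as (href, link text) pairs; a production
    # check would use a real HTML parser and a headless browser for JS-rendered pages.
    anchors = re.findall(r'<a[^>]+href="([^"]+)"[^>]*>(.*?)</a>', resp.text, re.I | re.S)

    for expected in EXPECTED_LINKS:
        href = next((h for h, text in anchors if expected.lower() in text.lower()), None)
        if href is None:
            problems.append(f'Link "{expected}" not found on homepage')
            continue
        target = urljoin(site_url, href)
        link_resp = requests.get(target, timeout=10)
        if link_resp.status_code != 200:
            problems.append(f'"{expected}" points to {target}, which returned HTTP {link_resp.status_code}')
    return problems

if __name__ == "__main__":
    issues = check_privacy_links(SITE_URL)
    for issue in issues:
        print("PROBLEM:", issue)  # in practice, route failures to monitoring/alerting
    sys.exit(1 if issues else 0)

Running a check like this on a schedule, and retaining the results, also supports the documentation point above: it creates a record of ongoing monitoring that can be cited if compliance is later questioned.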

Rhode Island’s Governor recently signed the Rhode Island Judicial Security Act (H5892), which aims to bolster the privacy and security of current and former judicial officers and their families by introducing several measures to safeguard their personal information.

Definition of Protected Individuals

The Act defines “protected individuals” as current, retired, or recalled justices, judges, and magistrates of the Rhode Island unified judicial system, as well as federal judicial officers residing in Rhode Island.

Definition of Personal Information

Personal information is defined to mean the Social Security number, residence address, home phone number, mobile phone number, or personal email address of, and identifiable to, the protected individual or their immediate family member.

Restrictions on Public Posting

 Protected individuals may file a written notice of their status as a protected individual, for themselves and immediate family, with any state, county, and municipal agencies, as well as with any person, data broker, business, or association.

Following receipt of this notice, these entities shall:

  • mark as confidential the protected individual’s or immediate family member’s personal information,
  • remove within 72 hours any publicly available personal information of the protected individual or immediate family member, and
  • obtain written permission from the protected individual prior to publicly posting or displaying the personal information of the protected individual or immediate family members.

After receiving a protected individual’s written request, a person, data broker, business, or association shall also:

  • ensure that the protected individual’s or the immediate family member’s personal information is not made available on any website or subsidiary website under their control, and
  • not transfer this information to any other person, business, or association through any medium.

The Act further prohibits data brokers from selling, licensing, trading, or otherwise making available for consideration the personal information of a protected individual or immediate family member.

Enforcement and Legal Recourse

Protected individuals or their immediate family members can seek injunctive or declaratory relief in court if their personal information is disclosed in violation of the act. Violators may be required to pay the individual’s costs and reasonable attorneys’ fees.

The law will take effect January 1, 2026.

Rhode Island’s Judicial Security Act bears a striking resemblance to New Jersey’s Daniel’s Law. Daniel’s Law prohibits the disclosure of the residential addresses and unpublished phone numbers of judicial officers, prosecutors, and law enforcement officers on websites controlled by New Jersey state, county, and local government agencies.

Entities subject to the Act should promptly review and, where necessary, revise their data handling practices to ensure compliance with the Act’s restrictions on disclosing protected judicial information.

On July 23, 2025, the White House released America’s AI Action Plan, a comprehensive national strategy designed to strengthen the United States’ position in artificial intelligence through investment in innovation, infrastructure, and international diplomacy and security. The plan, issued in response to Executive Order 14179, reflects a pro-innovation approach to AI policy—one that aims to accelerate adoption while mitigating security and integrity risks through targeted government action, collaboration with the private sector, and modernization of key systems.

The plan does not introduce new laws or regulatory mandates. Instead, it focuses on leveraging existing authorities, enhancing voluntary standards, and enabling responsible AI development and deployment at scale.

Pillar 1: Driving AI Innovation

The first pillar emphasizes enabling cutting-edge research, workforce readiness, and private-sector growth. Federal agencies are directed to align funding, tax guidance, and educational programs to support AI upskilling and integration across industries.

Key actions include:

  • Removing “red tape” and onerous regulation, calling for suggestions to remove regulatory barriers to innovation, and for federal funding to be directed away from states with “burdensome AI regulations.”
  • Treasury guidance to allow tax-free reimbursement of AI training expenses under IRC §132.
  • Coordination among agencies like the Department of Labor, NSF, and Department of Education to embed AI literacy into training and credentialing programs.
  • Confronting the growing threat of synthetic media, including deepfakes and falsified evidence. Federal agencies—particularly the Department of Justice—are tasked with developing technologies to detect AI-generated content and preserve the integrity of judicial and administrative proceedings.
  • Launching a new AI Workforce Research Hub to study the impact of AI on economic productivity and labor markets.
  • Creating, through the Department of Defense, an AI and Autonomous Systems Virtual Proving Ground to simulate real-world scenarios and ensure readiness and safety.
  • Increasing agency investment in quality datasets, standards, and measurement science to support reliable, scalable AI.

Notably, the plan does not invoke terms such as “discrimination” or “bias” in employment or algorithmic decision-making contexts—an omission that may reflect the administration’s focus on economic opportunity and innovation over regulatory constraint. However, bias is referenced in the context of safeguarding free speech and preventing censorship in AI-generated content.

Pillar 2: Building Infrastructure for the AI Age

This second pillar recognizes that AI requires new infrastructure—digital, physical, and institutional—to thrive safely and at scale. The plan outlines federal efforts to modernize government systems, support critical infrastructure security, and establish testing environments for AI tools.

Highlights include:

  • A commitment to “security by design” principles, encouraging developers to build cybersecurity, privacy, and safety into AI products from the ground up.
  • Ensuring the nation has the workforce ready to build, operate, and maintain an infrastructure that can support America’s AI future – with skilled trades such as electricians and advanced HVAC technicians.

These initiatives aim to reinforce public trust while enabling widespread AI adoption in sectors such as transportation, energy, defense, and public services.

Pillar 3: Advancing International Diplomacy and Security

The third pillar focuses on global leadership, international coordination, and national security. It underscores the need to shape global AI norms and standards in line with democratic values, while protecting U.S. interests against adversarial use of AI.

Strategic priorities include:

  • Strengthening cross-border partnerships to promote responsible AI development and interoperability.
  • Addressing threats from foreign actors who may use AI for disinformation, cyberattacks, or military advantage.
  • Encouraging export controls, intelligence coordination, and diplomatic engagement around emerging AI technologies.

This pillar reflects the administration’s intent to ensure that AI supports—not undermines—international stability, democratic resilience, and national defense.

Legal and Strategic Takeaways

  • Policy Through Enablement: The plan reflects a shift away from regulation and toward enabling frameworks—creating opportunities for private-sector leadership in shaping standards, tools, and data ecosystems.
  • Synthetic Media Enforcement: With federal agencies actively addressing deepfakes and AI-generated content, litigation and evidentiary practices are likely to evolve. Legal practitioners should monitor developments in forensic tools and admissibility standards.
  • Cybersecurity Imperatives: The emphasis on “security by design” may influence future procurement requirements, vendor due diligence, and contractual obligations—especially for organizations working with or for the government.

The AI Action Plan presents a clear vision of the United States as a global AI leader—by empowering innovators, modernizing infrastructure, and projecting democratic values abroad. While the plan avoids broad regulatory mandates, it signals rising expectations around safety, authenticity, and international coordination.

Earlier this year, North Dakota’s Governor signed HB 1127,  which introduces new compliance obligations for financial corporations operating in North Dakota. This new law will take effect on August 1, 2025.

The law applies to certain “financial corporations.” Under the law, “financial corporation” means all entities regulated by the Department of Financial Institutions, excluding credit unions, banks, and similar institutions organized under North Dakota or U.S. law. Entities covered by the law include collection agencies, money brokers, money transmitters, mortgage loan originators, and trust companies.

Covered financial corporations must implement a written information security program (WISP). HB 1127 requires these programs to be comprehensive, in writing, and tailored to each organization’s size, complexity, and the sensitivity of the customer information it handles. The law mandates specific program elements, including risk assessments, designated security personnel, implementation of technical safeguards, regular testing, incident response planning, and prompt notification of security events to authorities, discussed further below.

The law defines “information security program” as “the administrative, technical, or physical safeguards a financial corporation uses to access, collect, distribute, process, protect, store, use, transmit, dispose of, or otherwise handle customer information.” 

HB 1127 also outlines several elements required for the programs, which include, among other things:

  • Designated Security Leadership: The information security program must denote a qualified individual responsible for implementing, overseeing, and enforcing the program.
  • Risk Assessment: Foundational to the information security program is the written risk assessment, which identifies reasonably foreseeable internal and external risks to the security, confidentiality, and integrity of customer information.
  • Safeguards: The corporation must design and implement safeguards to control and mitigate the risks identified through the risk assessment. This should include a periodic review of the corporation’s data retention policy.
  • Testing and Monitoring: The above safeguards’ key controls, systems, and procedures must be regularly tested or otherwise monitored.
  • Incident Response Planning: The corporation must establish a written incident response plan designed to promptly respond to and recover from any security event materially affecting the confidentiality, integrity, or availability of customer information the corporation controls.
  • Notification Requirements: The corporation must notify the state’s Commissioner of Financial Institutions of a “notification event” – defined as “the acquisition of unencrypted customer information without the authorization of the individual to which the information pertains.” For notification events implicating five hundred or more consumers, the corporation must notify the Commissioner as soon as possible, but no later than forty-five days after the discovery of the event.
  • Oversee Service Providers: The corporation must take reasonable steps to select and retain service providers capable of maintaining the safeguards of customer information. Moreover, the corporation must periodically assess the service providers based on the risk they present.
  • Annual Report to Board: The corporation must designate a qualified individual to report in writing, at least annually, to the corporation’s board of directors or a similar body on the overall status of the information security program and material matters related to the program, including the risk assessment.

If you have questions about compliance with these new requirements or related issues, contact a Jackson Lewis attorney to discuss.