Leaders charged with safeguarding data privacy and cybersecurity often assume that size equates to security—that large, well-resourced organizations must have airtight defenses against cyberattacks and data breaches. It’s a natural assumption: mature enterprises tend to have robust policies, advanced technology, and deep security teams. Yet, as recent events remind us, even the biggest organizations can be compromised. Sophistication and scale do not guarantee immunity.

On October 21, 2025, the New York Department of Financial Services (DFS) issued guidance on managing risks associated with third-party service providers, urging the entities they regulate to take a more active role in assessing and monitoring their vendors’ cybersecurity practices.

The message is clear: strong internal controls are only as good as the weakest external connection. An organization’s exposure to risk extends well beyond its own systems and policies. It’s a message that entities beyond those regulated by DFS should heed. Consider, for example, the DOL mandate that affects any organization sponsoring an ERISA-covered employee benefit plan – fiduciaries must assess the cybersecurity of plan service providers.

DFS emphasizes that third-party relationships—whether for data hosting, software development, cloud services, or payment processing—must be governed by a structured risk-management framework. The guidance highlights several key components: thorough vendor due diligence before onboarding, contractual provisions addressing cybersecurity responsibilities, ongoing monitoring of vendors’ controls, and incident-response coordination. These expectations are not new, but DFS’s renewed attention signals that regulators continue to see third-party risk as a critical vulnerability.

Importantly, the guidance reminds organizations that performing these steps is not just a compliance exercise—it’s a form of self-protection. Even when a company has invested heavily in its own cybersecurity defenses, it can still be affected by a breach through a vendor’s compromised system or careless employee. The reputational and financial fallout from such an event can be just as severe as if the company’s own network had been directly attacked.

Organizations can take several practical steps in response:

  • Assess vendor criticality and data access. Identify which vendors handle sensitive information or provide essential services. DFS suggests that entities classify vendors based on the vendor’s risk profile, considering factors such as system access, data sensitivity, location, and how critical the services are to the entity’s operations. Again, this is a step all organizations should consider when evaluating their vendors.
  • Require detailed cybersecurity questionnaires or certifications. Review vendors’ security controls, policies, and incident-response plans.
  • Incorporate strong contract provisions. Ensure that agreements specify breach notification timelines, audit rights, and responsibilities for remediation costs. The DFS guidance includes several examples of baseline contract provisions, including how AI may be used in the course of performing services. There also are other important provisions DFS does not specifically call out, such as indemnity, insurance requirements, and limitation of liability. Organizations should have qualified counsel review these critical provisions to help ensure contract terms do not stray too far from initial proposals and assurances.
  • Monitor continuously. Risk assessments should not be one-time exercises; regular reviews and periodic attestations help keep oversight current. Third-party service providers experience personnel changes, system updates, new offerings, and financial challenges during the term of a services agreement. These and other factors are likely to have an impact on data privacy and cybersecurity efforts.
  • Plan for the worst. Integrate vendors into incident-response exercises so all parties understand roles and communication channels in a breach.

By taking these steps, organizations not only strengthen their own resilience but also build a defensible position if litigation follows a third-party breach. Courts and regulators increasingly look for evidence that a company acted reasonably in selecting and managing its vendors.

The DFS guidance serves as a reminder that in today’s interconnected environment, no organization can outsource accountability for cybersecurity. Vigilant oversight of third-party relationships is not simply a best practice—it’s an operational necessity.

Key Takeaways

  • Outlines basic steps to determine whether a business may need to perform a risk assessment under the California Consumer Privacy Act (CCPA) in connection with its use of dashcams
  • Provides a resource for exploring the basic requirements for conducting and reporting risk assessments

If you have not reviewed the recently approved, updated CCPA regulations, you might want to soon. There are several new requirements, along with many modifications and clarifications to existing rules. In this post, we discuss a new requirement – performing risk assessments – in the context of dashcam and related fleet management technologies.

In short, when performing a risk assessment, the business needs to assess whether the risk to consumer privacy from the processing of personal information outweighs the benefits to consumers, the business, others, and the public, and, if so, restrict or prohibit that processing, as appropriate.

Of course, the first step to determine whether a business needs to perform a risk assessment under the CCPA is to determine whether the CCPA applies to the business. We discussed those basic requirements in Part 1 of our post on risk assessments under the CCPA.

If you are still reading, you have probably determined that your organization is a “business” covered by the CCPA and, possibly, that your business is using certain fleet management technologies, such as dashcams or other vehicle tracking technologies. Even if that is not the case, the remainder of this post may be of interest for “businesses” under the CCPA that are curious about examples applying the new risk assessment requirement.

As discussed in Part 1 of our post on the basics of CCPA risk assessments, businesses are required to perform risk assessments when their processing of personal information presents “significant risk” to consumer privacy. The regulations set out certain types of processing activities involving personal information that would trigger a risk assessment. Depending on the nature and scope of the dashcam technology deployed, a business should consider whether a risk assessment is required.

Dashcams and similar devices increasingly come with an array of features. As the name suggests, these devices include cameras that can record activity inside and outside the vehicle. They also can be equipped with audio recording capabilities, permitting the recording of voices inside and outside the vehicle. Additionally, dashcams can play a role in logistics, as they often include GPS technology, and they can contribute significantly to worker and public safety through telematics. In general, telematics help businesses understand how the vehicle is being driven – acceleration, hard stops, swerving, etc. More recently, dashcams can have biometric and AI technologies embedded in them. A facial scan can help determine if the driver is authorized to be driving that vehicle. AI technology also might be used to help determine whether the driver is driving safely – is the driver falling asleep, eating, using their phone, wearing a seatbelt, and so on.

Depending on how a dashcam is equipped or configured, businesses subject to the CCPA should consider whether the dashcam involves the processing of personal information that requires a risk assessment.

For instance, a risk assessment is required when processing “sensitive personal information.” Remember that sensitive personal information includes, among other elements, precise geolocation data and biometric information used to identify an individual. While the regulations include an exception for certain employment-related processing, businesses would have to assess whether those exceptions apply.

Another example of processing personal information that requires a risk assessment is profiling a consumer through “systematic observation” of that consumer when they are acting in their capacity as an educational program applicant, job applicant, student, employee, or independent contractor for the business. The regulations define “systematic observation” to mean:

methodical and regular or continuous observation. This includes, for example, methodical and regular or continuous observation using Wi-Fi or Bluetooth tracking, radio frequency identification, drones, video or audio recording or live-streaming, technologies that enable physical or biological identification or profiling; and geofencing, location trackers, or license-plate recognition.

The regulation also defines profiling as:

any form of automated processing of personal information to evaluate certain personal aspects (including intelligence, ability, aptitude, predispositions) relating to a natural person and in particular to analyze or predict aspects concerning that natural person’s performance at work, economic situation, health (including mental health), personal preferences, interests, reliability, predispositions, behavior, location, or movements.

Considering the range of use cases for vehicle/fleet tracking technologies, and depending on their capabilities and configurations, it is conceivable that in some cases the processing of personal information by such technology could be considered a “significant risk,” requiring a risk assessment under the CCPA.

In that case, Part 2 of our post on risk assessments outlines the steps a business needs to take to conduct a risk assessment, including what must be included in the required risk assessment report and how to timely certify the assessment to the California Privacy Protection Agency.

It is important to note that this is only one of a myriad of potential processing activities that businesses engage in that might trigger a risk assessment requirement. Businesses will need to identify those activities and assess next steps. If the business finds comparable activities, it may be able to minimize the risk assessment burden by conducting a single assessment for those comparable activities.

Again, the new CCPA regulations represent a fundamental shift toward proactive privacy governance under the CCPA. Rather than simply reacting to consumer requests and data breaches, covered businesses must now systematically evaluate and document the privacy implications of their data processing activities before they begin. With compliance deadlines approaching in 2026, organizations should begin now to establish the cross-functional processes, documentation practices, and governance structures necessary to meet these new obligations.


As we discussed in Part 1 of this post, the California Privacy Protection Agency (CPPA) has adopted significant updates to the California Consumer Privacy Act (CCPA) regulations, which were formally approved by the California Office of Administrative Law on September 23, 2025. We began to outline the requirements for a significant new obligation under the CCPA – namely, the obligation to conduct a risk assessment for certain activities involving the processing of personal information.

In Part 1, we summarized the rules that determine when a risk assessment requirement would apply – that is, when covered businesses process personal information that presents a “significant risk.” In this Part 2, we will summarize the requirements for conducting a compliant risk assessment. These include:

  • Determining which stakeholders should be involved in the risk assessment process and how
  • Establishing appropriate purposes and objectives for conducting the risk assessment
  • Satisfying timing and record keeping obligations
  • Preparing risk assessment reports that meet certain content requirements
  • Timely submitting certifications of required risk assessments to the CPPA

Who Must Be Involved in the Risk Assessment?

The regulations emphasize a collaborative, multi-stakeholder approach to risk assessments. Businesses must involve relevant stakeholders whose duties include the specific processing activity that necessitated the risk assessment. For example, a business should include the person who determined how to collect the personal information for the processing that triggered the risk assessment obligation. A business also may involve third parties in the risk assessment process, such as experts in detecting and mitigating bias in automated decision-making tools (ADMT).

Establishing appropriate purposes and objectives for conducting the risk assessment

According to the new regulations:

The goal of a risk assessment is restricting or prohibiting the processing of personal information if the risks to consumer privacy outweigh the benefits resulting from processing to the consumer, the business, other stakeholders, and the public.

In working toward that goal, businesses need to identify the purpose of the risk assessment. That purpose cannot be generic – “we are conducting this risk assessment to improve our services.” Rather, the stated purpose must be more specific. Suppose a business would like to systematically observe an employee when processing store purchases (whether physically at the register or online as a call center employee) in an effort to decrease consumer wait times. The business would need to do more than simply state the purpose as “improving service”; it might instead identify decreasing consumer wait times for processing purchases as the relevant purpose.

Satisfying timing and record keeping obligations

In general, risk assessments must be completed before initiating the processing activity that triggers the requirement. This proactive approach ensures that businesses evaluate privacy risks before they materialize rather than retrofitting assessments after the fact.

Note that businesses may need to conduct a risk assessment for activities they initiated prior to January 1, 2026. More specifically, in the case of processing activities triggering a risk assessment requirement (see Part 1) that the business initiated prior to January 1, 2026 and that continues after January 1, 2026, the business must conduct and document a risk assessment no later than December 31, 2027.

Once completed, risk assessments must be reviewed and updated at least every three years. However, if material changes occur to the processing activity, businesses must update the assessment within 45 days of the change. Material changes might include significant increases in the volume of personal information processed, new uses of the data, or changes to the technologies employed.

Businesses must retain risk assessment documentation for as long as the processing continues or for five years after completing the assessment, whichever is longer. This extended retention period recognizes that risk assessments may be relevant to future enforcement actions or litigation.
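For organizations that track these dates in a compliance calendar, the timing rules described above reduce to simple date arithmetic. The following Python sketch is illustrative only; the function names and example dates are our own, and it assumes straightforward calendar counting (leap-day edge cases ignored).

from datetime import date, timedelta

def next_review_due(last_assessment: date) -> date:
    # Risk assessments must be reviewed and updated at least every three years.
    return last_assessment.replace(year=last_assessment.year + 3)

def update_due_after_material_change(change_date: date) -> date:
    # A material change to the processing activity requires an updated assessment within 45 days.
    return change_date + timedelta(days=45)

def retention_end(processing_end: date, assessment_completed: date) -> date:
    # Retain documentation for as long as the processing continues or five years
    # after completing the assessment, whichever is longer.
    return max(processing_end, assessment_completed.replace(year=assessment_completed.year + 5))

# Processing begun before January 1, 2026 and continuing afterward must be assessed
# no later than December 31, 2027 (see above).
PRE_2026_PROCESSING_DEADLINE = date(2027, 12, 31)

# Hypothetical example: assessment completed January 15, 2026; processing expected to end June 30, 2028
print(next_review_due(date(2026, 1, 15)))                   # 2029-01-15
print(update_due_after_material_change(date(2026, 6, 1)))   # 2026-07-16
print(retention_end(date(2028, 6, 30), date(2026, 1, 15)))  # 2031-01-15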

Preparing risk assessment reports that meet certain content requirements

Importantly, risk assessments must result in documented reports that reflect the input and analysis of diverse perspectives. The regulations require identifying the individuals who provided information for the assessment (excluding legal counsel to preserve attorney-client privilege) as well as the date, names, and positions of those who reviewed and approved the assessment. This documentation requirement ensures accountability and demonstrates that the assessment received appropriate organizational attention.

Specifically, the regulations prescribe detailed content requirements for risk assessment reports. Each assessment must document the following elements:

  • The specific purpose of processing in concrete terms rather than generic descriptions. As noted above, businesses cannot simply state that they process data “for business purposes” but must articulate the precise objectives, such as “to provide personalized product recommendations based on browsing history and purchase patterns.”
  • The categories of personal and sensitive personal information processed, including documentation of the minimum necessary information required to achieve the stated purpose. This requirement operationalizes data minimization principles by forcing businesses to justify each category of data collected.
  • The operational elements of the processing, including the method of collecting personal information, retention periods, the number of consumers affected, and any disclosures to consumers about the processing. This provides a comprehensive view of the data lifecycle. In the case of ADMT, any assumptions about or limitations on the logic, and how the business will use the ADMT output, must be included.
  • The benefits from the processing to both the business and consumers. Businesses must articulate what value the processing creates, whether through improved services, enhanced security, cost savings, or other outcomes.
  • The negative impacts to consumers’ privacy associated with the processing. This critical element requires honest assessment of risks such as unauthorized access, discriminatory outcomes, loss of autonomy, surveillance concerns, or reputational harm.
  • Safeguards the business will implement to mitigate identified negative impacts. These might include technical controls like encryption and access restrictions; organizational measures like privacy training and incident response plans; or procedural safeguards like human review of automated decisions.
  • Whether the business will proceed with the processing after weighing the benefits against the risks. The CPPA has explicitly stated that the goal of risk assessments is to restrict or prohibit processing when risks to consumer privacy outweigh the benefits. This represents a substantive requirement, not merely a documentation exercise.
  • The individuals who provided information for the assessment (excluding legal counsel), along with the date, names, and positions of those who reviewed and approved it. This creates an audit trail demonstrating organizational engagement with the process.

Note that businesses may leverage risk assessments prepared for other regulatory frameworks, such as data protection impact assessments under the GDPR or privacy threshold analyses for federal agencies. However, those other assessments must contain the required information or be supplemented with any outstanding elements.

Timely submitting certifications of required risk assessments to the CPPA

Businesses required to complete a risk assessment must submit certain information to the CPPA. The submission requirements to the CPPA follow a phased schedule. For risk assessments conducted in 2026 and 2027, businesses must submit required information to the CPPA by April 1, 2028. For assessments conducted after 2027, submissions are due by April 1 of the following year. These submissions must include a point of contact, timing of the risk assessment, categories of personal and sensitive personal information covered, and identification of the executive management team member responsible for the assessment’s compliance.
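Expressed as a simple rule, that phased schedule might look like the sketch below. It is illustrative only; the helper function and example years are our own, not terms from the regulations.

from datetime import date

def cppa_submission_due(assessment_year: int) -> date:
    # Risk assessments conducted in 2026 or 2027: submission due by April 1, 2028.
    # Assessments conducted after 2027: due by April 1 of the following year.
    if assessment_year in (2026, 2027):
        return date(2028, 4, 1)
    if assessment_year > 2027:
        return date(assessment_year + 1, 4, 1)
    raise ValueError("The phased submission schedule begins with assessments conducted in 2026.")

print(cppa_submission_due(2026))  # 2028-04-01
print(cppa_submission_due(2029))  # 2030-04-01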

As noted in Part 1, the new CCPA regulations represent a fundamental shift toward proactive privacy governance under the CCPA. Rather than simply reacting to consumer requests and data breaches, covered businesses must now systematically evaluate and document the privacy implications of their data processing activities before they begin. With compliance deadlines approaching in 2026, organizations should begin now to establish the cross-functional processes, documentation practices, and governance structures necessary to meet these new obligations.

The California Privacy Protection Agency (CPPA) has adopted significant updates to the California Consumer Privacy Act (CCPA) regulations, which were formally approved by the California Office of Administrative Law on September 23, 2025. These comprehensive regulations address automated decision-making technology, cybersecurity audits, and risk assessments, with compliance deadlines beginning in 2026. Among these updates, the risk assessment requirements represent a substantial new compliance obligation for many businesses subject to the CCPA.

Of course, as a threshold matter, businesses must first determine whether they are subject to the CCPA. For businesses that are not sure of whether the CCPA applies to them, our earlier discussion here may be helpful. If your business is subject to the CCPA, read on.

When Is a Risk Assessment Required?

The new regulations require businesses to conduct risk assessments when their processing of personal information presents “significant risks” to consumer privacy. The CPPA has defined specific processing activities that trigger this requirement:

  • Selling or sharing personal information.
  • Processing “sensitive personal information.” However, there is a narrow exception for limited human resources-related uses such as payroll, benefits administration, and legally mandated reporting. Employers will have to examine carefully which activities are excluded and which are not. Sensitive personal information under the CCPA includes precise geolocation, racial or ethnic origin, religious beliefs, genetic data, biometric information, health information, sexual orientation, and citizenship status, among other categories.
  • Using automated decision-making technology (ADMT) to make significant decisions about consumers. Significant decisions include those resulting in the provision or denial of financial services, lending, housing, education enrollment, employment opportunities, compensation, or healthcare services. More on ADMT to come.
  • Profiling a consumer through “systematic observation” when they are acting in their capacity as an educational program applicant, job applicant, student, employee, or independent contractor for the business. Systematic observation means methodical and regular or continuous observation, such as through Wi-Fi or Bluetooth tracking, radio frequency identification, drones, video or audio recording or live-streaming, technologies that enable physical or biological identification or profiling; and geofencing, location trackers, or license-plate recognition. Businesses engaged in workplace monitoring and using performance management applications may need to consider those activities under this provision.
  • Profiling a consumer based upon their presence in a “sensitive location.” A sensitive location means the following physical places: healthcare facilities including hospitals, doctors’ offices, urgent care facilities, and community health clinics; pharmacies; domestic violence shelters; food pantries; housing/emergency shelters; educational institutions; political party offices; legal services offices; union offices; and places of worship.
  • Processing personal information to train ADMT for a significant decision, or to train facial recognition, biometric, or other technology to verify identity. This recognizes the heightened privacy risks associated with developing systems that may later be deployed at scale.

What is Involved in Completing a Risk Assessment?

For businesses engaged in activities with personal information that will require a risk assessment, it is important to note that there are a number of steps set forth in the new CCPA regulations for performing those assessments. These include:

  • Determining which stakeholders should be involved in the risk assessment process and the nature of that involvement
  • Establishing appropriate purposes and objectives for conducting the risk assessment
  • Satisfying timing and record keeping obligations
  • Preparing risk assessment reports that meet certain content requirements
  • Timely submitting certifications of required risk assessments to the CPPA

In Part 2 of this post, we will discuss the requirements above to help businesses that must perform one or more risk assessments develop a process for doing so.

The new CCPA regulations represent a fundamental shift toward proactive privacy governance under the CCPA. Rather than simply reacting to consumer requests and data breaches, covered businesses must now systematically evaluate and document the privacy implications of their data processing activities before they begin. With compliance deadlines approaching in 2026, organizations should begin now to establish the cross-functional processes, documentation practices, and governance structures necessary to meet these new obligations.

According to Cybersecurity Dive, artificial intelligence is no longer experimental technology: more than 70% of S&P 500 companies now identify AI as a material risk in their public disclosures, according to a recent report from The Conference Board. In 2023, that percentage was just 12%.

The article reports that major companies are no longer just testing AI in isolated pilots; they’re embedding it across core business systems including product design, logistics, credit modeling, and customer-facing interfaces. At the same time, it is important to note, these companies acknowledge confronting significant security and privacy challenges, among others, in their public disclosures.

  • Reputational Risk: Leading the way is reputational risk, with more than a third of companies worried about potential brand damage. This concern centers on scenarios like service breakdowns, mishandling of consumer privacy, or customer-facing AI tools that fail to meet expectations.
  • Cybersecurity Risk: One in five S&P 500 companies explicitly cite cybersecurity concerns related to AI deployment. According to Cybersecurity Dive, AI technology expands the attack surface, creating new vulnerabilities that malicious actors can exploit. Compounding these risks, companies face dual exposure—both from their own AI implementations and from third-party AI applications.
  • Regulatory Risk: Companies are also navigating a rapidly shifting legal landscape as state and federal governments scramble to establish guardrails while supporting continued innovation.

One of the biggest drivers of these risks, perhaps, is a lack of governance. PwC’s 2025 Annual Corporate Director’s Survey reveals that only 35% of corporate boards have formally integrated AI into their oversight responsibilities—a clear indication that governance structures are struggling to keep pace with technological deployment.

Not surprisingly, innovation seems to be moving quite a bit faster than governance, and that gap is contributing to the risks identified by much of the S&P 500. Extrapolating from that reality, there is a good chance that small and mid-sized companies are in a similar position. Enhancing governance – for example, through sensible risk assessments, robust security frameworks, and training – may help to narrow that gap.

Governor Gavin Newsom recently signed SB 446 into law, introducing significant changes to California’s data breach notification requirements. The bill establishes deadlines for notifying consumers and the state’s Attorney General when personal information of California residents has been involved in a data breach.

What’s Changed Under SB 446

Previously, California law required businesses to notify affected individuals of data breaches “without unreasonable delay.” Under SB 446, businesses must notify affected individuals within 30 calendar days of discovering or being notified of a data breach. However, the law includes some flexibility to accommodate the practical realities of incident response. Specifically, businesses may delay notification when necessary for legitimate law enforcement purposes or to determine the full scope of the breach and restore the integrity of data systems.

For breaches affecting more than 500 California residents, existing law requires businesses to notify the California Attorney General. SB 446 adds a deadline for those notifications. Specifically, the California Attorney General must be notified within 15 calendar days of notifying affected consumers of a security breach (again, for breaches affecting more than 500 California residents).
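For incident-response planning, the two deadlines can be modeled as simple date offsets. The sketch below is illustrative only; it assumes the 30-day clock runs from the date of discovery, does not model the statute’s permitted delays, and uses hypothetical helper names and example dates.

from datetime import date, timedelta
from typing import Optional

def consumer_notice_due(discovery_date: date) -> date:
    # SB 446: notify affected individuals within 30 calendar days of discovering,
    # or being notified of, the breach (permitted delays not modeled here).
    return discovery_date + timedelta(days=30)

def ag_notice_due(consumer_notice_date: date, ca_residents_affected: int) -> Optional[date]:
    # Notify the California Attorney General within 15 calendar days of notifying
    # consumers, but only for breaches affecting more than 500 California residents.
    if ca_residents_affected > 500:
        return consumer_notice_date + timedelta(days=15)
    return None  # no AG deadline below the 500-resident threshold

# Hypothetical example: breach discovered February 2, 2026, affecting 1,200 California residents;
# assume consumers are notified on the due date.
discovery = date(2026, 2, 2)
consumers_notified = consumer_notice_due(discovery)
print(consumers_notified)                        # 2026-03-04
print(ag_notice_due(consumers_notified, 1200))   # 2026-03-19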

Considerations for Businesses

All 50 states and several cities have breach notification laws, and notification requirements also exist under federal law, such as HIPAA and banking regulations. Over the years, many of those laws have been updated in several respects – notification deadlines, definitions of personal information, requirements to provide ID theft services and credit monitoring, etc. It is imperative to stay on top of these legal and compliance obligations in order to help maintain preparedness.

SB 446 takes effect January 1, 2026, giving businesses a few months to review and update their incident response plans. Organizations handling California residents’ personal information should act now to ensure they can meet the 30-day notification requirement. This includes establishing clear internal procedures for breach detection, assessment, documentation, and notification.


Businesses across many industries naturally want to showcase their satisfied customers. Whether it’s a university featuring successful graduates, a retailer highlighting happy shoppers, or a healthcare facility showcasing thriving patients, these real-world testimonials can be powerful marketing tools. However, when it comes to healthcare providers subject to HIPAA, using patient images and information for promotional purposes requires careful navigation of both federal privacy rules and state law requirements.

In a recent case, the failure to comply with these requirements resulted in a $182,000 fine and a two-year compliance program for a Delaware nursing home, according to the resolution agreement.

The Office for Civil Rights (OCR), which enforces the HIPAA Privacy and Security Rules, recently announced an enforcement action that serves as an important reminder of these obligations. The case involved a nursing home that posted photographs of approximately 150 facility residents over a period of time to its social media page. These postings were part of a campaign to highlight the success residents were achieving at the nursing home. When a resident complained to OCR, the agency investigated and found the covered entity had not obtained the required HIPAA authorizations or complied with breach notification requirements. The enforcement actions that followed underscore that even seemingly benign marketing practices can trigger significant compliance issues under HIPAA.

Understanding HIPAA’s Authorization Requirements

Under HIPAA, covered entities may generally use and disclose protected health information (PHI) for treatment, payment, and healthcare operations, and certain other purposes, without patient authorization. Marketing activities, however, fall outside these permissible uses. In the OCR investigation, the covered entity didn’t simply share photographs—it also disclosed information about residents’ care to tell “success stories” of patients at its facilities. This combination of visual identification and health information, according to the OCR, constituted a use of PHI requiring express patient authorization under HIPAA.

The authorization requirement isn’t merely a technicality. HIPAA authorizations must meet specific regulatory standards, such as a clear description of the information to be disclosed, the purpose of the disclosure, and a date or event after which the authorization will cease to be valid. A patient’s informal agreement or willingness to participate doesn’t satisfy these requirements.

The Breach Notification Complication

The OCR investigation revealed another compliance failure: not providing the required breach notification. Under HIPAA’s Breach Notification Rule, a disclosure not permitted under the Privacy Rule can constitute a reportable breach requiring notification to affected individuals and potentially to OCR and the media. This means that a marketing misstep can go beyond just failing to get an authorization.

Lessons from Social Media Cases

This isn’t an isolated concern. Similar issues have arisen when healthcare providers, such as dentists and other practitioners, responded to patient complaints on platforms like Google and Yelp. Well-intentioned responses that acknowledge treating a patient or try to resolve the patient’s concerns can violate HIPAA. These cases make clear that covered entities must think carefully about any use or disclosure of patient information outside the core functions of treatment, payment, and healthcare operations, even when the patient may have disclosed the same information already.

State Law Adds Another Layer, Including for Regulation of AI and Biometrics

HIPAA compliance alone may not be sufficient, particularly when potentially more stringent protections exist under state law. Many states have laws and common law obligations requiring consent before using a person’s image or likeness for commercial purposes, as well as specifics concerning what that consent should look like. Covered entities must ensure they’re meeting both HIPAA authorization requirements and any applicable state law consent requirements. They also should be sure to understand the technologies they are using, including whether they are inadvertently collecting biometric data.

Looking ahead, covered entities should be aware that several states have begun enacting or amending laws addressing how businesses can use digital replicas of individuals, particularly in the AI context. As healthcare organizations increasingly adopt AI technologies, questions about using patient images or data to create or train AI systems will require careful analysis under both existing HIPAA rules and these emerging state laws.

The Bottom Line

The message for HIPAA covered entities is clear: think before you post, promote, or publicize the good work you do for your patients. Even when patients are willing participants in marketing efforts, formal HIPAA authorizations and state law consents may be required. The cost of non-compliance—including financial settlements, required corrective action plans, and reputational harm—far exceeds the investment in proper authorization processes. When in doubt about whether patient information can be used for a particular purpose, covered entities should consult with privacy counsel to ensure full compliance with both federal and state requirements.

Recently, California’s Governor signed Assembly Bill (AB) 45, which builds on existing California laws, such as the Confidentiality of Medical Information Act, that protect individuals seeking certain healthcare services. AB 45 takes effect January 1, 2026.

Specifically, the law prohibits the collection, use, disclosure, sale, sharing, or retention of personal information of a natural person located at, or within the precise geolocation of, a “family planning center” – in general, a clinic or center that provides reproductive health care services.

Some exceptions apply, such as to perform services or provide goods requested by the natural person, or as provided in a collective bargaining agreement. Also, the proscription described above does not apply to covered entities and business associates as defined under HIPAA, although for the exception to apply to business associates, they must be contractually obligated to comply with all state and federal privacy laws.

Persons aggrieved by a violation of this prohibition have a private right of action, which permits treble damages and recovery of attorneys’ fees and costs.

AB 45 also makes it unlawful to, directly or through a third party, geofence for certain purposes an entity that provides certain in-person health care services (e.g., medical, surgical, psychiatric, mental health, behavioral health, preventative, rehabilitative, supportive, consultative, referral). Those purposes include, but are not limited to, identifying the persons receiving such services or sending notifications or advertisements to such persons. Any person who violates this section can be subject to a $25,000 penalty per violation.

However, there are several exceptions, such as:

  • The owner of an in-person health care entity may geofence its own location to provide necessary health care services.
  • Geofencing is conducted solely for certain approved research purposes that comply with applicable federal regulations.
  • Geofencing either by (I) labor organizations if the geofencing does not result in the labor union’s collection of names or personal information without the express consent of an individual and is for activities concerning workplace conditions, worker or patient safety, labor disputes, or organizing, or (II) a third party vendor, including, but not limited to, a social media platform, that collects personal information from a labor organization solely to carry out the purposes in (I).

AB 45 also provides protections for personally identifiable research records developed for the kind of research described above. Those protections provide that such records may not be released in response to another state’s law enforcement activities, including subpoenas or requests, that would interfere with certain rights of a person, such as under California’s Reproductive Privacy Act.   

Federal and state laws, including under HIPAA, continue to expand protections for information related to health services, including whether or not a person is receiving services, as well as the types of services, such as reproductive health services. Persons or entities seeking to collect, process, or share this information need to be aware of this growing patchwork of law.

If you have questions about AB 45 or related issues, contact a Jackson Lewis attorney to discuss.

On September 17, 2025, the Florida Agency for Health Care Administration (AHCA) will hold its first public meeting to discuss proposed rules designed to enhance transparency and preparedness around health care information system breaches. AHCA is Florida’s agency responsible for the state’s Medicaid program, the licensure of the state’s health care facilities, and the sharing of health care data through the Florida Center of Health Information and Policy Analysis.

The proposed rules would apply broadly to a wide range of licensed health care providers and facilities under AHCA’s regulatory authority. This includes, among others, hospitals, nursing homes, assisted living facilities, ambulatory surgical centers, hospice providers, home health agencies, intermediate care facilities for individuals with developmental disabilities, clinical laboratories, rehabilitation centers, and health care clinics. In practice, nearly every licensed entity that delivers health care services in Florida or participates in Medicaid could be subject to the new obligations if approved.

Key Provisions

Mandatory Breach Reporting. Providers would be required to report “information technology incidents” to AHCA within 24 hours of having a reasonable belief that an incident may have occurred. For this purpose, an information technology incident means:   

an observable occurrence or data disruption or loss in an information technology system or network that permits or is caused by unauthorized access of data in electronic form. Good faith access by an authorized employee does not constitute an information technology incident, provided that the data is not used in an unauthorized manner or for an unauthorized purpose.

Notably, the reporting obligation is not limited to an unauthorized access or acquisition of protected health information. Also, reports would need to be submitted through the Agency’s adverse incident reporting system using a standardized form. This short timeframe signals the Agency’s intent to receive timely information about potential breaches that could affect patient care or compromise sensitive health information.

Written Continuity Plans. Providers covered by the rule would need to maintain a written “continuity plan.” This plan is defined as a detailed policy that sets out procedures to maintain critical operations and essential patient care services during any disruption of normal operations.

Importantly, according to the proposed rules, continuity plans must not only include a process for performing redundant on-site and off-site data backups, but one that verifies the restorability of those backups. When facing a ransomware attack, for example, it is little help to have backed-up files if the organization cannot restore them.

Additionally, the continuity plan must include procedures for restoring critical systems and patient services, and securely restoring backed-up data.

Post-Incident Documentation. Upon AHCA’s request, providers would be obligated to furnish documentation relating to an information technology incident. This could include police or forensic investigation reports, internal policies, details of the information disclosed, remedial measures taken, and the provider’s continuity plan. The rule is intended to ensure that providers not only respond to incidents but also demonstrate how they investigated, contained, and addressed them.

However, in many cases, some of these materials are prepared at the direction of counsel in anticipation of litigation and are subject to the attorney-client privilege. Providers concerned about the disclosure of such materials, which could include confidential business and proprietary information, as well as sensitive information about the organization’s IT infrastructure, should consult with counsel.

Next Steps

If adopted, the proposed rule would impose significant operational and compliance requirements on Florida’s licensed health care providers. Facilities and organizations subject to AHCA licensure should review their current cybersecurity incident response procedures, reporting mechanisms, and continuity planning to ensure they align with the proposed requirements.

The rapid adoption of AI notetaking and transcription tools has transformed how organizations (and individuals) capture, analyze, and share meeting and other content. But as these technologies expand, so too do the legal and compliance risks. A recent putative class action lawsuit filed in federal court in California against Otter.ai, a leading provider of AI transcription services, highlights the potential pitfalls for organizations relying on these tools.

The Complaint Against Otter.ai

Filed in August 2025, Brewer v. Otter.ai alleges that Otter’s “Otter Notetaker” and “OtterPilot” services recorded, accessed, and used the contents of private conversations without obtaining proper consent. According to the complaint, the AI-powered notetaker:

  • Joins Zoom, Google Meet, and Microsoft Teams meetings as a participant and transmits conversations to Otter in real time for transcription.
  • Records meeting participants’ conversations even if they are not Otter accountholders. The lead plaintiff in this case is not an Otter accountholder.
  • Uses those recordings to train Otter’s automatic speech recognition (ASR) and machine learning models.
  • Provides little or no notice to non-accountholders and shifts the burden of obtaining permissions onto its accountholders.

The lawsuit asserts a wide range of claims, including violations of:

  • Federal law: the Electronic Communications Privacy Act (ECPA) and the Computer Fraud and Abuse Act (CFAA).
  • California law: the California Invasion of Privacy Act (CIPA), the Comprehensive Computer Data Access and Fraud Act, common law intrusion upon seclusion and conversion, and the Unfair Competition Law (UCL).

The plaintiffs allege that Otter effectively acted as an unauthorized third-party eavesdropper, intercepting communications and repurposing them for product training without consent.

Key Legal Takeaways

The Otter.ai complaint underscores several important legal themes that organizations using AI notetakers should carefully consider:

  1. Consent Gaps Are a Liability
    Under California wiretap laws, recording or intercepting communications typically requires the consent of all parties. The complaint emphasizes that Otter sought permission only from meeting hosts (and sometimes not even them), but not from all participants. This “single-consent” model is risky in states like California that require all-party consent.
  2. Secondary Use of Data Raises Privacy Risks
    Beyond transcription, Otter allegedly used recorded conversations to train its AI models. Even if data is “de-identified,” the complaint notes that de-identification is imperfect, particularly with voice data and conversational context. Organizations allowing vendors to reuse data for training AI models should scrutinize whether proper disclosures and consents exist.
  3. Vendor Contracts and Shifting Responsibility
    Otter’s privacy policy placed responsibility on accountholders to obtain permissions from others before capturing or sharing data. Courts may find this approach insufficient, especially when the vendor is the party processing and monetizing the data.
  4. Unfair Business Practices
    Plaintiffs also claim that Otter’s conduct violated California’s Unfair Competition Law by depriving individuals of control over their data while enriching the company. This theory—loss of data value as a consumer injury—could gain traction in privacy-related class actions.

Broader Risks for Organizations Using AI Notetakers

Even if an organization is not the technology provider, using AI notetaking tools in the workplace creates real risk. Companies should consider:

  • Employee and Third-Party Notice: Are employees, clients, or customers clearly informed when AI notetakers are in use? Does the notice satisfy federal and state recording laws?
  • Consent Management: Is the organization obtaining and documenting consent where required? What about meetings that cross jurisdictions with differing consent rules?
  • Confidentiality and Privilege: If a meeting involves sensitive legal, HR, or business discussions, does the use of third-party AI notetakers risk waiving attorney-client privilege or exposing trade secrets?
  • Data Use, Security, and Retention: How does the vendor store, use, and share transcription data? Who has access to them? Do they contain personal information that must be safeguarded? Can recordings be deleted upon request? Are they used for training or product development?
  • Comparative Practices: Some vendors offer features that allow any participant to pause or prevent recording—an important safeguard. Organizations should evaluate whether their chosen tool provides these protections.

Practical Steps for Risk Mitigation

Organizations should take proactive measures when adopting AI notetakers:

  1. Conduct a Legal Review: Assess whether recording practices align with ECPA, state wiretap laws, and international requirements (such as GDPR).
  2. Update Policies: Ensure meeting and privacy policies address the use of AI notetakers, including requirements for notice and consent.
  3. Review Vendor Agreements: Negotiate contractual limits on data use, retention, and training.
  4. Consider Potential Use Cases: The nature and content of the discussion captured by the AI notetaker can trigger a range of other legal, compliance, and contractual obligations. Additionally, consider the organization’s position when third parties, such as customers or job applicants, use AI notetakers during a meeting.
  5. Enable Safeguards: Where possible, configure tools to require pre-meeting notices and allow participants to decline recording.
  6. Train Employees: Make sure staff understand when and how to use AI transcription tools appropriately, especially in sensitive contexts.

Conclusion

The Brewer v. Otter.ai complaint is a reminder that AI notetaking tools carry both benefits and significant risks. Organizations leveraging these technologies must balance efficiency with compliance—ensuring that recording, consent, and data-use practices align with evolving privacy and other laws.