A healthcare provider delivering pain management services in Florida and other states faces a $1.19 million civil monetary penalty from the U.S. Department of Health and Human Services (HHS), Office for Civil Rights (OCR). The OCR investigation stems from a data breach, but not the type of breach we are used to seeing in the news – it was not a ransomware attack, business email compromise, or some other type of attack by an unknown hacker. Similar to many other OCR enforcement actions, however, a lack of basic safeguards under the Security Rule drove the penalty.

According to the OCR:

  • On May 3, 2018, the covered entity retained an independent contractor to provide business consulting services.
  • The contractor’s services ceased in August of 2018.
  • On February 20, 2019, the covered entity discovered that on three occasions, between September 7, 2018, and February 3, 2019, the contractor impermissibly accessed the provider’s electronic medical record (EMR) system and the electronic protected health information (ePHI) of approximately 34,310 individuals. The contractor used that information to generate approximately 6,500 false Medicare claims.
  • On February 21, 2019, the covered entity terminated the independent contractor’s access to its systems, and in early April of that same year filed a breach report with OCR. The report stated that the compromised PHI included names, addresses, phone numbers, email addresses, dates of birth, Social Security numbers, chart numbers, insurance information, and primary care information.

According to the OCR, the contractor evidently continued to have access to the covered entity’s information systems for approximately six months after its services ended.

“Current and former workforce can present threats to health care privacy and security—risking continuity of care and trust in our health care system,” said OCR Director Melanie Fontes Rainer. “Effective cybersecurity and compliance with the HIPAA Security Rule means being proactive in reviewing who has access to health information and responding quickly to suspected security incidents.” 

The OCR commenced an investigation and reported findings that the covered entity:

  • did not conduct a thorough and accurate risk analysis prior to the breach incident, or until September 30, 2022, more than three years after the incident,
  • had not implemented policies and procedures to regularly review records of information system activity containing ePHI,
  • did not implement termination procedures designed to remove access to ePHI for workforce members who had separated (a simplified access-review sketch follows this list), and
  • did not implement policies and procedures addressing access to workstations.
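
As a simplified illustration of the access-review and deprovisioning practices reflected in these findings (the account names and dates below are hypothetical, not drawn from the OCR resolution agreement), a periodic reconciliation of separation records against active accounts in systems holding ePHI might look like this:

```python
from datetime import date

# Hypothetical records; real data would come from HR and identity systems.
separated_workforce = {
    "contractor_jdoe": date(2018, 8, 31),   # date services ended
}

active_accounts = {
    "contractor_jdoe": {"system": "EMR", "last_login": date(2019, 2, 3)},
    "nurse_asmith": {"system": "EMR", "last_login": date(2019, 2, 20)},
}

def accounts_to_disable(active, separated):
    """Flag active accounts belonging to separated workforce members."""
    flagged = []
    for user, info in active.items():
        if user in separated:
            flagged.append((user, info["system"], separated[user]))
    return flagged

for user, system, end_date in accounts_to_disable(active_accounts, separated_workforce):
    print(f"Disable {user} on {system}; services ended {end_date}")
```

A routine job of this kind, paired with documented termination procedures and regular review of system activity records, speaks directly to the gaps the OCR identified.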

It is worth noting that the $1.19 million penalty comes after a reduction for “Recognized Security Practices” (RSPs). Recall that following an amendment enacted in 2021, the HITECH Act now requires the OCR to take into account Recognized Security Practices in connection with certain enforcement and audit activities under the HIPAA Security Rule. In short, if a covered entity can demonstrate that Recognized Security Practices were in place continuously for the 12 months prior to a security incident, a reduction in the amount of the civil monetary penalty may be warranted.

In this case, OCR provided the covered entity an opportunity to demonstrate that it had RSPs in place. The covered entity did so, and OCR applied a reduction to the penalty.

Regulated entities, including healthcare providers, often point to “controls” they have in place, believing they are sufficient to address their compliance obligations. This application of the rule for Recognized Security Practices is a good example of why that is not always the case. That is, while it is important to maintain good controls, those efforts still need to be measured against the applicable compliance requirements, such as those set forth under the HIPAA Security Rule.

No organization can eliminate data breach risk altogether, regardless of industry or size, and even organizations that have taken significant steps to safeguard their systems and train employees to avoid phishing attacks remain exposed. Perhaps the most significant reason these risks remain: third-party service providers and vendors.

For most businesses, particularly small to medium-sized businesses, service providers play a critical role in helping to manage and grow their customers’ businesses.

Consider vacation rental and property management businesses. Whether operating an active website, maintaining online reservation and property management platforms, or recruiting and managing a growing workforce, these businesses wind up collecting, processing, and storing large amounts of personal information.

With a national occupancy rate of approximately 56%, a vacation rental company with 100 units for weekly rental might expect to collect personal information from about 5,000 individuals annually (conservatively, 25 weeks rented × 2 persons per rental × 100 properties). My crude math leaves out website visitors, cancellations, employees (and their family members), and other factors. After three years in business, the company might easily be storing personal information of 15,000 to 20,000 individuals in its systems.
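
For readers who want to adjust the assumptions, the back-of-the-envelope math above can be expressed as follows (the occupancy, party size, and property figures are the illustrative ones used above, not data from any actual business):

```python
# Rough estimate of individuals whose personal information a
# vacation rental business might hold after a few years.
weeks_rented_per_unit = 25   # conservative; ~56% of 52 weeks is closer to 29
persons_per_rental = 2
properties = 100
years_in_business = 3

individuals_per_year = weeks_rented_per_unit * persons_per_rental * properties
total_after_three_years = individuals_per_year * years_in_business

print(individuals_per_year)       # 5,000 individuals per year
print(total_after_three_years)    # 15,000 individuals after three years
```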

There are plenty of good online resources to help protect vacation rental (VR) businesses from online scams, including those that could lead to a data breach. “Vacation Rental Scams: 20 Red Flags for Spotting Hoax Guests” and “How to Protect Your Vacation Rental from Phishing Attacks” by Lodgify are good examples.

But what happens when the VR business’ guest and/or employee data is breached while in the possession of a vendor?

Last year, as reported on the Maine Attorney General’s Office website, Resort Data Processing (RDP) experienced a data breach affecting over 60,000 individuals caused by a “SQL injection vulnerability which allowed an unauthorized third party to redirect payment card information from in-process transactions on [RDP’s] clients’ on-premises Internet Reservation Module (“IRM”) server.” Affected individuals likely included consumers who stayed at properties owned by RDP’s business customers. At least, that is what one plaintiffs’ law firm advertised about the incident.
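
The vulnerability described in that notice points to a well-understood coding practice: never build SQL statements by concatenating untrusted input. A minimal sketch using Python’s standard sqlite3 module and a hypothetical reservations table (not RDP’s actual code or schema) shows the difference between a vulnerable query and a parameterized one:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE reservations (guest_name TEXT, card_last4 TEXT)")
conn.execute("INSERT INTO reservations VALUES ('Pat Guest', '4242')")

guest_input = "Pat Guest' OR '1'='1"  # attacker-controlled value

# Vulnerable: string concatenation lets the input rewrite the query.
vulnerable_sql = (
    "SELECT card_last4 FROM reservations WHERE guest_name = '" + guest_input + "'"
)
print(conn.execute(vulnerable_sql).fetchall())  # returns every row

# Safer: a parameterized query treats the input as data, not SQL.
safe_sql = "SELECT card_last4 FROM reservations WHERE guest_name = ?"
print(conn.execute(safe_sql, (guest_input,)).fetchall())  # returns nothing
```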

Addressing this risk can be daunting, especially for small businesses that may feel as though they have insufficient bargaining power to influence contract terms with their vendors. But there are several strategies these organizations might consider to strengthen their position and minimize compliance and litigation risks.

  • Identify all third parties that collect, access, or maintain personal information on behalf of the business.
  • Investigate what personal information they access, collect, and maintain and assess how to minimize that information.
  • Make cybersecurity a part of the procurement process. Don’t be afraid to ask pointed questions and seek evidence of the vendor’s privacy and cybersecurity policies and procedures. This should be part of the value proposition the vendor brings to the table.
  • Review service agreements to see what changes might be possible to protect the company.
  • A vendor still may have a breach, so plan for it. Remember, the affected data may be owned by the business and not the vendor, making the business responsible for notification and related obligations. The business may be able to assign those obligations to the vendor, but it likely will be the business’ responsibility to ensure the incident response steps taken by the vendor are compliant.

Experienced and effective counsel can be instrumental here, both with negotiating stronger terms in service agreements and improving preparedness in the event of a data breach.

Massachusetts’ highest court recently issued an opinion that delves into the complex intersection of privacy law and modern technology. The case centers on whether the collection and transmission of users’ web browsing activities to third parties without their consent constitutes a violation of the Massachusetts Wiretap Act.

However, the claim is not unique to Massachusetts. In recent years, plaintiffs in California, Pennsylvania, and Florida have filed claims under state-specific statutes and the federal Wiretap Act, alleging violations when data is collected and shared without the consent of the individual website visitor.

The Massachusetts Court’s analysis hinged on the definitions of “communication” and “interception” under the state wiretap act. The term “communication” presented a particular challenge, as the Court found it ambiguous in the context of web browsing activities. The Court ultimately concluded that web browsing activities do not clearly fall under the statutory definition of “communication.”

In examining the legislative history of the wiretap act, the Court noted that it was primarily concerned with the secret interception of person-to-person conversations and messaging, rather than interactions with a website. This historical perspective further supported the Court’s decision to rule in favor of the website owners.

As a result, the Court reversed the lower court’s denial of defendants’ motions to dismiss, concluding that the alleged conduct did not fall within the wiretap act’s purview. While courts in some states have begun to dismiss website-tracking claims under wiretap laws similar to Massachusetts’, this is likely not the end of such claims, especially in states where motions to dismiss them have been regularly denied. Similarly, as other privacy laws proliferate, plaintiffs’ counsel will have new rules under which to attempt to bring claims.

Notwithstanding the Massachusetts Court’s ruling, website owners should take steps to avoid potential risks of privacy claims related to the use of tracking technology. First, they should assess and understand the tracking or monitoring technologies in use on their website.  Once the applicable technologies are understood, website owners should consider ways to ensure transparency – clearly informing users about the types of tracking technologies being used and their purposes. This may be achieved in a number of ways, including through a comprehensive privacy policy, website banner, and/or cookie notice. Further, website owners should analyze and, as applicable, implement a means to obtain consent from users before deploying tracking or monitoring technologies.
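
As a minimal, framework-agnostic sketch of that consent step (the tag URL and consent keys below are hypothetical), a site might emit its tracking tag only after the visitor has opted in:

```python
# Only emit a tracking tag when the visitor has affirmatively consented,
# for example via a cookie banner choice stored with their session.
TRACKING_SNIPPET = '<script src="https://analytics.example/tag.js"></script>'  # hypothetical tag

def tracking_html(consent_choices: dict) -> str:
    """Return the analytics tag only if the visitor opted in to analytics cookies."""
    if consent_choices.get("analytics") is True:
        return TRACKING_SNIPPET
    return ""  # nothing is loaded without consent

print(tracking_html({"analytics": True}))   # tag is included
print(tracking_html({"analytics": False}))  # no tracking loaded
```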

Implementing robust data security measures to protect the collected data from unauthorized access or breaches is also essential. Regularly reviewing and updating privacy practices to comply with evolving regulations, such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), and the myriad of other state consumer data privacy laws, can further safeguard against potential claims.

On November 8, 2024, the California Privacy Protection Agency (CPPA) voted to proceed with formal rulemaking regarding artificial intelligence (AI) and cybersecurity audits. This comes on the heels of the California Civil Rights Department moving forward with its own regulations about AI.

The current version of the proposed regulations covers several areas:

  1. Automated Decision-Making Technology (ADMT):

The current draft regulations propose establishing consumers’ rights to access and opt out of businesses’ use of ADMT.

They also require businesses to disclose their use of ADMT and provide meaningful information about the logic involved, as well as the significance and potential consequences of such processing for the consumer.

  2. Cybersecurity Audits:

The draft regulations propose mandating certain businesses to conduct annual cybersecurity audits to ensure compliance with the California Consumer Privacy Act (CCPA) and other relevant regulations, and specify the criteria and standards for these audits, including the scope, methodology, and reporting requirements.

  3. Risk Assessments:

The draft regulations require businesses to perform regular risk assessments to identify and mitigate potential privacy risks associated with their data processing activities.

Under the regulations, businesses would need to document their risk assessment processes and findings, and make these available to the CPPA upon request.

  4. Insurance Regulations:

The draft regulations clarify when insurance companies must comply with the CCPA, ensuring that consumer data handled by these entities is adequately protected.

The proposed regulations will enter a 45-day public comment period, during which stakeholders can submit written and oral comments.  The CPPA will hold public hearings to gather additional feedback and discuss potential revisions to the proposed rules.

After the public comment period, the CPPA will review all feedback and make necessary adjustments to the regulations. This stage may involve multiple rounds of revisions and additional public consultations.

Once the CPPA finalizes the regulations, they will be submitted to the Office of Administrative Law (OAL) for review and approval. If approved, the regulations are expected to become effective by mid-2025.

The California Civil Rights Council published its most recent version of proposed revisions to the Fair Employment and Housing Act (FEHA) regulations that address automated decision-making, and extended the comment period to 30 days. You can read more about the proposed revisions here from Jackson Lewis Attorneys Sayaka Karitani and Robert Yang.

Governor Newsom recently signed two significant bills focused on protecting digital likeness rights: Assembly Bill (AB) 1836 and Assembly Bill (AB) 2602. These legislative measures aim to address the complex issues surrounding the commercial use of an individual’s digital likeness and establish guidelines for responsible AI use in the digital age.

California AB 1836 addresses the use of likeness and digital replica rights for various individuals and establishes regulatory safeguards for digital replicas and avatars used in commercial settings. The bill outlines the following key provisions:

  • The law defines digital replicas as any digital representation of an individual that is created using their likeness, voice, or other personal attributes.
  • Explicit consent must be obtained from individuals before their digital replicas can be used for any commercial purpose. Consent must be documented and cannot be implied or assumed.
  • The law restricts the use of digital replicas in contexts that could mislead or deceive consumers, including political endorsements, commercial advertisements, and other public statements without the individual’s explicit consent.
  • Violations of AB 1836 can result in significant penalties, including fines and potential civil lawsuits. The bill empowers individuals to seek damages if their digital replicas are used without consent.

AB 2602 complements AB 1836 by further strengthening the legal framework surrounding digital replicas. AB 2602 specifically addresses the following aspects:

  • Sets forth stringent privacy protections for individuals whose digital replicas are used in any capacity. This includes safeguarding personal data and ensuring that digital replicas are not exploited for unauthorized purposes.
  • Mandates that any use of digital replicas must be accompanied by clear disclosures indicating the nature of the replica and the purpose for which it is being used. This ensures that consumers are informed and not misled.
  • Imposes harsher penalties for violations, including higher fines and longer statutes of limitations for filing civil lawsuits. It also provides for criminal charges in severe cases of misuse.
  • Businesses using digital replicas must undergo regular third-party audits to verify compliance with AB 2602. These audits will help maintain transparency and accountability.

While California has taken a pioneering step with AB 1836 and AB 2602, other states have also enacted or proposed legislation to address digital replica rights. The following are examples of how different states handle these rights:

  • New York: New York has robust laws protecting individuals’ rights to their likeness and voice. The state’s Civil Rights Law Sections 50 and 51 provide individuals with the right to control the commercial use of their image and voice. Explicit consent is required for any commercial usage, similar to AB 1836.
  • Florida: Florida’s statutes also protect individuals’ rights to their likeness. The Florida Statutes Section 540.08 mandates that explicit consent must be obtained for using an individual’s name, photograph, or likeness for commercial purposes. The law provides a framework similar to California’s AB 1836.
  • Illinois: Illinois has the Right of Publicity Act, which prohibits the unauthorized use of an individual’s identity for commercial purposes. The Act is comprehensive, covering various aspects of an individual’s persona, including their voice, signature, photograph, and likeness. Violations can lead to considerable fines and damages.
  • Texas: Texas recognizes an individual’s right to control the use of their likeness and voice through the Texas Property Code Section 26.001, which requires written consent for commercial use. The law is designed to protect individuals from unauthorized exploitation of their persona.

Employers and businesses must be aware of the following takeaways from AB 1836, AB 2602, and other state legislation to ensure compliance and avoid potential legal repercussions:

  • Obtain Clear Consent: Employers must implement mechanisms to obtain clear and documented consent from employees or any individuals whose digital replicas will be used for commercial activities, such as marketing. Employers might consider following a similar rule whenever using an employee’s digital replica.
  • Review Existing Practices: It is essential for businesses to review their current practices involving digital replicas and ensure they align with the new legal requirements. This includes updating contracts and privacy policies to meet the standards set by different states.
  • Train Employees: Businesses should provide training to employees on the implications of these laws and the importance of obtaining consent before using digital replicas. This training should cover the specific requirements of the states in which the business operates.
  • Monitor Compliance: Establish a compliance monitoring system to regularly check that all practices involving digital replicas adhere to the provisions of AB 1836, AB 2602, and other relevant state legislation. Regular audits and updates can help maintain compliance across multiple jurisdictions.

California Assembly Bills 1836 and 2602 mark significant developments in the realm of digital replica rights, emphasizing the need for explicit consent, transparency, and enhanced privacy protections. If you have questions about AB 1836 and 2602 or related issues, contact a Jackson Lewis attorney to discuss.

Artificial Intelligence (AI) has created numerous opportunities for growth and economic development throughout California. However, the unregulated use of AI can open a Pandora’s box of undesirable consequences, and a regulatory framework that produces inconsistent results will likely create problems of its own. Acknowledging this, the most recent session of the California legislature included a bevy of bills aimed at regulating the use of AI, including a formal, legal definition of AI to be used across various California statutes.

On September 28, 2024, Governor Newsom signed Assembly Bill (AB) 2885, which defines AI as

an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.

The purpose of this definition is to standardize the definition of AI across various California statutes, including the California Business and Professions Code, Education Code, and Government Code.  According to the California legislature, this definition is broad enough to cover all conceivable uses of AI, yet it limits what is considered AI solely to “engineered or machine-based systems” (i.e., not biological organisms).  Moving forward, we can expect the legislature to continue using this definition of AI as it navigates the novel legal issues that arise in our ever-evolving technological world.

The amendments made by this bill take effect January 1, 2025.

Announcing its fourth ransomware cybersecurity investigation and settlement, the Office for Civil Rights (OCR) also observed there has been a 264% increase in large ransomware breaches since 2018.

Here, the OCR reached an agreement with a medium-sized private healthcare provider to resolve potential violations of the HIPAA Security Rule following a ransomware attack. The settlement included a payment of $250,000 and a promise by the covered entity to take certain steps regarding the security of PHI.

“Cybercriminals continue to target the health care sector with ransomware attacks. Health care entities that do not thoroughly assess the risks to electronic protected health information and regularly review the activity within their electronic health record system leave themselves vulnerable to attack, and expose their patients to unnecessary risks of harm,” said OCR Director Melanie Fontes Rainer.

In this case, the OCR announcement states that nearly 300,000 patients were affected by the ransomware attack. As in most OCR investigations under similar circumstances, the agency examined the covered entity’s compliance with the Security Rule. And, as described in many of its settlements, the OCR focuses on the administrative, physical, and/or technical standards it believes the covered entity or business associate failed to satisfy. By focusing on these actions now, a covered entity facing an OCR investigation, perhaps because of a ransomware or other data breach, likely will be in a more defensible position.

These actions include: 

  • Conduct an accurate and thorough risk analysis to determine the potential risks and vulnerabilities to the confidentiality, integrity, and availability of its ePHI; 
  • Implement a risk management plan to address and mitigate security risks and vulnerabilities identified in its risk analysis; 
  • Develop a written process to regularly review records of information system activity, such as audit logs, access reports, and security incident tracking reports; 
  • Develop policies and procedures for responding to an emergency or other occurrence that damages systems that contain ePHI; 
  • Develop written procedures to assign a unique name and/or number for identifying and tracking user identity in its systems that contain ePHI; and 
  • Review and revise, if necessary, written policies and procedures to comply with the HIPAA Privacy and Security Rules.  

The OCR also recommends the following steps to mitigate or prevent cyber-threats: 

  • Review all vendor and contractor relationships to ensure business associate agreements are in place as appropriate and address breach/security incident obligations. 
  • Integrate risk analysis and risk management into business processes; conduct them regularly and whenever new technologies or business operations are planned. 
  • Ensure audit controls are in place to record and examine information system activity. 
  • Implement regular review of information system activity. 
  • Utilize multi-factor authentication to ensure only authorized users are accessing ePHI. 
  • Encrypt ePHI to guard against unauthorized access (a brief encryption sketch follows this list). 
  • Incorporate lessons learned from incidents into the overall security management process. 
  • Provide training specific to the organization and job responsibilities on a regular basis; reinforce workforce members’ critical role in protecting privacy and security. 
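
On the encryption recommendation above, the point is that ePHI should be unreadable without a key if it falls into the wrong hands. A minimal sketch, assuming the widely used third-party cryptography package is an acceptable tool choice (shown for illustration only, not as an OCR-endorsed solution):

```python
# pip install cryptography
from cryptography.fernet import Fernet

# In practice the key would live in a key management system, not in code.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b"Patient: Jane Doe; MRN 000000; DOB 01/01/1970"  # illustrative ePHI
token = cipher.encrypt(record)       # ciphertext safe to store or transmit
restored = cipher.decrypt(token)     # readable again only with the key

assert restored == record
```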

Of course, taking these steps should include documenting that you took them. During an OCR investigation, the agency is not going to take your word for the good work that you and your team did. You will need to be able to show the steps taken, and that means written policies and procedures, written assessments, sign-in sheets for training and the materials covered during the training, etc.

HIPAA covered entities and business associates are not all the same, and some will be expected to have a more robust program than others. The good news is that the regulations contemplate this risk-based approach to compliance. But all covered entities and business associates need to take some action in these areas to protect the PHI they collect and maintain.

If there is one thing artificial intelligence (AI) systems need, it is data, and lots of it; training data is essential to the success of a given use case. A recent investigation by Australia’s privacy regulator into the country’s largest medical imaging provider, I-MED Radiology Network, illustrates concerns about the use of medical data to train AI systems. This investigation may offer important insights for healthcare providers in the U.S. that are also trying to leverage the benefits of AI while grappling with where those applications intersect with privacy and data security laws, including the Health Insurance Portability and Accountability Act (HIPAA).

The Australian Case: I-MED Radiology’s Alleged AI Data Misuse

The Office of the Australian Information Commissioner (OAIC) has initiated an inquiry into allegations that I-MED Radiology Network shared patient chest x-rays with Harrison.ai, a health technology company, to train AI models without first obtaining patient consent. According to reports, a leaked email indicates that Harrison.ai distanced itself from responsibility for patient consent, asserting that compliance with privacy regulations was I-MED’s obligation. Harrison.ai has since stated that the data used was de-identified and that it complied with all legal obligations.

Under Australian privacy law, particularly the Australian Privacy Principles (APPs), personal information may only be disclosed for its intended purpose or for a secondary use that the patient would reasonably expect. It remains unclear whether training AI on medical data qualifies as a “reasonable expectation” for secondary use.

The OAIC’s preliminary inquiries into I-MED Radiology may ultimately clarify how medical data can be used in AI contexts under Australian law, and may offer insights for healthcare providers across borders, including those in the United States.

HIPAA Considerations for U.S. Providers Using AI

The investigation of I-MED raises significant issues that U.S. healthcare providers, subject to HIPAA, should consider, especially given the growing adoption of AI tools in medical diagnostics and treatment. To date, the U.S. Department of Health and Human Services (HHS) has not provided any specific guidance for HIPAA covered entities or business associates concerning AI. In April 2024, HHS publicly shared its plan for promoting responsible use of artificial intelligence (AI) in automated and algorithmic systems by state, local, tribal, and territorial governments in the administration of public benefits. In October 2023, HHS and the Health Sector Cybersecurity Coordination Center (HC3) published a white paper entitled, AI-Augmented Phishing and the Threat to the Health Sector. More is expected.

HIPAA regulates the privacy and security of protected health information (PHI), generally requiring covered entities to obtain patient consent or authorization before using or disclosing PHI for purposes outside of certain exceptions, such as treatment, payment, or healthcare operations (TPO).

In the context of AI, the use of de-identified data for research or development purposes, such as training AI systems, can generally proceed without specific patient authorization, provided that the data meets HIPAA’s strict de-identification standards. HIPAA generally treats information as de-identified when all identifying information has been removed in such a way that it cannot be linked back to the individual.

However, U.S. healthcare providers must ensure that de-identification is properly executed, particularly when AI is involved, as the re-identification risks in AI models can be heightened due to the vast amounts of data processed and the sophisticated methods used to analyze it. Therefore, even when de-identified data is used, entities should carefully evaluate the robustness of their de-identification methods and consider whether additional safeguards are needed to mitigate any risks of re-identification.
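
As a simplified sketch of what Safe Harbor-style de-identification involves before records reach an AI training pipeline (the field names below are hypothetical, and a real program would need to address all 18 identifier categories plus HIPAA’s “actual knowledge” condition), the core step is stripping or generalizing direct identifiers:

```python
from datetime import date

# Identifier fields to drop under a (partial) Safe Harbor-style rule set.
DIRECT_IDENTIFIERS = {
    "name", "address", "phone", "email", "ssn", "mrn", "full_dob",
}

def deidentify(record: dict) -> dict:
    """Return a copy of the record with direct identifiers removed and
    dates generalized to year only (ages 90+ would need further
    aggregation under Safe Harbor)."""
    clean = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    if "full_dob" in record:
        clean["birth_year"] = record["full_dob"].year
    return clean

patient = {
    "name": "Jane Doe",
    "mrn": "000123",
    "full_dob": date(1980, 4, 2),
    "diagnosis_code": "M54.5",
    "imaging_modality": "chest x-ray",
}

print(deidentify(patient))
# {'diagnosis_code': 'M54.5', 'imaging_modality': 'chest x-ray', 'birth_year': 1980}
```

Even after such filtering, the re-identification concern described above remains: combinations of the remaining fields can still act as quasi-identifiers.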

Risk of Regulatory Scrutiny

While HIPAA does not currently impose specific obligations on AI use beyond general privacy and security requirements, the I-MED case highlights how AI-driven data practices can attract regulatory attention. U.S. healthcare providers should be prepared for similar scrutiny from federal and state regulators as AI becomes more integrated into healthcare systems.

In addition, there is increasing pressure on policymakers to update healthcare privacy laws, including HIPAA, to address the unique challenges posed by AI and machine learning. Providers should stay informed about potential regulatory changes and proactively implement AI governance frameworks that ensure compliance with both current and emerging legal standards.

Conclusion: Lessons for U.S. Providers

The ongoing investigation into I-MED Radiology’s alleged misuse of medical data for AI training underscores the importance of ensuring legal compliance, patient transparency, and robust data governance in AI applications. For U.S. healthcare providers subject to HIPAA, the case offers several key takeaways:

  1. Develop/Expand Governance to Address AI: AI technologies, including generative AI, are affecting all parts of an organization, from core service delivery to IT, HR, and marketing. Different use cases will drive different considerations, making a clear yet adaptable governance structure important for ensuring compliance and minimizing organizational risk.
  2. Ensure proper de-identification: When using de-identified data for AI training, healthcare entities should verify that their de-identification methods meet HIPAA’s stringent standards and account for AI’s re-identification risks.
  3. Monitor evolving AI regulations: With increased regulatory attention on AI, healthcare providers should prepare for potential legal developments and enhance their AI governance frameworks accordingly.

By staying proactive, U.S. healthcare providers can harness the power of AI while maintaining compliance with privacy laws and safeguarding patient trust.

According to the California legislature, audio recordings, video recordings, and still images can be compelling evidence of the truth.  However, the proliferation of Artificial Intelligence (AI), specifically, generative AI, has made it drastically easier to create fake content that is almost impossible to distinguish from authentic content.  To address this concern, California’s Governor signed Senate Bill (SB) 942, which requires businesses that provide generative AI systems to make accessible tools to detect whether content was created by AI.

SB 942 defines “covered provider” as “a person [or business] that creates, codes, or otherwise produces generative artificial intelligence systems[, and] that has over 1,000,000 monthly visitors or users and is publicly accessible within the geographic boundaries of the state.”  Under SB 942, a covered provider must offer a publicly accessible AI detection tool at no cost. This tool allows users to assess whether the content was created or altered by AI and provides system provenance data (i.e., information explaining where the data originated) without revealing personal information.

Moreover, AI-generated content must include clear and conspicuous disclosures identifying it as such. Latent disclosures must also convey information about the content’s origin and authenticity, detectable by the AI detection tool.

While this law will not end the challenges employers face in trying to discern deepfakes from reality, it might help to avoid some critical missteps. Recall the disruption experienced by a school community, in particular its high school principal, in Pikesville, Maryland, when a recording suggested the principal made racially insensitive and antisemitic remarks. It took several months for the Baltimore County Police Department to investigate and conclude that the recording was a fake, a “deepfake,” generated by AI technology. The increased transparency that SB 942 could bring might have reduced or eliminated the flood of calls to the school, the heightened security, and the employment actions taken against the principal.

Violations of the act can result in civil penalties of $5,000 per violation, enforceable by the Attorney General, city attorneys, or county counsels. This means that certain California businesses that provide generative AI services should create a plan for implementing an AI detection tool that allows consumers to distinguish between AI-generated and human-created content.

Fortunately, technologies exist and are being developed to help organizations address these transparency issues. For example, the Coalition for Content Provenance and Authenticity (C2PA) “addresses the prevalence of misleading information online through the development of technical standards for certifying the source and history (or provenance) of media content.” C2PA may be used to embed metadata into AI-generated content to help verify its source and other information.
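
As a purely illustrative sketch of the provenance idea (the field names are generic JSON metadata chosen for this example, not the actual C2PA schema or any particular vendor’s API), a generative AI provider might attach a small manifest that a detection tool could later verify:

```python
import hashlib
import json
from datetime import datetime, timezone

def build_provenance_manifest(content: bytes, generator: str) -> dict:
    """Attach basic provenance details to AI-generated content.
    Field names here are illustrative, not the C2PA specification."""
    return {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generated_by": generator,
        "ai_generated": True,
        "created_at": datetime.now(timezone.utc).isoformat(),
    }

image_bytes = b"...rendered image bytes..."  # placeholder content
manifest = build_provenance_manifest(image_bytes, generator="example-image-model")

# A detection tool could recompute the hash and compare it to the manifest.
print(json.dumps(manifest, indent=2))
```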

 The requirements of SB 942 take effect January 1, 2026.