California lawmakers have proposed new legislation to reshape the growing use of artificial intelligence (AI) in the workplace. While this bill aims to protect workers, employers have expressed concerns about how it might affect business efficiency and innovation.

What Does California’s Senate Bill 7 (SB 7) Propose?

SB 7, also known as the “No Robo Bosses Act,” introduces several key requirements and provisions restricting how employers use automated decision systems (ADS) powered by AI. These systems are used in making employment-related decisions, including hiring, promotions, evaluations, and terminations. The pending bill seeks to ensure that employers use these systems responsibly and that AI only assists in decision-making rather than replacing human judgment entirely.

The bill is significant for its privacy, transparency, and workplace safety implications, areas that are fundamental as technology becomes more integrated into our daily work lives.

Privacy and Transparency Protections

SB 7 includes measures to safeguard worker privacy and ensure that personal data is not misused or mishandled. The bill prohibits the use of ADS to infer or collect sensitive personal information, such as immigration status, religious or political beliefs, health data, sexual orientation, gender identity, or other statuses protected by law. These restrictions could significantly limit an employer’s ability to use ADS to streamline human resources administration, even if the ADS only assists but does not replace human decision making. Notably, the California Consumer Privacy Act, which treats applicants and employees of covered businesses as consumers, permits the collection of such information.

Additionally, if the bill is enacted, employers and vendors will have to provide written notice to workers if an ADS is used to make employment-related decisions that affect them. The notice must provide a clear explanation of the data being collected and its intended use. Affected workers also must receive a notice after an employment decision is made with ADS. This focus on transparency aims to ensure that workers are aware of how their data is being used.

Workplace Safety

Beyond privacy, SB 7 also highlights workplace safety by prohibiting the use of ADS that could violate labor laws or occupational health and safety standards. Employers would need to make certain that ADS follow existing safety regulations, and that this technology does not compromise workplace health and safety. Additionally, ADS restrictions imposed by this pending bill could affect employers’ ability to proactively address or monitor potential safety risks with the use of AI.

Oversight & Enforcement

SB 7 prohibits employers from relying primarily on an ADS for significant employment-related decisions, such as hiring and discipline, and requires human involvement in the process. The bill grants workers the right to access and correct their data used by ADS, and they can appeal ADS employment-related decisions. A human reviewer must also evaluate the appeal. Employers cannot discriminate or retaliate against a worker for exercising their rights under this law.

The Labor Commissioner would be responsible for enforcing the bill, and workers may bring civil actions for alleged violations. Employers may face civil penalties for non-compliance.

What’s Next?

While SB 7 attempts to keep pace with the evolution of AI in the workplace, there will likely be ongoing debate about these proposed standards and which provisions will ultimately become law. Jackson Lewis will continue to monitor the status of SB 7.

If you have questions about California’s pending legislation and how it could affect your organization, contact a Jackson Lewis attorney to discuss.

A recent series of articles by the International Association of Privacy Professionals discusses a trend in privacy litigation focused on breach of contract and breach of warranty claims.  

Practical Takeaways

  • Courts are increasingly looking at website privacy policies, terms of use, privacy notices, and other statements from organizations and assessing breach of contract and warranty claims when individuals allege businesses failed to uphold their stated (or unstated) data protection promises (or obligations).
  • To avoid such claims, businesses should review their data privacy and security policies and public statements to ensure they accurately reflect their data protection practices, invest in robust security measures, and conduct regular audits to maintain compliance.

Privacy policies are no longer just formalities; they can become binding commitments. Courts are scrutinizing these communications to determine whether businesses are upholding their promises regarding data protection. Any discrepancies between stated policies and actual practices can lead to breach of contract claims. In some cases, similar obligations can be implied through behavior or other circumstances and create a contract.

There are several ways these types of claims arise. The following outlines the theories plaintiffs are asserting:

  • Breach of Express Contract: These claims arise when a plaintiff alleges a business failed to adhere to the specific terms outlined in its privacy policies. For example, a claim may arise if a company promises to “never” share user data with third parties but does so anyway.
  • Breach of Implied Contract: Even in the absence of explicit terms, businesses can face claims based on implied contracts. This occurs when there is an expectation of privacy and/or security based on the nature of the relationship between the business and its customers.
  • Breach of Express Warranty: Companies that make specific assurances about the security and confidentiality of user data can be held liable if they fail to meet these assurances.
  • Breach of Implied Warranty: These claims are based on the expectation that a company’s data protection measures will meet certain standards of quality and reliability.

How to avoid being a target:

  1. Ensure Accuracy in Privacy Policies, Notices, Terms: Even if a business takes the steps described below and others to strengthen its data privacy and security safeguards, those efforts still may be insufficient to support strong statements concerning such safeguards made in policies, notices, and terms. Accordingly, businesses should carefully review and scrutinize their privacy policies, notices, terms, and conditions for collecting, processing, and safeguarding personal information. This effort should involve the drafters of those communications working with IT, legal, marketing, and other departments to ensure the communications are clear, accurate, and reflective of their actual data protection practices.
  2. Assess Privacy and Security Expectations and Obligations. As noted above, breach of contract claims may not always arise from express contract terms. Businesses should be aware of circumstances that might suggest an agreement with customers concerning their personal information, and then work to address the contours of that promise.
  3. Strengthen Data Privacy and Security Protections. A business may be comfortable with its public privacy policies and notices and feel that it has satisfied implied obligations, but still face breach of contract or warranty claims. In that case, having a mature and documented data privacy and security program can go a long way toward strengthening the business’s defensible position. Such a program includes adopting comprehensive privacy and security practices and regularly updating them to address new threats. At a minimum, the program should comply with applicable regulatory obligations as well as industry guidelines. The business should regularly review the program, its practices, and changes in service, along with publicly stated policies, notices, and customer agreements, to ensure that data protection measures align with stated policies.

On March 10, 2025, California Attorney General Rob Bonta announced an investigative sweep targeting the location data industry, emphasizing compliance with the California Consumer Privacy Act (CCPA). This announcement follows the California legislature proposing a bill that, if passed, would impose restrictions on the collection and use of geolocation data.

Of course, concerns about geolocation tracking are not limited to California.

In California, the Attorney General’s investigation involved sending letters to advertising networks, mobile app providers, and data brokers that appear to the Attorney General to be in violation of the CCPA. These letters notify recipients of potential violations and request additional information regarding their business practices. The focus is on how businesses offer and effectuate consumers’ rights to stop the sale and sharing of personal information and to limit the use of sensitive personal information, including geolocation data.

To avoid enforcement actions, businesses in the location data industry must ensure compliance with the CCPA.

  1. Understand Consumer Rights: The CCPA grants California consumers several rights, including the right to know what personal information is being collected, the right to opt out of the sale or sharing of their personal information, and the right to delete their personal information. Because precise geolocation data is “sensitive personal information” under the CCPA, consumer rights also include the right to limit the use or disclosure of such information. Businesses must clearly communicate these rights to consumers (which includes employees).
  2. Implement Opt-Out/Limitation Mechanisms: As noted, businesses must provide consumers with the ability to opt out of the sale and sharing of their personal information, and to limit the use or disclosure of sensitive personal information. This includes implementing clear and accessible opt-out/limitation request mechanisms on websites and mobile apps. Once a consumer opts out, businesses cannot sell or share their personal information unless they receive authorization to do so again.
  3. Transparency and Accountability: Businesses must be transparent about their data collection and disclosure practices. This includes providing detailed privacy policies that explain what data is collected, how it is used, and the categories of third parties to whom it is disclosed. Additionally, businesses should be prepared to respond to inquiries from the Attorney General’s office and provide documentation of their compliance efforts.

If you have questions about the current California investigation into geolocation or need assistance in ensuring compliance with the CCPA, contact a Jackson Lewis attorney to discuss.

In 2024, Colorado passed the first comprehensive state-level law in the U.S. regulating the use of artificial intelligence, the Artificial Intelligence Act (the Act). It imposed strict requirements on developers and users of “high-risk” AI systems, particularly in sectors like employment, housing, finance, and healthcare. The Act drew criticism for its complexity, breadth, and potential to stifle innovation.

In early 2025, lawmakers introduced Senate Bill (SB) 25-318 as a response to growing concerns from the tech industry, employers, and even Governor Jared Polis, who reluctantly signed the Act into law last year.

SB 25-318 aimed to soften and clarify some of the more burdensome aspects of the original legislation before its compliance deadline of February 1, 2026.

Amendments proposed under SB 25-318 included:

  • An exception to the definition of “developer” if the person offers an AI system with open model weights and meets specified conditions.
  • Exemptions for specified technologies.
  • Elimination of the duty of a developer or deployer to use reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination and the requirement to notify the state attorney general of such risk.
  • An exemption from specified disclosure requirements for developers if they meet certain financial and operational criteria.

Despite its intention to strike a balance between innovation and regulation, SB 25-318 was voted down 5-2 by the Senate Business, Labor, and Technology Committee on May 5, 2025.

With SB 25-318 dead, the original Act remains intact, and the next step is for the Colorado Attorney General to issue rules and/or guidance. As it now stands, businesses and developers operating in Colorado must prepare for full compliance by early 2026 unless this date is otherwise extended.

If you have questions about compliance with Colorado’s Artificial Intelligence Act or related issues, contact a Jackson Lewis attorney to discuss.

Online retailer Harriet Carter Gifts recently obtained summary judgment from the district court in a class action under Pennsylvania wiretap law. At the heart of this case is the interpretation and application of the Pennsylvania Wiretapping and Electronic Surveillance Control Act of 1978 (WESCA), a statute designed to regulate the interception of electronic communications. The court’s primary task was to determine whether the actions of Harriet Carter Gifts and NaviStone constituted an unlawful interception under this law.

In 2021, the district court sided with the defendants, granting summary judgment because NaviStone was a direct party to the communications, and thus, no interception occurred under WESCA. However, the Third Circuit Court of Appeals overturned this decision. The appellate court clarified that there is no broad direct-party exception to civil liability under WESCA. Consequently, the case was remanded to determine “whether there is a genuine issue of material fact about where the interception occurred.”

On remand, the district court examined whether the plaintiff, Popa, could be deemed to have consented to the interception of her data by NaviStone through the privacy policy posted on Harriet Carter’s website. The court focused on whether the privacy policy was sufficiently conspicuous to provide constructive notice to Popa.

The enforceability of browsewrap agreements, which are terms and conditions posted on a website without requiring explicit user consent, was another critical aspect of the case. The court found that Harriet Carter’s privacy policy was reasonably conspicuous and aligned with industry standards. The court noted that the privacy policy was linked in the footer of every page on the Harriet Carter website, labeled “Privacy Statement,” and was in white font against a blue background. This placement was consistent with common industry practices in 2018 when the violation was alleged, which typically involved placing privacy policies in the footer of websites.

This led the court to conclude that Popa had constructive notice of the terms, reinforcing the notion of implicit consent. Notably, the court found implicit consent without any evidence that Popa had actual knowledge of the terms of the privacy statement.  Rather, the court found a reasonably prudent person would be on notice of the privacy statement’s terms. 

Based on these findings, the court granted summary judgment in favor of the defendants. The court determined that Popa’s WESCA claim failed because she had implicitly consented to the interception by NaviStone, as outlined in Harriet Carter’s privacy statement. 

The case of Popa v. Harriet Carter Gifts, Inc. and NaviStone, Inc. emphasizes the necessity for clear and accessible privacy policies in the digital era. It also brings attention to the complex legal issues related to user consent and the interception of electronic communications. For inquiries regarding compliance with consumer consent requirements for websites, please contact a Jackson Lewis attorney to discuss further.

In late March 2025, the Florida Bar Board of Governors unanimously endorsed the recommendation of its Special Committee on Cybersecurity and Privacy Law (Recommendation 25-1) that law firms should adopt written incident response plans (IRPs) to better prepare for and respond to data security incidents. The recommendation reflects a growing recognition across professional service industries—particularly law firms—of the serious risks posed by cyber threats and the need for structured, proactive responses.

The message is simple: law firms must be prepared.

As most practitioners will observe, it is not a matter of if an organization will experience a data breach, but when. Developing and implementing an IRP can be challenging, as the nature of legal practice poses unique hurdles, even for smaller firms. At the same time, law firms are stewards of vast amounts of highly sensitive client and employee data, often spanning multiple industries, jurisdictions, and confidentiality regimes. These data sets make law firms attractive targets for threat actors, especially those seeking access to intellectual property, litigation strategies, and regulatory or financial information, not to mention sensitive personal information.

What Makes Law Firms Different?

Unlike organizations in several other industries, law firms often lack centralized compliance infrastructures or in-house technical expertise. Client confidentiality obligations and the attorney-client privilege can complicate both the detection and disclosure of incidents. In some cases, firms may confuse confidentiality with security, when both are needed.

In addition, unlike most other professional service providers, law firms grapple with a set of comprehensive rules of professional responsibility that increasingly delve into data privacy and cybersecurity issues. Of course, those rules sit on top of generally applicable business regulation that law firms also face. See, for example, our recent discussion about the Florida Information Protection Act (FIPA) which mandates that certain entities, including law firms, implement reasonable measures to protect electronic data containing personal information.

When engaging a new client, a simple engagement letter may no longer be sufficient, especially for law firms representing certain businesses, particularly those that are heavily regulated. Consider law firms that defend medical malpractice claims. Their clients are most likely healthcare providers covered by the privacy and security regulations under the Health Insurance Portability and Accountability Act (HIPAA). That makes these firms “business associates” to the extent those services involve access to “protected health information.” Just like their healthcare provider clients, business associate law firms are required to maintain an incident response plan. 45 CFR 164.308(a)(6). So, even before Recommendation 25-1, many law firms may have already been obligated to maintain an IRP, at least with respect to certain information collected from or on behalf of certain clients.

Given these realities for law firms, the Florida Bar’s recommendation is both timely and necessary, even if not unprecedented. Notably, in 2018, the ABA issued Formal Opinion 483 which made a similar recommendation. Law firms considering an IRP should consult Formal Opinion 483.

What Should a Law Firm’s Incident Response Plan Include?

A comprehensive and tailored IRP should be risk-based and scalable to firm size, practice areas, and existing infrastructure. Here are some components all firms should consider including in their IRP:

  1. Governance and Roles
    Define the internal response team and assign roles, including legal, IT, communications, HR, and leadership. Identify outside partners, such as breach counsel, forensics, and public relations firms.
  2. Data Mapping and Risk Assessment
    Map the data your firm collects, stores, and shares. Understand where sensitive client and employee information resides and how it is secured. A risk assessment will help prioritize which systems and data are most critical.
  3. Incident Detection and Reporting
    Establish processes for identifying, reporting, and escalating suspected incidents. Time is critical when responding to ransomware, business email compromise, or other attacks. Remember to have a plan to communicate outside the firm’s existing systems, which may not be operable during an incident.
  4. Investigation and Containment
    Outline steps to contain and investigate an incident, including coordination with law enforcement, insurance carriers, forensic investigators, and legal advisors.
  5. Notification and Legal Obligations
    Address client communications, breach notification laws, ethical duties, and contractual terms that may require specific responses.
  6. Post-Incident Review and Testing
    After resolving an incident, assess what went well and what needs improvement. Regular tabletop exercises and plan updates are essential.

With cyber threats evolving and legal obligations expanding, law firms must treat incident response planning as an ethical, professional, and business imperative. The Florida Bar’s recommendation should serve as a wake-up call. By building a strong IRP, law firms can better protect client confidences, meet regulatory requirements, and preserve their professional reputation.

On March 24, 2025, Virginia’s Governor vetoed House Bill (HB) 2094, known as the High-Risk Artificial Intelligence Developer and Deployer Act. This bill aimed to establish a regulatory framework for businesses developing or using “high-risk” AI systems.

The Governor’s veto message emphasized concerns that HB 2094’s stringent requirements would stifle innovation and economic growth, particularly for startups and small businesses. The bill would have imposed nearly $30 million in compliance costs on AI developers, a burden that could deter new businesses from investing in Virginia. The Governor argued that the bill’s rigid framework failed to account for the rapidly evolving nature of the AI industry and placed an onerous burden on smaller firms lacking large legal compliance departments.

The veto of HB 2094 in Virginia reflects a broader debate in AI legislation across the United States. As AI technology continues to advance, both federal and state governments are grappling with how to regulate its use effectively.

At the federal level, AI legislation has been marked by contrasting approaches between administrations. Former President Biden’s Executive Orders focused on ethical AI use and risk management, but many of these efforts were revoked by President Trump this year. Trump’s new Executive Order, titled “Removing Barriers to American Leadership in Artificial Intelligence,” aims to foster AI innovation by reducing regulatory constraints.

State governments are increasingly taking the lead in AI regulation. States like Colorado, Illinois, and California have introduced comprehensive AI governance laws. The Colorado AI Act of 2024, for example, uses a risk-based approach to regulate high-risk AI systems, emphasizing transparency and risk mitigation. While changes to the Colorado law are expected before its 2026 effective date, it may emerge as a prototype for other states to follow.

Takeaways for Business Owners

  1. Stay Informed: Keep abreast of both federal and state-level AI legislation. Understanding the regulatory landscape will help businesses anticipate and adapt to new requirements.
  2. Proactive Compliance: Develop robust AI governance frameworks to ensure compliance with existing and future regulations. This includes conducting risk assessments, implementing transparency measures, and maintaining proper documentation.
  3. Innovate Responsibly: While fostering innovation is crucial, businesses must also prioritize ethical AI practices. This includes preventing algorithmic discrimination and ensuring the responsible use of AI in decision-making processes.

If you have questions about compliance with AI regulation, contact a member of our Artificial Intelligence Group or the Jackson Lewis attorney with whom you regularly work.

On Friday, the U.S. Department of Health and Human Services (HHS), Office for Civil Rights (OCR) announced the fifth enforcement action under its Risk Analysis Initiative. In this case, OCR reached a settlement with Health Fitness Corporation (Health Fitness), a wellness vendor providing services to employer-sponsored group health plans.

This announcement is noteworthy for several reasons: it furthers OCR’s Risk Analysis Initiative, it reminds business associates of their HIPAA compliance obligations, and it points to a significant development under ERISA for plan fiduciaries and the service providers to their plans.

The OCR Risk Analysis Initiative

Anyone who reviews prior OCR enforcement actions will notice several trends. One of those trends relates to enforcement actions following a data breach. In those cases, the OCR frequently alleges the target of the action failed to satisfy the risk analysis standard under the Security Rule. This standard is fundamental: it involves assessing the threats and vulnerabilities to electronic protected health information (ePHI), a process that helps shape the covered entity or business associate’s approach to the other standards, and it goes beyond a simple gap analysis.

“Conducting an accurate and thorough risk analysis is not only required but is also the first step to prevent or mitigate breaches of electronic protected health information,” said OCR Acting Director Anthony Archeval.  “Effective cybersecurity includes knowing who has access to electronic health information and ensuring that it is secure.”

For those wondering how committed the OCR is to its enforcement initiatives, you need not look further than its Right to Access Initiative. On March 6, 2025, the agency announced its 53rd enforcement action. According to that announcement, it involved a $200,000 civil monetary penalty imposed against a public academic health center and research university for violating an individual’s right to timely access her medical records through a personal representative.

The DOL Cybersecurity Rule

Businesses that sponsor a group health plan or other ERISA employee benefit plans might want to review the OCR’s announcement and resolution agreement concerning Health Fitness a little more carefully. In 2024, the DOL’s Employee Benefits Security Administration (EBSA) issued Compliance Assistance Release No. 2024-01. That release makes clear that the fiduciary obligation to assess the cybersecurity of plan service providers applies to all ERISA-covered employee benefit plans, including wellness programs for group health plans.

OCR commenced its investigation of Health Fitness after receiving four reports of breaches of PHI from Health Fitness over a three-month period (October 15, 2018, to January 25, 2019). According to the OCR, “Health Fitness reported that beginning approximately in August 2015, ePHI became discoverable on the internet and was exposed to automated search devices (web crawlers) resulting from a software misconfiguration on the server housing the ePHI.” Despite these breaches, according to the OCR, Health Fitness did not conduct an accurate and thorough risk analysis until January 19, 2024.

Health Fitness agreed to implement a corrective action plan that OCR will monitor for two years and paid $227,816 to OCR. For ERISA plan fiduciaries, an important question is what they must do to assess the cybersecurity of plan service providers like Health Fitness during the procurement process and beyond.

We provide some thoughts in our earlier article and want to emphasize that plan fiduciaries need to be involved in the process. Cybersecurity is often a risk left to the IT department. However, leaving it there may leave even the most ardent IT professional ill-equipped or insufficiently informed about the threats and vulnerabilities of the particular service provider. When it comes to ERISA plans, this means properly assessing the threats and vulnerabilities as they relate to the aspects of plan administration being handled by the service provider.

Third-party plan service providers and plan fiduciaries should begin taking reasonable and prudent steps to implement safeguards that will adequately protect plan data. EBSA’s guidance should help the responsible parties get there, along with the plan fiduciaries and plan sponsors’ trusted counsel and other advisors.

In February, a coalition of healthcare organizations sent a letter to President Donald J. Trump and the U.S. Department of Health and Human Services (HHS) (the Letter), urging the immediate rescission of a proposed update to the Security Rule under HIPAA. The update is aimed at strengthening safeguards for securing electronic protected health information.

According to The HIPAA Journal, the data breach trend in the healthcare industry over the past 14 years is up, not down. This is the case despite the HIPAA Security Rule having been in effect since 2005.

The HIPAA Journal goes on to provide some sobering statistics:

Between October 21, 2009, when OCR first started publishing summaries of data breach reports on its “Wall of Shame”, and December 31, 2023, 5,887 large healthcare data breaches have been reported. On January 22, 2023, the breach portal listed 857 data breaches as still under investigation. This time last year there were 882 breaches listed as under investigation, which shows OCR has made little progress in clearing its backlog of investigations – something that is unlikely to change given the chronic lack of funding for the department.

There have been notable changes over the years in the main causes of breaches. The loss/theft of healthcare records and electronic protected health information dominated the breach reports between 2009 and 2015. The move to digital record keeping, more accurate tracking of electronic devices, and more widespread adoption of data encryption have been key in reducing these data breaches. There has also been a downward trend in improper disposal incidents and unauthorized access/disclosure incidents, but data breaches continue to increase due to a massive increase in hacking incidents and ransomware attacks. In 2023, OCR reported a 239% increase in hacking-related data breaches between January 1, 2018, and September 30, 2023, and a 278% increase in ransomware attacks over the same period. In 2019, hacking accounted for 49% of all reported breaches. In 2023, 79.7% of data breaches were due to hacking incidents.

The Letter, signed by numerous healthcare organizations, outlines several key concerns regarding the proposed HIPAA Security Rule update, including:

  1. Financial and Operational Burdens: The letter argues that the proposed regulation would impose significant financial and operational burdens on healthcare providers, particularly those in rural areas. The unfunded mandates associated with the new requirements could strain the resources of hospitals and healthcare systems, leading to higher healthcare costs for patients and reduced investment in other critical areas.
  2. Conflict with Existing Law: The Letter points to an amendment to the Health Information Technology for Economic and Clinical Health (HITECH) Act, arguing the proposed enhancements to the Security Rule conflict with the HITECH Act amendment. However, the HITECH Act amendment sought to incentivize covered entities to adopt “recognized security practices” that might minimize (not necessarily eliminate) remedies for HIPAA Security Rule violations and the length and extent of audits and investigations.
  3. Timeline and Feasibility: The letter highlights concerns about the timeline for implementing the proposed requirements. The depth and breadth of the new mandates, combined with an unreasonable timeline, present significant challenges for healthcare providers. 

No doubt, the Trump Administration is intent on reducing regulation on business. However, it will be interesting to see whether it softens or even eliminates the proposed rule in response to the Letter, despite the clear trend of more numerous and damaging data breaches in the healthcare sector, and an increasing threat landscape facing all U.S. businesses.

According to one survey, Florida ranks fourth among states with the most reported data breaches. No doubt, data breaches continue to be a significant risk for all businesses, large and small, across the U.S., including in the Sunshine State. Perhaps more troubling is that class action litigation is increasingly likely to follow a data breach. A common claim in those cases is that the business did not do enough to safeguard personal information from the attack. So, Florida businesses need to know about the Florida Information Protection Act (FIPA), which mandates that certain entities implement reasonable measures to protect electronic data containing personal information.

According to a Law.com article:

The monthly average of 2023 data breach class actions was 44.5 through the end of August, up from 20.6 in 2022.

While a business may not be able to completely prevent a data breach, adopting reasonable safeguards can minimize both the risk of an attack occurring and its severity. Additionally, maintaining reasonable safeguards to protect personal information strengthens the business's defensible position should it face a government agency investigation or lawsuit after an attack.

Entities Subject to FIPA

FIPA applies to a broad range of organizations, including:

   •    Covered Entities: This encompasses any sole proprietorship, partnership, corporation, or other legal entity that acquires, maintains, stores, or uses personal information…so, just about any business in the state. There are no exceptions for small businesses.

   •    Governmental Entities: Any state department, division, bureau, commission, regional planning agency, board, district, authority, agency, or other instrumentality that handles personal information.

   •    Third-Party Agents: Entities contracted to maintain, store, or process personal information on behalf of a covered entity or governmental entity. This means that just about any vendor or third-party service provider that maintains, stores, or processes personal information for a covered entity is also covered by FIPA.

Defining “Reasonable Measures” in Florida

FIPA requires:

Each covered entity, governmental entity, or third-party agent shall take reasonable measures to protect and secure data in electronic form containing personal information.

While FIPA mandates the implementation of “reasonable measures” to protect personal information, it does not provide a specific definition, leaving room for interpretation. However, guidance can be drawn from various sources:

  •    Regulatory Guidance: More heavily regulated businesses, such as healthcare entities, can look to the federal and state frameworks that apply to them, such as the Health Insurance Portability and Accountability Act (HIPAA). Entities in the financial sector may be subject to both federal regulations, like the Gramm-Leach-Bliley Act, and state-imposed data protection requirements. The Florida Attorney General’s office may also offer insights or recommendations on what constitutes reasonable measures. Here is one example, albeit not comprehensive.
  •    Standards in Other States: Several other states have outlined more specific requirements for protecting personal information. Examples include New York and Massachusetts.

Best Practices for Implementing Reasonable Safeguards

Very often, various data security frameworks have several overlapping provisions. With that in mind, covered businesses might consider the following non-exhaustive list of best practices toward FIPA compliance. Many of the items on this list will seem obvious, even basic. But in many cases, these measures either simply have not been implemented or are not covered in written policies and procedures.

  • Conduct Regular Risk Assessments: Identify and evaluate potential vulnerabilities within your information systems to address emerging threats proactively.
  • Implement Access Controls: Restrict access to personal information to authorized personnel only, ensuring that employees have access solely to the data necessary for their roles.
  • Encrypt Sensitive Data: Utilize strong encryption methods for personal information both at rest and during transmission to prevent unauthorized access.
  • Develop and Enforce Written Data Security Policies, and Create Awareness: Establish comprehensive data protection policies and maintain them in writing. Once in place, relevant policies and procedures need to be shared with employees, along with raising awareness about the changing risk landscape.
  • Maintain and Practice Incident Response Plans: Prepare and regularly update a response plan to address potential data breaches promptly and effectively, minimizing potential damages. A plan left to sit on the shelf does little for preparedness when a real data breach hits. It is critical to conduct tabletop and similar exercises with key members of leadership.
  • Regularly Update and Patch Systems: Keep all software and systems current with the latest security patches to protect against known vulnerabilities.

By diligently implementing these practices, entities can better protect personal information, comply with Florida’s legal requirements, and minimize risk.