For businesses subject to the California Consumer Privacy Act (CCPA), a compliance step often overlooked is the requirement to annually update the businesses online privacy policy. Under Cal. Civ. Code § 1798.130(a)(5), CCPA-covered businesses must among other things update their online privacy policies at least once every 12 months. Note that CCPA regulations establish content requirements for online privacy policies, one of which is that the policy must include “the date the privacy policy was last updated.” See 11 CCR § 7011(e)(4).

As businesses continue to grow, evolve, adopt new technologies, or otherwise make online and offline changes in their business, practices, and/or operations, CCPA required privacy policies may no longer accurately or completely reflect the collection and processing of personal information. Consider, for example, the adoption of emerging technologies, such as so-called “artificial intelligence” tools. These tools may be collecting, inferring, or processing personal information in ways that were not contemplated when preparing the organization’s last privacy policy update.

The business also may have service providers that collect and process personal information on behalf of the business in ways that are different than they did when they began providing services to the business.

Simply put: If your business (or its service providers) has adopted any new technologies or otherwise changed how it collects or processes personal information, your privacy policy may need an update.

Practical Action Items for Businesses

Here are some steps businesses can take to comply with the annual privacy policy review and update requirement under the CCPA:

  • Inventory Personal Information
    Reassess what categories of personal information your organization collects, processes, sells, and shares. Consider whether new categories—such as biometric, geolocation, or video —have been added.
  • Review Data Use Practices
    Confirm whether your uses of personal information have changed since the last policy update. This includes whether you are profiling, targeting, or automating decisions based on the data.
  • Assess adoption of new technologies, such as AI and New Tech Tools
    Has your business adopted any new technologies or systems, such as AI applications? Examples may include:
    • AI notetakers, transcription, or summarization tools for use in meetings (e.g., Otter, Fireflies)
    • AI used for chatbots, personalized recommendations, or hiring assessments
  • Evaluate Third Parties and Service Providers
    Are you sharing or selling information to new third parties? Has your use of service providers changed, or have service providers changed their practices around the collection or processing of personal information?
  • Review Your Consumer Rights Mechanisms
    Are the methods for consumers to submit access, deletion, correction, or opt-out requests clearly stated and functioning properly?

These are only a few of the potential recent developments that may drive changes in an existing privacy policy. There may be additional considerations for businesses in certain industries and departments within those businesses that should be considered as well. Here are a few examples:

Retail Businesses

  • Loyalty programs collecting purchase history and predictive analytics data.
  • More advanced in-store cameras and mobile apps collecting biometric or geolocation information.
  • AI-driven customer service bots that gather interaction data.

Law Firms

  • Use of AI notetakers or transcription tools during client calls.
  • Remote collaboration tools that collect device or location data.
  • Marketing platforms that profile client interests based on website use.

HR Departments (Across All Industries)

  • AI tools used for resume screening and candidate profiling.
  • Digital onboarding platforms collecting sensitive identity data.
  • Employee productivity and monitoring software that tracks usage, productivity, or location.

The online privacy policy is not just a static compliance document—it’s a dynamic reflection of your organization’s data privacy practices. As technologies evolve and regulations expand, taking time once a year to reassess and update your privacy disclosures is not only a legal obligation in California but a strategic risk management step. And, while we have focused on the CCPA in this article, inaccurate or incomplete online privacy policies can elevate compliance and litigation risks under other laws, including the Federal Trade Commission Act and state protections against deceptive and unfair business practices.

On June 20, 2025, Texas Governor Greg Abbott signed SB 2610 into law, joining a growing number of states that aim to incentivize sound cybersecurity practices through legislative safe harbors. Modeled on laws in states like Ohio and Utah, the new Texas statute provides that certain businesses that “demonstrate[] that at the time of the breach the entity implemented and maintained a cybersecurity program” meeting the requirements in the new law may be shielded from exemplary (punitive) damages in the event of a data breach lawsuit.

This development comes amid a clear uptick in data breach class action litigation across the country. Notably, plaintiffs’ attorneys are no longer just targeting large organizations following breaches that expose millions of records. Recent cases have been filed against small and midsize businesses, even when the breach affected relatively few individuals.

What the Texas Law Does

SB 2610 erects a shield from liability to protect certain businesses (those under 250 employees) from exemplary damages in a tort action resulting from a data breach. That shield applies only if the business demonstrates that at the time of the breach the entity implemented and maintained a cybersecurity program that meets certain requirements, which may include compliance with a recognized framework (e.g., NIST, ISO/IEC 27001). This is not immunity from all liability—it applies only to punitive damages—but it can be a significant limitation on financial exposure.

This is a carrot, not a stick. The law does not impose new cybersecurity obligations or penalties. Instead, it encourages proactive investment in cybersecurity by offering meaningful protection when incidents occur despite those efforts.

Why the Size of the Entity Isn’t the Whole Story

A unique aspect of the Texas law is that it scales cybersecurity expectations in part based on business size. Specifically, for businesses with fewer than 20 employees, a “reasonable” cybersecurity program may mean something different than it does for one between 100 and 250 employees. But here’s the problem: Many businesses with small employee counts handle large volumes of sensitive data.

Consider:

  • A 10 employee law firm managing thousands of client files, including Social Security numbers and health records;
  • A small dental practice storing patient health histories and billing information;
  • A title or insurance agency processing mortgage, escrow, or policy documents for hundreds of customers each month.

These entities may employ fewer than 20 people but process exponentially more personal information than a 250-employee manufacturing plant. In this context, determining what qualifies as “reasonable” cybersecurity must focus on data risk, not just employee headcount.

Takeaways for Small and Midsize Organizations

  • Don’t assume you’re too small to be a target: Plaintiffs’ firms are increasingly focused on any breach with clear damages and weak safeguards—regardless of business size.
  • Adopt a framework: Implementing a recognized cybersecurity framework not only enhances your defense posture but could also help limit damages in litigation.
  • Document, document, document: The presumption under SB 2610 is available only if the business can demonstrate it created and followed a written cybersecurity program at the time of the breach.
  • Review annually: As threat landscapes evolve, your security program must adapt. Static programs are unlikely to satisfy the “reasonable conformity” standard over time.

Final Thought

Texas’s new law reinforces a growing national trend: states are rewarding—not just punishing—cybersecurity efforts. But the law also raises the bar for smaller businesses that may have historically viewed cybersecurity as a lower priority. If your organization handles personal data, no matter how many employees you have, it’s time to treat cybersecurity as a critical business function—and an essential legal shield.

Montana recently amended its privacy law through Senate Bill 297, effective October 1, 2025, strengthening consumer protections and requiring businesses to revisit their privacy policies that apply to citizens of Montana. Importantly, it lowered the threshold for applicability to persons and businesses who control or process the personal data of 25,000 or more consumers (previously 50,000), unless the controller uses that data solely for completing payments. For those who derive more than 25% of gross revenue from the sale of personal data, the threshold is now 15,000 or more consumers (previously 25,000).

With the amendments, nonprofits are no longer exempt unless they are set up to detect and prevent insurance fraud. Insurers are now similarly exempt.

When a consumer requests confirmation that a controller is processing their data, the controller can no longer disclose but must identify possession of: (1) social security numbers, (2) ID numbers, (3) financial account numbers, (4) health insurance or medical identification numbers, (5) passwords, security questions, or answers, or (6) biometric data.

Privacy notices must now include: (1) personal data categories, (2) controller’s purpose in possessing personal data, (3) categories controller sells or shares with third parties, (4) categories of third parties, (5) contact information for the controller, (6) explanation of rights and how to exercise them, and (7) the date privacy notice was last updated. Privacy notices must be accessible to and usable to people with disabilities and available in each language in which the controller provides a product or service. Any material changes to the controller’s privacy notice or practices require notices to affected consumers and the opportunity to withdraw consent. Notices need not be Montana-specific, but controllers must conspicuously post them on websites, in mobile applications, or through whatever medium the controller interacts with customers.

The amendments further clarified information the attorney general must publicly provide, including an online mechanism for consumers to file complaints. Further, the attorney general may now issue civil investigative demands and need not issue any notice of violation or provide a 60-day period for the controller to correct the violation.

Artificial Intelligence (AI) is transforming businesses—automating tasks, powering analytics, and reshaping customer interactions. But like any powerful tool, AI is a double-edged sword. While some adopt AI for protection, attackers are using it to scale and intensify cybercrime. Here’s a high-level discussion at emerging AI-powered cyber risks in 2025—and steps organizations can take to defend.

AI-Generated Phishing & Social Engineering

Cybercriminals now use generative AI to craft near-perfect phishing messages—complete with accurate tone, logos, and language—making them hard to distinguish from real communications . Voice cloning tools enable “deepfake” calls from executives, while deepfake video can simulate someone giving fraudulent instructions.

Thanks to AI, according to Tech Advisors, phishing attacks are skyrocketing—phishing surged 202% in late 2024, and over 80% of phishing emails now incorporate AI, with nearly 80% of recipients opening them. These messages are bypassing filters and fooling employees.

Adaptive AI-Malware & Autonomous Attacks

It is not just the threat actors but the AI itself that drives the attack. According to Cyber Defense Magazine reporting:  

Compared to the traditional process of cyber-attacks, the attacks driven by AI have the capability to automatically learn, adapt, and develop strategies with a minimum number of human interventions. These attacks proactively utilize the algorithms of machine learning, natural language processing, and deep learning models. They leverage these algorithms in the process of determining and analyzing issues or vulnerabilities, avoiding security and detection systems, and developing phishing campaigns that are believable.

As a result, attacks that once took days now unfold in minutes, and detection technology struggles to keep up, permitting faster, smarter strikes to slip through traditional defenses.

Attacks Against AI Models Themselves

Cyberattacks are not limited to business email compromises designed to effect fraudulent transfers or to demand a ransom payment in order to suppress sensitive and compromising personal information. Attackers are going after AI systems themselves. Techniques include:

  • Data poisoning – adding harmful or misleading data into AI training sets, leading to flawed outputs or missed threats.
  • Prompt injection – embedding malicious instructions in user inputs to hijack AI behavior.
  • Model theft/inversion – extracting proprietary data or reconstructing sensitive training datasets.

Compromised AI can lead to skipped fraud alerts, leaked sensitive data, or disclosure of confidential corporate information. Guidance from NIST, Adversarial Machine Learning A Taxonomy and Terminology of Attacks and Mitigations, digs into these quite a bit more, and outlines helpful mitigation measures.

Deepfakes & Identity Fraud

Deepfake audio and video are being used to mimic executives or trusted contacts, instructing staff to transfer funds, disclose passwords, or bypass security protocols.

Deepfakes have exploded—some reports indicate a 3,000% increase in deepfake fraud activity. These attacks can erode trust, fuel financial crime, and disrupt decision-making.

Supply Chain & Third-Party Attacks

AI accelerates supply chain attacks, enabling automated scanning and compromise of vendor infrastructures. Attackers can breach a small provider and rapidly move across interconnected systems. These ripple-effect attacks can disrupt entire industries and critical infrastructure, far beyond the initial target. We have seen these effects with more traditional supply chain cyberattacks. AI will only amplify these attacks.  

Enhancing Cyber Resilience, Including Against AI Risks

Here’s some suggestions for stepping up defenses and mitigating risk:

  1. Enhance Phishing Training for AI-level deception
    Employees should recognize not just misspellings, but hyper-realistic phishing, voice calls, and video
 impersonations. Simulations should evolve to reflect current AI tactics.
  2. Inventory, vet, and govern AI systems
    Know which AI platforms you use—especially third-party tools. Vet them for data protection, model integrity, and update protocols. Keep a detailed registry and check vendor security practices. Relying on a vendor’s SOC report simply may not be sufficient, particularly is not read carefully and in context.
  3. Validate AI inputs and monitor outputs
    Check training data for poisoning. Test and stress AI models to spot vulnerabilities. Use filters and anomaly detection to flag suspicious inputs or outputs.
  4. Use AI to defend against AI
    Deploy AI-driven defensive tools—like behavior-based detection, anomaly hunting, and automated response platforms—so you react in real time.
  5. Adopt zero trust and multi-factor authentication (MFA)
    Require authentication for every access, limit internal privileges, and verify every step—even when actions appear internal.
  6. Plan for AI-targeted incidents
    Update your incident response plan with scenarios like model poisoning, deepfake impersonation, or AI-driven malware. Include legal, communications, and other relevant stakeholders in your response teams.
  7. Share intelligence and collaborate
    Tap into threat intelligence communities, “Information Sharing and Analysis Centers” or “ISACs”, to share and receive knowledge of emerging AI threats.

Organizations that can adapt to a rapidly changing threat landscape will be better position to defend against these emerging attack vectors and mitigate harm.

A recent breach involving Indian fintech company Kirana Pro serves as a reminder to organizations worldwide: even the most sophisticated cybersecurity technology cannot make up for poor administrative data security hygiene.

According to a June 7 article in India Today, KiranaPro suffered a massive data wipe affecting critical business information and customer data. The company’s CEO believes the incident was likely the result of a disgruntled former employee, though he has not ruled out the possibility of an external hack, according to reporting. TechCrunch explained:

The company confirmed it did not remove the employee’s access to its data and GitHub account following his departure. “Employee offboarding was not being handled properly because there was no full-time HR,” KiranaPro’s chief technology officer, Saurav Kumar, confirmed to TechCrunch.

Unfortunately, this is not a uniquely Indian problem. Globally, organizations invest heavily in technical safeguards—firewalls, multi-factor authentication, encryption, endpoint detection, and more. These tools are essential, but not sufficient.

The Silent Risk of Inactive Accounts

One of the most common (and preventable) vectors for insider incidents or credential abuse is failure to promptly deactivate system access when an employee departs. Whether termination is amicable or not, if a former employee retains credentials to email, cloud storage, or enterprise software, the organization is vulnerable. These accounts may be exploited intentionally (as suspected in the KiranaPro case) or unintentionally if credentials are stolen or phished later.

Some organizations assume their IT department is handling these terminations automatically. Others rely on inconsistent handoffs between HR, legal, and IT teams. Either way, failure to follow a formal offboarding checklist—and verify deactivation—may be a systemic weakness, not a fluke.

It’s Not Just About Tech—It’s About Governance

This breach illustrates the point that information security is as much about governance and process as it is about technology. Managing who has access to what systems, when, and why is a core component of security frameworks such as NIST, ISO 27001, and the CIS Controls. In fact, user access management—including timely revocation of access upon employee separation—is a foundational expectation in every major cybersecurity risk assessment.

Organizations should implement the following best practices:

  1. Establish a formal offboarding procedure. Involve HR, IT, and Legal to ensure immediate deactivation of all accounts upon separation.
  2. Automate user provisioning and deprovisioning where possible, using identity and access management (IAM) tools.
  3. Maintain a system of record for all access rights. Periodically audit active accounts and reconcile them against current employees and vendors.
  4. Train supervisors and HR personnel to notify IT or security teams immediately upon termination or resignation. There also may be cases where monitoring an employee’s system activity in anticipation of termination may be prudent.

The Takeaway

Wherever your company does business and regardless of industry, the fundamentals are the same: a lapse in basic access control can cause as much damage as a ransomware attack. The KiranaPro incident is a timely cautionary tale. Organizations must view cybersecurity not only as a technical discipline but as an enterprise-wide responsibility.

In today’s hybrid and remote work environment, organizations are increasingly turning to digital employee management platforms that promise productivity insights, compliance enforcement, and even behavioral analytics. These tools—offered by a growing number of vendors—can monitor everything from application usage and website visits to keystrokes, idle time, and screen recordings. Some go further, offering video capture, geolocation tracking, AI-driven risk scoring, sentiment analysis, and predictive indicators of turnover or burnout.

While powerful, these platforms also carry real legal and operational risks if not assessed, configured, and governed carefully.

Capabilities That Go Beyond Traditional Monitoring

Modern employee management tools have expanded far beyond “punching in,” reviewing emails, and tracking websites visited. Depending on the features selected and how the platform is configured, employers may have access to:

  • Real-time screen capture and video recording
  • Automated time tracking and productivity scoring
  • Application and website usage monitoring
  • Keyword or behavior-based alerts (e.g., data exfiltration risks)
  • Behavioral biometrics or mouse/keyboard pattern analysis
  • AI-based sentiment or emotion detection
  • Geolocation or IP-based presence tracking
  • Surveys and wellness monitoring tools

Not all of these tools are deployed in every instance, and many vendors allow companies to configure what they monitor. Some important questions arise, such as who at the company is making the decisions on how to configure the tool, what data is collected, is the collection permissible, who has access , how are decisions made using that data, and what safeguards are in place to protect the data. But even limited use can present privacy and employment-related risks if not governed effectively.

Legal and Compliance Risks

While employers generally have some leeway to monitor their employees on company systems, existing and emerging law, particularly concerning AI, along with considering best practices, employee relations, and other factors should help with developing some guidelines.

  • Privacy Laws: State and international privacy laws (like the California Consumer Privacy Act, GDPR, and others) may require notice, consent, data minimization, and purpose limitation. Even in the U.S., where workplace privacy expectations are often lower, secretive or overly broad monitoring can trigger complaints or litigation.
  • Labor and Employment Laws: Monitoring tools that disproportionately affect certain groups or are applied inconsistently may prompt discrimination or retaliation claims. Excessive monitoring activities could trigger bargaining obligations and claims concerning protected concerted activity.
  • AI-Driven Features: Platforms that employ AI or automated decision-making—such as behavioral scoring or predictive analytics—may be subject to emerging AI-specific laws and guidance, such as New York City’s Local Law 144, Colorado’s AI Act, and AI regulations recently approved by the California Civil Rights Department under the Fair Employment and Housing Act (FEHA) concerning the use of automated decision-making systems.
  • Data Security and Retention: These platforms collect sensitive behavioral data. If poorly secured or over-retained, that data could become a liability in the event of a breach or internal misuse.

Governance Must Extend Beyond IT

Too often, these tools are procured and managed primarily, sometimes exclusively, by IT or security teams without broader organizational involvement. Given the nature of data these tools collect and analyze, as well as their potential impact on members of a workforce, a cross-functional approach is a best practice.

Involving stakeholders from HR, legal, compliance, data privacy, etc., can have significant benefits not only at the procurement and implementation stages, but also throughout the lifecycle of these tools. This includes regular reviews of feature configurations, access rights, data use, decision making, and staying abreast of emerging legal requirements.

Governance considerations should include:

  • Purpose Limitation and Transparency: Clear internal documentation and employee notices should explain what is being monitored, why, and how the information will be used.
  • Access Controls and Role-Based Permissions: Not everyone needs full access to dashboards or raw monitoring data. Access should be limited to what’s necessary and tied to a specific function.
  • Training and Oversight: Employees who interact with the monitoring dashboards must understand the scope of permitted use. Misuse of the data—whether for personal curiosity, retaliation, or outside policy—should be addressed appropriately.
  • Data Minimization and Retention Policies: Avoid “just in case” data collection. Align retention schedules with actual business need and regulatory requirements.
  • Ongoing Review of Vendor Practices: Some vendors continuously add or enable new features that may shift the risk profile. Governance teams should review vendor updates and periodically reevaluate what’s enabled and why.

A Tool, Not a Silver Bullet

Used thoughtfully, employee management platforms can be a valuable part of a company’s compliance and productivity strategy. But they are not “set it and forget it” solutions. The insights they provide can only be trusted—and legally defensible—if there is strong governance around their use.

Organizations must manage not only their employees, but also the people and tools managing their employees. That means recognizing that tools like these sit at the intersection of privacy, ethics, security, and human resources—and must be treated accordingly.

The Oklahoma State Legislature recently enacted Senate Bill 626, amending its Security Breach Notification Act, effective January 1, 2026, to address gaps in the state’s current cybersecurity framework (the “Amendment”).  The Amendment includes new definitions, mandates reporting to the state Attorney General, clarifies compliance with similar laws, and provides revised penalty provisions, including affirmative defenses.

Definitions

The Amendment provides clearer definitions related to security breaches, specifying what constitutes “personal information” and “reasonable safeguards.”

  • Personal Information:  The existing definition for “Personal Information” was expanded to also include (1) a unique electronic identifier or routing code in combination with any required security code, access code, or password that would permit access to an individual’s financial account and (2) unique biometric data such as a fingerprint, retina or iris image, or other unique physical or digital representation of biometric data to authenticate a specific individual.
  • Reasonable Safeguards:  The Amendment provides an affirmative defense in a civil action under the law for individuals or entities that have “Reasonable safeguards” in place, which are defined as “policies and practices that ensure personal information is secure, taking into consideration an entity’s size and the type and amount of personal information. The term includes, but is not limited to, conducting risk assessments, implementing technical and physical layered defenses, employee training on handling personal information, and establishing an incident response plan”.

Mandated Reporting and Exceptions

In the new year, entities required to provide notice to impacted individuals under the law in case of a breach will also be required to notify the Attorney General. The notification must include specific details including, but not limited to, the type of personal information impacted the nature of the breach, the number of impacted individuals, the estimated monetary impact of the breach to the extent such can be determined, and any reasonable safeguards the entity employs. The notification to the Attorney General must occur no more than 60 days after notifying affected residents.

However, breaches affecting fewer than 500 residents, or fewer than 1,000 residents in the case of credit bureaus, are exempt from the requirement to notify the Attorney General.

In addition, an exception from individual notification is provided for entities that comply with notification requirements under the Oklahoma Hospital Cybersecurity Protection Act of 2023 or the Health Insurance Portability and Accountability Act of 1996 (HIPAA) if such entities provide the requisite notice to the Attorney General.

What Entities Should Do Now

  1. Inventory data.  Conduct an inventory to determine what personal information is collected given the newly covered data elements.
  • Review and update policies and practices.  Reevaluate and update current information security policies and procedures to ensure proper reasonable safeguards are in place.  Moreover, to ensure that an entity’s policies and procedures remain reasonably designed, they should be periodically reviewed and updated.

If you have any questions about the revisions to Oklahoma’s Security Breach Notification Act or related issues, contact a Jackson Lewis attorney to discuss.

“Our cars know how fast you’re driving, where you’re going, how long you stay there. They know where we work, they know whether we stop for a drink on the way home, whether we worship on the weekends, and what we do on our lunch hours.” OR Representative David Gomberg

The Oregon Legislature recently enacted House Bill 3875, amending the Oregon Consumer Privacy Act (OCPA) effective September 28. 2025, to broaden its scope to include motor vehicle manufacturers and their affiliates that control or process personal data from a consumer’s use of a vehicle or its components.

While this expansion is clear in its application to vehicle manufacturers, it raises important questions for automobile dealerships, particularly those “affiliated”—formally or informally—with manufacturers. Dealerships should consider whether they may now be subject to the full scope of Oregon’s privacy law. Of course, they may be subject directly to the OCPA in their own right.

The Amendment: HB 3875

HB 3875 modifies ORS 646A.572 to extend the OCPA’s privacy obligations to:

“A motor vehicle manufacturer or an affiliate of the motor vehicle manufacturer that controls or processes personal data obtained from a consumer’s use of a motor vehicle or a vehicle’s technologies or components.”

Who Counts as an “Affiliate”?

To determine whether a dealership is subject to these new obligations, one must examine the OCPA’s definition of affiliate:

“Affiliate” means a person that, directly or indirectly through one or more intermediaries, controls, is controlled by or is under common control with another person such that:

      (a) The person owns or has the power to vote more than 50 percent of the outstanding shares of any voting class of the other person’s securities;

      (b) The person has the power to elect or influence the election of a majority of the directors, members or managers of the other person;

      (c) The person has the power to direct the management of another person; or

      (d) The person is subject to another person’s exercise of the powers described in paragraph (a), (b) or (c) of this subsection.

This definition introduces some ambiguity for dealerships. Many dealerships operate as independent businesses, even if they sell only one manufacturer’s vehicles and display that brand prominently. While they may be contractually tied to a manufacturer, they may not meet the legal standard of being controlled by or under common control with that manufacturer as described in the definition.

However, certain dealership groups—particularly those owned or operated by manufacturers or holding companies—may clearly fall within the definition of “affiliate.”

Dealerships should evaluate their corporate structure and agreements with manufacturers to determine whether this definition might apply to them.

Why This Matters

Entities subject to the OCPA must comply with a range of privacy requirements, including:

  • Providing transparent privacy notices
  • Obtaining consumer consent for data collection and sharing under certain circumstances
  • Offering consumer rights such as access, correction, deletion, and data portability
  • Implementing reasonable data security measures

These obligations extend to any personal data collected through vehicle technologies, such as navigation systems, driver behavior analytics, location data, and mobile app integrations.

Federal Context: FTC Enforcement

Dealerships should also remain aware of federal obligations. Under the Gramm-Leach-Bliley Act (GLBA), auto dealers engaged in leasing or financing must follow privacy and safeguard rules enforced by the Federal Trade Commission (FTC).

The FTC has published detailed guidance for auto dealers, including:

What Dealerships Should Do Now

Even if a dealership is not legally an “affiliate” under the OCPA or subject to a similar state comprehensive privacy law,  the trend toward regulating vehicle-generated data suggests it’s time to proactively review data practices. Dealerships should:

  1. Conduct a data inventory to identify what personal data is collected, especially from connected vehicle systems.
  2. Update privacy notices and practices in accordance with state and federal law.
  3. Review contracts with manufacturers and vendors for data-sharing provisions and compliance obligations.
  4. Train staff on new privacy responsibilities and how to respond to consumer data requests.

California lawmakers have proposed new legislation to reshape the growing use of artificial intelligence (AI) in the workplace. While this bill aims to protect workers, employers have expressed concerns about how it might affect business efficiency and innovation.

What Does California’s Senate Bill 7 (SB 7) Propose?

SB 7, also known as the “No Robo Bosses Act,” introduces several key requirements and provisions restricting how employers use automated decision systems (ADS) powered by AI. These systems are used in making employment-related decisions, including hiring, promotions, evaluations, and terminations. The pending bill seeks to ensure that employers use these systems responsibly and that AI only assists in decision-making rather than replacing human judgment entirely.

The bill is significant for its privacy, transparency, and workplace safety implications, areas that are fundamental as technology becomes more integrated into our daily work lives.

Privacy and Transparency Protections

SB 7 includes measures to safeguard worker privacy and ensure that personal data is not misused or mishandled. The bill prohibits the use of ADS to infer or collect sensitive personal information, such as immigration status, religious or political beliefs, health data, sexual or gender orientation, or other statuses protected by law. These limitations could significantly limit an employer’s ability to use ADS to streamline human resources administration, even if the ADS only assists but does not replace human decision making. Notably, the California Consumer Privacy Act, which treats applicants and employees of covered businesses as consumers, permits the collection of such information.

Additionally, if the bill is enacted, employers and vendors will have to provide written notice to workers if an ADS is used to make employment-related decisions that affect them. The notice must provide a clear explanation of the data being collected and its intended use. Affected workers also must receive a notice after an employment decision is made with ADS. This focus on transparency aims to ensure that workers are aware of how their data is being used.

Workplace Safety

Beyond privacy, SB 7 also highlights workplace safety by prohibiting the use of ADS that could violate labor laws or occupational health and safety standards. Employers would need to make certain that ADS follow existing safety regulations, and that this technology does not compromise workplace health and safety. Additionally, ADS restrictions imposed by this pending bill could affect employers’ ability to proactively address or monitor potential safety risks with the use of AI.

Oversight & Enforcement

SB 7 prohibits employers from relying primarily on an ADS for significant employment-related decisions, such as hiring and discipline, and requires human involvement in the process. The bill grants workers the right to access and correct their data used by ADS, and they can appeal ADS employment-related decisions. A human reviewer must also evaluate the appeal. Employers cannot discriminate or retaliate against a worker for exercising their rights under this law.

The Labor Commissioner would be responsible for enforcing the bill, and workers may bring civil actions for alleged violations. Employers may face civil penalties for non-compliance.

What’s Next?

While SB 7 attempts to keep pace with the evolution of AI in the workplace, there will likely be ongoing debate about these proposed standards and which provisions will ultimately become law. Jackson Lewis will continue to monitor the status of SB 7.

If you have questions about California’s pending legislation and how it could affect your organization, contact a Jackson Lewis attorney to discuss.

A recent series of articles by the International Association of Privacy Professionals discusses a trend in privacy litigation focused on breach of contract and breach of warranty claims.  

Practical Takeaways

  • Courts are increasingly looking at website privacy policies, terms of use, privacy notices, and other statements from organizations and assessing breach of contract and warranty claims when individuals allege businesses failed to uphold their stated (or unstated) data protection promises (or obligations).
  • To avoid such claims, businesses should review their data privacy and security policies and public statements to ensure they accurately reflect their data protection practices, invest in robust security measures, and conduct regular audits to maintain compliance.

Privacy policies are no longer just formalities; they can become binding commitments. Courts are scrutinizing these communications to determine whether businesses are upholding their promises regarding data protection. Any discrepancies between stated policies and actual practices can lead to breach of contract claims. In some cases, similar obligations can be implied through behavior or other circumstances and create a contract.

There are several ways these types of claims arise. The following outlines the concepts that plaintiffs are asserting:

  • Breach of Express Contract: These claims arise when a plaintiff alleges a business failed to adhere to the specific terms outlined in their privacy policies. For example, if a company promises to “never” share user data with third parties, but does so.
  • Breach of Implied Contract: Even in the absence of explicit terms, businesses can face claims based on implied contracts. This occurs when there is an expectation of privacy and/or security based on the nature of the relationship between the business and its customers.
  • Breach of Express Warranty: Companies that make specific assurances about the security and confidentiality of user data can be held liable if they fail to meet these assurances.
  • Breach of Implied Warranty: These claims are based on the expectation that a company’s data protection measures will meet certain standards of quality and reliability.

How to avoid being a target:

  1. Ensure Accuracy in Privacy Policies, Notices, Terms: Even if a business takes the steps described below and others to strengthen its data privacy and security safeguards, those efforts still may be insufficient to support strong statements concerning such safeguards made in policies, notices, and terms. Accordingly, businesses should carefully review and scrutinize their privacy policies, notices, terms, and conditions for collecting, processing, and safeguarding personal information. This effort should involve the drafters of those communications working with IT, legal, marketing, and other departments to ensure the communications are clear, accurate, and reflective of their actual data protection practices.
  2. Assess Privacy and Security Expectations and Obligations. As noted above, breach of contract claims may not always arise from express contract terms. Businesses should be aware of circumstances that might suggest an agreement with customers concerning their personal information, and then work to address the contours of that promise.
  3. Strengthen Data Privacy and Security Protections. A business may be comfortable with its public privacy policies and notices, feel that it has satisfied implied obligations, but still face breach of contract or warranty claims. In that case, having a mature and documented data privacy and security program can go a long way toward strengthening the business’s defensible position. Such a program includes adopting comprehensive privacy and security practices and regularly updating them to address new threats. At a minimum, the program should comply with applicable regulatory obligations, as well as industry guidelines. The business should regularly review the program, its practices, changes in service, etc., as well as publicly stated policies and notices, as well as customer agreements, to ensure that data protection measures align with stated policies.