For businesses subject to the California Consumer Privacy Act (CCPA), a compliance step often overlooked is the requirement to annually update the business’s online privacy policy. Under Cal. Civ. Code § 1798.130(a)(5), CCPA-covered businesses must, among other things, update their online privacy policies at least once every 12 months. Note that CCPA regulations establish content requirements for online privacy policies, one of which is that the policy must include “the date the privacy policy was last updated.” See 11 CCR § 7011(e)(4).

As businesses grow, evolve, adopt new technologies, or otherwise change their practices and operations, online and off, CCPA-required privacy policies may no longer accurately or completely reflect the collection and processing of personal information. Consider, for example, the adoption of emerging technologies, such as so-called “artificial intelligence” tools. These tools may be collecting, inferring, or processing personal information in ways that were not contemplated when preparing the organization’s last privacy policy update.

The business also may have service providers that collect and process personal information on its behalf in ways that have changed since they began providing services to the business.

Simply put: If your business (or its service providers) has adopted any new technologies or otherwise changed how it collects or processes personal information, your privacy policy may need an update.

Practical Action Items for Businesses

Here are some steps businesses can take to comply with the annual privacy policy review and update requirement under the CCPA:

  • Inventory Personal Information
    Reassess what categories of personal information your organization collects, processes, sells, and shares. Consider whether new categories—such as biometric, geolocation, or video data—have been added (see the sketch following this list).
  • Review Data Use Practices
    Confirm whether your uses of personal information have changed since the last policy update. This includes whether you are profiling, targeting, or automating decisions based on the data.
  • Assess Adoption of New Technologies, Such as AI Tools
    Has your business adopted any new technologies or systems, such as AI applications? Examples may include:
    • AI notetakers, transcription, or summarization tools for use in meetings (e.g., Otter, Fireflies)
    • AI used for chatbots, personalized recommendations, or hiring assessments
  • Evaluate Third Parties and Service Providers
    Are you sharing or selling information to new third parties? Has your use of service providers changed, or have service providers changed their practices around the collection or processing of personal information?
  • Review Your Consumer Rights Mechanisms
    Are the methods for consumers to submit access, deletion, correction, or opt-out requests clearly stated and functioning properly?
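
To make the inventory step concrete, below is a minimal sketch of how a compliance team might track personal information categories in code. The categories, field names, and structure are illustrative assumptions, not a schema required by the CCPA.

    from dataclasses import dataclass

    @dataclass
    class PICategory:
        """One row of a personal information inventory (illustrative fields)."""
        category: str           # e.g., "geolocation", "biometric"
        sources: list           # where the data comes from
        purposes: list          # why it is collected
        sold_or_shared: bool    # drives "sale"/"sharing" disclosures
        new_since_last_update: bool

    inventory = [
        PICategory("geolocation", ["mobile app"], ["store locator"], False, True),
        PICategory("biometric", ["timekeeping kiosk"], ["authentication"], False, True),
        PICategory("contact info", ["web forms"], ["marketing"], True, False),
    ]

    # Categories flagged here suggest the privacy policy needs an update.
    flagged = [c.category for c in inventory if c.new_since_last_update]
    print("Categories added since last policy update:", flagged)

Even a lightweight inventory like this makes it easier to flag the categories that should drive the next privacy policy update.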

These are only a few of the potential recent developments that may drive changes in an existing privacy policy. Businesses in certain industries, and particular departments within those businesses, may face additional considerations. Here are a few examples:

Retail Businesses

  • Loyalty programs collecting purchase history and predictive analytics data.
  • More advanced in-store cameras and mobile apps collecting biometric or geolocation information.
  • AI-driven customer service bots that gather interaction data.

Law Firms

  • Use of AI notetakers or transcription tools during client calls.
  • Remote collaboration tools that collect device or location data.
  • Marketing platforms that profile client interests based on website use.

HR Departments (Across All Industries)

  • AI tools used for resume screening and candidate profiling.
  • Digital onboarding platforms collecting sensitive identity data.
  • Employee productivity and monitoring software that tracks usage, productivity, or location.

The online privacy policy is not just a static compliance document—it’s a dynamic reflection of your organization’s data privacy practices. As technologies evolve and regulations expand, taking time once a year to reassess and update your privacy disclosures is not only a legal obligation in California but a strategic risk management step. And, while we have focused on the CCPA in this article, inaccurate or incomplete online privacy policies can elevate compliance and litigation risks under other laws, including the Federal Trade Commission Act and state protections against deceptive and unfair business practices.

Montana recently amended its privacy law through Senate Bill 297, effective October 1, 2025, strengthening consumer protections and requiring businesses to revisit their privacy policies that apply to Montana residents. Importantly, the amendment lowered the applicability threshold to persons and businesses that control or process the personal data of 25,000 or more consumers (previously 50,000), unless the controller uses that data solely for completing payments. For those that derive more than 25% of gross revenue from the sale of personal data, the threshold is now 15,000 or more consumers (previously 25,000).
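
For illustration only, the amended applicability thresholds can be reduced to a short check. The function below is a sketch based on the figures described above; it simplifies the payment-transaction carve-out to a single flag and is no substitute for the statutory text.

    def montana_law_applies(consumers: int,
                            payment_only: bool,
                            pct_revenue_from_data_sales: float) -> bool:
        """Sketch of SB 297's amended thresholds (illustrative, not legal advice)."""
        if pct_revenue_from_data_sales > 25.0:
            return consumers >= 15_000      # lowered from 25,000
        if payment_only:
            return False                    # data used solely to complete payments
        return consumers >= 25_000          # lowered from 50,000

    print(montana_law_applies(30_000, payment_only=False, pct_revenue_from_data_sales=0.0))  # True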

With the amendments, nonprofits are no longer exempt unless they are set up to detect and prevent insurance fraud. Insurers are now similarly exempt.

When a consumer requests confirmation that a controller is processing their data, the controller may no longer disclose the following data elements themselves; instead, it must identify whether it possesses: (1) Social Security numbers, (2) identification document numbers, (3) financial account numbers, (4) health insurance or medical identification numbers, (5) passwords, security questions, or answers, or (6) biometric data.

Privacy notices must now include: (1) personal data categories, (2) the controller’s purpose in possessing personal data, (3) categories the controller sells or shares with third parties, (4) categories of third parties, (5) contact information for the controller, (6) an explanation of rights and how to exercise them, and (7) the date the privacy notice was last updated. Privacy notices must be accessible to and usable by people with disabilities and available in each language in which the controller provides a product or service. Any material changes to the controller’s privacy notice or practices require notice to affected consumers and the opportunity to withdraw consent. Notices need not be Montana-specific, but controllers must conspicuously post them on websites, in mobile applications, or through whatever medium the controller uses to interact with customers.

The amendments further clarified information the attorney general must publicly provide, including an online mechanism for consumers to file complaints. Further, the attorney general may now issue civil investigative demands and need not issue any notice of violation or provide a 60-day period for the controller to correct the violation.

In today’s hybrid and remote work environment, organizations are increasingly turning to digital employee management platforms that promise productivity insights, compliance enforcement, and even behavioral analytics. These tools—offered by a growing number of vendors—can monitor everything from application usage and website visits to keystrokes, idle time, and screen recordings. Some go further, offering video capture, geolocation tracking, AI-driven risk scoring, sentiment analysis, and predictive indicators of turnover or burnout.

While powerful, these platforms also carry real legal and operational risks if not assessed, configured, and governed carefully.

Capabilities That Go Beyond Traditional Monitoring

Modern employee management tools have expanded far beyond “punching in,” reviewing emails, and tracking websites visited. Depending on the features selected and how the platform is configured, employers may have access to:

  • Real-time screen capture and video recording
  • Automated time tracking and productivity scoring
  • Application and website usage monitoring
  • Keyword or behavior-based alerts (e.g., data exfiltration risks)
  • Behavioral biometrics or mouse/keyboard pattern analysis
  • AI-based sentiment or emotion detection
  • Geolocation or IP-based presence tracking
  • Surveys and wellness monitoring tools

Not all of these tools are deployed in every instance, and many vendors allow companies to configure what they monitor. Important questions arise: who at the company decides how to configure the tool; what data is collected; whether that collection is permissible; who has access to the data; how decisions are made using it; and what safeguards are in place to protect it. But even limited use can present privacy and employment-related risks if not governed effectively.

Legal and Compliance Risks

While employers generally have some leeway to monitor employees on company systems, existing and emerging law (particularly concerning AI), together with best practices, employee relations, and other considerations, should inform the development of monitoring guidelines.

  • Privacy Laws: State and international privacy laws (like the California Consumer Privacy Act, GDPR, and others) may require notice, consent, data minimization, and purpose limitation. Even in the U.S., where workplace privacy expectations are often lower, secretive or overly broad monitoring can trigger complaints or litigation.
  • Labor and Employment Laws: Monitoring tools that disproportionately affect certain groups or are applied inconsistently may prompt discrimination or retaliation claims. Excessive monitoring activities could trigger bargaining obligations and claims concerning protected concerted activity.
  • AI-Driven Features: Platforms that employ AI or automated decision-making—such as behavioral scoring or predictive analytics—may be subject to emerging AI-specific laws and guidance, such as New York City’s Local Law 144, Colorado’s AI Act, and AI regulations recently approved by the California Civil Rights Department under the Fair Employment and Housing Act (FEHA) concerning the use of automated decision-making systems.
  • Data Security and Retention: These platforms collect sensitive behavioral data. If poorly secured or over-retained, that data could become a liability in the event of a breach or internal misuse.

Governance Must Extend Beyond IT

Too often, these tools are procured and managed primarily, sometimes exclusively, by IT or security teams without broader organizational involvement. Given the nature of data these tools collect and analyze, as well as their potential impact on members of a workforce, a cross-functional approach is a best practice.

Involving stakeholders from HR, legal, compliance, data privacy, etc., can have significant benefits not only at the procurement and implementation stages, but also throughout the lifecycle of these tools. This includes regular reviews of feature configurations, access rights, data use, decision making, and staying abreast of emerging legal requirements.

Governance considerations should include:

  • Purpose Limitation and Transparency: Clear internal documentation and employee notices should explain what is being monitored, why, and how the information will be used.
  • Access Controls and Role-Based Permissions: Not everyone needs full access to dashboards or raw monitoring data. Access should be limited to what’s necessary and tied to a specific function (see the sketch after this list).
  • Training and Oversight: Employees who interact with the monitoring dashboards must understand the scope of permitted use. Misuse of the data—whether out of personal curiosity, for retaliation, or otherwise outside policy—should be addressed appropriately.
  • Data Minimization and Retention Policies: Avoid “just in case” data collection. Align retention schedules with actual business need and regulatory requirements.
  • Ongoing Review of Vendor Practices: Some vendors continuously add or enable new features that may shift the risk profile. Governance teams should review vendor updates and periodically reevaluate what’s enabled and why.
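
To make the access-control point concrete, here is a minimal sketch of role-based permissions for a monitoring dashboard. The roles and data types are hypothetical examples, not features of any particular vendor’s product.

    # Hypothetical role-to-data mapping for a monitoring dashboard (illustrative).
    ROLE_PERMISSIONS = {
        "hr_investigator": {"activity_summary", "policy_alerts"},
        "security_analyst": {"policy_alerts", "exfiltration_events"},
        "it_admin": {"system_health"},  # no access to behavioral data
    }

    def can_view(role: str, data_type: str) -> bool:
        """Deny by default; allow only data types tied to the role's function."""
        return data_type in ROLE_PERMISSIONS.get(role, set())

    assert can_view("security_analyst", "exfiltration_events")
    assert not can_view("it_admin", "activity_summary")

The deny-by-default design mirrors the principle above: access is granted only where a specific function requires it.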

A Tool, Not a Silver Bullet

Used thoughtfully, employee management platforms can be a valuable part of a company’s compliance and productivity strategy. But they are not “set it and forget it” solutions. The insights they provide can only be trusted—and legally defensible—if there is strong governance around their use.

Organizations must manage not only their employees, but also the people and tools managing their employees. That means recognizing that tools like these sit at the intersection of privacy, ethics, security, and human resources—and must be treated accordingly.

The Oklahoma State Legislature recently enacted Senate Bill 626, amending its Security Breach Notification Act, effective January 1, 2026, to address gaps in the state’s current cybersecurity framework (the “Amendment”).  The Amendment includes new definitions, mandates reporting to the state Attorney General, clarifies compliance with similar laws, and provides revised penalty provisions, including affirmative defenses.

Definitions

The Amendment provides clearer definitions related to security breaches, specifying what constitutes “personal information” and “reasonable safeguards.”

  • Personal Information:  The existing definition for “Personal Information” was expanded to also include (1) a unique electronic identifier or routing code in combination with any required security code, access code, or password that would permit access to an individual’s financial account and (2) unique biometric data such as a fingerprint, retina or iris image, or other unique physical or digital representation of biometric data to authenticate a specific individual.
  • Reasonable Safeguards:  The Amendment provides an affirmative defense in a civil action under the law for individuals or entities that have “Reasonable safeguards” in place, which are defined as “policies and practices that ensure personal information is secure, taking into consideration an entity’s size and the type and amount of personal information. The term includes, but is not limited to, conducting risk assessments, implementing technical and physical layered defenses, employee training on handling personal information, and establishing an incident response plan”.

Mandated Reporting and Exceptions

In the new year, entities required to provide notice to impacted individuals under the law in case of a breach will also be required to notify the Attorney General. The notification must include specific details including, but not limited to, the type of personal information impacted, the nature of the breach, the number of impacted individuals, the estimated monetary impact of the breach to the extent it can be determined, and any reasonable safeguards the entity employs. The notification to the Attorney General must occur no more than 60 days after notifying affected residents.

However, breaches affecting fewer than 500 residents, or fewer than 1,000 residents in the case of credit bureaus, are exempt from the requirement to notify the Attorney General.
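
Expressed as a quick sketch, the Attorney General notification logic described above might look like the following; the thresholds and 60-day deadline come from the Amendment as summarized here, and the code is illustrative rather than a compliance tool.

    from datetime import date, timedelta

    def ag_notice_required(residents_affected: int, is_credit_bureau: bool) -> bool:
        """Exempt below 500 residents, or below 1,000 for credit bureaus."""
        threshold = 1_000 if is_credit_bureau else 500
        return residents_affected >= threshold

    def ag_notice_deadline(individual_notice_date: date) -> date:
        """AG notice is due no more than 60 days after notifying residents."""
        return individual_notice_date + timedelta(days=60)

    print(ag_notice_required(650, is_credit_bureau=False))  # True
    print(ag_notice_deadline(date(2026, 2, 1)))             # 2026-04-02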

In addition, an exception from individual notification is provided for entities that comply with notification requirements under the Oklahoma Hospital Cybersecurity Protection Act of 2023 or the Health Insurance Portability and Accountability Act of 1996 (HIPAA) if such entities provide the requisite notice to the Attorney General.

What Entities Should Do Now

  1. Inventory data.  Conduct an inventory to determine what personal information is collected given the newly covered data elements.
  2. Review and update policies and practices.  Reevaluate and update current information security policies and procedures to ensure proper reasonable safeguards are in place.  Moreover, to ensure that an entity’s policies and procedures remain reasonably designed, they should be periodically reviewed and updated.

If you have any questions about the revisions to Oklahoma’s Security Breach Notification Act or related issues, contact a Jackson Lewis attorney to discuss.

As the integration of technology in the workplace accelerates, so do the challenges related to privacy, cybersecurity, and the ethical use of artificial intelligence (AI). Human resource professionals and in-house counsel must navigate a rapidly evolving landscape of legal and regulatory requirements. This National Privacy Day, it’s crucial to spotlight emerging issues in workplace technology and the associated implications for data privacy, cybersecurity, and compliance.

We explore here practical use cases raising these issues, highlight key risks, and provide actionable insights for HR professionals and in-house counsel to manage these concerns effectively.

1. Wearables and the Intersection of Privacy, Security, and Disability Law

Wearable devices have a wide range of use cases, including interactive training, performance monitoring, and navigation tracking. Wearables such as fitness trackers and smartwatches became more popular in HR and employee benefits departments when they were deployed in wellness programs to monitor employees’ health metrics, promote fitness, and provide a basis for awarding insurance premium incentives. While these tools offer benefits, they also collect sensitive health and other personal data, raising significant privacy and cybersecurity concerns under the Health Insurance Portability and Accountability Act (HIPAA), the Americans with Disabilities Act (ADA), and state privacy laws.

Earlier this year, the Equal Employment Opportunity Commission (EEOC) issued guidance emphasizing that data collected through wearables must align with ADA rules. More recently, the EEOC withdrew that guidance in response to an Executive Order issued by President Trump. Still, employers should evaluate whether their use of wearables raises ADA issues, such as ensuring that use of the devices is voluntary when collecting confidential medical information, avoiding improper disability-related inquiries, and using aggregated or anonymized data to reduce the risk of discrimination claims.

Beyond ADA compliance, cybersecurity is critical. Wearables often collect sensitive data and transmit same to third-party vendors. Employers must assess these vendors’ data protection practices, including encryption protocols and incident response measures, to mitigate the risk of breaches or unauthorized access.

Practical Tip: Implement robust contracts with third-party vendors, requiring adherence to privacy laws, breach notification, and security standards. Also, ensure clear communication with employees about how their data will be collected, used, and stored.

2. Performance Management Platforms and Employee Monitoring

Platforms like Insightful and similar performance management tools are increasingly being used to monitor employee productivity and/or compliance with applicable law and company policies. These platforms can capture a vast array of data, including screen activity, keystrokes, and time spent on tasks, raising significant privacy concerns.

While such tools may improve efficiency and accountability, they also risk crossing boundaries, particularly when employees are unaware of the extent of monitoring and/or where the employer doesn’t have effective data minimization controls in place. State laws like the California Consumer Privacy Act (CCPA) can place limits on these monitoring practices, particularly if employees have a reasonable expectation of privacy. They also can require additional layers of security safeguards and administration of employee rights with respect to data collected and processed using the platform.

Practical Tip: Before deploying such tools, assess the necessity of data collection, ensure transparency by notifying employees, and restrict data collection to what is strictly necessary for business purposes. Implement policies that balance business needs with employee rights to privacy.

3. AI-Powered Dash Cams in Fleet Management

AI-enabled dash cams, often used for fleet management, combine video, audio, GPS, telematics, and/or biometrics to monitor driver behavior and vehicle performance, among other things. While these tools enhance safety and efficiency, they also present significant privacy and legal risks.

State biometric privacy laws, such as Illinois’s Biometric Information Privacy Act (BIPA) and similar laws in California, Colorado, and Texas, impose stringent requirements on biometric data collection, including obtaining employee consent and implementing robust data security measures. Employers must also assess the cybersecurity vulnerabilities of dash cam providers, given the volume of biometric, location, and other data they may collect.

Practical Tip: Conduct a legal review of biometric data collection practices, train employees on the use of dash cams, and audit vendor security practices to ensure compliance and minimize risk.

4. Assessing Vendor Cybersecurity for Employee Benefits Plans

Third-party vendors play a crucial role in processing data for retirement plans, such as 401(k) plans, as well as health and welfare plans. The Department of Labor (DOL) emphasized in recent guidance the important role of ERISA plan fiduciaries in assessing the cybersecurity practices of such service providers.

The DOL’s guidance underscores the need to evaluate vendors’ security measures, incident response plans, and data breach notification practices. Given the sensitive nature of data processed as part of plan administration—such as Social Security numbers, health records, and financial information—failure to vet vendors properly can lead to breaches, lawsuits, and regulatory penalties, including claims for breach of fiduciary duty.

Practical Tip: Conduct regular risk assessments of vendors, incorporate cybersecurity provisions into contracts, and document the due diligence process to demonstrate compliance with fiduciary obligations.

5. Biometrics for Access, Time Management, and Identity Verification

Biometric technology, such as fingerprint or facial recognition systems, is widely used for identity verification, physical access, and timekeeping. While convenient, the collection of biometric data carries significant privacy and cybersecurity risks.

BIPA and similar state laws require employers to obtain written consent, provide clear notices about data usage, and adhere to stringent security protocols. Additionally, biometrics are uniquely sensitive because they cannot be changed if compromised in a breach.

Practical Tip: Minimize reliance on biometric data where possible, ensure compliance with consent and notification requirements, and invest in encryption and secure storage systems for biometric information. Check out our Biometrics White Paper.

6. HIPAA Updates Affecting Group Health Plan Compliance

Recent changes to the HIPAA Privacy Rule, including provisions related to reproductive healthcare, significantly impact group health plans. The proposed HIPAA Security Rule amendments also signal stricter requirements for risk assessments, access controls, and data breach responses.

Employers sponsoring group health plans must stay ahead of these changes by updating their HIPAA policies and Notice of Privacy Practices, training staff, and ensuring that business associate agreements (BAAs) reflect the new requirements.

Practical Tip: Regularly review HIPAA compliance practices and monitor upcoming changes to ensure your group health plan aligns with evolving regulations.

7. Data Breach Notification Laws and Incident Response Plans

Many states have updated their data breach notification laws, lowering notification thresholds, shortening notification timelines, and expanding the definition of personal information. Employers should revise their incident response plans (IRPs) to align with these changes.

Practical Tip: Ensure IRPs reflect updated laws, test them through simulated breach scenarios, and coordinate with legal counsel to prepare for reporting obligations in case of an incident.

8. AI Deployment in Recruiting and Retention

AI tools are transforming HR functions, from recruiting to performance management and retention strategies. However, these tools require vast amounts of personal data to function effectively, increasing privacy and cybersecurity risks.

The EEOC and other regulatory bodies have cautioned against discriminatory impacts of AI, particularly regarding protected characteristics like disability, race, or gender. (As noted above, the EEOC recently withdrew its AI guidance under the ADA and Title VII following an Executive Order by the Trump Administration.) For example, the use of AI in hiring or promotions may trigger compliance obligations under the ADA, Title VII, and state laws.

Practical Tip: Conduct bias audits of AI systems, implement data minimization principles, and ensure compliance with applicable anti-discrimination laws.

9. Employee Use of AI Tools

Moving beyond the HR department, AI tools are fundamentally changing how people work.  Tasks that used to require time-intensive manual effort—creating meeting minutes, preparing emails, digesting lengthy documents, creating PowerPoint decks—can now be completed far more efficiently with assistance from AI.  The benefits of AI tools are undeniable, but so too are the associated risks.  Organizations that rush to implement these tools without thoughtful vetting processes, policies, and training will expose themselves to significant regulatory and litigation risk.     

Practical Tip: Not all AI tools are created equal—either in terms of the risks they pose or the utility they provide—so an important first step is developing criteria to assess, and then going through the process of assessing, which AI tools to permit employees to use.  Equally important is establishing clear ground rules for how employees can use those tools.  For instance: what company information are they permitted to use to prompt the tool; what are the processes for ensuring the tool’s output is accurate and consistent with company policies and objectives; and should employee use of AI tools be limited to internal functions, or should employees also be permitted to use these tools to generate work product for external audiences?

10. Data Minimization Across the Employee Lifecycle

At the core of many of the above issues is the principle of data minimization. The California Privacy Protection Agency (CPPA) has emphasized that organizations must collect only the data necessary for specific purposes and ensure its secure disposal when no longer needed.

From recruiting to offboarding, HR professionals must assess whether data collection practices align with the principle of data minimization. Overcollection not only heightens privacy risks but also increases exposure in the event of a breach.

Practical Tip: Develop a data inventory mapping employee information from collection to disposal. Regularly review and update policies to limit data retention and enforce secure deletion practices.
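
As one way to operationalize this tip, the sketch below ties each employee data element to a collection stage, retention period, and disposal method. The elements and periods are hypothetical placeholders; actual retention schedules depend on applicable law and business need.

    from dataclasses import dataclass

    @dataclass
    class LifecycleRecord:
        """Illustrative mapping of an HR data element from collection to disposal."""
        element: str
        collected_at_stage: str   # e.g., "recruiting", "onboarding"
        retention_years: int      # placeholder; actual schedules vary by law
        disposal_method: str

    data_map = [
        LifecycleRecord("resume", "recruiting", 2, "secure deletion"),
        LifecycleRecord("badge access logs", "employment", 1, "secure deletion"),
        LifecycleRecord("exit interview notes", "offboarding", 1, "secure deletion"),
    ]

    for rec in data_map:
        print(f"{rec.element}: retain {rec.retention_years} year(s), then {rec.disposal_method}")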

Conclusion

The rapid adoption of emerging technologies presents both opportunities and challenges for employers. HR professionals and in-house counsel play a critical role in navigating privacy, cybersecurity, and AI compliance risks while fostering innovation.

By implementing robust policies, conducting regular risk assessments, and prioritizing data minimization, organizations can mitigate legal exposure and build employee trust. This National Privacy Day, take proactive steps to address these issues and position your organization as a leader in privacy and cybersecurity.

Insider threats continue to present a significant challenge for organizations of all sizes. One particularly concerning scenario involves employees who leave an organization and impermissibly take or download sensitive company data. These situations can severely impact a business, especially when departing employees abscond with confidential business information or trade secrets. Concern over how the theft of such information could cripple a business’s operations and competitive advantage is warranted. It is critical not to overlook, however, other legal and regulatory implications stemming from the theft of certain data, including potential data breach notification obligations.

The Importance of Safeguarding Trade Secrets

Trade secrets generally refer to information that has commercial value because it’s kept secret. Examples include formulas, patterns, programs, devices, methods, and other valuable business data. Such data are often the lifeblood of a company’s competitive edge. These secrets must be safeguarded to retain their value and legal protections under the Uniform Trade Secrets Act (UTSA), which has been adopted by most states. Businesses will need to demonstrate that they took reasonable measures to protect their trade secrets.

Reasonable safeguards under the UTSA can include:

  • Implementing access controls to restrict employees’ ability to download or share sensitive information.
  • Requiring employees to sign confidentiality agreements and restrictive covenants.
  • Regularly training employees on the importance of data security and confidentiality.
  • Using monitoring tools to detect unusual access or downloads of sensitive data.

Failing to adopt such safeguards can jeopardize a company’s ability to claim protection for trade secrets and pursue legal remedies if those secrets are stolen. Companies should consult with trusted IT and legal advisors to ensure they have adequate safeguards.

Beyond Trade Secrets: Data Breach Concerns

While the theft of confidential business and trade secret information rightly garners attention, focusing exclusively on this aspect may cause companies to miss another critical risk: the theft of personal information. As part of their efforts to remove company information, departing employees may inadvertently or intentionally take personal information, such as employee or customer data, which could trigger significant legal obligations, particularly if accessed or acquired without authorization.

Contrary to common assumptions, data breach notification laws do not solely apply to stolen Social Security numbers. Most state data breach laws define “personal information” broadly to include elements such as:

  • Financial account information, including debit or credit card numbers.
  • Driver’s license or state identification numbers.
  • Health insurance and medical information.
  • Dates of birth.
  • Online account credentials, such as usernames and passwords.
  • Biometric data, such as fingerprints or facial recognition profiles.

The unauthorized access or acquisition of these data elements, together with the individual’s name, can constitute a data breach, requiring timely notification to affected individuals and, in some cases, regulatory authorities.
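
To illustrate how broadly these definitions sweep, the sketch below checks whether a stolen data set pairs an individual’s name with any of the listed elements. The element list is a simplified composite; definitions and triggering combinations vary from state to state.

    # Simplified composite of data elements drawn from the list above;
    # actual definitions and combinations vary by state.
    NOTIFIABLE_ELEMENTS = {
        "financial_account", "drivers_license", "health_insurance_info",
        "date_of_birth", "online_credentials", "biometric_data",
    }

    def may_be_notifiable_breach(stolen_fields: set) -> bool:
        """A name plus any listed element can constitute a breach in many states."""
        return "name" in stolen_fields and bool(stolen_fields & NOTIFIABLE_ELEMENTS)

    print(may_be_notifiable_breach({"name", "online_credentials"}))  # True
    print(may_be_notifiable_breach({"name"}))                        # False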

Broader Regulatory and Contractual Implications

In addition to state breach notification laws that seek to protect personal information, companies must consider other regulatory and contractual obligations when sensitive data is stolen. For example:

  • Publicly traded companies: Theft of critical business information by a departing employee may require disclosure under U.S. Securities and Exchange Commission (SEC) regulations if the theft is deemed material. If a company determines the materiality threshold has been reached, it generally has four business days to report to the public.
  • Critical infrastructure businesses: Companies providing services in regulated industries, such as energy or healthcare, may have reporting obligations to regulatory authorities if sensitive confidential business data is compromised.
  • Contractual obligations: Many businesses enter into agreements with business customers that require notification if confidential business information or personal data is compromised.

Ignoring these obligations could expose organizations to fines, lawsuits, and reputational harm, compounding the difficulties already created by the theft of an organization’s confidential business information.

Taking a Comprehensive Approach to Data Theft

The theft of confidential business information by a departing employee can be devastating for a business. However, focusing solely on restrictive covenants, trade secrets, or business information risks overlooking the full scope of legal and regulatory obligations. To effectively respond to such incidents, companies should:

  1. Identify the nature of the stolen data: Assess whether the data includes personal information, trade secrets, or other sensitive information that could trigger specific legal obligations.
  2. Evaluate legal and regulatory obligations: Determine whether notification is required under state breach laws, SEC or other regulations (if applicable), industry-specific rules, or contractual agreements.
  3. Leverage restrictive covenant agreements: Assess appropriate legal or contractual remedies, including under restrictive covenant, confidentiality, and other agreements, as part of a broader strategy to address the theft.
  4. Implement safeguards: Strengthen data protection measures to mitigate the risk of future incidents, including employee training, enhanced monitoring, and robust exit procedures.

While dealing with insider threats is undoubtedly challenging, taking a comprehensive and proactive approach can help businesses protect their interests and minimize legal exposure. In today’s interconnected and highly regulated world, understanding the full scope of risks and obligations tied to data theft is essential for any business.

Ask any chief information security officer (CISO), cyber underwriter or risk manager, or cybersecurity attorney what controls are critical for protecting an organization’s information systems, and you’ll likely find multifactor authentication (MFA) at or near the top of every list. Government agencies responsible for helping to protect the U.S. and its information systems and assets (e.g., CISA, FBI, Secret Service) send the same message. But that message may be evolving a bit as criminal threat actors have started to exploit weaknesses in MFA.

According to a recent report in Forbes, for example, threat actors are harnessing AI to break through multifactor authentication strategies designed to prevent new account fraud. “Know Your Customer” procedures are critical for validating the identity of customers in certain industries, such as financial services and telecommunications. Employers increasingly face similar issues in recruiting, when they find, after making the hiring decision, that the person doing the work may not be the person interviewed for the position.

Threat actors have leveraged a new AI deepfake tool that can be acquired on the dark web to bypass the biometric systems that have been used to stop new account fraud. According to the Forbes article, the process goes something like this:

1. Bad actors use one of the many generative AI websites to create and download a fake image of a person.

2. Next, they use the tool to synthesize a fake passport or a government-issued ID by inserting the fake photograph…

3. Malicious actors then generate a deepfake video (using the same photo) where the synthetic identity pans their head from left to right. This movement is specifically designed to match the requirements of facial recognition systems. If you pay close attention, you can certainly spot some defects. However, these are likely ignored by facial recognition because videos are prone to have distortions due to internet latency issues, buffering or just poor video conditions.

4. Threat actors then initiate a new account fraud attack where they connect a cryptocurrency exchange and proceed to upload the forged document. The account verification system then asks to perform facial recognition where the tool enables attackers to connect the video to the camera’s input.

5. Following these steps, the verification process is completed, and the attackers are notified that their account has been verified.

Sophisticated AI tools are not the only MFA vulnerability. In December 2024, the Cybersecurity & Infrastructure Security Agency (CISA) issued best practices for mobile communications. Among its recommendations, CISA advised mobile phone users, in particular highly targeted individuals:

Do not use SMS as a second factor for authentication. SMS messages are not encrypted—a threat actor with access to a telecommunication provider’s network who intercepts these messages can read them. SMS MFA is not phishing-resistant and is therefore not strong authentication for accounts of highly targeted individuals.

In its 2023 Internet Crime Report, the FBI reported more than 1,000 “SIM swapping” investigations. A SIM swap is just another technique by threat actors involving the “use of unsophisticated social engineering techniques against mobile service providers to transfer a victim’s phone service to a mobile device in the criminal’s possession.”
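
The common thread in these SMS weaknesses is the carrier network itself. App-based one-time passwords avoid it entirely, as the minimal sketch below shows using the open-source pyotp library. Note that TOTP codes, while stronger than SMS, are still not phishing-resistant in the way hardware-backed keys are; the example is illustrative only.

    import pyotp  # pip install pyotp

    # Enrollment: generate a shared secret once and provision it to the
    # user's authenticator app (typically via a QR code).
    secret = pyotp.random_base32()
    totp = pyotp.TOTP(secret)

    # Login: the app derives a short-lived code from the secret and the
    # current time; nothing travels over SMS or the carrier network.
    code = totp.now()
    print("Code valid:", totp.verify(code))  # True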

In December, Infosecurity Magazine reported on another vulnerability in MFA. In fact, there are many reports about various vulnerabilities with MFA.

Are we recommending against the use of MFA? Certainly not. Our point is simply to offer a reminder that there are no silver bullets for securing information systems and that AI is not used only by the good guys. An information security program, preferably a written one (a WISP), requires continuous vigilance, and not just from the IT department, as new technologies are leveraged to bypass older technologies.

In 2024, Israel became the latest jurisdiction to enact comprehensive privacy legislation, largely inspired by the EU’s General Data Protection Regulation (“GDPR”). On August 5, 2024, Israel’s parliament, the Knesset, voted to approve the enactment of Amendment No. 13 (“the Amendment”) to the Israel Privacy Protection Law (“IPPL”). The Amendment, which will take effect on August 15, 2025, is considered an overhaul of the IPPL, which has remained largely untouched since 1996.

Key Features of the Amendment include:

  • Expansion of key definitions in the law
    • Personal Information – Expanded to include any “data related to an identified or identifiable person”.
    • Highly Sensitive Information – Replaces the IPPL’s current definition of “sensitive information” and is similar in kind to the GDPR’s Special Categories of Data. Types of information that qualify as highly sensitive information under the Amendment include biometric data, genetic data, location and traffic data, criminal records, and assessments of personality types.
    • Data Processing – The Amendment broadens the definition of processing to include any operation on information, including receipt, collection, storage, copying, review, disclosure, exposure, transfer, conveyance, or granting access.
    • Database Controller – The IPPL previously used the term “database owner”; akin to the GDPR, the Amendment changes the term to database controller, defined as the person or entity that determines the purpose of processing personal information in the database.
    • Database Holder – Similar to the GDPR’s “processor”, the Amendment includes the term database holder which is defined as an entity “external to the data controller that processes information on behalf of the data controller”, which due to the broad definition of data processing, captures a broad set of third-party service providers.
  • Mandatory Appointment of a Privacy Protection Officer & Data Security Officer
    • Equivalent to the GDPR’s Data Protection Officer (DPO) role, entities that meet certain criteria based on size and industry (inclusive of both data controllers and processors) will be required to implement a new role, the Privacy Protection Officer, tasked with ensuring compliance with the IPPL and promoting data security and privacy protection initiatives within the organization. Likewise, the obligation to appoint a Data Security Officer, which applied to certain organizations prior to the Amendment, has been expanded to a broader set of entities.
  • Expansion of Enforcement Authority
    • The Privacy Protection Authority (“PPA”), Israel’s privacy regulator, has been given broader enforcement authority, including a significant increase in financial penalties based on the number of data subjects impacted by a violation, the type of violation, and the violating entity’s financial turnover. Financial penalties are capped at 5% of the business’s annual turnover for larger organizations, which could reach millions of dollars (e.g., a data processor that processes data without the controller’s permission in a database of 1,000,000 data subjects (8 ILS per data subject) can be fined 8,000,000 ILS (approx. $2.5 million USD)). Small and micro businesses are capped at penalties of 140,000 ILS ($45,000 USD) per year. Other enhancements to the PPA’s authority include expansive investigative and supervisory powers as well as increased authority for the Head of the PPA to issue warnings and injunctions.
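
Using only the figures quoted above, the penalty arithmetic can be sketched as follows. The per-subject rate, caps, and categories are as described in this post; the actual calculation method under the Amendment may differ, so treat this as illustrative.

    def estimated_fine_ils(data_subjects: int,
                           annual_turnover_ils: float,
                           small_business: bool) -> float:
        """Illustrative estimate using the figures described in this post."""
        base = 8 * data_subjects                      # 8 ILS per data subject
        if small_business:
            return min(base, 140_000)                 # yearly cap for small/micro businesses
        return min(base, 0.05 * annual_turnover_ils)  # capped at 5% of annual turnover

    # The example from the text: 1,000,000 data subjects -> 8,000,000 ILS,
    # assuming the 5% turnover cap is not reached.
    print(estimated_fine_ils(1_000_000, annual_turnover_ils=500_000_000, small_business=False))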

Additional updates in the Amendment include expansion of the notice obligation in the case of a data breach, increased rights of data subjects, extension of the statute of limitations, and exemplary damages. In future segments on the IPPL leading up to the August 2025 effective date, we will dive deeper into some of the key features of the Amendment, which are certain to have an impact on entities with customers and/or employees in Israel.

Data privacy and security regulation is growing rapidly around the world, including in Israel. This legislative activity, combined with the growing public awareness of data privacy rights and concerns, makes the development of a meaningful data protection program an essential component of business operations.

On June 25, 2024, Rhode Island became the 20th state to enact a comprehensive consumer data protection law, the Rhode Island Data Transparency and Privacy Protection Act (“RIDTPPA”). The state joins Kentucky, Maryland, Minnesota, Nebraska, New Hampshire, and New Jersey in passing consumer data privacy laws this year.

The RIDTPPA takes effect on January 1, 2026.

To whom does the law apply?

The law applies to two types of organizations, defined as “controllers”:

1. For-profit entities that conduct business in the state of Rhode Island or that produce products or services that are targeted to residents of the state and that during the preceding calendar year did any of the following (see the sketch below):

  • Controlled or processed the personal data of not less than thirty-five thousand (35,000) customers, excluding personal data controlled or processed solely for the purpose of completing a payment transaction, or
  • Controlled or processed the personal data of not less than ten thousand (10,000) customers and derived more than twenty percent (20%) of their gross revenue from the sale of personal data.

2. A commercial website or internet service provider conducting business in Rhode Island or with customers in Rhode Island or that is otherwise subject to Rhode Island jurisdiction and collects, stores, and sells customers’ personally identifiable information.
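
Read together, the thresholds for the first category of controllers reduce to a simple test, sketched below. The payment-transaction exclusion is simplified to a single flag, and the function is illustrative only.

    def ridtppa_applies_for_profit(customers: int,
                                   payment_only: bool,
                                   pct_revenue_from_sales: float) -> bool:
        """Sketch of the RIDTPPA for-profit controller thresholds (illustrative)."""
        if customers >= 35_000 and not payment_only:
            return True
        return customers >= 10_000 and pct_revenue_from_sales > 20.0

    print(ridtppa_applies_for_profit(12_000, payment_only=False, pct_revenue_from_sales=25.0))  # True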

Who is protected by the law?

Customer means an individual residing in Rhode Island who is acting in an individual or household context. The definition of customer does not include an individual acting in a commercial or employment context.

What data is protected by the law?

The law protects personal data, which is defined as any information that is linked or reasonably linkable to an identified or identifiable individual and does not include de-identified data or publicly available information.

RIDTPPA contains numerous exceptions for specific types of data, including data that meets the definition of protected health information under HIPAA, personal data collected, processed, sold, or disclosed pursuant to the federal Gramm-Leach-Bliley Act, and personal data regulated by the federal Family Educational Rights and Privacy Act.

The law also provides heightened protection for sensitive data, which means personal data revealing racial or ethnic origin, religious beliefs, mental or physical health condition or diagnosis, sex life, sexual orientation, or citizenship or immigration status; the processing of genetic or biometric data for the purpose of uniquely identifying an individual; the personal data of a known child; or precise geolocation data.

What are the rights of customers?

Under the law, customers have the following rights with respect to data collected by for-profit entities that conduct business in the state or produce products or services targeted to residents of the state and meet one of the relevant thresholds:

  • Confirm whether a controller is processing their personal data and access that data.
  • Correct inaccuracies in the data a controller is processing.
  • Have personal data deleted unless the retention of the personal data is permitted or required by law.
  • Port personal data.
  • Opt out of the processing of personal data for targeted advertising, the sale of personal data, or profiling in furtherance of automated decisions that produce legal or similarly significant effects concerning the customer.

Under the law, customers also have a right to receive notice from commercial websites or internet service providers of their data collection activities.

What obligations do controllers have?

Both categories of controllers under Rhode Island’s law are required to provide a notice of data collection activities. Controllers that are for-profit entities conducting business in the state or producing products or services targeted to residents of the state and that meet one of the relevant thresholds have the following additional obligations:

  • Limit collection of personal data to what is adequate, relevant, and reasonably necessary in relation to the purposes for which the data are processed.
  • Establish, implement, and maintain reasonable administrative, technical, and physical data security practices to protect the confidentiality, integrity, and accessibility of personal data.
  • Obtain consent prior to processing a customer’s sensitive personal data.
  • Conduct and document a data privacy and protection assessment for processing activities that represent heightened risk.
  • Contractually obligate any processors who will process personal data on behalf of the organization to adhere to specific data protection obligations including ensuring the security of the processing.

How is the law enforced?

The statute will be enforced by the Rhode Island Attorney General and does not provide for a right to cure. The statute does not create a private right of action.

If you have questions about Rhode Island’s privacy law or related issues, please reach out to a member of our Privacy, Data, and Cybersecurity practice group to discuss.

On May 24, 2024, Minnesota’s governor signed an omnibus bill, HF4757, which included the new Consumer Data Privacy Act. The state joins Kentucky, Nebraska, New Hampshire, New Jersey, and Rhode Island in passing consumer data privacy laws this year.

Minnesota’s law takes effect July 31, 2025, except that postsecondary institutions and nonprofit corporations governed by Minnesota Statutes, chapter 317A, are not required to comply until July 31, 2029.

To whom does the law apply?

The law applies to legal entities that conduct business in the state of Minnesota or that provide products or services that are targeted to residents of the state and that during the preceding calendar year did any of the following:

  • Controlled or processed personal data of 100,000 consumers or more, excluding personal data controlled or processed solely for the purpose of completing a payment transaction, or
  • Derived over 25 percent of gross revenue from the sale of personal data and processed or controlled personal data of 25,000 consumers or more.

Companies that are deemed a “small business” as defined by the United States Small Business Administration under the Code of Federal Regulations, title 13, part 121, are exempt from compliance with the exception that they must not sell a consumer’s sensitive data without the consumer’s prior consent.
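
As with the other state laws discussed in this series, the thresholds above can be sketched as a quick check; the figures come from the statute as summarized here, and the function is illustrative only.

    def mcdpa_applies(consumers: int,
                      pct_revenue_from_sales: float,
                      small_business: bool) -> bool:
        """Sketch of Minnesota's applicability thresholds (illustrative)."""
        if small_business:
            return False  # exempt, aside from the sensitive-data consent rule
        if consumers >= 100_000:
            return True   # excluding data processed solely to complete payments
        return pct_revenue_from_sales > 25.0 and consumers >= 25_000

    print(mcdpa_applies(120_000, pct_revenue_from_sales=0.0, small_business=False))  # True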

Who is protected by the law?

Consumer means an individual who is a resident of the State of Minnesota. The definition of consumer does not include an individual acting in a commercial or employment context.

What data is protected by the law?

The law protects personal data, which is defined as any information that is linked or reasonably linkable to an identified or identifiable individual. Personal data excludes de-identified data and publicly available information.

The Consumer Data Privacy Act contains numerous exceptions for specific types of data, including data that meets the definition of protected health information under HIPAA, personal data collected, processed, sold, or disclosed pursuant to the federal Gramm-Leach-Bliley Act, and personal data regulated by the federal Family Educational Rights and Privacy Act.

The law also provides heightened protection for sensitive data, which means personal data revealing racial or ethnic origin, religious beliefs, mental or physical health condition or diagnosis, sexual orientation, or citizenship or immigration status; the processing of biometric data or genetic information for the purpose of uniquely identifying an individual; the personal data of a known child; or specific geolocation data.

What are the rights of consumers?

Under the law, consumers have the following rights:

  • Confirm whether a controller is processing their personal data.
  • Access personal data a controller is processing.
  • Correct inaccuracies in data a controller is processing.
  • Have personal data deleted unless the retention of the personal data is required by law.
  • Obtain a list of the categories of third parties to which the controller discloses personal data.
  • Port personal data.
  • Opt out of the processing of personal data for targeted advertising, the sale of personal data, or profiling in furtherance of automated decisions that produce legal or similarly significant effects concerning a consumer.

What obligations do controllers have?

Controllers under Minnesota’s law have the following obligations:

  • Provide consumers with a reasonably accessible, clear, and meaningful privacy notice.
  • Limit the collection of personal data to what is adequate, relevant, and reasonably necessary in relation to the purposes for which the data are processed.
  • Establish, implement, and maintain reasonable administrative, technical, and physical data security practices to protect the confidentiality, integrity, and accessibility of personal data.
  • Document and maintain a description of the policies and procedures to comply with the law.
  • Conduct and document a data privacy and protection assessment for high-risk processing activities.
  • Contractually obligate service providers who will process personal data on behalf of the organization to adhere to specific data protection obligations including ensuring the security of the processing.

How is the law enforced?

The statute will be enforced by Minnesota’s attorney general. Prior to filing an enforcement action, the attorney general must provide the controller or processor with a warning letter identifying the specific provisions alleged to be violated. If after 30 days of issuance of the letter the attorney general believes the violation has not been cured, an enforcement action may be filed. The right to cure sunsets on January 31, 2026.

The statute specifies that it does not create a private right of action.

If you have questions about Minnesota’s privacy law or related issues, please reach out to a member of our Privacy, Data, and Cybersecurity practice group to discuss.