A California federal district court recently granted class certification in a lawsuit against a financial services company. The case involves allegations that the company’s website used third-party technology to track users’ activities without their consent, violating the California Invasion of Privacy Act (CIPA). Specifically, the plaintiffs allege that the company, along with its third-party marketing software platform, intercepted and recorded visitors’ interactions with the website, creating “session replays,” which are effectively video recordings of the users’ real-time interactions with the website’s forms. The technology at issue in the suit is routinely used by website operators to provide a record of a user’s interactions with a website, in particular web forms and marketing consents.

The plaintiffs sought class certification for individuals who visited the company’s website, provided personal information, and for whom a certificate associated with their website visit was generated within a roughly one-year time frame. The company argued that users’ consent must be determined on an individual, not class-wide, basis. The company asserted that implied consent could have come from multiple sources, including its privacy policies and third-party materials that provided notice of the data interception and thus should be viewed as consent. Some of the sources the company pointed to as notice included third-party articles on the issue.

The district court found those arguments insufficient and held that common questions of law and fact predominated as to all users. Specifically, the court found that whether any of the sources provided notice of the challenged conduct in the first place was itself a common issue. Further, the court found that it could later refine the class definition to the extent a user might have viewed a particular source that provided sufficient notice. The court also determined plaintiffs would be able to identify class members using the company’s database, including by cross-referencing contact and location information provided by users.

While class certification is not a decision on the merits and does not determine whether the company failed to provide notice or otherwise violated CIPA, it is a significant step in the litigation process. If certification is denied, the potential damages and settlement value of a case are significantly lower. However, if plaintiffs make it over the class certification hurdle, the potential damages and settlement value increase substantially.

This case is a reminder to businesses to review their current website practices and implement updates or changes to address issues such as notice (regarding tracking technologies in use) and consent (whether express or implied) before collecting user data. When using third-party tracking technologies, it is also important to audit whether vendors comply with privacy laws and have data protection measures in place.

If you have questions about website tracking technology and privacy compliance, contact a Jackson Lewis attorney to discuss.

President Trump recently fired the three Democrats on the Privacy and Civil Liberties Oversight Board (PCLOB). Because these firings bring the Board below a quorum, they have the potential to significantly disrupt transatlantic transfers of employee and other personal data from the EU to the US under the EU-US Data Privacy Framework (DPF).

The PCLOB is an independent board tasked with oversight of the US intelligence community. It is a bipartisan board consisting of five members, three of whom represent the president’s political party and two of whom represent the opposing party. The PCLOB’s oversight role was a significant element in the Trans-Atlantic Data Privacy Framework (TADPF) negotiations, helping the US demonstrate its ability to provide an essentially equivalent level of protection for data transferred from the EU. Without this key element, it is highly likely there will be challenges in the EU to the legality of the TADPF. If the European Court of Justice invalidates the TADPF or the EU Commission annuls it, organizations that certify to the EU-US Data Privacy Framework will be left without a mechanism to facilitate transatlantic transfers of personal data to the US. This could potentially impact transfers from the UK and Switzerland as well.

Organizations that rely on their DPF certification for transatlantic data transfers should consider developing a contingency plan to prevent potential disruption to the transfer of essential personal data. Steps to prepare for this possibility include reviewing existing agreements to identify what essential personal data is subject to ongoing transfers and the purpose(s), determining whether EU Standard Contractual Clauses would be an appropriate alternative and, if so, conducting a transfer impact assessment to ensure the transferred data will be subject to reasonable and appropriate safeguards.

As the integration of technology in the workplace accelerates, so do the challenges related to privacy, cybersecurity, and the ethical use of artificial intelligence (AI). Human resource professionals and in-house counsel must navigate a rapidly evolving landscape of legal and regulatory requirements. This National Privacy Day, it’s crucial to spotlight emerging issues in workplace technology and the associated implications for data privacy, cybersecurity, and compliance.

Here, we explore practical use cases that raise these issues, highlight key risks, and provide actionable insights for HR professionals and in-house counsel to manage these concerns effectively.

1. Wearables and the Intersection of Privacy, Security, and Disability Law

Wearable devices have a wide range of use cases including interactive training, performance monitoring, and navigation tracking. Wearables such as fitness trackers and smartwatches became more popular in HR and employee benefits departments when they were deployed in wellness programs to monitor employees’ health metrics, promote fitness, and provide a basis for doling out insurance premium incentives. While these tools offer benefits, they also collect sensitive health and other personal data, raising significant privacy and cybersecurity concerns under the Health Insurance Portability and Accountability Act (HIPAA), the Americans with Disabilities Act (ADA), and state privacy laws.

Earlier this year, the Equal Employment Opportunity Commission (EEOC) issued guidance emphasizing that data collected through wearables must align with ADA rules. More recently, the EEOC withdrew that guidance in response to an Executive Order issued by President Trump. Still, employers should evaluate their use of wearables and whether it raises ADA issues, such as whether use of the devices is voluntary when they collect confidential medical information, whether the devices make disability-related inquiries, and whether aggregated or anonymized data is used to prevent discrimination claims.

Beyond ADA compliance, cybersecurity is critical. Wearables often collect sensitive data and transmit it to third-party vendors. Employers must assess these vendors’ data protection practices, including encryption protocols and incident response measures, to mitigate the risk of breaches or unauthorized access.

Practical Tip: Implement robust contracts with third-party vendors, requiring adherence to privacy laws, breach notification, and security standards. Also, ensure clear communication with employees about how their data will be collected, used, and stored.

2. Performance Management Platforms and Employee Monitoring

Platforms like Insightful and similar performance management tools are increasingly being used to monitor employee productivity and/or compliance with applicable law and company policies. These platforms can capture a vast array of data, including screen activity, keystrokes, and time spent on tasks, raising significant privacy concerns.

While such tools may improve efficiency and accountability, they also risk crossing boundaries, particularly when employees are unaware of the extent of monitoring and/or where the employer doesn’t have effective data minimization controls in place. State laws like the California Consumer Privacy Act (CCPA) can place limits on these monitoring practices, particularly if employees have a reasonable expectation of privacy. They also can require additional layers of security safeguards and administration of employee rights with respect to data collected and processed using the platform.

Practical Tip: Before deploying such tools, assess the necessity of data collection, ensure transparency by notifying employees, and restrict data collection to what is strictly necessary for business purposes. Implement policies that balance business needs with employee rights to privacy.

3. AI-Powered Dash Cams in Fleet Management

AI-enabled dash cams, often used for fleet management, combine video, audio, GPS, telematics, and/or biometrics to monitor driver behavior and vehicle performance, among other things. While these tools enhance safety and efficiency, they also present significant privacy and legal risks.

State biometric privacy laws, such as Illinois’s Biometric Information Privacy Act (BIPA) and similar laws in California, Colorado, and Texas, impose stringent requirements on biometric data collection, including obtaining employee consent and implementing robust data security measures. Employers must also assess the cybersecurity vulnerabilities of dash cam providers, given the volume of biometric, location, and other data they may collect.

Practical Tip: Conduct a legal review of biometric data collection practices, train employees on the use of dash cams, and audit vendor security practices to ensure compliance and minimize risk.

4. Assessing Vendor Cybersecurity for Employee Benefits Plans

Third-party vendors play a crucial role in processing data for retirement plans, such as 401(k) plans, as well as health and welfare plans. The Department of Labor (DOL) emphasized in recent guidance the importance of ERISA plan fiduciaries assessing the cybersecurity practices of such service providers.

The DOL’s guidance underscores the need to evaluate vendors’ security measures, incident response plans, and data breach notification practices. Given the sensitive nature of data processed as part of plan administration—such as Social Security numbers, health records, and financial information—failure to vet vendors properly can lead to breaches, lawsuits, and regulatory penalties, including claims for breach of fiduciary duty.

Practical Tip: Conduct regular risk assessments of vendors, incorporate cybersecurity provisions into contracts, and document the due diligence process to demonstrate compliance with fiduciary obligations.

5. Biometrics for Access, Time Management, and Identity Verification

Biometric technology, such as fingerprint or facial recognition systems, is widely used for identity verification, physical access, and timekeeping. While convenient, the collection of biometric data carries significant privacy and cybersecurity risks.

BIPA and similar state laws require employers to obtain written consent, provide clear notices about data usage, and adhere to stringent security protocols. Additionally, biometrics are uniquely sensitive because they cannot be changed if compromised in a breach.

Practical Tip: Minimize reliance on biometric data where possible, ensure compliance with consent and notification requirements, and invest in encryption and secure storage systems for biometric information. Check out our Biometrics White Paper.

6. HIPAA Updates Affecting Group Health Plan Compliance

Recent changes to the HIPAA Privacy Rule, including provisions related to reproductive healthcare, significantly impact group health plans. The proposed HIPAA Security Rule amendments also signal stricter requirements for risk assessments, access controls, and data breach responses.

Employers sponsoring group health plans must stay ahead of these changes by updating their HIPAA policies and Notice of Privacy Practices, training staff, and ensuring that business associate agreements (BAAs) reflect the new requirements.

Practical Tip: Regularly review HIPAA compliance practices and monitor upcoming changes to ensure your group health plan aligns with evolving regulations.

7. Data Breach Notification Laws and Incident Response Plans

Many states have updated their data breach notification laws, lowering notification thresholds, shortening notification timelines, and expanding the definition of personal information. Employers should revise their incident response plans (IRPs) to align with these changes.

Practical Tip: Ensure IRPs reflect updated laws, test them through simulated breach scenarios, and coordinate with legal counsel to prepare for reporting obligations in case of an incident.

8. AI Deployment in Recruiting and Retention

AI tools are transforming HR functions, from recruiting to performance management and retention strategies. However, these tools require vast amounts of personal data to function effectively, increasing privacy and cybersecurity risks.

The EEOC and other regulatory bodies have cautioned against discriminatory impacts of AI, particularly regarding protected characteristics like disability, race, or gender. (As noted above, the EEOC recently withdrew its AI guidance under the ADA and Title VII following an Executive Order by the Trump Administration.) For example, the use of AI in hiring or promotions may trigger compliance obligations under the ADA, Title VII, and state laws.

Practical Tip: Conduct bias audits of AI systems, implement data minimization principles, and ensure compliance with applicable anti-discrimination laws.

9. Employee Use of AI Tools

Moving beyond the HR department, AI tools are fundamentally changing how people work.  Tasks that used to require time-intensive manual effort—creating meeting minutes, preparing emails, digesting lengthy documents, creating PowerPoint decks—can now be completed far more efficiently with assistance from AI.  The benefits of AI tools are undeniable, but so too are the associated risks.  Organizations that rush to implement these tools without thoughtful vetting processes, policies, and training will expose themselves to significant regulatory and litigation risk.     

Practical Tip: Not all AI tools are created equal—either in terms of the risks they pose or the utility they provide—so an important first step is developing criteria to assess, and then going through the process of assessing, which AI tools to permit employees to use. Equally important is establishing clear ground rules for how employees can use those tools. For instance: What company information are employees permitted to use to prompt the tool? What are the processes for ensuring the tool’s output is accurate and consistent with company policies and objectives? And should employee use of AI tools be limited to internal functions, or should employees also be permitted to use these tools to generate work product for external audiences?

10. Data Minimization Across the Employee Lifecycle

At the core of many of the above issues is the principle of data minimization. The California Privacy Protection Agency (CPPA) has emphasized that organizations must collect only the data necessary for specific purposes and ensure its secure disposal when no longer needed.

From recruiting to offboarding, HR professionals must assess whether data collection practices align with the principle of data minimization. Overcollection not only heightens privacy risks but also increases exposure in the event of a breach.

Practical Tip: Develop a data inventory mapping employee information from collection to disposal. Regularly review and update policies to limit data retention and enforce secure deletion practices.

Conclusion

The rapid adoption of emerging technologies presents both opportunities and challenges for employers. HR professionals and in-house counsel play a critical role in navigating privacy, cybersecurity, and AI compliance risks while fostering innovation.

By implementing robust policies, conducting regular risk assessments, and prioritizing data minimization, organizations can mitigate legal exposure and build employee trust. This National Privacy Day, take proactive steps to address these issues and position your organization as a leader in privacy and cybersecurity.

Insider threats continue to present a significant challenge for organizations of all sizes. One particularly concerning scenario involves employees who leave an organization and impermissibly take or download sensitive company data. These situations can severely impact a business, especially when departing employees abscond with confidential business information or trade secrets. It is natural to focus on how the theft of such information could cripple a business’s operations and competitive advantage. It is critical, however, not to overlook other legal and regulatory implications stemming from the theft of certain data, including potential data breach notification obligations.

The Importance of Safeguarding Trade Secrets

Trade secrets generally refer to information that has commercial value because it is kept secret. Examples include formulas, patterns, programs, devices, methods, and other valuable business data. Such data are often the lifeblood of a company’s competitive edge. These secrets must be safeguarded to retain their value and their legal protections under the Uniform Trade Secrets Act (UTSA), which has been adopted by most states. To claim those protections, businesses will need to demonstrate that they took reasonable measures to protect their trade secrets.

Reasonable safeguards under the UTSA can include:

  • Implementing access controls to restrict employees’ ability to download or share sensitive information.
  • Requiring employees to sign confidentiality agreements and restrictive covenants.
  • Regularly training employees on the importance of data security and confidentiality.
  • Using monitoring tools to detect unusual access or downloads of sensitive data.

Failing to adopt such safeguards can jeopardize a company’s ability to claim protection for trade secrets and pursue legal remedies if those secrets are stolen. Companies should consult with trusted IT and legal advisors to ensure they have adequate safeguards.

Beyond Trade Secrets: Data Breach Concerns

While the theft of confidential business and trade secret information rightly garners attention, focusing exclusively on this aspect may cause companies to miss another critical risk: the theft of personal information. As part of their efforts to remove company information, departing employees may inadvertently or intentionally take personal information, such as employee or customer data, which could trigger significant legal obligations, particularly if accessed or acquired without authorization.

Contrary to common assumptions, data breach notification laws do not solely apply to stolen Social Security numbers. Most state data breach laws define “personal information” broadly to include elements such as:

  • Financial account information, including debit or credit card numbers.
  • Driver’s license or state identification numbers.
  • Health insurance and medical information.
  • Dates of birth.
  • Online account credentials, such as usernames and passwords.
  • Biometric data, such as fingerprints or facial recognition profiles.

The unauthorized access or acquisition of these data elements together with the individual’s name can constitute a data breach, requiring timely notification to affected individuals and, in some cases, regulatory authorities.

Broader Regulatory and Contractual Implications

In addition to state breach notification laws that seek to protect personal information, companies must consider other regulatory and contractual obligations when sensitive data is stolen. For example:

  • Publicly traded companies: Theft of critical business information by a departing employee may require disclosure under U.S. Securities and Exchange Commission (SEC) regulations if the theft is deemed material. If a company determines the materiality threshold has been reached, it generally has four business days to disclose the incident publicly.
  • Critical infrastructure businesses: Companies providing services in regulated industries, such as energy or healthcare, may have reporting obligations to regulatory authorities if sensitive confidential business data is compromised.
  • Contractual obligations: Many businesses enter into agreements with business customers that require notification if confidential business information or personal data is compromised.

Ignoring these obligations could expose organizations to fines, lawsuits, and reputational harm, compounding the difficulties already created by the theft of an organization’s confidential business information.

Taking a Comprehensive Approach to Data Theft

The theft of confidential business information by a departing employee can be devastating for a business. However, focusing solely on restrictive covenants, trade secrets, or business information risks overlooking the full scope of legal and regulatory obligations. To effectively respond to such incidents, companies should:

  1. Identify the nature of the stolen data: Assess whether the data includes personal information, trade secrets, or other sensitive information that could trigger specific legal obligations.
  2. Evaluate legal and regulatory obligations: Determine whether notification is required under state breach laws, SEC or other regulations (if applicable), industry-specific rules, or contractual agreements.
  3. Leverage restrictive covenant agreements: Assess appropriate legal or contractual remedies, including under restrictive covenant, confidentiality, and other agreements, as part of a broader strategy to address the theft.
  4. Implement safeguards: Strengthen data protection measures to mitigate the risk of future incidents, including employee training, enhanced monitoring, and robust exit procedures.

While dealing with insider threats is undoubtedly challenging, taking a comprehensive and proactive approach can help businesses protect their interests and minimize legal exposure. In today’s interconnected and highly regulated world, understanding the full scope of risks and obligations tied to data theft is essential for any business.

If you are looking for a high-level summary of California laws regulating artificial intelligence (AI), check out the two legal advisories issued by California Attorney General Rob Bonta. The first advisory is directed at consumers and entities, addressing their rights and obligations under the state’s consumer protection, civil rights, competition, and data privacy laws. The second advisory focuses on healthcare entities.

“AI might be changing, innovating, and evolving quickly, but the fifth largest economy in the world is not the wild west; existing California laws apply to both the development and use of AI.” – Attorney General Rob Bonta

The advisories summarize existing California laws that may apply to entities who develop, sell, or use AI. They also address several new California AI laws that went into effect on January 1, 2025.

The first advisory points to several existing laws, such as California’s Unfair Competition Law and Civil Rights Laws, designed to protect consumers from unfair and fraudulent business practices, anticompetitive harm, discrimination and bias, and abuse of their data.

California’s Unfair Competition Law, for example, protects the state’s residents against unlawful, unfair, or fraudulent business acts or practices. The advisory notes that “AI provides new tools for businesses and consumers alike, and also creates new opportunity to deceive Californians.” Under a similar federal law, the Federal Trade Commission (FTC) recently ordered an online marketer to pay $1 million over allegations that the company deceptively claimed its AI product could make websites compliant with accessibility guidelines. Considering the explosive growth of AI products and services, organizations should be revisiting their procurement and vendor assessment practices to be sure they are appropriately vetting vendors of AI systems.

Additionally, the California Fair Employment and Housing Act (FEHA) protects Californians from harassment or discrimination in employment or housing based on a number of protected characteristics, including sex, race, disability, age, criminal history, and veteran or military status. These FEHA protections extend to uses of AI systems when they are developed for and used in the workplace. Expect new regulations soon, as the California Civil Rights Council continues to mull proposed AI regulations under the FEHA.

Recognizing that “data is the bedrock underlying the massive growth in AI,” the advisory points to the state’s constitutional right to privacy, applicable to both government and private entities, as well as to the California Consumer Privacy Act (CCPA). Of course, California has several other privacy laws that may need to be considered when developing and deploying AI systems – the California Invasion of Privacy Act (CIPA), the Student Online Personal Information Protection Act (SOPIPA), and the Confidentiality of Medical Information Act (CMIA).

Beyond these existing laws, the advisory also summarizes new laws in California directed at AI, including:

  • Disclosure Requirements for Businesses
  • Unauthorized Use of Likeness
  • Use of AI in Election and Campaign Materials
  • Prohibition and Reporting of Exploitative Uses of AI

The second advisory recounts many of the same risks and concerns about AI as relevant to the healthcare sector. Consumer protection, anti-discrimination, patient privacy, and other concerns all are challenges entities in the healthcare sector face when developing or deploying AI. The advisory provides examples of applications of AI systems in healthcare that may be unlawful; here are a couple:

  • Denying health insurance claims using AI or other automated decisionmaking systems in a manner that overrides doctors’ views about necessary treatment.
  • Using generative AI or other automated decisionmaking tools to draft patient notes, communications, or medical orders that include erroneous or misleading information, including information based on stereotypes relating to race or other protected classifications.

The advisory also addresses data privacy, reminding readers that the state’s CMIA may be more protective in some respects than the popular federal healthcare privacy law, HIPAA. It also discusses recent changes to the CMIA that require providers, electronic health record (EHR) companies, and digital health companies to enable patients to keep their reproductive and sexual health information confidential and separate from the rest of their medical records. These and other requirements need to be taken into account when incorporating AI into EHRs and related applications.

In both advisories, the Attorney General makes clear that in addition to the laws referenced above, other California laws—including tort, public nuisance, environmental and business regulation, and criminal law—apply to AI. In short:  

Conduct that is illegal if engaged in without the involvement of AI is equally unlawful if AI is involved, and the fact that AI is involved is not a defense to liability under any law.

Both advisories provide a helpful summary of laws potentially applicable to AI systems, and can be useful resources when building policies and procedures around the development and/or deployment of AI systems.  

This month, the New Jersey Attorney General’s office (NJAG) added to nationwide efforts to regulate, or at least clarify how existing law applies to, artificial intelligence technologies, in this case under the NJ Law Against Discrimination, N.J.S.A. § 10:5-1 et seq. (LAD). In short, the NJAG’s guidance states:

the LAD applies to algorithmic discrimination in the same way it has long applied to other discriminatory conduct.  

If you are not familiar with it, the LAD generally applies to employers, housing providers, places of public accommodation, and certain other entities. The law prohibits discrimination on the basis of actual or perceived race, religion, color, national origin, sexual orientation, pregnancy, breastfeeding, sex, gender identity, gender expression, disability, and other protected characteristics. According to the NJAG’s guidance, the LAD protections extend to algorithmic discrimination (discrimination that results from the use of automated decision-making tools) in employment, housing, places of public accommodation, credit, and contracting.

Citing a recent Rutgers survey, the NJAG pointed to high levels of adoption of AI tools by NJ employers. According to the survey, 63% of NJ employers use one or more tools to recruit job applicants and/or make hiring decisions. These AI tools are broadly defined in the guidance to include:

any technological tool, including but not limited to, a software tool, system, or process that is used to automate all or part of the human decision-making process…such as generative AI, machine-learning models, traditional statistical tools, and decision trees.

The NJAG guidance examines some ways that AI tools may contribute to discriminatory outcomes.

  • Design. Here, the choices a developer makes in designing an AI tool could, purposefully or inadvertently, result in unlawful discrimination. The results can be influenced by the output the tool provides, the model or algorithms the tool uses, and the inputs the tool assesses, any of which can introduce bias into the automated decision-making tool.
  • Training. Because AI tools must be trained to learn the intended correlations or rules relating to their objectives, the datasets used for such training may contain biases or reflect institutional and systemic inequities that can affect the outcome. Thus, the datasets used in training can drive unlawful discrimination.
  • Deployment. The NJAG also observed that AI tools could be used to purposely discriminate, or to make decisions for which the tool was not designed. These and other deployment issues could lead to bias and unlawful discrimination.

The NJAG notes that its guidance does not impose any new or additional requirements that are not included in the LAD, nor does it establish any rights or obligations for any person beyond what exists under the LAD. However, the guidance makes clear that covered entities can violate the LAD even if they have no intent to discriminate (or do not understand the inner workings of the tool) and, just as the EEOC noted in guidance the federal agency issued under Title VII, even if a third party was responsible for developing the AI tool. Importantly, under NJ law, this includes disparate treatment and disparate impact that may result from the design or usage of AI tools.

As we have noted, it is critical for organizations to assess, test, and regularly evaluate the AI tools they seek to deploy for many reasons, including to avoid unlawful discrimination. Those measures should include working closely with developers to vet the design and testing of their automated decision-making tools before they are deployed. In fact, the NJAG specifically noted many of these steps as ways organizations may decrease the risk of liability under the LAD. Maintaining a well-thought-out governance strategy for managing this technology can go a long way toward minimizing legal risk, particularly as the law develops in this area.

A massive data breach hit one of the country’s largest education software providers. According to EducationWeek, PowerSchool provides school software products to more than 16,000 customers, largely K-12 schools, that serve 50 million students in the United States. According to reports, PowerSchool informed customers that, on December 28, 2024, it became aware of a cybersecurity incident involving unauthorized access to certain information through one of its community-focused customer support portals, PowerSource. The unauthorized access affected PowerSchool’s Student Information System (“SIS”).

According to one of its communications to customers, PowerSchool stated:

While we are unaware of and do not expect any actual or attempted misuse of personal information or any financial harm to impacted individuals as a result of this incident, PowerSchool will be providing credit monitoring to affected adults and identity protection services to affected minors in accordance with regulatory and contractual obligations. The particular information compromised will vary by impacted customer. We anticipate that only a subset of impacted customers will have notification obligations.

Needless to say, PowerSchool customers likely have lots of questions and concerns. The questions and answers below are intended to help school communities and other affected entities strategize about next steps.

Is this just a PowerSchool problem?

There certainly are steps PowerSchool should be taking. As a service provider that processes the personal information of its customers, PowerSchool should, at a minimum, conduct a prompt investigation and inform data owners of critical information relating to the breach. Additionally, each customer’s service agreement with PowerSchool may include broader obligations for the vendor. Providing ongoing support and mitigating potential harm also can reasonably be expected. But schools and other PowerSchool customers may have obligations of their own.

What should potentially affected PowerSchool customers be doing?

There are several items to consider:

Look at your incident response plan. If you have an incident response plan, it may provide steps to help keep your team organized and focused. If you do not have one, consider developing one in the future.

Gather information. As noted above, PowerSchool has already put out information concerning the breach, and more is likely to come. But there may be other helpful information available online from trusted sources. For example, a BleepingComputer article provides (i) information on determining whether your school district was affected, and (ii) a link to a “detailed guide written by Romy Backus, SIS Specialist at the American School of Dubai, [that] explains how to check the PowerSchool SIS logs to determine if data was stolen.”

Be ready to communicate with your school community. Teachers, parents, students, former students, and others will have a lot of questions about the incident. According to a report by Infosecurity Magazine,

A message to parents by the Howard-Suamico School District in Wisconsin, US, seen by news outlet NBC 26, read: “PowerSchool confirmed that this was not a ransomware attack but it did pay a ransom to prevent the data from being released.”

If a ransom was paid to a threat actor, there is no way to confirm that the data has not been, or will not be, released or used for an impermissible purpose. For this and other reasons, it will be critical to have a plan for delivering prompt, consistent, and accurate messaging about the breach as soon as possible. Having a limited number of persons responsible for responding to questions can help to avoid misinformation and maintain consistent messaging.

As the investigation proceeds, PowerSchool likely will be providing more information about notifications, ID theft and credit monitoring services, and other aspects of the continued response to the incident. Affected schools and other PowerSchool customers will need to be ready to receive that information and decide how best to convey it to their communities. In the event decisions need to be made by a school’s Board, start thinking ahead about the steps necessary to arrange those meetings so decisions can be made appropriately, thoughtfully, and in a timely manner. Feel free to contact our incident response attorneys, as we have helped many schools and school districts navigate challenging communications in similar incidents.

Get a handle on your legal and contractual rights and obligations. State breach notification laws generally place the obligation to notify affected persons and others on the owner of the personal information compromised in the breach, not the service provider that had the breach. In many cases, however, a vendor causing a data breach may take on the obligation to provide such notifications, but the owner of the data still will be on the hook if that process is not performed in a compliant manner.

Of course, state notification laws vary from state to state. Examples of these variations include the definition of personal information, exceptions to the notification requirement, timeframes for notification, and requirements for ID theft and credit monitoring services. Reports noted above indicate that PowerSchool may be supporting the notification process. However, because the breach is affecting customers differently (e.g., different personal information affected, different state laws), PowerSchool may rely on instructions from customers about whether and how to comply with certain aspects of the notification requirements.

Note also that some states may have issued specific regulatory requirements for school districts and their vendors. For example, in New York, regulations issued by the New York State Education Department (SED) and adopted by its Board of Regents in 2020 require school districts and state-supported schools to develop and implement robust data security and privacy programs to protect any personally identifiable information (“PII”) relating to students, teachers, and principals. Among other things, the NY regulations require vendors that suffer a breach to notify the affected schools within seven (7) calendar days. The schools must in turn notify SED within ten (10) calendar days of receipt of notification of a breach from the vendor, and the schools must notify the affected individuals of the breach without unreasonable delay, but in no case later than sixty (60) days after discovery or receipt of breach notification from the vendor.
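
To make those stacked deadlines concrete, the short sketch below works through the arithmetic for a hypothetical incident. The dates, and the worst-case assumption that the vendor and school each use the full period allowed, are ours for illustration only; they are not part of the regulations and are not legal advice.

from datetime import date, timedelta

# Hypothetical illustration of the New York timeline described above.
# Assumption: the vendor discovers the breach on January 2, 2025.
vendor_discovery = date(2025, 1, 2)

# Vendor must notify the affected school within seven (7) calendar days of discovery.
school_notice_due = vendor_discovery + timedelta(days=7)

# School must notify SED within ten (10) calendar days of receiving the vendor's notice
# (worst case: the school receives notice on the vendor's last permissible day).
sed_notice_due = school_notice_due + timedelta(days=10)

# Individuals must be notified without unreasonable delay, and in no case later than
# sixty (60) days after discovery or receipt of the vendor's notice.
individual_notice_deadline = school_notice_due + timedelta(days=60)

print("Vendor must notify school by:", school_notice_due)              # 2025-01-09
print("School must notify SED by:", sed_notice_due)                    # 2025-01-19
print("Individuals must be notified by:", individual_notice_deadline)  # 2025-03-10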

Just as the law varies, the services agreement a school negotiated with PowerSchool may vary from PowerSchool’s standard form. Affected PowerSchool customers should be reviewing those agreements to assess their rights and obligations in areas such as information security, data breach response, and indemnity.

Evaluate insurance protections. Some organizations may have purchased “cyber” or “breach response” insurance which could cover some of the costs related to responding to the breach or defending litigation that may follow. Affected organizations should review their policy(ies) with their brokers to understand the potential coverage and what steps, if any, they need to take to confirm coverage.

What can individuals potentially affected by the PowerSchool breach do now?

It may take some time before notifications are sent to individuals affected by the breach. However, there are some resources that individuals could examine to consider their options now. Databreaches.net pulled together some helpful resources for potentially affected individuals, such as teachers, parents, and former students. Access that here.

When the dust clears from the PowerSchool incident, what should schools do going forward?

This is not the first vendor incident that has affected schools, and it will not be the last. There are many steps schools and other organizations should consider taking following a vendor breach affecting the organization’s data. However, for the moment, affected schools and customers should focus on the incident at hand. When the time comes, they should consult with experienced legal counsel and information security experts to be sure they have adopted, at a minimum, reasonable safeguards to protect their data and have assessed whether their vendors are doing the same.

* * *

For organizations large and small, incidents like this can be a significant disruption. To minimize that disruption, organizations may want and need to communicate with their applicable communities, and should do so confidently, but carefully. More information can be very helpful, but too much information and information that is repetitive can be confusing and frustrating. Organizations should involve key persons internally and possibly seek outside expertise and counsel to reach an appropriate balance in their response strategy and communications.

Ask any chief information security officer (CISO), cyber underwriter or risk manager, or cybersecurity attorney what controls are critical for protecting an organization’s information systems, and you’ll likely find multifactor authentication (MFA) at or near the top of every list. Government agencies responsible for helping to protect the U.S. and its information systems and assets (e.g., CISA, FBI, Secret Service) send the same message. But that message may be evolving a bit as criminal threat actors have started to exploit weaknesses in MFA.

According to a recent report in Forbes, for example, threat actors are harnessing AI to break through multifactor authentication strategies designed to prevent new account fraud. “Know Your Customer” procedures are critical for validating the identity of customers in certain industries, such as financial services and telecommunications. Employers increasingly face similar issues when recruiting employees, finding, after making the hiring decision, that the person doing the work may not be the person interviewed for the position.

Threat actors have leveraged a new AI deepfake tool, available on the dark web, to bypass the biometric systems that have been used to stop new account fraud. According to the Forbes article, the process goes something like this:

1. Bad actors use one of the many generative AI websites to create and download a fake image of a person.

2. Next, they use the tool to synthesize a fake passport or a government-issued ID by inserting the fake photograph…

3. Malicious actors then generate a deepfake video (using the same photo) where the synthetic identity pans their head from left to right. This movement is specifically designed to match the requirements of facial recognition systems. If you pay close attention, you can certainly spot some defects. However, these are likely ignored by facial recognition because videos are prone to have distortions due to internet latency issues, buffering or just poor video conditions.

4. Threat actors then initiate a new account fraud attack where they connect a cryptocurrency exchange and proceed to upload the forged document. The account verification system then asks to perform facial recognition where the tool enables attackers to connect the video to the camera’s input.

5. Following these steps, the verification process is completed, and the attackers are notified that their account has been verified.

Sophisticated AI tools are not the only MFA vulnerability. In December 2024, the Cybersecurity & Infrastructure Security Agency (CISA) issued best practices for mobile communications. Among its recommendations, CISA advised mobile phone users, in particular highly targeted individuals:

Do not use SMS as a second factor for authentication. SMS messages are not encrypted—a threat actor with access to a telecommunication provider’s network who intercepts these messages can read them. SMS MFA is not phishing-resistant and is therefore not strong authentication for accounts of highly targeted individuals.

In its 2023 Internet Crime Report, the FBI reported more than 1,000 “SIM swapping” investigations. A SIM swap is another technique used by threat actors involving the “use of unsophisticated social engineering techniques against mobile service providers to transfer a victim’s phone service to a mobile device in the criminal’s possession.”

In December, Infosecurity Magazine reported on another vulnerability in MFA. In fact, there are many reports about various vulnerabilities with MFA.

Are we recommending against the use of MFA? Certainly not. Our point is simply to offer a reminder that there are no silver bullets for securing information systems and that AI is not used only by the good guys. An information security program, preferably a written one (a WISP), requires continuous vigilance, and not just from the IT department, as new technologies are leveraged to bypass older ones.

In 2024, Israel became the latest jurisdiction to enact comprehensive privacy legislation, largely inspired by the EU’s General Data Protection Regulation (“GDPR”). On August 5, 2024, Israel’s parliament, the Knesset, voted to approve the enactment of Amendment No. 13 (“the Amendment”) to the Israel Privacy Protection Law (“IPPL”). The Amendment, which will take effect on August 15, 2025, is considered an overhaul of the IPPL, which has been left largely untouched since the law’s enactment in 1996.

Key Features of the Amendment include:

  • Expansion of key definitions in the law
    • Personal Information – Expanded to include any “data related to an identified or identifiable person”.
    • Highly Sensitive Information – Replaces the IPPL’s current definition of “sensitive information” and is similar in kind to the GDPR’s Special Categories of Data. Types of information that qualify as highly sensitive information under the Amendment include biometric data, genetic data, location and traffic data, criminal records, and assessments of personality type.
    • Data Processing – The Amendment broadens the definition of processing to include any operation on information, including receipt, collection, storage, copying, review, disclosure, exposure, transfer, conveyance, or granting access.
    • Database Controller – The IPPL previously used the term “database owner” and, akin to the GDPR, has changed the term to database controller, which is defined as the person or entity that determines the purpose of processing personal information in the database.
    • Database Holder – Similar to the GDPR’s “processor”, the Amendment includes the term database holder which is defined as an entity “external to the data controller that processes information on behalf of the data controller”, which due to the broad definition of data processing, captures a broad set of third-party service providers.
  • Mandatory Appointment of a Privacy Protection Officer & Data Security Officer
    • Equivalent to the GDPR’s Data Protection Officer (DPO) role, entities that meet certain criteria based on size and industry (inclusive of both data controllers and processors) will be required to implement a new role in their organization, the Privacy Protection Officer, tasked with ensuring compliance with the IPPL and promoting data security and privacy protection initiatives within the organization. Likewise, the obligation to appoint a Data Security Officer, which applied to certain organizations prior to the Amendment, has now been expanded to apply to a broader set of entities.
  • Expansion of Enforcement Authority
    • The Privacy Protection Authority (“PPA”), Israel’s privacy regulator, has been given broader enforcement authority, including a significant increase in financial penalties based on the number of data subjects impacted by a violation, the type of violation, and the violating entity’s financial turnover. Financial penalties are capped at 5% of annual turnover for larger organizations and could reach millions of dollars (e.g., a data processor that processes data without the controller’s permission in a database of 1,000,000 data subjects, at 8 ILS per data subject, can be fined 8,000,000 ILS (approx. $2.5 million USD)). Penalties for small and micro businesses are capped at 140,000 ILS (approx. $45,000 USD) per year. Other enhancements to the PPA’s authority include expansive investigative and supervisory powers as well as increased authority for the Head of the PPA to issue warnings and injunctions.

Additional updates in the Amendment include expansion of the notice obligation in the case of a data breach, increased rights for data subjects, extension of the statute of limitations, and exemplary damages. In future segments on the IPPL leading up to the August 2025 effective date, we will dive deeper into some of the key features of the Amendment, which is certain to have an impact on entities with customers and/or employees in Israel.

Data privacy and security regulation is growing rapidly around the world, including in Israel. This legislative activity, combined with the growing public awareness of data privacy rights and concerns, makes the development of a meaningful data protection program an essential component of business operations.

The Indiana Attorney General’s Office (OAG) filed a detailed complaint on December 23, 2024 (the Complaint), which arose out of the following patient complaint:

The OAG received a consumer complaint stating that the consumer had contacted Arlington Westend Dental on multiple occasions to receive copies of their x-rays, but Arlington Westend Dental stated it no longer had the x-rays because someone “hacked” their systems.

Under both federal and state law, patients generally have rights to their medical records. In fact, over the last several years, the federal Office for Civil Rights (OCR), which enforces the HIPAA Privacy and Security Rules, has vigorously enforced these rights. In October 2024, the agency announced its 50th enforcement action, touting a $70,000 settlement, coincidentally with another dental practice.

It should be no surprise that the patient sought redress from the OAG, particularly after being told the reason for the lack of records was a “hacking” of the dental practice’s systems. At that point, according to the Complaint, the patient had not received notice of the incident. However, the facts that follow in the Complaint may be surprising for some.

According to the Complaint:

  • A ransomware attack occurred in October 2020. Because no forensic investigation was performed, the scope of the incident could not be determined.
  • The ransomware attack was not reported to the OAG when required by law; the OAG discovered it during its investigation. When the attack was ultimately reported, the report indicated that the incident was not an intrusion, “but an incident of data being lost when the on-site internal hard drive of the server got formatted by mistake.”
  • The OAG obtained recordings of customer service calls from the dental practice’s software vendor that told a different story about the incident, confirming facts consistent with a ransomware attack, encryption of all records on the impacted server, and the existence of a ransom note.

The OAG’s findings about the ransomware incident prompted further investigation into the practice’s compliance with HIPAA generally. According to the Complaint, the practice had one set of HIPAA policies located at one of its six locations, with no evidence of implementation. No risk assessment had been conducted. In addition to a lack of evidence of regulatory compliance with policy and procedure obligations under HIPAA, the OAG also learned that the practice “repeatedly disclosed PHI in public replies to online patient reviews and made public posts disclosing PHI and identifying individuals, including minor children, as patients of [the practice] without patient authorization.”

The OAG included in the Complaint examples of the photographs of patients made public by the practice and some of the responses to online reviews. Here is one of those responses:

Ms. [redacted] I am sorry to hear that you are upset with the treatment that your husband received at our office. We strive for nothing but the best care for our patients. And let me assure you that your husband got very good dental care. Your husband came in as an emergency because of pain and infection and wanted to have the tooth extracted. We took time out of our busy schedule to take care of him and provide the same-day treatment, for which most people are grateful. He was already in so much pain as you stated when he came in, which means he already had severe infection. We treated the infection by extracting the tooth which was the source of the infection. The doctor also prescribed antibiotics and pain medication. I don’t understand why you would say that we did not take the whole tooth out. We have a post-op X-ray that shows the entire tooth has been extracted. Perhaps you should seek professional opinion of another dentist rather than giving us an unfair review based upon your vague and uninformed assumptions.

Clearly, a lot went wrong here, and there are some serious allegations by the OAG about how this incident and the investigation were handled by the practice. But there are some recurring lessons for providers, particularly smaller and midsized practices, that include:

  • Having a set of HIPAA policies in a drawer that no one in the practice sees will do little to support an argument for HIPAA compliance.
  • Complaints about timely and adequate responses to requests for patient records will get the attention of federal and state agencies and, if substantiated, will likely lead to penalties.
  • While they can be upsetting and possibly disruptive to the practice, responding to patient reviews online and in social media can be serious traps for the unwary. We have seen it play out badly for providers here, here, and here.

We have helped many small to midsized providers, including dental practices, work through the issues and avoid these kinds of settlements and enforcement actions.