When the California Privacy Rights Act (CPRA) was enacted, it created the California Privacy Protection Agency (CPPA) and delegated to the CPPA significant regulatory authority. One area of that authority is cybersecurity, including a requirement that certain businesses perform annual cybersecurity audits. On September 8, 2023, the CPPA considered a draft set of regulations that would establish rules for conducting those audits.

It is important to note that California already requires certain businesses to maintain reasonable security procedures and practices to protect personal information.

  • Civil Code Section 1798.100(e), under the CCPA, provides:

A business that collects a consumer’s personal information shall implement reasonable security procedures and practices appropriate to the nature of the personal information to protect the personal information from unauthorized or illegal access, destruction, use, modification, or disclosure in accordance with Section 1798.81.5.

  • Civil Code Section 1798.81.5, provides:

(b) A business that owns, licenses, or maintains personal information about a California resident shall implement and maintain reasonable security procedures and practices appropriate to the nature of the information, to protect the personal information from unauthorized access, destruction, use, modification, or disclosure.

(c) A business that discloses personal information about a California resident pursuant to a contract with a nonaffiliated third party that is not subject to subdivision (b) shall require by contract that the third party implement and maintain reasonable security procedures and practices appropriate to the nature of the information, to protect the personal information from unauthorized access, destruction, use, modification, or disclosure.

A couple of observations about these provisions:

  • Section 1798.100, which is part of the CCPA, applies to “businesses” that are subject to the CCPA. Section 1798.80(a), by contrast, defines “business” more broadly to include “a sole proprietorship, partnership, corporation, association, or other group, however organized and whether or not organized to operate at a profit.” Thus, while the CCPA generally applies to for-profit entities, this section of the Civil Code applies to businesses whether or not they are organized for profit.
  • As the CPPA begins to establish regulations around a set of personal information for one set of “businesses,” those covered under the CCPA, California has also provided guidance for businesses covered by Civil Code Section 1798.81.5. In February 2016, then-California Attorney General (and now Vice President) Kamala D. Harris issued a California Data Breach Report. According to that report, a business’s failure to implement all of the controls set forth in the Center for Internet Security’s Critical Security Controls constitutes a lack of reasonable security. The CCPA appears to incorporate the requirements of Civil Code Section 1798.81.5; nonetheless, businesses will need to determine which cybersecurity standard applies to them.

So, what do the draft CCPA cybersecurity audit regulations say? Here is a summary of just some of the proposed requirements for such audits:

  • The requirement for a covered business to complete the audit would be based on whether the business’s processing of personal information presents a significant risk to consumers’ security. The draft regulations begin to craft the factors for determining when a significant risk exists. One factor that would trigger the audit requirement is that the business derives 50 percent or more of its annual revenues from selling or sharing consumers’ personal information. However, the CPPA is considering other factors, such as the business having more than a to-be-determined amount of gross revenue or number of employees.
  • Cybersecurity audits would be required to be performed by a “qualified, objective, independent professional [auditor] using procedures and standards generally accepted in the profession of auditing.” However, the auditor would not need to be external to the business, provided the auditor can exercise impartial judgment – e.g., the auditor should not be auditing a cybersecurity program the auditor helped to create. The audit would need to include the auditor’s name, affiliation, and relevant qualifications in such detail as necessary to fully describe the nature of those qualifications, as well as the number of hours each auditor worked on the cybersecurity audit.
  • The cybersecurity audit would need to:
    • Assess, document, and summarize each applicable component of the business’s cybersecurity program;
    • Specifically identify any gaps or weaknesses in the business’s cybersecurity program;
    • Specifically address the status of any gaps or weaknesses identified in any prior cybersecurity audit; and
    • Specifically identify any corrections or amendments to any prior cybersecurity audits.
  • The audit would have to assess and document certain components of the cybersecurity program with “specificity.” One such component is the safeguards the business has in place, such as multi-factor authentication, encryption, zero trust architecture, access management, audit log management, response to security incidents, etc. If a particular safeguard is not in place, the audit would be required to document and explain why it is not necessary and how other safeguards provide at least equivalent security, a standard not too dissimilar from the “addressable” implementation specifications under the HIPAA Security Rule.
  • The cybersecurity audit would need to be reported to the business’s board of directors or governing body, or if no such board or equivalent body exists, to the highest-ranking executive in the business responsible for the business’s cybersecurity program. Notably, the audit would need to include certain statements, such as a certification that such governing body or highest-ranking executive has reviewed the cybersecurity audit and understands its findings.
  • If the business provided notifications to affected consumers under California’s breach notification law for businesses, the cybersecurity audit would have to include a description of those notifications and, where applicable, a description of the notification to the California Attorney General.
  • Service providers and contractors would be required to cooperate with businesses completing such audits, including making available all “relevant information that the auditor deems necessary for the auditor to complete the business’s cybersecurity audit.”
  • A written certification of completion of the audit would be required to be submitted to the CPPA, signed by a member of the board or highest-ranking executive.

If you have questions about the CPPA Cybersecurity Draft Regulations or related issues, please reach out to a member of our Privacy, Data, and Cybersecurity practice group to discuss.

This summer, the Securities and Exchange Commission (SEC) adopted rules to enhance and standardize disclosures by public companies regarding cybersecurity risk management, strategy, governance, and incidents.  

The rules will impose a number of new requirements, including disclosures regarding:

  • Material cybersecurity incidents, which must be made within four (4) business days – a tight timeline that will compel subject companies to efficiently conduct their preliminary investigation of cybersecurity incidents so that they are prepared to make disclosures regarding the nature, scope, and timing of such incidents, as well as their material or reasonably likely impact on the company.  Subject companies will also need to provide updates regarding previously reported cybersecurity incidents in their periodic reports.
  • The subject company’s policies and procedures to identify and manage cybersecurity risks.  In advance of making such disclosures, many organizations will likely need to enhance their cybersecurity safeguards and practices and/or to ensure those safeguards and practices are adequately documented in policies and procedures.     
  • The roles played by (a) management in implementing cybersecurity policies and procedures and (b) the board of directors in overseeing the organization’s cybersecurity program.  For some companies, these requirements will likely prompt an assessment of whether management and the board are sufficiently involved in implementing and overseeing the company’s cybersecurity program and have the requisite expertise to do so effectively.   

The new rules were published on August 4, 2023, and took effect September 5, 2023. Incident-specific disclosures will be required beginning on the later of 90 days after the rule’s August 4, 2023 publication date or December 18, 2023, though smaller companies will have an additional 180 days before they are required to begin providing disclosures. Companies whose fiscal years end on or after December 15, 2023, will be required to provide the annual disclosures beginning with their 2023 Form 10-K or 20-F.
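As a quick sanity check on those deadlines, the “later of” comparison can be sketched in a few lines; the dates below are those stated in the rule release, and the variable names are purely illustrative:

```python
from datetime import date, timedelta

publication = date(2023, 8, 4)                        # Federal Register publication date
ninety_days_after = publication + timedelta(days=90)  # November 2, 2023

# Incident-specific disclosures begin on the later of the two dates
incident_start = max(ninety_days_after, date(2023, 12, 18))

# Smaller reporting companies get an additional 180 days
smaller_company_start = incident_start + timedelta(days=180)

print(incident_start, smaller_company_start)  # 2023-12-18 2024-06-15
```

Because 90 days after publication falls on November 2, 2023, the December 18, 2023 date controls for most companies, and smaller companies’ obligations begin in mid-June 2024.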

If you have questions about the SEC Cybersecurity Disclosures or related issues, please reach out to a member of our Privacy, Data, and Cybersecurity practice group to discuss.

The annual Cost of a Data Breach Report (Report) published by IBM is reliably full of helpful cybersecurity data. This year is no different. After reviewing the Report, we pulled out some interesting data points. Of course, the Report as a whole is well worth the read, but if you don’t have the time to get through its 78 pages, this post may be helpful.

What is new in the Report. There are several new items covered by the Report. The two that caught our eye:

  • Is it beneficial to involve law enforcement in a ransomware attack? According to the Report, organizations that did not involve law enforcement in a ransomware attack experienced significantly higher costs, as much as $470,000 more. Nearly 40% of respondents did not involve law enforcement. In our experience, involvement of law enforcement can have significant benefits, including greater insight into the behavior of certain threat actors. Such insight can speed up efforts to contain the attack, reducing costs in the process.
  • What are the effects of ransomware playbooks and workflows? In short, it turns out the effects are good. Having playbooks and workflows for ransomware attacks helps to reduce response time and minimize costs. In fact, the benefits of incident response planning are not limited to ransomware. Organizations we encounter that have a robust incident response program are significantly more prepared to identify and respond to an incident. An incident response plan generally means having a dedicated team, maintaining a written plan, and practicing that plan. Incident response plans can be particularly important for healthcare entities, which have experienced a 53% increase in data breach costs since 2020.

AI has many benefits, including controlling data breach costs. There are two significant drivers of data breach costs: time to detect and time to contain. Shortening one or both can yield substantial cost savings when dealing with a data breach. According to the Report, extensive use of security AI and automation shortened breach detection and containment by 108 days on average and reduced costs by nearly $2 million. Even limited use of AI shortened the response time by 88 days, on average.

AI-driven data security and identity solutions can help drive a proactive security posture by identifying high-risk transactions, protecting them with minimal user friction and stitching together suspicious behaviors more effectively.

Healthcare continues to be the leader in data breach costs. Second place, the financial services industry, is not even close, according to the Report. Healthcare (hospitals and clinics), with an average data breach cost of $10.9 million, nearly doubles the $5.9 million average for organizations in financial services. Susan Rhodes, the acting deputy for strategic planning and regional manager for the Office for Civil Rights at HHS, recently observed that ransomware attacks are up 278% in the last 5 years.

Smaller organizations faced significant data breach cost increases, while larger organizations experienced declines. We have written a bunch here on the data security and breach risks of small organizations. The three categories of smaller organizations measured by the Report – fewer than 500 employees, 500-1,000 employees, and 1,001-5,000 employees – all experienced double-digit percentage increases, with the larger two categories seeing cost increases of more than 20%. It is difficult to pinpoint the reasons for this disparity. However, it may be that small organizations are less likely to engage in the kinds of activities that tend to minimize data breach costs, such as incident response planning and using security AI. We also find that smaller organizations tend to view themselves as not being targets of cyber criminals.

Perhaps one of the more instructive parts of the Report is Figure 16 on page 28, which illustrates the impact certain factors can have on the average cost of a breach. The four factors that most reduce data breach costs on average are integrated security testing in software development (DevSecOps), employee training, incident response planning and testing, and AI. Factors that tend to increase costs on average include a remote workforce, third-party involvement, noncompliance with regulations, and security system complexity.

Detection and escalation costs are now the largest category of data breach costs, surpassing even lost business costs. When one thinks of data breach-related costs, one may be tempted to guess the costs of notification. But notification is actually the lowest of the four cost categories, according to the Report, although it has more than doubled since 2018. Detection and escalation costs, which took the top spot beginning in 2022, include “forensic and investigative activities, assessment and audit services, crisis management, and communications to executives and boards.”

Overall, the Report is filled with additional insights concerning the costs of a data breach. Here are some quick takeaways that could help your organization minimize these costs:

  • Develop, implement, and practice an incident response plan,
  • Train employees,
  • Implement AI, even a little,
  • Comply with applicable regulations, and
  • Strengthen vendor security assessment and management programs, cloud service providers in particular.

The recent U.S. Supreme Court decision striking down affirmative action in undergraduate admissions, Students for Fair Admissions, Inc. v. President and Fellows of Harvard College, No. 20-1199 (the “SFFA Decision,” summarized here) has significant implications for admissions in higher education. However, some are considering whether the High Court’s holding will have a ripple effect in other areas, such as in employment law.

As the ground shifts a bit under college campuses following the SFFA Decision, employers are considering the potential impact of the decision on their DEI initiatives and recruiting. Since the decision, a flurry of litigation, EEOC charges, public relations campaigns, and other activities have commenced in an effort to broaden and/or influence the reach of the Supreme Court’s holding. Recent public statements made by EEOC officials are illustrative.

Just hours after the SFFA Decision, EEOC Chair Charlotte A. Burrows said in an EEOC Press Release,

“The decision in Students for Fair Admissions, Inc. v. President & Fellows of Harvard College and Students for Fair Admissions, Inc. v. University of North Carolina does not address employer efforts to foster diverse and inclusive workforces or to engage the talents of all qualified workers, regardless of their background. It remains lawful for employers to implement diversity, equity, inclusion, and accessibility programs that seek to ensure workers of all backgrounds are afforded equal opportunity in the workplace.”   

Around the same time, EEOC Vice Chair Jocelyn Samuels also expressed that she believed employers would still be able to run their DEI programs as long as they’re not making employment decisions based on race.

During a recent webinar, as reported in Law360, EEOC Commissioner Andrea Lucas seemed to echo the SFFA Decision, underscoring the importance of “race-neutral” policies for employers. When asked to respond to Vice Chair Samuels’ comments, Commissioner Lucas observed that the Vice Chair’s position:

fails to engage with the key question facing employers today: The legal and practical risks of race- and sex-conscious DEI initiatives adopted by many, many employers in the past to achieve equity instead of equal opportunity

In any case, as employers examine their policies, procedures, and practices to ensure compliance with applicable discrimination laws, one area for review is the use of AI-powered recruiting platforms. How employers assess, configure, and implement these recruiting tools is an important consideration and varies from employer to employer, industry to industry, and platform to platform. Developing a deeper understanding of these tools is critical, particularly as the SFFA Decision could wind up reshaping this area of law.

There are several high-level issues employers should be exploring when assessing the use of these tools, such as:

  • Whether there is bias in the data used to train the system.
    • How the system works when making recruiting suggestions or decisions, and whether it can be explained plainly.
  • Applicant attributes considered by the algorithms, along with the weight each attribute is assigned.
  • The ability of the tool to be fine-tuned to match specific needs, and who decides what those needs are.
  • When and how the system is evaluated for bias.
  • The allocation of liability between the employer and, if applicable, the vendor supplying the tool.
    • Any case studies the vendor may have involving the tool successfully reducing bias in hiring.

For several years, the EEOC has been examining employer use of AI and the potential risks of unlawful discrimination under both the Americans with Disabilities Act and Title VII. It remains unclear whether or to what extent the SFFA Decision will shape the agency’s developing position, particularly as it relates to AI. Regardless, employers seeking to use AI-powered recruiting platforms to enhance workforce diversity should proceed cautiously. AI algorithms are not immune to bias. To help minimize these risks, employers should meticulously review the data and algorithms used in their recruiting platforms and conduct regular audits and adjustments, as needed, throughout the hiring process.

What do ransomware, Yelp, and website tracking technologies all have in common? They are troubling areas of concern for HIPAA covered entities and business associates, according to one official from the federal Office for Civil Rights (OCR) which enforces the HIPAA privacy and security rules. Recently, the Executive Editor of Information Security Media Group’s (ISMG’s) HealthcareInfoSecurity.com media site, Marianne Kolbasuk McGee, sat down with Susan Rhodes, the OCR’s acting deputy for strategic planning and regional manager to discuss these issues.

We briefly summarize the discussion below, but you can access the short interview here (~10 min.). It is worth a listen.

Ms. Rhodes outlined three troublesome areas that OCR is watching closely:

  • Hacking/ransomware. Obviously, this continues to be a significant problem for the healthcare sector. According to Ms. Rhodes, ransomware attacks are up 278% in the last 5 years. Developing, maintaining, and practicing an incident response plan is one important tool for dealing with these and other attacks.
  • Online reviews. Negative comments made by customers/patients on popular online review services, such as those offered by Yelp and Google, can be upsetting for any small business. Practitioners in the health care sector, such as physicians and dentists, have to be particularly careful when responding to patient complaints on such platforms, if they respond at all. Their responses could result in the wrongful disclosure of protected health information of their patients, resulting in significant OCR enforcement actions such as occurred here and here.
  • Website tracking technologies. Calling this a “hot” area and referencing OCR investigations across the country, Ms. Rhodes directed listeners to the OCR guidance on tracking technologies issued in December 2022. Specifically, she reminded HIPAA covered entities of key considerations when using website tracking technologies including, without limitation, the potential need for business associate agreements and patient consent.

Ms. McGee also inquired about areas where covered entities and business associates’ HIPAA compliance frequently falls short. Ms. Rhodes mentioned a few:

  • Risk analysis – which is foundational to the policies and procedures adopted by covered entities and business associates.
  • Access controls – in short, making sure employees and other workforce members at the covered entity or business associate only have access to the PHI needed to perform their job.
  • Audit controls – regularly reviewing system activity, log files, etc. to identify irregular activity or potential compromises to PHI.

The HIPAA privacy and security rules continue to raise significant compliance challenges for covered entities and business associates. It is important to note that those challenges do not exist just in the physician’s office; they must be managed online as well, including on organizations’ websites.

The healthcare sector is a prime target for data breaches. According to a summary by the HIPAA Journal, 32% of all data breaches between 2015 and 2022 were in the healthcare sector, “almost double the number recorded in the financial and manufacturing sectors.” Industry analysts cite many reasons for this, including the sensitivity of health data and its value on the black market compared to other forms of data. Evidently, another driver of data breaches for healthcare entities is M&A activity.

A recent study suggests that the likelihood for hospitals to experience a data breach doubles during the year before and after a merger. As some expect an increase in hospital mergers in the coming year, one can expect the number of healthcare data breaches to increase.

Nan Clement, a Ph.D. candidate in economics in the School of Economic, Political and Policy Sciences at the University of Texas at Dallas, looked at reporting on data breaches from the Office for Civil Rights during the period 2010 to 2022. Based on her analysis, for the two-year period surrounding a transaction closing (one year before and after the closing date), the chance of a data breach was 6%, compared to 3% for hospitals that merged but were outside that two-year window.

The study also looked at some of the potential reasons for this uptick:

  • Increased interest from hackers – data from Google Trends showed a “connection between increases in searches for a target hospital’s name with increases in hacking activity,” which may stem from increased media attention around the merger.
  • Incompatibility of information systems – trying to merge data on different electronic medical record (EMR) platforms.
  • Increases in insider misconduct.

Another reason may simply be a diversion of focus from the day-to-day administrative functions at the hospital, considering how disruptive a merger can be. The FBI also issued a notification advising that ransomware actors target companies involved in significant, time-sensitive financial events to incentivize ransom payment by victims.

We have discussed here data security issues that can arise in the course of a transaction. For any entity involved in M&A activity, especially in the healthcare sector, it is critical to stay focused and recognize that the organization may be more of a target at this time. Heightened awareness by the organization’s information security team, along with increased training and reminders to staff about phishing and other forms of attack, could help avoid a data breach during this more vulnerable period. Additionally, the transacting parties might consider this risk and take appropriate steps during the due diligence stage both to protect against an attack and to be prepared to respond should one occur.

The Cyber Safety Review Board (Board) issued a report entitled, Review of the Attacks Associated with Lapsus$ and Related Threat Groups (Report), released by the Department of Homeland Security on August 10, 2023. The Report begins with a message from the Board’s Chair and Vice Chair discussing WarGames, a movie with interesting parallels to the present day – the leveraging of AI and large language models into systems (see Joshua/WOPR) and teenagers compromising sophisticated systems (Matthew Broderick as a high school student hacking into the Dept. of Defense). The Report looks at “Lapsus$,” described as a loosely organized group of threat actors, including juveniles in some cases, that gained significant attention after providing a window into its inner workings.

“Lapsus$ made clear just how easy it was for its members (juveniles, in some instances) to infiltrate well-defended organizations.”

Established under President Biden’s Executive Order (EO) 14028 on ‘Improving the Nation’s Cybersecurity’, the Board reviews major cyber events and makes concrete recommendations to drive improvements. The Report does not disappoint in its description of the targeting and nature of attacks by Lapsus$ and similar groups, as well as in the Board’s recommendations, one being to move toward a “passwordless” world.

While we cannot cover all of the critical and helpful information in the 59-page Report, here are a few highlights.

Multi-factor Authentication Implementations Used Broadly Today are Insufficient.

A reliable joke at any data security conference is how “password” or “123456” continue to be the most popular passwords. Another weakness is the use of the same account credentials across multiple accounts. Multi-factor authentication (MFA) was designed to address these practices by going beyond the password to require one or more additional authenticators before access is permitted. MFA often comes highly recommended to help protect against one of the most financially damaging online crimes, business email compromise (BEC).

Perhaps a bit unsettling for the many organizations that have implemented MFA thinking it is the answer to system access vulnerabilities, the Report explains:

the Board saw a collective failure to sufficiently account for and mitigate the risks associated with using Short Message Service (SMS) and voice calls for MFA. In several instances, attackers gained initial access to targeted organizations through Subscriber Identity Module (SIM) swapping attacks, which allowed them to intercept one-time passcodes and push notifications sent via SMS, effectively defeating this widely used MFA control. A lucrative SIM swap criminal market further enabled this pay-for-access to a target’s mobile phone services. Despite these factors, adopting more advanced MFA capabilities remains a challenge for many organizations and individual consumers due to workflow and usability issues.

As expected, however, some methods of MFA are better than others. The Report observed that application or token-based MFA methods, for example, were more resilient.
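To illustrate why authenticator-app and token-based methods fare better against SIM swapping, here is a minimal sketch of how a time-based one-time passcode (TOTP, per RFC 6238) is derived entirely on the user’s device, so the code never transits the carrier’s SMS network. This is an illustration only, not a production implementation:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, interval=30, digits=6, at=None):
    """Derive an RFC 6238 time-based one-time passcode locally on the device."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of 30-second intervals since the Unix epoch
    counter = int((time.time() if at is None else at) // interval)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation per RFC 4226
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at time 59
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59))  # 287082
```

Because the shared secret never leaves the device and no passcode is transmitted over SMS or voice, a SIM swap gives the attacker nothing to intercept; the residual risk shifts to phishing of the code itself, which is one reason the Board points to FIDO2 hardware-backed authenticators as the stronger option.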

If you are not familiar with SIM swaps, the process goes something like this, as detailed in the Report:

  1. Attacker collects data on victim through social media, phishing, etc.
  2. Attacker uses victim’s credentials to request SIM swap from telecommunications provider.
  3. Telecommunications provider approves the attacker’s fraudulent SIM swap.
  4. With full account takeover, the attacker can get past MFA and access the victim’s personal accounts, including their employer’s systems.

“Lapsus$ took over online accounts via sign-in and account recovery workflows that sent one-time links or MFA passcodes via SMS or voice calls”

Insider Recruitment

Many organizations might not realize or want to believe it, but employees are vulnerable to monetary incentives to provide attackers with system access. The Report notes that in some cases these incentives could be as high as $20,000 per week. Compromised employees might hand over access credentials, approve upstream MFA requests, conduct SIM swaps, and perform other actions to help the attackers gain access to the organization’s systems.

Supply chain attacks

Lapsus$ and similar groups do not just attack organizations directly; they also go after targets that provide access to many organizations – third-party service providers and business process outsourcers (BPOs). The recent attacks on secure file transfer services, such as Accellion and the GoAnywhere service offered by Fortra, are evidence of this strategy. By gaining access to these services, the attackers have entrée to files uploaded to those services by their many customers.

Per the report:

In January 2022, a threat actor studied for this report gained access to privileged internal tools of a third-party service provider by compromising the computer of a customer support contractor from one of its BPOs. The real target of this attack was not the third-party service provider, nor the BPO, but rather the downstream customers of the service provider itself. This is a remarkable example of a creative three-stage supply chain attack used by this class of threat actors.


The Board outlines several recommendations, some more within an organization’s power to implement than others. The recommendations fall into four main categories:

  • strengthening identity and access management (IAM);
  • mitigating telecommunications and reseller vulnerabilities;
  • building resiliency across multi-party systems with a focus on business process outsourcers (BPOs); and
  • addressing law enforcement challenges and juvenile cybercrime.

As noted above, one of the strongest suggestions for enhancing IAM is moving away from passwords. The Board encourages increased use of Fast IDentity Online 2 (FIDO2)-compliant, hardware-backed solutions. In short, FIDO authentication permits users to sign in with passkeys, usually backed by a biometric or security key. Of course, biometrics raise other compliance risks, but the Board observes this technology avoids the vulnerabilities and suboptimal practices that have developed around passwords.

Another recommendation is to develop and test cyber incident response plans. As we have discussed on this blog several times (e.g., here and here), no system of safeguards is perfect. So, as an organization works to prevent an attack, it also must plan to respond should one be successful. Among other things, these plans should:

  • identify critical data, systems, and assets that should be prioritized during an attack,
  • outline a tested process for recovering from back-ups,
  • have an internal communications plan,
    • involve BPOs and third-party service providers in developing and practicing the plan, and
  • identify and maintain contact information for internal and external individuals and groups that are critical to the response process – key employees, DFIR firms, law enforcement, outside counsel, insurance carriers, etc.

The Report is a great read for anyone involved in some way in addressing data risk to an organization. A critical take-away for anyone reading this report is threats are evolving and come in many forms. A control implemented in year 1 may become a significant vulnerability in year 2. Forty years later, the movie WarGames continues to be relevant, even if only to show that some of the most secure systems can be compromised by a handful of curious teenagers.


In a 2019 post about increasing cyber risks in K-12 schools, we cited a report, “The State of K-12 Cybersecurity: 2018 Year in Review,” that contained sobering information about cybersecurity in local school districts across the country. According to that report, in 2018 there were 122 publicly disclosed cybersecurity incidents affecting school districts across 38 states. Not much has changed. A more recent article looking at ransomware activity in 2023 reports 120 attacks against school districts so far this year.

Yesterday, the Biden administration announced “new actions and private commitments to bolster the nation’s cyber defense at schools.” Among the actions:

Secretary of Education Miguel Cardona and Secretary of Homeland Security Alejandro Mayorkas, joined First Lady Jill Biden, to convene school administrators, educators and private sector companies to discuss best practices and new resources available to strengthen our schools’ cybersecurity, protect American families and schools, and prevent cyberattacks from disrupting our classrooms.

Perhaps more impactful in the short term are references in the announcement to (i) additional funding sources for schools, and (ii) recently released guidance, “K-12 Digital Infrastructure Brief: Defensible & Resilient,” jointly published by the U.S. Department of Education and the Cybersecurity and Infrastructure Security Agency (CISA). In particular, the guidance outlines, among other things, some “High-Impact Recommendations,” such as implementing multifactor authentication. Potential sources for increased funding include a pilot program proposed by the FCC to provide $200 million over three years. School districts also might consider allocating funds that remain available from the Elementary and Secondary School Emergency Relief Fund (ESSER Fund) established during COVID-19. Also, AWS recently pledged $20 million to a grant program designed to support training and incident response at schools.

While it is true that school districts are often understaffed and underfunded, including in the area of cybersecurity, there is some low-hanging (and relatively inexpensive) fruit that more schools might be able to address readily. One of those areas is incident response.

Even if a district is not in a position to take all the steps it might want to in order to prevent an attack, it might be able to vastly improve its plans to respond to an attack. Doing so could significantly reduce the disruption to students and related communities.

The White House report notes that during the prior academic year, four schools had to cancel classes or close completely in response to an attack.

Below are a few basic elements that should be included in an incident response plan (IRP):

  • identifying security incidents;
  • responding to security incidents; and
  • mitigating harmful effects of security incidents.

Certainly, each of these elements might look different from district to district, considering size, number of locations, information systems, prior experience, cyber insurance policies, types of personal information, and state laws. But they are important elements for any plan.

More specifically, school boards will want to think about who will be doing the responding – who is on the “security incident response team.” This is a team that is organized and trained to effectively respond to security incidents. Areas to consider when forming and building a team include:

  • Build a strong balance of skill sets among team members (IT, legal, communications, etc.);
  • Ensure lines of communication will be available among team members during a crisis;
  • Consider external parties that can provide specific expertise concerning incident response; and
  • Commit to regularly practicing incident response procedures for different types of attacks.

Among other things, the IRP should help direct the team’s mitigation efforts. Those efforts are facilitated through measures such as contingency planning, robust data backup, and recovery processes. These are areas the school board or superintendent should not be thinking through for the first time when a security incident occurs. For example, knowing that you have a backup of student data is not enough; regularly confirming that you can restore from backups while maintaining data integrity is key to minimizing disruption to the district.
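For a district’s IT staff, that kind of restore testing can be partly automated. Below is a minimal, illustrative Python sketch (the function names are our own, and a real program would also check permissions, databases, and application-level integrity) that compares files in an original directory against a test-restored copy using checksums:

```python
import hashlib
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Compute the SHA-256 checksum of a file, reading in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_restore(original_dir: Path, restored_dir: Path) -> list[str]:
    """Return relative paths of files that are missing or altered in a test restore."""
    problems = []
    for src in original_dir.rglob("*"):
        if src.is_file():
            rel = src.relative_to(original_dir)
            dst = restored_dir / rel
            # A file that is absent, or whose checksum differs, signals a bad restore.
            if not dst.is_file() or sha256_of(src) != sha256_of(dst):
                problems.append(str(rel))
    return problems
```

Run on a schedule, a check like this turns “we think the backups work” into documented evidence that restores were actually exercised, which is exactly what an IRP tabletop exercise should confirm.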

There is a lot that can be said about steps toward preparedness and IRP development, but the point is that these are examples of measures that can be implemented more quickly and at less cost to help a district affected by a breach. Importantly, they can minimize the impact of the breach on the district and get kids back in the classroom more quickly.


Recently, things may have sped up a little in your doctor’s office. The notes for your recent visit may have been organized and filed a little more quickly. You might have received assistance sooner than expected with a physician letter to your carrier concerning a claim. You also may have received copies of those medical records you have been waiting for, earlier than usual. Greasing the skids in these areas could be due to generative AI technologies, such as ChatGPT, which are being leveraged in myriad ways across all industries, not just healthcare. But, as pointed out in this recent ScienceBlog post, some caution may be in order.

Healthcare providers and business associates subject to the Privacy and Security Rules under the Health Insurance Portability and Accountability Act (HIPAA) have some steps to consider before sharing protected health information (PHI) with a third party. One that should come as no surprise is determining whether the third party is a business associate. According to OCR guidance:

A “business associate” is a person or entity that performs certain functions or activities that involve the use or disclosure of protected health information on behalf of, or provides services to, a covered entity… The Privacy Rule lists some of the functions or activities, as well as the particular services, that make a person or entity a business associate, if the activity or service involves the use or disclosure of protected health information. The types of functions or activities that may make a person or entity a business associate include payment or health care operations activities, as well as other functions or activities regulated by the Administrative Simplification Rules.

Business associate functions and activities include: claims processing or administration; data analysis, processing or administration; utilization review; quality assurance; billing; benefit management; practice management; and repricing.  Business associate services are: legal; actuarial; accounting; consulting; data aggregation; management; administrative; accreditation; and financial. See the definition of “business associate” at 45 CFR 160.103.

If the third party is a business associate, sharing PHI with it must be preceded by entering into a business associate agreement. As a best practice, it also would be prudent to assess the third party’s cybersecurity policies and procedures. If the third party is not a business associate, the patient’s authorization generally would be needed to share the information, unless there is a basis under HIPAA to make the disclosure without an authorization.

With ChatGPT or a similar service (offered by a third party to any physician’s office or health system), any doctor, nurse, office manager, administrator, or other staff member in a physician’s office or health system could set up an account (free or paid) and begin leveraging the large language model-based chatbot’s services. Those services might include help with various administrative tasks, such as those described above. The same might be true for workers employed by business associates providing services to physician office customers, for example.

Naturally, the question is whether ChatGPT or any similar service is a business associate under HIPAA. It may be, and it may be prepared to enter into a business associate agreement. Whether the service is HIPAA-compliant is an issue to assess when considering the use cases that will be permitted. These are important determinations that any HIPAA covered entity or business associate needs to make before permitting the sharing of PHI on the chatbot’s platform, or that of any similar service. Of course, an alternative is to limit the use of these tools by policy, and certainly to provide some training. Employees at all levels of a healthcare organization simply may not appreciate the flow of data, particularly considering the ease with which they could generate a quick patient letter or referral to a colleague.

For completeness, I asked ChatGPT: “I am a healthcare provider, should I share patient data on ChatGPT?” The chatbot responded:

As a healthcare provider, it is essential to prioritize patient privacy and adhere to data security regulations. Sharing patient data on this platform is not recommended, as it could compromise patient confidentiality. Instead, consider using secure and HIPAA-compliant communication channels for discussing sensitive patient information. If you have any medical or healthcare-related question, feel free to ask, and I’ll provide general information without requiring specific patient details.

The Department of Health and Human Services and the Federal Trade Commission have sent a joint letter to approximately 130 hospital systems and telehealth providers to emphasize the risks and concerns about the use of technologies, such as the Meta/Facebook pixel and Google Analytics, that can track a user’s online activities. We have summarized each agency’s position and guidance here and here, respectively.

The joint letter emphasizes the agencies’ focus on this issue, conveying to recipients a call to action:

To the extent you are using the tracking technologies described in this letter on your website or app, we strongly encourage you to review the laws cited in this letter and take actions to protect the privacy and security of individuals’ health information.

The joint letter reminds entities regulated by HIPAA that the HIPAA privacy and security rules apply when the information that such entities collect through tracking technologies or disclose to third parties (e.g., tracking technology vendors) includes PHI. As noted in the earlier HHS Bulletin on this issue, individuals with or without an existing patient relationship with the regulated entity could be sharing PHI with the entity (or a third party) through its website tracking technologies.

For those entities not subject to HIPAA, the joint letter affirms they still may have an obligation to protect against impermissible disclosures of personal health information under the FTC Act and the FTC Health Breach Notification Rule. The joint letter makes clear this would be the case:

“even if you relied upon a third party to develop your website or mobile app and even if you do not use the information obtained through use of a tracking technology for any marketing purposes”

Regulated entities, including covered entities and business associates, should conduct an audit of any tracking technologies used on their websites, web applications, or mobile apps and determine whether those technologies are being used in a manner that complies with HIPAA and the FTC Act. They also should consider an ongoing process for managing their online assets to ensure such technologies are not implemented without appropriate vetting and risk assessment. Regulated entities also should review tracking technology vendor agreements to ensure they contain appropriate terms relating to the collection, use, processing, and disclosure of PHI or personal health information. This may require a business associate agreement. Additional considerations are provided in our earlier posts at the links above.
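For the technical side of such an audit, a first pass might simply scan page source for the signatures of well-known trackers. The Python sketch below is illustrative only; the signature list is a small assumption of ours, and a thorough audit would use a maintained tracker database and inspect actual network traffic, not just HTML:

```python
import re
import urllib.request

# Hypothetical signatures for two widely used tracking technologies.
# A real audit would rely on a maintained, comprehensive list.
TRACKER_SIGNATURES = {
    "Meta/Facebook pixel": re.compile(r"connect\.facebook\.net|fbq\("),
    "Google Analytics": re.compile(r"googletagmanager\.com|google-analytics\.com|gtag\("),
}


def find_trackers(html: str) -> list[str]:
    """Return the names of known trackers whose signatures appear in the page source."""
    return [name for name, sig in TRACKER_SIGNATURES.items() if sig.search(html)]


def audit_url(url: str) -> list[str]:
    """Fetch a page and scan its HTML for tracker signatures."""
    with urllib.request.urlopen(url) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    return find_trackers(html)
```

A scan like this only flags candidates for review; whether a flagged technology actually collects or discloses PHI, and whether that use is permissible, remains a legal and factual determination.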