Cyber incidents are on the rise with no signs of slowing down, particularly in the healthcare industry. To combat this trend, on September 27, 2023, the U.S. Food and Drug Administration (FDA) released final guidance, Cybersecurity in Medical Devices: Quality System Considerations and Content of Premarket Submissions. The guidance replaces the FDA’s 2014 guidance, Content of Premarket Submissions for Management of Cybersecurity in Medical Devices.

In the introduction to the guidance, the FDA acknowledged the increased integration of wireless, Internet-, and network-connected capabilities, the use of portable media, and the frequent electronic exchange of medical device-related health information, which have created a need for more “robust cybersecurity controls to ensure medical device safety and effectiveness . . . .”

The guidance covers relevant cybersecurity considerations that may affect device safety and effectiveness, including but not limited to software, hardware, and firmware.

The FDA guidance recommends “designing for security,” stating that when it reviews premarket submissions, it will assess a device’s cybersecurity based on a number of factors. Premarket submissions should include information describing how security objectives are addressed and integrated into the device’s design. The guidance emphasizes that cybersecurity is part of device safety and the quality system requirements found under federal regulations, which may be relevant at the premarket stage, postmarket stage, or both.

The guidance provides recommendations on:

  • Testing and validating connected devices against breaches that affect multiple connected devices;
  • Labeling for devices with cybersecurity risks;
  • Developing cybersecurity management plans that communicate how the manufacturer will identify and communicate postmarket vulnerabilities in accordance with federal regulations; and
  • Providing an updateability/patchability view that describes the end-to-end process permitting software updates and patches to be provided/deployed once the device is in the field.

The FDA will host a webinar to discuss its new guidance on November 2, 2023.

If you have questions on the FDA guidance or related issues, contact a member of our Privacy, Data, and Cybersecurity practice group to discuss.

There are numerous cybersecurity regulations and requirements for businesses to worry about, but they may not be considering their cybersecurity obligations under privacy statutes. California was at the forefront of privacy regulation with the passage of the California Consumer Privacy Act (CCPA). Lawsuits under the CCPA began almost immediately after it took effect in 2020, and since then more than 300 cases have been filed under the statute. Although enforcement of the CCPA lies largely with the California Attorney General (and is now shared with the California Privacy Protection Agency), this has not stopped plaintiffs from creatively trying to expand the statute’s private right of action, which covers certain data breaches.

The CCPA authorizes a private cause of action against a covered business if its failure to implement reasonable security safeguards results in a data breach affecting personal information. If successful, a plaintiff can recover statutory damages in an amount not less than $100 and not greater than $750 per consumer per incident or actual damages, whichever is greater, as well as injunctive or declaratory relief and any other relief the court deems proper.
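For a sense of scale, the statutory-damages formula described above can be sketched in a few lines of Python. This is purely illustrative (the function name and inputs are ours, not the statute’s, and this is not legal advice):

```python
def ccpa_recovery(consumers: int, actual_damages: float,
                  statutory_per_consumer: float) -> float:
    """Illustrative CCPA exposure estimate: the greater of statutory
    damages ($100-$750 per consumer per incident) or actual damages."""
    if not 100 <= statutory_per_consumer <= 750:
        raise ValueError("statutory damages run $100-$750 per consumer per incident")
    return max(consumers * statutory_per_consumer, actual_damages)

# Even at the $100 statutory floor, a breach affecting 10,000 consumers
# yields $1,000,000 in potential statutory damages.
print(ccpa_recovery(10_000, actual_damages=250_000, statutory_per_consumer=100))
```

Because statutory damages scale per consumer, a breach affecting even a modest class can produce a large exposure figure, which helps explain plaintiffs’ interest in the private right of action.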

Plaintiffs’ counsel are attempting to use this provision of the CCPA to bring class action lawsuits. In a recent case in California district court, the plaintiff brought claims under the CCPA’s reasonable security safeguards requirement based on the defendant’s alleged sharing of consumer data.

The CCPA claim was eventually dismissed in part because the court found the CCPA’s right of action is limited to the data breach context and not to the intentional sharing of data.

But this may not be the final word on the use of the CCPA cybersecurity requirements. It is likely plaintiffs’ counsel will continue to look for ways to use the reasonable security safeguards requirements to their advantage.

If you have questions about the CCPA cybersecurity requirements or related issues, contact a Jackson Lewis attorney to discuss.

On October 8, 2023, Governor Newsom signed Assembly Bill (AB) 947. Effective January 1, 2024, the bill will revise the California Consumer Privacy Act (CCPA) definition of “sensitive personal information” to include personal information that reveals a consumer’s citizenship or immigration status.

Under the CCPA, consumers have certain rights with regard to their personal information, including enhanced notice, access, and disclosure; the right to deletion; the right to restrict the sale of information; and protection against discrimination for exercising these rights. The CCPA was amended by the California Privacy Rights Act (CPRA), which created a new category of “sensitive personal information” and provides rights with regard to this information, including restrictions on businesses’ use of sensitive information.

Companies covered by the CCPA/CPRA should review privacy policies and procedures to ensure that immigration and citizenship are covered as sensitive information.

If you have questions about AB 947 or related issues, reach out to a member of our Privacy, Data, and Cybersecurity practice group to discuss.

This year, Indiana joined several other states in passing a comprehensive consumer privacy law, which becomes operative on January 1, 2026. Like other consumer privacy laws, Indiana’s law requires businesses to establish reasonable administrative, technical, and physical security practices to protect the confidentiality, integrity, and accessibility of personal data, which implicates cybersecurity concerns. However, the privacy law is not the only data protection/cybersecurity law in Indiana.

Data Breach Notification for All Businesses

Indiana passed a security breach notification statute in 2006, which provides Indiana residents with the right to know about a security breach that has resulted in the exposure of their personal information.

Under the law, personal information includes a Social Security number, or an individual’s name in combination with one or more of the following data elements: a driver’s license number, a state identification card number, a credit card number, or a financial account number or debit card number in combination with any required security code.

In the event of a breach, the business must notify affected consumers, consumer reporting agencies (if more than one thousand consumers are affected), and the Attorney General’s office.

In 2022, the state modified the statute to require notification without unreasonable delay, but not more than forty-five (45) days after the discovery of the breach.

Reasonable Procedures to Secure

Under the state’s data breach notification requirements, database owners must maintain their own data security procedures in compliance with applicable federal statutes. Moreover, they must implement and maintain reasonable procedures, including taking appropriate corrective action, to protect and safeguard any personal information from unlawful use or disclosure.

Cyber Incident Reporting for Public Entities

In 2021, Indiana adopted a Cyber Incident Reporting Law to empower the Indiana Office of Technology to coordinate warning and preparation efforts to avoid and combat cybersecurity threats.

Under the law, public sector entities must report incidents such as ransomware, software vulnerability exploitations, denial of service attacks, and more within 48 hours of discovery to the Office of Technology. This law covers counties, municipalities, townships, school corporations, library districts, local housing authorities, fire protection districts, public transportation corporations, local building authorities, local hospital authorities or corporations, local airport authorities, special service districts, special taxing districts, or other separate local governmental entities.

Data Destruction

Indiana also has specific requirements for the protection of data when disposing of it. Under the statute, a person who disposes of the unencrypted, unredacted personal information of a customer without shredding, incinerating, mutilating, erasing, or otherwise rendering the information illegible or unusable commits a Class C infraction. Class C infractions carry a fine of up to $500. However, the offense is a Class A infraction if:

(1) the person violates this section by disposing of the unencrypted, unredacted personal information of more than one hundred (100) customers; or

(2) the person has a prior unrelated judgment for a violation of this section.

A Class A infraction can carry up to a $10,000 fine.
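The statute’s two-tier classification reduces to a simple rule, sketched here in Python purely as an illustration (the function and its names are ours, not the statute’s, and this is not legal advice):

```python
def disposal_infraction_class(customers_affected: int,
                              prior_unrelated_judgment: bool) -> str:
    """Illustrative sketch of Indiana's data-disposal infraction tiers."""
    # Class A if more than 100 customers' information was improperly
    # disposed of, or if there is a prior unrelated judgment.
    if customers_affected > 100 or prior_unrelated_judgment:
        return "Class A infraction (fine up to $10,000)"
    return "Class C infraction (fine up to $500)"

print(disposal_infraction_class(150, prior_unrelated_judgment=False))
```

Note that a single prior unrelated judgment elevates the offense to Class A regardless of how few customers are affected.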

Further State Resources

The State of Indiana has also established a Cybersecurity Hub with resources, including practical guidance, for public and private entities.

If you have questions about cybersecurity or related issues, contact a member of our Privacy, Data, and Cybersecurity practice group.

When hit with a cybersecurity attack, organizations are often not inclined to bring in federal law enforcement. Recent comments by FBI Director Christopher Wray at Mandiant’s annual mWISE 2023 conference, as reported in CIODive, seek to encourage the private sector to reconsider. Doing so is an important consideration, and, depending on certain factors, it may be required.

According to the article, Director Wray attempted to reassure conference attendees:

“We know the private sector hasn’t always been excited about working with federal law enforcement, but when you contact us about an intrusion, we won’t be showing up in raid jackets, instead we’ll treat you like the victims you are – just like we treat all victims of crimes.”

According to the U.S. Government Accountability Office, “the U.S. is less prepared to fight cybercrime than it could be” – the title of a recent GAO blog published in August 2023. There are several reasons for this, according to the GAO, one of which is public hesitancy to report attacks. That hesitancy stems from:

  • Apprehension about public disclosure and loss of privilege
  • Concerns about the organization’s reputation
  • Uncertainty about which agency to report the attack to
  • Doubt that law enforcement can do anything about the attack, diminishing the incentive to report
  • A preference among some organizations for contacting local law enforcement instead

See the full GAO report.

Director Wray pointed to some successes his agency has had with disrupting criminal operations and cyber-attacks in the U.S. One example is the takedown of Qakbot, malware that reportedly had infected more than 700,000 computers worldwide and 200,000 in the U.S. 

An organization’s hesitancy to report a cybercrime to federal law enforcement may have to yield to emerging reporting mandates. These include, without limitation:

  • Department of Homeland Security. According to the Cyber Incident Reporting for Critical Infrastructure Act of 2022 (Act), entities in the critical infrastructure sector must report to the Department of Homeland Security (DHS) certain cyber incidents within 72 hours, and ransom payments within 24 hours of making the payment. As regulations to implement these requirements near, DHS recently announced a common platform for reporting cyber incidents.
  • Securities and Exchange Commission. This summer, the Securities and Exchange Commission (SEC) adopted rules to enhance and standardize disclosures by public companies regarding cybersecurity risk management, strategy, governance, and incidents. In short, material cybersecurity incidents must be reported within four (4) business days.
  • National Credit Union Administration. The National Credit Union Administration (NCUA) recently finalized regulations that became effective September 1, 2023. Under the final rule, federally insured credit unions (FICUs) must notify the NCUA as soon as possible, but no later than 72 hours, after the FICU reasonably believes that a reportable cyber incident has occurred.

Another reason to consider reporting a cyber-attack has to do with minimizing exposure to civil liability under regulations enforced by the U.S. Department of the Treasury’s Office of Foreign Assets Control (OFAC). In general, U.S. law prohibits U.S. persons from engaging in transactions, directly or indirectly, with certain individuals or entities – this includes ransom payments. According to OFAC guidance, the agency:

may impose civil penalties for sanctions violations based on strict liability, meaning that a person subject to U.S. jurisdiction may be held civilly liable even if such person did not know or have reason to know that it was engaging in a transaction that was prohibited under sanctions laws and regulations administered by OFAC.

However, OFAC will consider certain factors that could minimize exposure to penalties. One of those factors is reporting ransomware attacks to appropriate U.S. government agencies and cooperating with OFAC, law enforcement, and other relevant agencies.

Of course, decisions regarding whether, when, how, and to whom to report a cyber-attack should be thought through carefully, with experienced counsel, considering the circumstances and related issues. Whether Director Wray will see an uptick in reporting and be able to use that information to help thwart more attacks remains to be seen.

If you have questions about reporting cyber-attacks or related issues, please reach out to a member of our Privacy, Data, and Cybersecurity practice group to discuss.

Effective July 10, 2023, the EU-U.S. Data Privacy Framework (“EU-U.S. DPF”) replaced the invalidated EU-U.S. Privacy Shield framework (“Privacy Shield”). Participating U.S. organizations can now receive personal data transferred from the European Economic Area in compliance with the EU General Data Protection Regulation and without being subject to further conditions.  

Similar to the Privacy Shield, the program is administered by the U.S. Department of Commerce, and U.S. organizations must certify to participate. Certifying under the EU-U.S. DPF requires submitting an application and a privacy policy conforming to the EU-U.S. DPF Principles, certifying adherence to those Principles, and identifying an independent recourse mechanism. U.S. organizations that wish to certify to the DPF but did not maintain an active Privacy Shield certification, or have never certified, may begin the EU-U.S. DPF certification process immediately.

U.S. organizations that maintained their certification to the Privacy Shield framework may transition that certification to the EU-U.S. DPF, but must do so no later than October 10, 2023. The EU-U.S. DPF does not create new substantive obligations for U.S. organizations that participated in the Privacy Shield framework; however, they must update their privacy policies and notices to reference the EU-U.S. DPF and its Principles.

Under the EU-U.S. DPF, additional safeguards will apply to transfers of human resources data collected in the employment context. For example, the U.S. “data importer” must certify annually its commitment to cooperate with EU Data Protection Authorities (“DPAs”) regarding HR data. Cooperation includes responding directly to DPA investigations and complying with DPA advice.

Upon certifying compliance with the EU-U.S. DPF, a U.S. organization may elect to certify adherence to the U.K. Extension to the EU-U.S. DPF in order to receive personal data transferred from the U.K. beginning October 12, 2023. To receive personal data transferred from Switzerland, U.S. organizations may certify their compliance with the Swiss-U.S. DPF; however, transfers of personal data from Switzerland cannot commence until Switzerland formally issues an adequacy decision for the U.S.

The EU-U.S. DPF, U.K. Extension, and Swiss-U.S. DPF present an alternative to the EU Standard Contractual Clauses, International Data Transfer Agreement, and Binding Corporate Rules for transatlantic transfers of personal data in compliance with applicable data protection law. Depending on the organization and the contemplated data transfer, certifying annually to a DPF may be more practical, time-efficient, and economical than executing EU Standard Contractual Clauses or an IDTA for each contemplated transfer activity.

For more insights on the EU-U.S. DPF listen to our podcast: The EU-US Data Privacy Framework: Transferring Personal Data Under the New Privacy Shield

If you have questions about transatlantic transfers of personal data or related issues, please reach out to a member of our Privacy, Data, and Cybersecurity practice group to discuss.

When the California Privacy Rights Act (CPRA) was enacted, it created the California Privacy Protection Agency (CPPA) and delegated to the CPPA significant regulatory authority. One area of that authority is cybersecurity, including annual cybersecurity audits. On September 8, 2023, the CPPA considered a draft set of regulations that would establish rules for conducting those audits.

It is important to note that California currently mandates certain businesses to maintain reasonable security procedures and practices to protect personal information.

  • Civil Code Section 1798.100(e), under the CCPA, provides:

A business that collects a consumer’s personal information shall implement reasonable security procedures and practices appropriate to the nature of the personal information to protect the personal information from unauthorized or illegal access, destruction, use, modification, or disclosure in accordance with Section 1798.81.5.

  • Civil Code Section 1798.81.5 provides:

(b) A business that owns, licenses, or maintains personal information about a California resident shall implement and maintain reasonable security procedures and practices appropriate to the nature of the information, to protect the personal information from unauthorized access, destruction, use, modification, or disclosure.

(c) A business that discloses personal information about a California resident pursuant to a contract with a nonaffiliated third party that is not subject to subdivision (b) shall require by contract that the third party implement and maintain reasonable security procedures and practices appropriate to the nature of the information, to protect the personal information from unauthorized access, destruction, use, modification, or disclosure.

A couple of observations about these provisions:

  • Section 1798.100, which is part of the CCPA, applies to “businesses” that are subject to the CCPA. Section 1798.81.5, by contrast, relies on Section 1798.80(a), which defines “business” more broadly to include “a sole proprietorship, partnership, corporation, association, or other group, however organized and whether or not organized to operate at a profit.” So, while the CCPA generally applies to for-profit entities, Section 1798.81.5 applies to businesses whether or not they are organized for profit.
  • As the CPPA begins to establish audit regulations for one set of “businesses” (those covered under the CCPA), there is already guidance in California for businesses covered by Civil Code Section 1798.81.5. In February 2016, the then-California Attorney General, and now Vice President, Kamala D. Harris, issued a California Data Breach Report. According to that report, a business’s failure to implement all of the controls set forth in the Center for Internet Security’s Critical Security Controls constitutes a lack of reasonable security. Of course, the CCPA appears to incorporate the requirements of Civil Code Section 1798.81.5. Nonetheless, businesses will need to determine which cybersecurity standard applies to them.

So, what do the draft CCPA cybersecurity audit regulations say? Here is a summary of just some of the proposed requirements for such audits:

  • The requirement for a covered business to complete the audit will be based on whether the business’s processing of personal information presents a significant risk to consumers’ security. The draft regulations are beginning to craft the factors for determining when there will be a significant risk. One factor that would trigger the audit requirement is that the business derives 50 percent or more of its annual revenues from selling or sharing consumers’ personal information. However, the CPPA is considering other factors, such as the business having more than a to-be-determined amount of gross revenue or number of employees.  
  • Cybersecurity audits would be required to be performed by a “qualified, objective, independent professional [auditor] using procedures and standards generally accepted in the profession of auditing.” However, the auditor would not need to be external to the business, provided the auditor can exercise impartial judgment; for example, the auditor should not be auditing a cybersecurity program the auditor helped to create. The audit would need to include each auditor’s name, affiliation, and relevant qualifications, described in enough detail to fully convey the nature of those qualifications, as well as the number of hours each auditor worked on the cybersecurity audit.
  • The cybersecurity audit would need to:
    • Assess, document, and summarize each applicable component of the business’s cybersecurity program;
    • Specifically identify any gaps or weaknesses in the business’s cybersecurity program;
    • Specifically address the status of any gaps or weaknesses identified in any prior cybersecurity audit; and
    • Specifically identify any corrections or amendments to any prior cybersecurity audits.
  • The audit would have to assess and document certain components of the cybersecurity program with “specificity.” One such component is the safeguards the business has in place, such as multi-factor authentication, encryption, zero trust architecture, access management, audit log management, response to security incidents, etc. If a component is not available, the audit would be required to document and explain why it is not necessary and how other safeguards provide at least equivalent security; a standard not too dissimilar to the “addressable” rule for implementation specifications under the HIPAA Security Rule.
  • The cybersecurity audit would need to be reported to the business’s board of directors or governing body, or if no such board or equivalent body exists, to the highest-ranking executive in the business responsible for the business’s cybersecurity program. Notably, the audit would need to include certain statements, such as a certification that such governing body or highest-ranking executive has reviewed the cybersecurity audit and understands its findings.
  • If the business provided notifications to affected consumers under California’s breach notification law for businesses, the cybersecurity audit would have to include a description of those notifications and, where applicable, a description of the notification to the California Attorney General.
  • Service providers and contractors would be required to cooperate with businesses completing such audits, including making available all “relevant information that the auditor deems necessary for the auditor to complete the business’s cybersecurity audit.”
  • A written certification of completion of the audit would be required to be submitted to the CPPA, signed by a member of the board or highest-ranking executive.

If you have questions about the CPPA cybersecurity draft regulations or related issues, please reach out to a member of our Privacy, Data, and Cybersecurity practice group to discuss.

This summer, the Securities and Exchange Commission (SEC) adopted rules to enhance and standardize disclosures by public companies regarding cybersecurity risk management, strategy, governance, and incidents.  

The rules will impose a number of new requirements, including disclosures regarding:

  • Material cybersecurity incidents, which must be made within four (4) business days – a tight timeline that will compel subject companies to efficiently conduct their preliminary investigation of cybersecurity incidents so that they are prepared to make disclosures regarding the nature, scope, and timing of such incidents, as well as their material or reasonably likely impact on the company.  Subject companies will also need to provide updates regarding previously reported cybersecurity incidents in their periodic reports.
  • The subject company’s policies and procedures to identify and manage cybersecurity risks.  In advance of making such disclosures, many organizations will likely need to enhance their cybersecurity safeguards and practices and/or to ensure those safeguards and practices are adequately documented in policies and procedures.     
  • The roles played by (a) management in implementing cybersecurity policies and procedures and (b) the board of directors in overseeing the organization’s cybersecurity program.  For some companies, these requirements will likely prompt an assessment of whether management and the board are sufficiently involved in implementing and overseeing the company’s cybersecurity program and have the requisite expertise to do so effectively.   
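The four-business-day clock runs from the company’s determination that an incident is material, not from the incident itself. The deadline arithmetic can be sketched as follows; this is a rough illustration that counts weekdays only (a real compliance calendar must also account for federal holidays, and the dates below are hypothetical):

```python
from datetime import date, timedelta

def disclosure_deadline(materiality_determined: date, business_days: int = 4) -> date:
    """Count forward the given number of weekdays (Mon-Fri) from the
    materiality determination date; ignores holidays."""
    d = materiality_determined
    remaining = business_days
    while remaining > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 ... Friday=4
            remaining -= 1
    return d

# Materiality determined on Thursday, Oct. 5, 2023: the fourth business
# day falls on Wednesday, Oct. 11, 2023.
print(disclosure_deadline(date(2023, 10, 5)))  # 2023-10-11
```

A determination made late in the week buys little extra time; the weekend is skipped, not added, so the practical window for investigation and drafting remains short either way.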

The new rules were published on August 4, 2023, and took effect September 5, 2023.  Incident-specific disclosures will be required either 90 days after the rule’s August 4, 2023 publication date or December 18, 2023, whichever is later, though smaller companies will have an additional 180 days before they are required to begin providing disclosures. Companies whose fiscal years end on or after December 15, 2023, will be required to provide the annual disclosures beginning with their 2023 Form 10-K or 20-F.

If you have questions about the SEC cybersecurity disclosures or related issues, please reach out to a member of our Privacy, Data, and Cybersecurity practice group to discuss.

The annual Cost of a Data Breach Report (Report) published by IBM is reliably full of helpful cybersecurity data. This year is no different. After reviewing the Report, we pulled out some interesting data points. Of course, the Report as a whole is well worth the read, but if you don’t have the time to get through its 78 pages, this post may be helpful.

What is new in the Report. There are several new items covered by the Report. The two that caught our eye:

  • Is it beneficial to involve law enforcement in a ransomware attack? According to the Report, organizations that did not involve law enforcement in a ransomware attack experienced significantly higher costs, approximately $470,000 more on average. Nearly 40% of respondents did not involve law enforcement. In our experience, involving law enforcement can have significant benefits, including greater insight into the behavior of certain threat actors. Such insight can speed up efforts to contain the attack, reducing costs in the process.
  • What are the effects of ransomware playbooks and workflows? In short, it turns out the effects are good. Having playbooks and workflows for ransomware attacks helps reduce response time and minimize costs. In fact, the benefits of incident response planning are not limited to ransomware. Organizations we encounter that have a robust incident response program are significantly more prepared to identify and respond to an incident. An incident response program generally means having a dedicated team, maintaining a written plan, and practicing that plan. Incident response plans can be particularly important for healthcare entities, which have experienced a 53% increase in data breach costs since 2020.

AI has many benefits, including controlling data breach costs. There are two significant drivers of data breach costs: time to detect and time to contain. Shortening one or both can yield substantial cost savings when dealing with a data breach. According to the Report, extensive use of security AI and automation shortened breach detection and containment by 108 days on average and reduced costs by nearly $2 million. Even limited use of AI shortened the response time by 88 days on average.

AI-driven data security and identity solutions can help drive a proactive security posture by identifying high-risk transactions, protecting them with minimal user friction and stitching together suspicious behaviors more effectively.

Healthcare continues to be the leader in data breach costs, and second place, the financial services industry, is not even close, according to the Report. The average cost of a healthcare data breach (hospitals and clinics), $10.9 million, is nearly double the average for financial services organizations, $5.9 million. Susan Rhodes, the acting deputy for strategic planning and regional manager for the Office for Civil Rights at HHS, recently observed that ransomware attacks are up 278% in the last five years.

Smaller organizations faced significant data breach cost increases, while larger organizations experienced declines. We have written a bunch here on the data security and breach risks of small organizations. All three categories of smaller organizations measured by the Report (fewer than 500 employees, 500-1,000 employees, and 1,001-5,000 employees) experienced double-digit percentage increases, with the larger two categories seeing increases greater than 20%. It is difficult to pinpoint the reasons for this disparity. However, it may be that smaller organizations are less likely to engage in the kinds of activities that tend to minimize data breach costs, such as incident response planning and using security AI. We also find that smaller organizations tend to view themselves as not being targets of cyber criminals.

Perhaps one of the more instructive parts of the Report is Figure 16 on page 28 which illustrates the impact certain factors can have on the average cost of a breach. The top four factors that appear to drive down data breach costs include integrated security testing in software development (DevSecOps), employee training, incident response planning and testing, and AI. Factors that tend to increase costs on average include remote workforce, third party involvement, noncompliance with regulations, and security system complexity.  

Detection and escalation costs are now the largest category of data breach costs, ahead of even lost business. When one thinks of data breach-related costs, one may be tempted to guess notification. But notification is actually the lowest of the four cost categories, according to the Report, although it has more than doubled since 2018. Detection and escalation costs took the top spot beginning in 2022 and include “forensic and investigative activities, assessment and audit services, crisis management, and communications to executives and boards.”

Overall, the Report is filled with additional insights concerning the costs of a data breach. Here are some quick takeaways that could help your organization minimize these costs:

  • Develop, implement, and practice an incident response plan,
  • Train employees,
  • Implement AI, even a little,
  • Comply with applicable regulations, and
  • Strengthen vendor security assessment and management programs, cloud service providers in particular.

The recent U.S. Supreme Court decision striking down affirmative action in undergraduate admissions, Students for Fair Admissions, Inc. v. President and Fellows of Harvard College, No. 20-1199 (the “SFFA Decision,” summarized here) has significant implications for admissions in higher education. However, some are considering whether the High Court’s holding will have a ripple effect in other areas, such as in employment law.

As the ground shifts a bit under college campuses following the SFFA Decision, employers are considering the potential impact of the decision on their DEI initiatives and recruiting. Following the SFFA Decision, a flurry of litigation, EEOC charges, public relations campaigns, and other activities has commenced in an effort to broaden and/or influence the reach of the Supreme Court’s holding. Recent public statements by EEOC officials are illustrative.

Just hours after the SFFA Decision, Equal Employment Opportunity Commission Chair Charlotte A. Burrows said in an EEOC press release,

“The decision in Students for Fair Admissions, Inc. v. President & Fellows of Harvard College and Students for Fair Admissions, Inc. v. University of North Carolina does not address employer efforts to foster diverse and inclusive workforces or to engage the talents of all qualified workers, regardless of their background. It remains lawful for employers to implement diversity, equity, inclusion, and accessibility programs that seek to ensure workers of all backgrounds are afforded equal opportunity in the workplace.”   

Around the same time, EEOC Vice Chair Jocelyn Samuels also expressed that she believed employers would still be able to run their DEI programs as long as they’re not making employment decisions based on race.

During a recent webinar, as reported in Law360, EEOC Commissioner Andrea Lucas seemed to echo the SFFA Decision, underscoring the importance of “race-neutral” policies for employers. When asked to respond to Commissioner Samuels’ comments, Commissioner Lucas observed that the Vice Chair’s position:

fails to engage with the key question facing employers today: the legal and practical risks of race- and sex-conscious DEI initiatives adopted by many, many employers in the past to achieve equity instead of equal opportunity.

In any case, as employers examine their policies, procedures, and practices to ensure compliance with applicable discrimination laws, one area for review is the use of AI-powered recruiting platforms. How employers assess, configure, and implement these recruiting tools is an important consideration and varies from employer to employer, industry to industry, and platform to platform. Developing a deeper understanding of these tools is critical, particularly as the SFFA Decision could wind up reshaping this area of law.

There are several high-level issues employers should be exploring when assessing the use of these tools, such as:

  • Whether there is bias in the data used to train the system.
  • How the system works when making recruiting suggestions or decisions, and whether it can be explained plainly.
  • Applicant attributes considered by the algorithms, along with the weight each attribute is assigned.
  • The ability of the tool to be fine-tuned to match specific needs, and who decides what those needs are.
  • When and how the system is evaluated for bias.
  • The allocation of liability between the employer and, if applicable, the vendor supplying the tool.
  • Case studies, if any, the vendor may have involving the tool successfully reducing bias in hiring.

For several years, the EEOC has been examining employer use of AI and the potential risks of unlawful discrimination under both the Americans with Disabilities Act and Title VII. It remains unclear whether or to what extent the SFFA Decision will shape the agency’s developing position, particularly as it relates to AI. Regardless, employers seeking to use AI-powered recruiting platforms to enhance workforce diversity should proceed cautiously. AI algorithms are not immune to bias. To help minimize these risks, employers should meticulously review the data and algorithms used in their recruiting platforms. Regular audits and adjustments, as needed, should be conducted throughout the hiring process.