One of our recent posts discussed the uptick in AI risks reported in SEC filings, as analyzed by Arize AI. There, we highlighted the importance of strong governance for mitigating some of these risks, but we didn’t address the specific risks identified in those filings. We discuss them briefly here because they are risks likely facing most organizations that are exploring, developing, or have already deployed AI in some form.

Arize AI’s report, “The Rise of Generative AI in SEC filings,” reviewed the most recent annual financial reports, as of May 1, 2024, filed by US-based companies in the Fortune 500. The report is filled with interesting statistics, including an evaluation of the AI risks identified by the reporting entities. Perhaps the most telling statistic is how quickly companies have moved to identify these risks in their reports:

Looking at the subsequent annual financial reports filed in 2012 reveals a surge in companies disclosing cyber and information security as a risk factor. However, the jump in those disclosures – 86.9% between 2010 and 2012 – is easily dwarfed by the 473.5% increase in companies citing AI as a risk factor between 2022 and 2024.

Arize AI Report, Page 10.

The Report organizes the AI risks identified into four basic categories: competitive impacts, general harms, regulatory compliance, and data security.

In the case of competitive risks, understandably, an organization’s competitor being first to market with a compelling AI application is a risk to the organization’s business. Similarly, the increasing availability and quality of AI products and services may soften demand for the products and services of organizations that had been leaders in the space. At the same time, competitive forces may be at play in attracting the best talent on the market, something that, of course, AI recruiting tools can help to achieve.

The general harms noted by many in the Fortune 500 revolve around issues we hear a lot about – 

  • Does the AI perform as advertised?
  • What types of reputational harm could affect a company when its use of AI is claimed to be biased, inaccurate, inconsistent, unethical, etc.?
  • Will the goals of desired use cases be achieved in a manner that sufficiently protects against violations of privacy, IP, and other rights and obligations?
  • Can organizations stop harmful or offensive content from being generated? 

Not to be forgotten, the third category is regulatory risk. Unfortunately, this category is likely to get worse before it gets better, if it ever does. A complex patchwork is forming, composed of international, federal, state, and local laws, as well as industry-specific guidelines. Meeting the challenges of these regulatory risks often depends largely on the particular use case. For example, an AI-powered productivity management application used to assess and monitor remote workers may come with significantly different regulatory compliance requirements than an automated employment decision tool (AEDT) used in the recruiting process. Similarly, leveraging generative AI to help shape customer outreach in the hospitality or retail industries certainly will raise different regulatory considerations than deploying it in the healthcare, pharmaceutical, or education industries. And industry-specific regulation may not be the end of the story. Generally applicable state laws will add their own layers of complexity. Several states, including California, Colorado, Illinois, Tennessee, and Utah, have already enacted measures in one form or another to address the use of AI, in addition to the well-known New York City law.

Last, but certainly not least, are data security risks. Two forms of this risk are worth noting – the data needed to fuel AI and the use of AI by cyber threat actors as a tool to refine attacks on individuals and information systems. Because vast amounts of data often are necessary for AI models to be successful, organizations have serious concerns about what data may be used, including with respect to inadvertent disclosures of confidential and personal information. With different departments or divisions in an organization making their own use of AI, their approaches to data privacy and security may not be entirely aligned. Nuances in the law can amplify these risks.

While many are using AI to help secure information systems, cyber threat actors with access to essentially the same technology have different purposes in mind. Earlier this year, we discussed the use of AI to enhance phishing attacks. In October 2023, the U.S. Department of Health and Human Services (HHS) Health Sector Cybersecurity Coordination Center (HC3) published a white paper entitled “AI-Augmented Phishing and the Threat to the Health Sector” (the HC3 Paper). While many have been using ChatGPT and similar platforms to leverage generative AI capabilities to craft client emails, lay out vacation itineraries, support coding efforts, and help write school papers, threat actors have been hard at work using the technology for other purposes.

Making this even easier for attackers, tools such as FraudGPT have been developed specifically for nefarious purposes. FraudGPT is a generative AI tool that can be used to craft malware and text for phishing emails. It is available on the dark web and on Telegram at a relatively low price – a $200 per month or $1,700 per year subscription fee – well within the reach of even moderately sophisticated cybercriminals.

Thinking about these categories of risks identified by the Fortune 500 can, we believe, be instructive for any organization trying to leverage the power of AI to help advance its business. As we noted in our prior post, adopting appropriate governance structures will be necessary for identifying these risks and taking steps to manage them. Of course, the goal will be to eliminate them, but that may not always be possible. However, an organization’s defensible position can be substantially improved by taking prudent steps in the course of developing and deploying AI.

Joseph J. Lazzarotti

Joseph J. Lazzarotti is a principal in the Berkeley Heights, New Jersey, office of Jackson Lewis P.C. He founded and currently co-leads the firm’s Privacy, Data and Cybersecurity practice group, edits the firm’s Privacy Blog, and is a Certified Information Privacy Professional (CIPP) with the International Association of Privacy Professionals. Trained as an employee benefits lawyer, focused on compliance, Joe also is a member of the firm’s Employee Benefits practice group.

In short, his practice focuses on the matrix of laws governing the privacy, security, and management of data, as well as the impact and regulation of social media. He also counsels companies on compliance, fiduciary, taxation, and administrative matters with respect to employee benefit plans.

Privacy and cybersecurity experience – Joe counsels multinational, national, and regional companies in all industries on the broad array of laws, regulations, best practices, and preventive safeguards. The following are examples of areas of focus in his practice:

  • Advising health care providers, business associates, and group health plan sponsors concerning HIPAA/HITECH compliance, including risk assessments, policies and procedures, incident response plan development, vendor assessment and management programs, and training.
  • Coaching hundreds of companies through the investigation, remediation, notification, and overall response to data breaches of all kinds – PHI, PII, payment card, etc.
  • Helping organizations address questions about the application, implementation, and overall compliance with the European Union’s General Data Protection Regulation (GDPR) and, in particular, its implications in the U.S., together with preparing for the California Consumer Privacy Act.
  • Working with organizations to develop and implement video, audio, and data-driven monitoring and surveillance programs. For instance, in the transportation and related industries, Joe has worked with numerous clients on fleet management programs involving the use of telematics, dash-cams, event data recorders (EDR), and related technologies. He also has advised many clients on the use of biometrics, including with regard to consent, data security, and retention issues under BIPA and other laws.
  • Assisting clients with growing state data security mandates to safeguard personal information, including steering clients through detailed risk assessments and converting those assessments into practical “best practice” risk management solutions, including written information security programs (WISPs). Related work includes compliance advice concerning the FTC Act, Regulation S-P, GLBA, and New York Reg. 500.
  • Advising clients about best practices for electronic communications, including in social media, as well as when communicating under a “bring your own device” (BYOD) or “company owned personally enabled device” (COPE) environment.
  • Conducting various levels of privacy and data security training for executives and employees.
  • Supporting organizations through mergers, acquisitions, and reorganizations with regard to the handling of employee and customer data, and the safeguarding of that data during the transaction.
  • Representing organizations in matters involving inquiries into privacy and data security compliance before federal and state agencies including the HHS Office of Civil Rights, Federal Trade Commission, and various state Attorneys General.

Benefits counseling experience – Joe’s benefits counseling work covers many areas of employee benefits law. Below are some examples of that work:

  • As part of the Firm’s Health Care Reform Team, advising employers and plan sponsors regarding the establishment, administration, and operation of fully insured and self-funded health and welfare plans to comply with ERISA, IRC, ACA/PPACA, HIPAA, COBRA, ADA, GINA, and other related laws.
  • Guiding clients through the selection of plan service providers, along with negotiating service agreements with vendors to address plan compliance and operations, while leveraging data security experience to ensure plan data is safeguarded.
  • Counseling plan sponsors on day-to-day compliance and administrative issues affecting plans.
  • Assisting in the design and drafting of benefit plan documents, including severance and fringe benefit plans.
  • Advising plan sponsors concerning employee benefit plan operation and administration, including the correction of operational errors.

Joe speaks and writes regularly on current employee benefits and data privacy and cybersecurity topics, and his work has been published in leading business and legal journals and media outlets, such as The Washington Post, Inside Counsel, Bloomberg, The National Law Journal, Financial Times, Business Insurance, HR Magazine and NPR, as well as the ABA Journal, The American Lawyer, Law360, Bender’s Labor and Employment Bulletin, the Australian Privacy Law Bulletin, and the Privacy and Data Security Law Journal.

Joe served as a judicial law clerk for the Honorable Laura Denvir Stith on the Missouri Court of Appeals.