Last week, a New York Times article discussed ChatGPT and AI’s “democratization of disinformation,” along with their potentially disruptive effects on upcoming political contests. Asking a chatbot powered by generative AI to produce a fundraising email is not the main concern, according to the article; leveraging that technology to create and disseminate disinformation and deepfakes is. Some of the tactics described in the article, though intended to further political goals, are unsettling well beyond politics, including in the workplace.

“Now any amateur with a laptop can manufacture the kinds of convincing sounds and images that were once the domain of the most sophisticated digital players. This democratization of disinformation is blurring the boundaries between fact and fake…”

Voice-cloning tools could be used, for example, to create convincing audio clips of political figures. One clip might convey a message consistent with the campaign’s platform, albeit never uttered by the candidate. Another might be produced to cast the candidate in a bad light by suggesting the candidate engaged in illicit behavior or conveyed ideas damaging to the campaign, such as using racially charged language. Either way, such clips would mislead the electorate. The same would be true of AI-generated images or videos.

“And as synthetic media gets more believable, the question becomes: What happens when people can no longer trust their own eyes and ears?”

It’s not hard to see how these same technologies, which are increasingly accessible to almost anyone and relatively easy to use, could create significant disruption and legal risk in workplaces across the country. Instead of creating a false narrative about a political figure, a worker disappointed in his annual review might generate and covertly disseminate a compromising “video” of his supervisor. The failure to investigate a convincing deepfake video could have substantial and unintended consequences. Of course, this kind of misinformation also can be directed at executives and the company as a whole.

Damaging disinformation and deepfakes are not the only risks posed by generative AI technologies. To better understand the kinds of risks an organization might face, a good first step is assessing how workers are using ChatGPT and other similar generative AI technologies. If a group of workers is like the millions of other people using ChatGPT, their activities might include performing research, preparing draft communications (such as the fundraising email in the NYT article discussed above), and coding, among other tasks. Workers in different industries with different responsibilities likely will approach the technology with different needs and identify a range of creative use cases.

Greater awareness about the uses of generative AI in an organization can help with policy development, but there are some policies that might make sense for most if not all applications of this technology.

Other workplace policies generally apply. A good example is harassment and nondiscrimination policies. As with an employee’s activity on social media, an employee’s use of ChatGPT is not shielded from existing policies prohibiting discrimination or harassment of others, and those policies should be applied accordingly.

Follow the application’s terms and understand its limitations. Using online resources for company business in violation of those resources’ terms of use could create legal exposure for the organization. Employees also should be aware of the capabilities and limitations of the tools they are using. For instance, while ChatGPT may seem omniscient, it is not, and it may not be up to date: OpenAI notes that “ChatGPT’s training data cuts off in 2021.” Knowing this kind of information can spare the organization (and the employee) some embarrassment.

Avoid impermissible sharing of data. ChatGPT is just that: a chat, or conversation, with OpenAI, and one that OpenAI employees can view. As OpenAI explains:

Who can view my conversations?

As part of our commitment to safe and responsible AI, we review conversations to improve our systems and to ensure the content complies with our policies and safety requirements.

Employees should avoid sharing personal information, as well as confidential information about the company or its customers, without understanding the obligations that may apply. For example, there may be contractual obligations to the organization’s customers prohibiting the sharing of their confidential information with third parties. Similar obligations could be created through website privacy policies or other statements in which an organization has represented how it would share certain categories of information.
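For organizations that permit some use of these tools, even a lightweight technical control can reinforce this kind of policy. The Python sketch below is a minimal illustration, not a complete or recommended solution: the patterns, names, and sample text are hypothetical, and a production control would rely on a vetted data loss prevention (DLP) tool with far more robust detection. It simply screens a draft prompt for a few common personal-data patterns before the text leaves the organization.

```python
import re

# Illustrative patterns only; a real deployment would rely on a vetted
# data loss prevention (DLP) tool with far more robust detection.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the labels of any personal-data patterns found in the draft prompt."""
    return [label for label, rx in PATTERNS.items() if rx.search(prompt)]

if __name__ == "__main__":
    draft = "Summarize this note: Jane Doe, SSN 123-45-6789, called about her claim."
    findings = screen_prompt(draft)
    if findings:
        print("Hold for review; possible personal data:", ", ".join(findings))
    else:
        print("No obvious personal data detected; proceed per policy.")
```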

Establish a review process to avoid improper uses. Information generated through AI-powered tools and platforms may not be what it seems. It may be inaccurate, incomplete, or biased, or it may infringe another’s intellectual property rights. The organization may want to review certain content obtained through the tool or platform before it is used, to avoid subpar service to customers or an infringement lawsuit.
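What such a review gate looks like will vary by organization, but the basic shape is simple: AI-generated output is held until a designated person signs off on each item of a checklist. The Python sketch below is purely illustrative; the checklist items, names, and record structure are assumptions, not a prescribed standard.

```python
from dataclasses import dataclass, field

# Hypothetical checklist; an organization would tailor these items
# to its own legal, quality, and customer-service concerns.
CHECKLIST = (
    "Facts verified against authoritative sources",
    "No personal or confidential data included",
    "No third-party content reproduced without permission",
    "Tone and substance consistent with company policy",
)

@dataclass
class AIDraft:
    author: str
    content: str
    signoffs: dict[str, bool] = field(default_factory=dict)

    def sign_off(self, item: str, approved: bool) -> None:
        self.signoffs[item] = approved

    def cleared_for_use(self) -> bool:
        # Every checklist item must be affirmatively approved.
        return all(self.signoffs.get(item, False) for item in CHECKLIST)

draft = AIDraft(author="jdoe", content="AI-generated customer letter ...")
for item in CHECKLIST:
    draft.sign_off(item, approved=True)  # in practice, a human reviewer decides
print("Cleared for use:", draft.cleared_for_use())
```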

There is a lot to think about when considering the impacts of ChatGPT and other generative AI technologies. This includes carefully wading through political blather during the upcoming election season. It also includes thinking about how to minimize risk related to these technologies in the workplace. Part of that can be accomplished through policy, but there are other steps to consider, such as employee training and monitoring utilization.

Joseph J. Lazzarotti

Joseph J. Lazzarotti is a principal in the Berkeley Heights, New Jersey, office of Jackson Lewis P.C. He founded and currently co-leads the firm’s Privacy, Data and Cybersecurity practice group, edits the firm’s Privacy Blog, and is a Certified Information Privacy Professional (CIPP) with the International Association of Privacy Professionals. Trained as an employee benefits lawyer, focused on compliance, Joe also is a member of the firm’s Employee Benefits practice group.

In short, his practice focuses on the matrix of laws governing the privacy, security, and management of data, as well as the impact and regulation of social media. He also counsels companies on compliance, fiduciary, taxation, and administrative matters with respect to employee benefit plans.

Privacy and cybersecurity experience – Joe counsels multinational, national and regional companies in all industries on the broad array of laws, regulations, best practices, and preventive safeguards. The following are examples of areas of focus in his practice:

  • Advising health care providers, business associates, and group health plan sponsors concerning HIPAA/HITECH compliance, including risk assessments, policies and procedures, incident response plan development, vendor assessment and management programs, and training.
  • Coaching hundreds of companies through the investigation, remediation, notification, and overall response to data breaches of all kinds – PHI, PII, payment card, etc.
  • Helping organizations address questions about the application and implementation of, and overall compliance with, the European Union’s General Data Protection Regulation (GDPR) and, in particular, its implications in the U.S., together with preparing for the California Consumer Privacy Act.
  • Working with organizations to develop and implement video, audio, and data-driven monitoring and surveillance programs. For instance, in the transportation and related industries, Joe has worked with numerous clients on fleet management programs involving the use of telematics, dash-cams, event data recorders (EDR), and related technologies. He also has advised many clients in the use of biometrics including with regard to consent, data security, and retention issues under BIPA and other laws.
  • Assisting clients with growing state data security mandates to safeguard personal information, including steering clients through detailed risk assessments and converting those assessments into practical “best practice” risk management solutions, including written information security programs (WISPs). Related work includes compliance advice concerning FTC Act, Regulation S-P, GLBA, and New York Reg. 500.
  • Advising clients about best practices for electronic communications, including in social media, as well as when communicating under a “bring your own device” (BYOD) or “company owned personally enabled device” (COPE) environment.
  • Conducting various levels of privacy and data security training for executives and employees.
  • Supporting organizations through mergers, acquisitions, and reorganizations with regard to the handling of employee and customer data, and the safeguarding of that data during the transaction.
  • Representing organizations in matters involving inquiries into privacy and data security compliance before federal and state agencies including the HHS Office of Civil Rights, Federal Trade Commission, and various state Attorneys General.

Benefits counseling experience – Joe’s work in the benefits counseling area covers many areas of employee benefits law. Below are some examples of that work:

  • As part of the Firm’s Health Care Reform Team, advising employers and plan sponsors regarding the establishment, administration, and operation of fully insured and self-funded health and welfare plans to comply with ERISA, IRC, ACA/PPACA, HIPAA, COBRA, ADA, GINA, and other related laws.
  • Guiding clients through the selection of plan service providers, along with negotiating service agreements with vendors to address plan compliance and operations, while leveraging data security experience to ensure plan data is safeguarded.
  • Counseling plan sponsors on day-to-day compliance and administrative issues affecting plans.
  • Assisting in the design and drafting of benefit plan documents, including severance and fringe benefit plans.
  • Advising plan sponsors concerning employee benefit plan operation, administration, and the correction of operational errors.

Joe speaks and writes regularly on current employee benefits and data privacy and cybersecurity topics, and his work has been published in leading business and legal journals and media outlets, such as The Washington Post, Inside Counsel, Bloomberg, The National Law Journal, Financial Times, Business Insurance, HR Magazine, and NPR, as well as the ABA Journal, The American Lawyer, Law360, Bender’s Labor and Employment Bulletin, the Australian Privacy Law Bulletin, and the Privacy and Data Security Law Journal.

Joe served as a judicial law clerk for the Honorable Laura Denvir Stith on the Missouri Court of Appeals.