In a groundbreaking move likely to have a significant impact on hiring and HR technology, the New York City Council has passed a measure (“the NYC measure”) that bans the use of automated decision-making tools to (1) screen job candidates for employment or (2) evaluate current employees for promotion, unless the tool has been subject to a “bias audit” conducted not more than one year prior to its use. The NYC measure will take effect January 2, 2023.

The NYC measure was passed amid growing concerns about automated decision-making tools (which will also be regulated under the California Privacy Rights Act, set to take effect at the same time as the NYC measure), chief among them that such tools may be embedded with unintended biases that result in outcomes that discriminate against individuals based on protected characteristics like race, age, religion, sex, and national origin.

The category of automated decision-making tools targeted by the NYC measure is “automated employment decision tools,” which the measure defines as “any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision making for making employment decisions that impact natural persons.” Excluded from the measure’s scope are tools that do not automate, support, substantially assist, or replace discretionary decision-making processes and that do not materially impact natural persons, such as junk email filters, firewalls, antivirus software, calculators, spreadsheets, databases, data sets, or other compilations of data.

Employers that intend to use an automated employment decision tool must first conduct a bias audit and must publish a summary of the audit’s results on their websites. They must also notify NYC employees and job candidates (1) that the tool will be used in connection with the assessment or evaluation of their employment or candidacy and (2) of the job qualifications and characteristics that the tool will use to make that assessment or evaluation.

Using an automated employment decision tool without first conducting a compliant bias audit exposes employers to civil penalties of up to $500 for the first day of use and $500 to $1,500 for each day thereafter. Failure to properly notify candidates or employees about the use of such tools constitutes a separate violation.

This is not the first legislation of its kind, but it is certainly the most expansive. In 2019, Illinois passed the Artificial Intelligence Video Interview Act (“the AIVI Act”), HB2557, which imposes consent, transparency, and data-destruction requirements on employers that use AI technology during the job interview process. The AIVI Act, the first state law to regulate AI use in video interviews, took effect January 1, 2020. Likewise, in 2020, Maryland enacted a law that requires notice and consent prior to the use of facial recognition technology during a job interview. And the Attorney General of Washington, D.C. recently introduced a bill that addresses discrimination in automated decision-making tools generally. Similar legislation is likely to trend across other states as this technology continues to infiltrate hiring practices and other areas of business. As early as 2014, the EEOC began taking notice of “big data” technologies and the potential that their use may violate existing employment laws such as Title VII of the Civil Rights Act of 1964, the Age Discrimination in Employment Act, the Americans with Disabilities Act, and the Genetic Information Nondiscrimination Act.

Only time will tell what impact the NYC measure and others of its kind will have on employment practices, but employers should tread carefully with AI usage in the workplace. Moreover, it likely will not be long before other states and localities enact similar legislation. Employers, regardless of jurisdiction, should evaluate their hiring practices and procedures, particularly to ensure that they obtain appropriate written consent before using any technology that collects sensitive information about job applicants or employees and that they have conducted all requisite privacy and bias impact assessments.

Jason C. Gavejian is a principal in the Berkeley Heights, New Jersey, office of Jackson Lewis P.C. and co-leader of the firm’s Privacy, Data and Cybersecurity practice group. Jason is also a Certified Information Privacy Professional (CIPP/US) with the International Association of Privacy Professionals.

As a Certified Information Privacy Professional (CIPP/US), Jason focuses on the matrix of laws governing privacy, security, and management of data. Jason is co-editor of, and a regular contributor to, the firm’s Workplace Privacy, Data Management & Security Report blog.

Jason’s work in the area of privacy and data security includes counseling international, national, and regional companies on the vast array of privacy and security mandates, preventive measures, policies, procedures, and best practices. This includes, but is not limited to, the privacy and security requirements under state, federal, and international law (e.g., HIPAA/HITECH, GDPR, California Consumer Privacy Act (CCPA), FTC Act, ECPA, SCA, GLBA, etc.). Jason helps companies in all industries to assess information risk and security as part of the development and implementation of comprehensive data security safeguards, including written information security programs (WISP). Additionally, Jason assists companies in analyzing issues related to electronic communications, social media, electronic signatures (ESIGN/UETA), monitoring and recording (GPS, video, audio, etc.), biometrics, and bring-your-own-device (BYOD) and company-owned, personally enabled device (COPE) programs, including policies and procedures to address the same. He regularly advises clients on compliance issues under the Telephone Consumer Protection Act (TCPA) and has represented clients in suits, including class actions, brought in various jurisdictions throughout the country under the TCPA.

Jason represents companies with respect to inquiries from the HHS/OCR, state attorneys general, and other agencies alleging wrongful disclosure of personal/protected information. He negotiates vendor agreements and other data privacy and security agreements, including business associate agreements. His work in the area of privacy and data security includes counseling and coaching clients through the process of investigating and responding to breaches of the personally identifiable information (PII) or protected health information (PHI) they maintain about consumers, customers, employees, patients, and others, while also assisting clients in implementing policies, practices, and procedures to prevent future data incidents.

Jason represents management exclusively in all aspects of employment litigation, including restrictive covenants, class actions, harassment, retaliation, discrimination, and wage and hour claims in both federal and state courts. He regularly appears before administrative agencies, including the Equal Employment Opportunity Commission (EEOC), the Office for Civil Rights (OCR), the New Jersey Division of Civil Rights, and the New Jersey Department of Labor. Jason’s practice also focuses on advising and counseling employers regarding daily workplace issues.

Jason’s litigation experience, coupled with his privacy practice, provides him with a unique view of many workplace issues and the impact privacy, data security, and social media may play in actual or threatened lawsuits.

Jason regularly provides training to both executives and employees and regularly speaks on current privacy, data security, monitoring, recording, BYOD/COPE, biometrics (BIPA), social media, TCPA, and information management issues. His views on these topics have been discussed in multiple publications, including the Washington Post, Chicago Tribune, San Francisco Chronicle (SFGATE), National Law Review, Bloomberg BNA, Inc.com, @Law Magazine, Risk and Insurance Magazine, LXBN TV, Business Insurance Magazine, and HR.BLR.com.

Jason is the co-leader of Jackson Lewis’ Hispanic Attorney resource group, a group committed to increasing the firm’s visibility among Hispanic-American and other minority attorneys, as well as mentoring the firm’s attorneys to assist in their training and development. He also previously served on the National Leadership Committee of the Hispanic National Bar Association (HNBA) and regularly volunteers his time for pro bono matters.

Prior to joining Jackson Lewis, Jason served as a judicial law clerk for the Honorable Richard J. Donohue on the Superior Court of New Jersey, Bergen County.