USA AI Bias Audit Laws

The NY automated employment decision tools law

Update as of December 15, 2022: Due to the volume of comments the NYC Department of Consumer & Worker Protection received in response to the proposed rule, Local Law 144 will not be enforced until April 15, 2023.

___________________________________________________________________________

NY Local Law 2021/144, amending the administrative code of the City of New York in relation to automated employment decision tools, takes effect Jan. 2, 2023. It requires that a bias audit be conducted on an automated employment decision tool prior to the use of that tool.

The law requires that candidates or employees who reside in the city be notified about the use of such tools in the assessment or evaluation for hire or promotion, as well as about the job qualifications and characteristics the automated employment decision tool will use. Violations of the law's provisions are subject to a civil penalty of $500 to $1,500 for each day on which an automated employment decision tool is used in violation of the law. Failure to provide any required notice to a candidate or an employee constitutes a separate violation.

The law focuses on conducting bias audits of the tools to identify their potential for discrimination based on race/ethnicity or gender. Under the law, employers are prohibited from using an AI-type tool to screen job candidates or evaluate employees unless the technology has been audited for bias no more than one year before its use and a summary of the audit's results has been made publicly available on the employer's website.
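To make the audit requirement concrete, a common starting metric is the selection rate per demographic category and its impact ratio (each category's rate divided by the highest category's rate). The sketch below is a minimal Python illustration on an invented applicant file; the column names and sample values are assumptions, not anything prescribed by the law.

    import pandas as pd

    # Invented applicant data; column names and values are illustrative only.
    df = pd.DataFrame({
        "race_ethnicity": ["White", "White", "Black", "Black", "Hispanic", "Hispanic"],
        "gender":         ["F", "M", "F", "M", "F", "M"],
        "selected":       [1, 1, 0, 1, 1, 0],  # 1 = advanced by the tool
    })

    def impact_ratios(data, category):
        """Selection rate per category and its ratio to the highest-rate category."""
        rates = data.groupby(category)["selected"].mean().rename("selection_rate")
        out = rates.to_frame()
        out["impact_ratio"] = out["selection_rate"] / out["selection_rate"].max()
        return out

    print(impact_ratios(df, "race_ethnicity"))
    print(impact_ratios(df, "gender"))

A large gap between a category's impact ratio and 1.0 is the kind of disparity a bias audit summary would need to surface.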

The law defines automated employment decision tools as “any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision making for making employment decisions that impact natural persons.”

The law is unclear in many respects, such as whether it covers applications like pre-employment personality tests, or how often the bias audit needs to be performed (e.g., any time there is a code change).

The Illinois Artificial Intelligence Video Interview Act (HB 2557)

The law requires employers who use artificial intelligence to analyze video interviews for positions based in Illinois to do the following before asking applicants to submit video interviews:

  • Provide notice: Employers must inform applicants that AI will be used to analyze their interview videos.
  • Provide an explanation: Employers must explain to the applicant how their artificial intelligence program works and what characteristics the AI uses to evaluate an applicant’s fitness for the position.
  • Obtain consent: Employers must obtain the applicant’s consent to be evaluated by AI before the video interview and may not use AI to evaluate a video interview without consent.
  • Maintain confidentiality: Employers will be permitted to share the videos only with persons whose expertise or technology is needed to evaluate the applicant.
  • Destroy copies: Employers must destroy both the video and all copies within 30 days after an applicant requests such destruction (and instruct any other persons who have copies of the video to destroy their copies as well); see the sketch below.
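As one small illustration of the destruction requirement, the hypothetical tracker below flags videos whose 30-day window after a destruction request has lapsed. The Act prescribes the 30-day period, not this record structure; all field names are invented.

    from datetime import date, timedelta

    DESTRUCTION_WINDOW = timedelta(days=30)  # the Act's 30-day period

    # Hypothetical retention records; field names are invented for illustration.
    videos = [
        {"applicant": "A-1001", "destruction_requested": date(2023, 1, 5),  "destroyed": False},
        {"applicant": "A-1002", "destruction_requested": date(2023, 2, 20), "destroyed": True},
    ]

    def overdue(records, today):
        """Records whose 30-day destruction deadline has passed without destruction."""
        return [r for r in records
                if not r["destroyed"]
                and today > r["destruction_requested"] + DESTRUCTION_WINDOW]

    print(overdue(videos, today=date(2023, 3, 1)))  # flags A-1001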

Elements of concern:

  • The law does not define what AI means (traditionally, the capability of a machine to imitate intelligent human behavior). How far does the technology have to go to qualify as AI under the law: weak AI (e.g., Siri) or strong AI?
  • Potential bias: because the AI is trained by comparing data, certain facial expressions or demeanors may be favored.
  • The notice does not require informing the applicant that they can request the destruction of the video, nor how to do so. It is also unclear what the consequences are (if any) if the videos are not destroyed.
  • It is unclear whether the law would apply to a third party assessing the applicant rather than the actual employer.
  • The law does not require express consent, so default consent may suffice (if you apply, you are consenting).

NIST Special Publication 1270: Towards a Standard for Identifying and Managing Bias in Artificial Intelligence

NIST (the National Institute of Standards and Technology) created this document with the intent to develop methods for increasing assurance, governance, and practice improvements for identifying, understanding, measuring, managing, and reducing bias. The proposal is part of NIST's broader work on a risk management framework for trustworthy and responsible AI. The document covers the following:

  • Section 1 lays out the purpose and scope of NIST’s work in AI bias.
  • Section 2 provides context and explains the terminology used. It describes three categories of bias (human, systemic, and statistical/computational), how they may occur in the commissioning, design, development, and deployment of AI technologies used to generate predictions, recommendations, or decisions (such as algorithmic decision systems), and how AI systems may impact individuals and communities or create broader societal harms.
  • Section 3 describes three broad areas that present challenges for addressing AI bias: datasets, measurement and metrics to support testing and validation, and human factors. It also provides general guidance for managing AI bias in each of those areas.

In this document, NIST provides an initial socio-technical framing for AI bias, including key context and terminology, highlights of the main challenges, and foundational directions for future guidance.

An example of how to audit AI bias using a black box approach

Each algorithm, and the processes associated with training the AI, must be evaluated to determine a strategy for testing the algorithm for bias. Here we are auditing AI bias (the code), not the process (human actions).

This article focuses on one possible way of testing, using a black box approach in which the AI engine code, the database structure, and the data are not accessible to the auditor. The example relates to testing against the NY bias audit law, under which candidate selection for a job search must not be based on race/ethnicity or gender.

Caveat: before performing any kind of analysis, review your contract with the manufacturer, as any approach to measuring and identifying a tendency or bias can be considered an attempt to reverse engineer the algorithm and infringe their intellectual property. In those cases, make sure you obtain written consent and authorization from your provider.

Information Gathering and Documentation Analysis

Request the job descriptions for the positions to be used in the test. That documentation should explain how the algorithm is instructed to identify the optimal skills for the candidate selection criteria (e.g., which skills are needed, what value is assigned to each, and any other factor used to select the best candidates for a job), and how the information is processed and classified to determine the final score and the criteria for identifying valid candidates (e.g., a skill may be missing but the candidate still valid, the weight given to experience, etc.).
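To illustrate what such documentation might describe, the sketch below shows one hypothetical form the selection criteria could take: a weighted skill list and a score threshold. Every skill, weight, and the threshold here is invented; the real scheme must come from the vendor's documentation.

    # Hypothetical selection criteria; all skills, weights, and the threshold
    # are invented for illustration.
    criteria = {"python": 0.4, "sql": 0.3, "communication": 0.2, "leadership": 0.1}
    THRESHOLD = 0.5  # minimum weighted score to be shortlisted

    def score(candidate_skills):
        """Weighted sum of the criteria the candidate satisfies."""
        return sum(weight for skill, weight in criteria.items()
                   if skill in candidate_skills)

    s = score({"python", "communication"})
    print(s, s >= THRESHOLD)  # 0.6 True: shortlisted despite missing skills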

Seek to understand how the algorithm works and how the data is parsed and processed for the algorithm to analyze.

Seek to understand the sets of selection criteria and determine whether there is any inherent bias even before the algorithm is fed the data. If the data is biased, the algorithm will respond accordingly: garbage in, garbage out.
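One quick pre-model check for that kind of inherent skew, assuming you can obtain the applicant pool, is simply to look at group representation in the input data; a minimal sketch:

    import pandas as pd

    # Invented applicant pool; a heavily skewed input distribution can bias
    # the tool's output no matter how the model itself behaves.
    pool = pd.DataFrame({"gender": ["M"] * 80 + ["F"] * 20})

    shares = pool["gender"].value_counts(normalize=True)
    print(shares)                 # M: 0.8, F: 0.2
    print(shares[shares < 0.30])  # flag groups below an illustrative 30% floor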

Data Analysis

From the data set provided (all applicants for a particular job posting), perform a regression analysis. In the example of testing an AI candidate-selection tool for bias based on race or gender, run the analysis to identify the candidate population, segment it by race and gender, and analyze the populations of rejected and accepted candidates, comparing skills between the two populations for potential discrimination (similar skills found in both the accepted and rejected populations).
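A minimal sketch of that comparison, assuming the tool can export an applicant file with an outcome flag and a skill-match score (both column names are assumptions): candidates with similar skill scores should show similar outcomes across race and gender groups.

    import pandas as pd

    # Invented applicant file; column names and values are assumptions.
    df = pd.DataFrame({
        "race":        ["White", "Black", "White", "Black", "White", "Black"],
        "gender":      ["M", "M", "F", "F", "M", "F"],
        "skill_score": [0.9, 0.9, 0.8, 0.8, 0.5, 0.5],
        "selected":    [1, 0, 1, 0, 0, 0],
    })

    # Mean skill score of accepted vs. rejected candidates within each group.
    # Rejected candidates whose scores match accepted candidates in another
    # group are a red flag for potential discrimination.
    print(df.groupby(["race", "gender", "selected"])["skill_score"]
            .agg(["count", "mean"]))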

Additionally, perform a multiple regression analysis by group (gender, race). In multiple regression, the objective is to develop a model that relates a dependent variable y (candidate selection) to more than one independent variable x1, x2 (gender and race).

Regression is used to predict an outcome. If one segment (variable) shows higher predictive power, it identifies the algorithm's "ideal" characteristics of a candidate (in this example, gender and race) and thus reveals its biases. For the tool to be unbiased, the gender and race terms in the regression should be statistically not significant.

You can use any statistical software (e.g., SPSS, Excel, R, Q, etc.) to perform the analysis.
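Python with pandas and statsmodels is one more such option. The sketch below runs the multiple regression described above; because candidate selection is a yes/no outcome, it uses the logistic form of the regression. The data is simulated with a deliberate gender effect so the test has something to find; in a real audit the decisions come from the tool itself, and all column names here are assumptions.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simulated applicant data with a built-in gender effect (illustration only).
    rng = np.random.default_rng(0)
    n = 200
    df = pd.DataFrame({
        "gender":     rng.choice(["M", "F"], n),
        "race":       rng.choice(["White", "Black", "Hispanic"], n),
        "experience": rng.integers(0, 10, n),
    })
    logits = 0.3 * df["experience"] - 1.5 + 0.8 * (df["gender"] == "M")
    df["selected"] = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

    # Model selection on gender and race while controlling for experience.
    model = smf.logit("selected ~ C(gender) + C(race) + experience",
                      data=df).fit(disp=0)
    print(model.summary())

    # For an unbiased tool, the gender and race coefficients should be
    # statistically not significant (p-values above, say, 0.05).
    print(model.pvalues)

With a built-in effect of this size, the gender term will typically come out significant, which is exactly the pattern an auditor would flag.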

Review of Data Criterion Provided to the Algorithm

Perform a review of the data set considerations. The data set considerations (what data to select) and the weights assigned must be challenged, reviewed, and updated periodically.

Review for Transparency

The NY law requires transparency. Review the candidate notifications regarding what data is being evaluated, the option to request an alternative (non-AI) selection process, and informed consent. Review the terms of use and service, the privacy policy, and any other notice or information provided to candidates.

Contact us today for assistance auditing AI systems for bias or reviewing contractual obligations with third parties!
