AI Bias Audit


Services category: Internal Audit


An example of how to audit AI bias using a black-box approach

Each algorithm, and the processes used to train the AI, must be evaluated to determine a strategy for testing the algorithm for bias. Note that we are auditing the AI for bias (the code and its outputs), not the surrounding human process.

Before performing any kind of analysis, review your contract agreement with the manufacturer: any approach that measures or identifies a tendency or bias may be considered an attempt to reverse engineer the algorithm and infringe the vendor's intellectual property. In those cases, obtain written consent and authorization from your provider first.

Information Gathering and Documentation Analysis

Request the job descriptions for the positions used in the test. That documentation explains how the algorithm is instructed to identify the optimal skills for the candidate selection criteria (e.g., which skills are needed, the value assigned to each, and any other factor used to select the best candidates for a job), and how the information is processed and classified to produce the final score and the criteria for identifying valid candidates (e.g., a candidate may still be valid with a skill missing, the weight given to experience, etc.).

Seek to understand how the algorithm works and how the data is parsed and processed before the algorithm analyzes it.

Seek to understand the sets of selection criteria and determine whether there is any inherent bias even before the algorithm is fed the data. If the data is biased, the algorithm will respond accordingly: garbage in, garbage out.
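As a concrete illustration of examining the input data itself, the sketch below flags groups that are under-represented in the applicant pool before the algorithm ever sees it. This is a minimal sketch in plain Python; the function name, the threshold value, and the applicant pool are hypothetical assumptions, not part of any standard.

```python
from collections import Counter

def representation_check(candidates, attr, threshold=0.20):
    """Flag groups whose share of the applicant pool falls below a
    chosen threshold, i.e. a skew present before the algorithm runs."""
    counts = Counter(c[attr] for c in candidates)
    total = sum(counts.values())
    shares = {group: n / total for group, n in counts.items()}
    flagged = [g for g, share in shares.items() if share < threshold]
    return shares, flagged

# Hypothetical applicant pool: gender shares are 0.75 / 0.25
pool = [{"gender": "M"}] * 75 + [{"gender": "F"}] * 25
shares, flagged = representation_check(pool, "gender", threshold=0.30)
# shares -> {"M": 0.75, "F": 0.25}; "F" is flagged as under-represented
```

A flagged group does not prove bias by itself, but it tells the auditor where the training or applicant data may already skew the algorithm's output.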

Data Analysis

From the data set provided (all applicants for a particular job posting), perform a regression analysis. In the example of testing an AI for bias in candidate selection based on race or gender: identify the candidate population, segment it by race and gender, analyze the populations of rejected and accepted candidates, and compare skills between the two populations for potential discrimination (similar skills appearing in both the accepted and rejected populations).
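The accepted-versus-rejected comparison can start with a simple selection-rate breakdown per group. One common screen, which this article does not name but which is standard in adverse-impact analysis, is the EEOC four-fifths rule: a group whose selection rate falls below 80% of the highest group's rate is flagged. A minimal sketch with hypothetical data:

```python
def selection_rates(records, group_key):
    """Selection rate per group: accepted / total applicants in that group."""
    totals, accepted = {}, {}
    for r in records:
        g = r[group_key]
        totals[g] = totals.get(g, 0) + 1
        accepted[g] = accepted.get(g, 0) + (1 if r["selected"] else 0)
    return {g: accepted[g] / totals[g] for g in totals}

def four_fifths_ratios(rates):
    """Impact ratio of each group against the highest-rate group; a
    ratio below 0.8 is the conventional adverse-impact flag."""
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Hypothetical data for one job posting
records = (
      [{"gender": "M", "selected": True}] * 40
    + [{"gender": "M", "selected": False}] * 60
    + [{"gender": "F", "selected": True}] * 20
    + [{"gender": "F", "selected": False}] * 80
)
rates = selection_rates(records, "gender")    # {"M": 0.4, "F": 0.2}
ratios = four_fifths_ratios(rates)            # {"M": 1.0, "F": 0.5}
```

Here the impact ratio for "F" is 0.5, well under the 0.8 guideline, which would prompt the deeper regression analysis described next.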

Additionally, perform a multiple regression analysis by group (gender, race). In multiple regression, the objective is to develop a model that relates a dependent variable y (candidate selection) to more than one independent variable x1, x2 (gender and race).

Regression is used to predict an outcome. If a segment variable (gender or race) is a significant predictor of selection, it reveals the algorithm's "ideal" candidate characteristics and therefore its biases. For the algorithm to be unbiased, the coefficients on those variables should be statistically not significant.
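A minimal sketch of that multiple regression, using NumPy on synthetic data (the data set and variable names are illustrative assumptions; in practice you would use a statistical package to obtain proper significance tests): an unbiased outcome yields a gender coefficient near zero, while a biased one does not.

```python
import numpy as np

def fit_ols(skill, gender, selected):
    """Ordinary least squares of selection on an intercept, skill, and
    gender. Returns the fitted coefficients [intercept, b_skill, b_gender]."""
    X = np.column_stack([np.ones(len(skill)), skill, gender])
    beta, *_ = np.linalg.lstsq(X, np.asarray(selected, dtype=float), rcond=None)
    return beta

skill  = [1, 1, 2, 2, 3, 3, 4, 4]
gender = [0, 1, 0, 1, 0, 1, 0, 1]   # balanced within each skill level

# Unbiased outcome: selection depends only on skill (selected iff skill >= 3)
fair   = [0, 0, 0, 0, 1, 1, 1, 1]
# Biased outcome: selection tracks gender exactly
biased = [0, 1, 0, 1, 0, 1, 0, 1]

b_fair   = fit_ols(skill, gender, fair)
b_biased = fit_ols(skill, gender, biased)
# b_fair[2] is ~0: gender adds no predictive power, so no bias signal
# b_biased[2] is ~1: gender fully predicts selection, a clear bias signal
```

With real data the coefficients will not be exactly zero or one, so the decision rests on significance tests (t-statistics and p-values), which the packages below report directly.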

You can use any statistical software (e.g., SPSS, Excel, R, Q, etc.) to perform the analysis.

Review of Data Criterion Provided to the Algorithm

Perform a review of data set considerations: what data is selected and the weight assigned to it must be challenged, reviewed, and updated periodically.

Review for Transparency

The NY law requires testing for transparency. Review the candidate notifications covering what data is evaluated, the option to request a non-AI selection process, and informed consent. Also review the terms of use and service, the privacy policy, and any other notice or information provided to candidates.

Contact us today for assistance with auditing AI systems for bias or with contractual obligations with third parties!


>> This service article was last updated on November 18, 2022
Contact Elevate today to learn more about AI Bias Audit

Elevate // +1 (888) 601-5351 // Monday to Friday 9am-6pm
