The EEOC Puts Employers on Notice: The Use of AI in Hiring and Recruiting


By: Raabia Cheema

Earlier this year, the Equal Employment Opportunity Commission (EEOC) released its Strategic Enforcement Plan, a document that names discrimination by artificial intelligence (AI) in the workplace as one of its key priorities. This announcement follows the EEOC chair's launch of the Artificial Intelligence and Algorithmic Fairness Initiative in January of 2021, created to ensure that the use of AI and other emerging technologies complies with federal civil rights laws. The initiative marked the EEOC's intention to (1) issue technical assistance providing guidance on algorithmic fairness and the use of AI in employment decisions, (2) identify promising practices, (3) hold listening sessions with key stakeholders about algorithmic tools and their employment ramifications, and (4) gather information about the adoption, design, and impact of hiring and other employment-related technologies.

Among the employment decisions affected by AI is the recruitment and hiring of workers. According to the World Economic Forum, many Fortune 500 companies use AI to sift through large numbers of applications. The Society for Human Resource Management (SHRM) estimated that around 79% of employers would be using AI for recruitment and hiring in 2022. Companies rely on these AI systems both to analyze a large volume of applications and to find applicants with specific skills or experience. However, these systems are not always reliable.

The hiring process is really a series of decisions: selecting a candidate pool, for example, and then shortlisting the strongest candidates for a given position. At each of these steps, AI, typically in the form of algorithms, can play a role.

During the recruiting stage, in which employers shape their candidate pool, AI technology aids in advertising job openings, notifying candidates about potential positions, and more. These algorithms are intended to reach the job seekers most relevant to a given position, allowing employers to make more efficient use of their recruitment budgets. However, according to Harvard Business Review, these AI systems can deliver job ads in ways that reinforce gender and racial stereotypes. In a 2019 study discussed in Harvard Business Review, researchers found that targeted Facebook ads for supermarket cashier positions were shown to an audience that was roughly 85% women, while ads for taxi company jobs reached an audience that was about 75% Black. Similarly, job board recommendation systems, such as ZipRecruiter, can replicate implicit biases. These systems automatically learn the preferences of employers: if a recruiter interacts more frequently with white male candidates, the system is likely to find proxies for those characteristics and surface more white male candidates.
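
To make that mechanism concrete, below is a minimal, hypothetical sketch of how a ranking model trained only on recruiters' past clicks can reproduce bias through a proxy feature. The data, feature names, and model are illustrative assumptions for this article, not any vendor's actual system.

```python
# Hypothetical illustration: a model trained on biased recruiter clicks learns a
# proxy for a protected characteristic, even though it never sees that characteristic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Protected attribute (NOT a model input) and hypothetical applicant features.
group = rng.integers(0, 2, n)                  # 0 = group A, 1 = group B
experience = rng.normal(5, 2, n)               # years of experience
proxy = (rng.random(n) < np.where(group == 0, 0.7, 0.2)).astype(float)  # correlates with group A

# Historical labels: recruiters clicked group A candidates more often,
# independent of qualifications -- the bias the model will absorb.
click_prob = np.clip(0.3 + 0.05 * (experience - 5) + 0.3 * (group == 0), 0, 1)
clicks = (rng.random(n) < click_prob).astype(int)

# The model sees only legitimate-looking features.
X = np.column_stack([experience, proxy])
model = LogisticRegression().fit(X, clicks)

# Yet its top recommendations still skew toward group A, via the proxy feature.
scores = model.predict_proba(X)[:, 1]
top = np.argsort(scores)[-500:]                # top 10% "recommended" candidates
print("Share of group A overall:      ", (group == 0).mean())
print("Share of group A in top-ranked:", (group[top] == 0).mean())
```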

The problems do not end there. Once the candidate pool is created, employers further employ AI to screen job candidates. Resume parsing tools use "knockout questions" to establish baseline minimum qualifications, and other tools incorporate machine learning to predict which applicants will be most successful or to make recommendations based on a recruiter's past screening decisions.
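
The "knockout question" pattern is simple to illustrate: any applicant who fails a baseline rule is rejected before a human ever reviews the application. The rules and data below are hypothetical examples, not any real product's criteria.

```python
# Hypothetical sketch of automated knockout screening.
from dataclasses import dataclass

@dataclass
class Application:
    name: str
    years_experience: float
    has_work_authorization: bool
    willing_to_relocate: bool

# Each rule must be satisfied, or the applicant is automatically rejected.
KNOCKOUT_RULES = [
    ("minimum 3 years of experience", lambda a: a.years_experience >= 3),
    ("authorized to work",            lambda a: a.has_work_authorization),
    ("willing to relocate",           lambda a: a.willing_to_relocate),
]

def screen(applications):
    """Split applicants into those who pass every knockout rule and those rejected."""
    passed, rejected = [], []
    for app in applications:
        failures = [label for label, rule in KNOCKOUT_RULES if not rule(app)]
        (rejected if failures else passed).append((app, failures))
    return passed, rejected

pool = [
    Application("Applicant 1", 5.0, True, True),
    Application("Applicant 2", 2.0, True, True),   # knocked out on experience alone
]
passed, rejected = screen(pool)
print(len(passed), "passed;", len(rejected), "rejected automatically")
```

A rigid rule of this kind is exactly where screening out can occur: an applicant who could perform the job with a reasonable accommodation may still fail an inflexible automated threshold.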

These issues have legal implications. In May of 2022, the EEOC, alongside the Department of Justice (DOJ), released guidance outlining employers' responsibilities when using software, algorithms, and artificial intelligence to assess job applicants and employees. The document specifically addressed possible violations of Title I of the Americans with Disabilities Act (ADA). The EEOC states that an employer can be held responsible for the actions of software even if that software was acquired from an outside vendor. An employer could be held legally responsible for violating the ADA if it (1) failed to provide a "reasonable accommodation" necessary for job applicants or employees to be rated fairly and accurately, (2) relied on an algorithmic decision-making tool that intentionally or unintentionally "screened out" an individual with a disability, or (3) adopted an algorithmic decision-making tool for use with its job applicants or employees that violates the ADA's restrictions on disability-related inquiries and medical examinations.

Litigation. We have begun to see litigation in this area. Last year, the EEOC sued three companies that provide English tutoring services to students in China under the name "iTutorGroup," alleging that iTutorGroup violated federal law by programming its recruitment software to automatically reject older applicants. According to the lawsuit, iTutorGroup hired thousands of United States-based tutors each year to provide remote, online tutoring, and programmed its recruitment software to "automatically reject female applicants aged 55 or older and male applicants aged 60 or older." The EEOC alleges that this conduct violates the Age Discrimination in Employment Act (ADEA).

Similarly, in February 2023, a class action lawsuit was filed against Workday, Inc. in the Northern District of California, alleging that the company had offered its customers biased applicant screening tools that resulted in racial, age, and disability discrimination. The plaintiff, an African-American man over 40, has applied for and been denied 80 to 100 jobs through Workday since 2018, despite his apparent qualifications. He seeks to represent all applicants since June 3, 2019 who are African American, over the age of 40, or disabled and who were not "referred or permanently hired" due to Workday's AI practices. The lawsuit alleges that Workday's screening tools allow its customers to use "discriminatory and subjective judgements when evaluating applicants" and also permit the "preselection" of applicants in certain racial or age categories.

Legislation. In November 2021, the New York City Council passed a bill regulating employers' and employment agencies' use of "automated employment decision tools" in making employment decisions, set to take effect on January 1, 2023. The law prohibits employers from using an automated employment decision tool, defined as "any computational process, derived from … artificial intelligence, that issues simplified output… that is used to substantially assist or replace discretionary decision making for making employment decisions that impact natural persons," unless specific requirements are met. These requirements include (1) that the tool has been subject to a bias audit within the last year; and (2) that a summary of the results of the most recent bias audit and the tool's distribution data have been made publicly available on the employer's or employment agency's website. A bias audit is defined as "an impartial evaluation by an independent auditor," which includes "the testing of an automated employment decision tool to assess the tool's disparate impact on persons of any component 1 category required to be reported by employers." The law also requires employers who use such tools to notify candidates residing in New York City that a tool will be used, along with the job qualifications and characteristics the tool will use to assess applicants. Due to the high volume of public comments, however, the New York City Department of Consumer and Worker Protection announced that it will delay enforcing the law until April 15, 2023.
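
One common way to quantify the "disparate impact" such an audit assesses is to compare selection rates across demographic categories and compute impact ratios against the most-favored group. The sketch below illustrates that kind of calculation with hypothetical categories and counts; it is an illustration, not the law's prescribed audit methodology.

```python
# Hypothetical disparate-impact calculation: selection rates per category and
# impact ratios relative to the highest-rated category.
def impact_ratios(outcomes):
    """outcomes: {category: (selected_count, total_count)} -> {category: (rate, ratio)}."""
    rates = {cat: sel / tot for cat, (sel, tot) in outcomes.items()}
    benchmark = max(rates.values())
    return {cat: (rate, rate / benchmark) for cat, rate in rates.items()}

sample = {
    "Category A": (120, 400),   # 30% selection rate
    "Category B": (45, 300),    # 15% selection rate
}
for cat, (rate, ratio) in impact_ratios(sample).items():
    print(f"{cat}: selection rate {rate:.0%}, impact ratio {ratio:.2f}")
```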

As employers continue to adopt AI technology, these legal implications deserve careful consideration. The EEOC has made clear its intention to investigate and litigate the discriminatory effects of such software, even when the software is purchased from a third party. Employers should ensure that the AI tools they use do not produce these discriminatory outcomes.