
Client Alerts

EEOC Issues Guidance for Use of AI in Employment Selection Procedures

May 2023

On May 18, 2023, the Equal Employment Opportunity Commission (“EEOC”) issued guidance on the use of artificial intelligence (“AI”) in employment decision-making. The rapid expansion of employers’ use of AI has largely gone unregulated. While the guidance is not binding precedent, it reaffirms the EEOC’s position that improper application of AI in employment-related decisions violates anti-discrimination protections under Title VII of the Civil Rights Act, and it highlights the EEOC’s focus on this high-profile, rapidly developing issue.

EEOC Confirms Title VII Protections Apply to Use of AI in Selection Procedures

Importantly, the EEOC’s guidance does not create new policies. Rather, it “applies principles already established in the Title VII statutory provisions as well as previously issued guidance.” Title VII prohibits employment discrimination based on race, color, religion, sex (including pregnancy, sexual orientation, and gender identity), or national origin. The EEOC notes that while Title VII applies to all employment practices, the scope of the guidance is limited to the use of AI with respect to “selection procedures,” such as hiring, promotion, and termination. Additionally, the guidance addresses only potential disparate or adverse impact discrimination that may result from the use of improperly designed, vetted, or implemented AI, software, and algorithms in employment selection procedures. Disparate or adverse impact discrimination occurs when a seemingly neutral test or selection procedure disproportionately excludes individuals in protected classes. Title VII also prohibits intentional or disparate treatment discrimination, which consists of practices that intentionally discriminate against individuals in protected classes. However, the guidance does not address intentional discrimination or protections against discrimination afforded by other federal employment discrimination statutes, such as the Americans with Disabilities Act or the Age Discrimination in Employment Act.

The guidance advises that if an algorithmic decision-making tool has a disparate impact on individuals within a protected class, its use violates Title VII unless it can be shown that the use of the tool is “job related and consistent with business necessity” pursuant to Title VII. While the EEOC’s guidance does not include specific examples, the use of seemingly neutral factors in selection procedures could result in disparate impact discrimination against protected classes. For example, while an employer may simply want to identify candidates who reside close to its office, screening for zip codes could unintentionally result in racial bias. Likewise, screening out applicants with gaps in work history could disproportionately screen out female applicants who are more likely to take time off to care for children or other family members. However, the guidance specifically does not address the other stages in a Title VII disparate impact analysis, including whether the use of an automated tool is a valid measure of important job-related traits or characteristics.

Additionally, the EEOC reaffirmed that the Uniform Guidelines on Employee Selection Procedures (“Guidelines”) adopted by the EEOC in 1978 – when the concept of AI was still firmly science fiction – still apply to the use of AI in making hiring, promotion, and termination decisions. Those Guidelines provide direction for determining whether employer tests and selection procedures are lawful.

The Four-Fifths Rule Is Only a Rule of Thumb

The EEOC’s prior Guidelines – now affirmatively reinforced as applying equally to the use of AI in employment-related decisions – referenced the four-fifths (80%) rule as one method to determine whether selection rates between two groups are “substantially” different and, thus, whether the selection procedure has a disparate impact on individuals in a protected class. Under that four-fifths rule, if one group’s selection rate is less than 80% of the selection rate of the comparison group, then the rates are considered “substantially” different and, therefore, potential evidence of discrimination. In the context of the use of AI in these selection processes, however, the EEOC cautioned that, while the four-fifths rule remains a way to measure bias of an automated selection tool, it is “merely a rule of thumb” and “may be inappropriate in certain circumstances.”
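
To make the arithmetic concrete, the following Python sketch applies the four-fifths rule to hypothetical selection counts (the numbers are illustrative assumptions, not figures from the guidance):

    # Four-fifths rule of thumb applied to hypothetical counts.
    def selection_rate(selected, applicants):
        """Fraction of applicants in a group who were selected."""
        return selected / applicants

    # Hypothetical pool: 48 of 80 Group A applicants selected,
    # 12 of 40 Group B applicants selected.
    rate_a = selection_rate(48, 80)  # 0.60
    rate_b = selection_rate(12, 40)  # 0.30

    # Compare the lower selection rate to the higher one.
    impact_ratio = min(rate_a, rate_b) / max(rate_a, rate_b)  # 0.50

    # A ratio below 0.80 suggests the rates are "substantially"
    # different under the rule of thumb and warrants closer scrutiny.
    print(f"Impact ratio: {impact_ratio:.2f}")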

The guidance highlighted that courts have not always found the four-fifths test to be a reasonable substitute for a test of statistical significance. As a result, the EEOC warned that it might not consider compliance with the four-fifths rule alone sufficient to show that a particular selection procedure is lawful.
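
For illustration, a significance check on the same hypothetical counts might use Fisher’s exact test, one common choice for a 2x2 selection table; this Python sketch assumes SciPy is available (the guidance does not prescribe any particular test):

    # Statistical significance check with hypothetical counts.
    from scipy.stats import fisher_exact

    # Rows are groups; columns are [selected, not selected].
    table = [
        [48, 32],  # Group A: 48 selected out of 80
        [12, 28],  # Group B: 12 selected out of 40
    ]

    odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
    print(f"p-value: {p_value:.4f}")

    # A small p-value (e.g., below 0.05) indicates the difference in
    # selection rates is unlikely to be due to chance -- a conclusion
    # that can diverge from the four-fifths rule of thumb alone.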

Employers Are Ultimately Responsible for Tools Designed or Administered by Others, Including Vendors

The EEOC advised that employers may be responsible if automated selection tools, including those that rely on AI, result in a disparate impact, even if the tool was developed or administered by an outside vendor. Additionally, employers may be responsible for the actions of their agents, including vendors, if the employer gave those agents authority to act on the employer’s behalf.

As a result, the EEOC encourages employers to, at a minimum, ask vendors whether steps have been taken to evaluate whether the use of the tool causes a disparate impact. The EEOC also recommends that employers ask a vendor whether it relied upon the four-fifths rule when assessing whether the use of the tool might have a disparate impact or whether it instead relied upon a more robust statistical significance test often required by courts. However, the EEOC warned that an employer could still be liable – even if it asked these questions to properly vet the vendor’s tool – if the vendor’s assessment is incorrect and the tool in fact results in disparate impact or disparate treatment discrimination.

If an employer discovers that the use of an algorithmic decision-making tool potentially has a disparate impact, the EEOC stressed that the employer should take steps to reduce the impact or select a different tool to avoid violating Title VII. Failure to adopt a less discriminatory alternative could give rise to liability unless the use of the tool, despite its discriminatory impact, is job related and consistent with business necessity and no alternative tool with a lesser disparate impact would meet the employer’s needs.

Further, the EEOC advises that employers should conduct ongoing self-analyses to determine whether their use of AI for employment-related decisions potentially runs afoul of Title VII protections and, if so, proactively change any such practices.
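
As one illustration, a recurring self-analysis might begin with a screen like the following Python sketch, which flags any group whose selection rate falls below four-fifths of the highest group’s rate (group names and counts are hypothetical, and such a screen is a starting point for review, not a legal conclusion):

    # Self-audit sketch over hypothetical quarterly hiring data.
    def audit_selection_rates(outcomes):
        """outcomes maps group name -> (selected, applicants).
        Returns groups below 4/5 of the highest selection rate."""
        rates = {g: sel / total for g, (sel, total) in outcomes.items()}
        benchmark = max(rates.values())
        return [g for g, r in rates.items() if r / benchmark < 0.8]

    flagged = audit_selection_rates({
        "Group A": (48, 80),   # rate 0.60
        "Group B": (12, 40),   # rate 0.30
        "Group C": (30, 50),   # rate 0.60
    })
    print("Groups warranting closer review:", flagged)  # ['Group B']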

What Should Employers Do?

In response to the EEOC’s new guidance, employers using or considering AI-driven tools in selection procedures should proactively evaluate the tools’ potential discriminatory impact, properly vet third-party vendors and their algorithmic decision-making tools, and periodically audit the results of those tools to confirm that they are not, in fact, producing discriminatory outcomes. As AI and related technology rapidly evolve, regulations and guidance surrounding their use, including in the context of employment decision-making, are expected to continue to develop. Employers should therefore pay careful attention to legal developments at the federal, state, and local levels on this issue.

This Client Alert has been prepared by Tucker Ellis LLP for the use of our clients. Although prepared by professionals, it should not be used as a substitute for legal counseling in specific situations. Readers should not act upon the information contained herein without professional guidance.
