
EEOC Issues Guidance on Use of AI


Employers can't rely on a vendor's assurances that its AI tool complies with Title VII of the Civil Rights Act of 1964. If the tool results in an adverse discriminatory impact, the employer may be held liable, the U.S. Equal Employment Opportunity Commission (EEOC) clarified in technical assistance issued May 18. The guidance explains how Title VII applies to automated systems that incorporate artificial intelligence in a range of HR-related uses.

Without proper safeguards, employers might violate Title VII when they use AI to select new employees, monitor performance, and determine pay or promotions, the EEOC said.

"Too many employers are not yet fully realizing that long-standing nondiscrimination law is applicable even in the very new context of AI-driven employment selection tools," said Jim Paretti, an attorney with Littler in Washington, D.C. He described the guidance as a "wake-up call to employers."

Neutral tests or selection procedures, including algorithmic decision-making tools, that have a disparate impact on the basis of race, color, religion, sex or national origin must be job-related and consistent with business necessity; otherwise, they are prohibited, the EEOC said. Even if a procedure is job-related and consistent with business necessity, employers must still consider less discriminatory alternatives. The agency noted that disparate impact analysis was the focus of the technical assistance.

Employers May Be Surprised by Title VII's Reach

"Employers are not prepared for the Title VII implications of using AI HR tools for two main reasons," said Bradford Newman, an attorney with Baker McKenzie in Palo Alto, Calif.

"First, front-line HR managers and procurement folks who routinely source AI hiring tools do not understand the risks," he said. "Second, AI vendors will not usually disclose their testing methods and will demand companies provide contractual indemnification and bear all risk for alleged adverse impact of the tools."

The EEOC puts the burden of compliance squarely on employers. "[I]f an employer administers a selection procedure, it may be responsible under Title VII if the procedure discriminates on a basis prohibited by Title VII, even if the test was developed by an outside vendor," the agency states in its technical assistance guidance.

The employer may also be held responsible for the actions of its agents, including software vendors, if it has given them authority to act on its behalf. "This may include situations where an employer relies on the results of a selection procedure that an agent administers on its behalf," the EEOC stated in the guidance.

Employers may want to ask the vendor whether steps have been taken to evaluate whether use of a tool causes a substantially lower selection rate for individuals with a characteristic protected by Title VII, the agency recommended. If the vendor says a lower selection rate for a group of individuals is expected, the employer should consider whether the tool is job-related and consistent with business necessity and whether there are alternatives.

"The guidance leaves unanswered the key question of how employers should establish that AI-based tools are, in fact, job-related," said Mark Girouard, an attorney with Nilan Johnson Lewis in Minneapolis.

In addition, if the vendor is incorrect about its own assessment and the tool results in disparate impact discrimination or disparate treatment discrimination, the employer could be liable.

Four-Fifths Rule for Selection Rate Explained

The document explains the four-fifths rule—a general rule of thumb for determining whether the selection rate for one group is substantially different from the rate for another—and notes that it applies to the use of algorithmic decision-making tools.

Employers can assess whether a selection procedure has an adverse impact on a particular protected group by checking whether the use of the procedure causes a selection rate for individuals in the group that is substantially less than the selection rate for individuals in another group.

Suppose 80 white individuals and 40 Black individuals take a personality test as part of a job application that is scored using an algorithm, and 48 of the white applicants and 12 of the Black applicants advance to the next round of the selection process, the EEOC hypothesized. Based on these results, the selection rate for white individuals is 48/80, or 60 percent, and the selection rate for Black individuals is 12/40, or 30 percent.

The ratio of the two rates is thus 30/60, or 50 percent. Because this ratio is lower than four-fifths (80 percent), the selection rate for Black applicants is substantially different from the rate for white applicants, which could be evidence of discrimination.
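To make the arithmetic concrete, here is a minimal Python sketch that applies the four-fifths rule to the EEOC's hypothetical numbers. The function name and return structure are illustrative, not part of the guidance.

```python
def four_fifths_check(selected_a, total_a, selected_b, total_b):
    """Compare two groups' selection rates using the four-fifths rule of thumb.

    Group A is the higher-rate comparison group; group B is the group being
    checked for a substantially lower selection rate.
    """
    rate_a = selected_a / total_a
    rate_b = selected_b / total_b
    ratio = rate_b / rate_a
    # A ratio below 4/5 (80 percent) flags a substantially lower rate.
    return rate_a, rate_b, ratio, ratio < 0.8

# EEOC's hypothetical: 48 of 80 white applicants and 12 of 40 Black applicants advance.
rate_white, rate_black, ratio, flagged = four_fifths_check(48, 80, 12, 40)
print(f"White selection rate: {rate_white:.0%}")  # 60%
print(f"Black selection rate: {rate_black:.0%}")  # 30%
print(f"Ratio: {ratio:.0%} -> {'flagged' if flagged else 'not flagged'}")  # 50% -> flagged
```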

"Courts have agreed that use of the four-fifths rule is not always appropriate, especially where it is not a reasonable substitute for a test of statistical significance," the agency cautioned.

Employers may want to ask a vendor whether it relied on the four-fifths rule when determining whether use of a tool might have an adverse impact, or on a standard, such as statistical significance, that courts often use, the EEOC added.

"Unfortunately, the current EEOC standards for establishing the validity of a hiring tool—which are now nearly 50 years old—don't lend themselves neatly to analysis of these new and emerging technologies," Girouard said.

Oversight

Employers using AI HR tools should have effective AI oversight, including a chief AI officer, and routinely test for potentially adverse impact, Newman said.

Scott Nelson, an attorney with Hunton Andrews Kurth in Houston, noted that AI and its interplay with the law are rapidly evolving.

"I'm not sure any of us are truly ready for the potential impact AI can, and likely will, have on our daily lives," said Erica Wilson, an attorney with Fisher Phillips in Pittsburgh.  "That being said, employers have been put on notice that they cannot simply pick a software program off the shelf, or write one themselves, and assume it works as intended without inadvertent bias. Employers need to pay attention and test their employment-related AI tools early and often to make sure they aren't causing unintended harm."
