
EEOC Solicits Recommendations to Curb AI-Driven Discrimination

The U.S. Equal Employment Opportunity Commission (EEOC) recently held a public hearing that enabled workplace experts to offer suggestions for ensuring that artificial intelligence (AI) doesn't discriminate against job applicants.

The session, "Navigating Employment Discrimination in AI and Automated Systems: A New Civil Rights Frontier," outlined the benefits and pitfalls of using AI in employment decisions to help the EEOC determine how to regulate its use.

"The use and complexity of technology in employment decisions is increasing over time," said EEOC Chair Charlotte A. Burrows. "AI and other algorithms offer great advances, but they may perpetuate barriers [to] employment."

Nearly 1 in 4 organizations reported using automation or AI to support HR-related activities, including recruitment and hiring, according to a 2022 survey by the Society for Human Resource Management (SHRM). The report revealed that 85 percent of employers that use automation or AI said it saves time or increases efficiency.

AI Can Discriminate Against People of Color

The advantages of AI in employment settings were addressed throughout the session: Several panelists noted that these technologies can easily and cheaply source, recruit and select applicants for employment.

Alex Engler, a fellow at the Brookings Institution, a think tank in Washington, D.C., and an adjunct professor at Georgetown University, explained that AI offers value to employers when used responsibly. However, automation has too often been deployed with inflated promises and insufficient testing or worker protections.

"This can lead to discriminatory outcomes, worker disenfranchisement through black-box AI decisions and unjust decisions resulting from algorithmic mistakes," he said.

ReNika Moore, director of the racial justice program at the American Civil Liberties Union in New York City, explained that newer AI tools are often marketed as cheaper, more efficient, and nondiscriminatory or less discriminatory than their predecessors.

"There is research showing that AI-driven tools can lead to more discriminatory outcomes than human-driven processes," she said.

Moore referred to a 2022 white paper by research lab Learning Collider in New York City comparing human-driven hiring with typical AI-driven hiring, which found that the standard AI-driven tool selected 50 percent fewer Black applicants than humans did.

She also noted that Black and Latino applicants are overrepresented in data that contains negative or undesirable information—such as records from criminal legal proceedings, evictions and credit history.

"We must have comprehensive public oversight, transparency and accountability to guarantee that job seekers and employees do not face the same old discrimination dressed up in new clothes," Moore said.

The EEOC had previously warned companies that AI could potentially discriminate against people with disabilities. And Heather Tinsley-Fix, a senior advisor at AARP, said that automation also has the potential to discriminate against older candidates.

"Any data point collected that explicitly reveals or serves as a proxy for age—such as date of birth, years of experience or date of graduation—can be noticed by the algorithm as part of a pattern denoting undesirable candidates and signal the algorithm to lower their ranking or screen them out entirely," she said.

Panelists Offer Solutions

Jordan Crenshaw, vice president of the U.S. Chamber of Commerce's Technology Engagement Center, said the Chamber had examined the public's perception of AI earlier this year. The findings revealed that people grow more comfortable with AI as they familiarize themselves with its potential role in society.

"Education continues to be one of the keys to bolstering AI acceptance and enthusiasm, as a lack of understanding of AI is the leading indicator for a push-back against AI adoption," he said. "The federal government can play a critical role in incentivizing the adoption of trustworthy AI applications through the right policies."


Engler suggested that the EEOC:

  • Consider a range of AI employment systems—in hiring, targeted job ads, recruitment, task allocation, evaluation of employee performance, wage setting, promotion and termination.
  • Encourage and enforce AI principles on these employment systems.
  • Develop the capacity to provide oversight, such as by using investigations to audit these critical AI systems and ensure their compliance with federal law, and by using information-gathering authorities to inform the EEOC and the public on their proliferation and impact.

Other panelists recommended that the EEOC implement further policies to protect underrepresented applicants, such as mandating that employers audit the automated hiring systems they use, document their use of emerging technologies and inform applicants when AI tools are applied to their candidacies.

The hearing continues the work of the EEOC's AI and Algorithmic Fairness Initiative, which aims to ensure that software used in employment decisions complies with the federal laws the agency enforces.

In January, the EEOC published a draft of its new Strategic Enforcement Plan in the Federal Register outlining its priorities in tackling workplace discrimination, including those involving AI, over the next four years. Public comments on the draft must be received by Feb. 9, 2023.
