Artificial intelligence (AI), increasingly used in recruiting, might inadvertently discriminate against women and minorities if the data fed into it is flawed. Vendors of AI may be sued, along with employers, for such discrimination, but vendors usually have contractual clauses disclaiming any liability for employment claims, leaving employers on the hook.
Most likely, discrimination lawsuits initially would be brought against employers rather than AI vendors, but employers likely would try to involve the vendors in the litigation, according to Jennifer Betts, an attorney with Ogletree Deakins in Pittsburgh.
Only the largest companies have the bargaining power to persuade AI vendors to omit no-liability clauses from their commercial contracts. Vendors will almost never agree to indemnify employers for liability arising from AI discrimination, said Bradford Newman, an attorney with Paul Hastings in Palo Alto, Calif.
There have not yet been many employment-related AI lawsuits. Nonetheless, Newman said, "I know from discussions with the plaintiff's bar that they are coming and will likely begin with AI used in recruiting and selecting candidates."
Uses of AI
Employers use AI in many ways. Some AI tools recruit by sending targeted job advertisements to individuals who are not actively seeking a job.
AI also is applied to large applicant pools to screen resumes to determine which applicants should advance to interviews, said Natalie Pierce, co-chair of the robotics, AI and automation practice group at Littler in San Francisco. AI programs "use algorithms that consider key metrics such as work experience, education, job progression and relevant skills," she stated. This technology "is also able to store data about candidates so that hiring managers can reach back to past applicants to fill current job openings."
AI enables chatbots to respond to routine queries from candidates, check details such as availability and visa requirements, and help onboard new hires, noted Matthew Savage Aibel in law firm Epstein Becker Green's New York City office.
"Other AI tools analyze resumes or video-recorded interview responses to either narrow the choices for a hiring manager or recommend the successful candidate," he said. Illinois has placed restrictions on the use of artificial intelligence for video interviews, which take effect Jan. 1, 2020.
Risk of Discrimination
"It is unlikely that an AI-enabled software would be intentionally developed to discriminate against minorities or women," Betts said.
Sometimes, AI can prevent discrimination. Some employers use AI-powered tools to screen the words used in job postings to identify coded language that may unintentionally deter certain potential applicants, she said.
But "the larger risk is that these tools may unintentionally discriminate against a protected group," she stated.
Aibel explained that AI can introduce bias by selecting for certain neutral characteristics that have a discriminatory impact on minorities or women. For example, studies show that people who live closer to the office are likelier to be happy at work. An AI algorithm might therefore select only resumes listing ZIP codes within a certain commute distance. That filter could have a discriminatory impact on applicants who live outside those ZIP codes, inadvertently excluding residents of neighborhoods populated predominantly by minorities.
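The mechanism is worth spelling out: a screen that never looks at race or sex can still skew outcomes if the neutral feature it uses correlates with a protected class. A minimal sketch (the ZIP codes and candidate records below are invented for illustration, not taken from any real tool):

```python
# Hypothetical ZIP-code screen. It is facially neutral -- it only
# checks commute proximity -- but if the "nearby" codes map onto
# segregated neighborhoods, the shortlist can still skew by race.
NEARBY_ZIPS = {"15213", "15217"}  # assumed "short commute" codes

candidates = [
    {"name": "A", "zip": "15213"},
    {"name": "B", "zip": "15206"},  # qualified, but outside the radius
    {"name": "C", "zip": "15217"},
]

# The protected characteristic never appears in the rule, yet the
# outcome depends on where people live.
shortlist = [c for c in candidates if c["zip"] in NEARBY_ZIPS]
print([c["name"] for c in shortlist])
```

Nothing in the code mentions a protected group, which is exactly why this kind of proxy effect is easy to miss without testing the outcomes.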
AI also can result in bias when a company tries to hire workers based on the profiles of successful employees at the company. If the majority or all the high-performing employees who are used as examples are men, any AI algorithm evaluating what makes a person successful might unlawfully exclude women, Aibel cautioned.
If the data is tainted by historical biases, the algorithms may reflect these biases and disadvantage some candidates. AI may favor men in technology jobs due to past underrepresentation of women in this field, Pierce said.
What if all the top-rated employees at the company belonged to a particular fraternity in college, and the AI program identified that from resumes or by searching the Internet? "Suddenly, the program might only suggest candidates who were also members of that fraternity, thus creating a discriminatory impact," Aibel said.
"The scope of people impacted expands greatly when a computer can make these decisions in fractions of a second," he added.
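The fraternity scenario above can be sketched in a few lines. Assume a naive model that weights resume keywords by how often they appear among past high performers; if those examples skew male, male-coded tokens such as "fraternity" earn high weights even though sex is never a feature. All resumes and scores below are invented for illustration:

```python
from collections import Counter

# Hypothetical training set: past top performers, who happen to
# share a fraternity affiliation.
past_top_performers = [
    "captain chess club fraternity treasurer",
    "fraternity president sales lead",
    "debate team fraternity member",
]

# "Train": weight each token by its frequency among top performers.
weights = Counter()
for resume in past_top_performers:
    weights.update(resume.split())

def score(resume: str) -> int:
    # Sum the learned weights for each token in a new resume.
    return sum(weights[t] for t in resume.split())

# Identical experience, different affiliation: the sorority resume
# gets no credit for the equivalent activity.
print(score("sorority president sales lead"))    # 3
print(score("fraternity president sales lead"))  # 6
```

A real screening model is far more complex, but the failure mode is the same: patterns in a biased history become weights that reproduce the bias at scale.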
Validate AI Tools
The Equal Employment Opportunity Commission expects companies that use AI to take reasonable measures to test the algorithm's functionality in real-world scenarios to ensure the results are not biased, Newman said.
"It's helpful to have a cross-disciplinary and diverse team—including stakeholders from operations, legal, human resources, chief knowledge or innovation officers and IT—be part of the internal decision-making process to evaluate the tool," Betts said. This group can address concerns and whether the technology is a good cultural fit.
The teams will get the tool "to the stage where AI can remove human bias, and they'll know where humans need to spot-check for unintended robot bias," said Chris Dyer, founder and CEO of PeopleG2, a background-check company based in Brea, Calif.
Some employers conduct pilot programs with AI tools before widespread implementation. "This seems to work well," Betts said.
Aibel said that employers should test their AI algorithms often. "This does not need to be done against an existing system, which itself could be biased, but instead should be statistically validated by a data scientist—ideally one operating at the direction of an attorney so that the work remains privileged."
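The article does not specify which statistical test to use, but one common rule of thumb a data scientist might apply is the EEOC's "four-fifths rule," which flags potential adverse impact when one group's selection rate falls below 80 percent of the highest group's rate. A minimal sketch, with invented applicant counts:

```python
def selection_rate(selected: int, applied: int) -> float:
    """Fraction of applicants from a group who were selected."""
    return selected / applied

def adverse_impact_ratio(rate_a: float, rate_b: float) -> float:
    """Ratio of the lower selection rate to the higher one."""
    lo, hi = sorted((rate_a, rate_b))
    return lo / hi

# Illustrative numbers only: 50 of 100 men selected vs. 30 of 100 women.
men_rate = selection_rate(50, 100)     # 0.50
women_rate = selection_rate(30, 100)   # 0.30

ratio = adverse_impact_ratio(men_rate, women_rate)  # 0.60
flagged = ratio < 0.8  # below the four-fifths threshold

print(f"impact ratio {ratio:.2f}, flagged: {flagged}")
```

A flagged ratio is a screening signal, not a legal conclusion, which is one reason the validation work is best done under attorney direction as Aibel suggests.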
With AI playing a larger role at organizations, Newman recommended that companies consider appointing a chief AI officer. "A company may use AI in several different aspects of its operations, and it may not be reasonable for a single person to understand it all," he said. But "there ought to be a single executive to whom periodic reporting is made and who has overall oversight for AI's deployment within the company."