Hiring Tech Has Potential, but Beware Automation Bias

By Lin Grensing-Pophal | May 28, 2019

Are we getting to the point where technology can nearly replace humans in the hiring process? It can screen applicants' resumes and conduct prescreening outreach—and much more. But beware the misconceptions and risks involved in having tech take over for HR professionals, particularly in decision-making processes.

As was reported last year, even a technology juggernaut like Amazon can struggle to get algorithms right. While building a system to analyze resumes, Amazon discovered that the algorithm had reportedly become biased against female applicants, and it scrapped the project.

All That Glitters Is Not AI

Let's start by defining the terms: Is hiring technology properly called automation or artificial intelligence (AI)?

Much of what we refer to as AI really isn't, said Abakar Saidov, CEO of talent software company Beamery, based in London. Automation and AI are not the same, he said. "Most companies that call themselves AI actually have no self-learning built in. It's completely a manual triage mechanism."

A chatbot that conducts initial candidate screenings? An algorithm that sifts through mounds of data to identify the traits of high-performing employees? Those are automation, not AI.
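
To make that distinction concrete, here is a minimal Python sketch using made-up rules and data, not any vendor's product. It contrasts a fixed, manually written triage rule, whose behavior never changes, with a crude scorer that updates keyword weights from recruiter feedback, the self-learning element Saidov says most tools lack.

# Hypothetical illustration: rule-based triage (automation) vs. a
# scorer with a self-learning loop. All rules and data are invented.

def rule_based_triage(resume_text):
    """Automation: a fixed, manually written rule. Its output never
    changes, no matter how many resumes it processes."""
    required_keywords = {"python", "sql"}  # assumed screening rule
    words = set(resume_text.lower().split())
    return "advance" if required_keywords <= words else "reject"

class SelfLearningScorer:
    """A crude self-learning element: keyword weights shift with
    recruiter feedback, so the tool's judgments drift over time."""

    def __init__(self):
        self.weights = {}  # keyword -> learned weight

    def score(self, resume_text):
        return sum(self.weights.get(w, 0.0) for w in resume_text.lower().split())

    def learn(self, resume_text, recruiter_liked, rate=0.1):
        delta = rate if recruiter_liked else -rate
        for w in set(resume_text.lower().split()):
            self.weights[w] = self.weights.get(w, 0.0) + delta

print(rule_based_triage("Experienced in Python and SQL"))  # advance
scorer = SelfLearningScorer()
scorer.learn("led data team python sql", recruiter_liked=True)
print(scorer.score("python developer"))  # positive now, learned from feedback

The rule is a manual triage mechanism in Saidov's sense; only the second component changes its own behavior, and only that kind of feedback loop plausibly earns the AI label.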

But, whether truly AI or simply automation technology, the use of nonhuman functionality in the talent acquisition process is on the rise as companies seek to be more efficient and effective in making hiring decisions.

Technology is being used in talent acquisition in a number of ways: helping to create job postings, wading through applications, identifying the top traits of successful employees so these traits can be sought in new applicants, and conducting initial candidate "interviews."

Technology like chatbots can be used to "help make a candidate's communication with a company feel more personalized and high-touch while enabling the company to engage at scale," said Mike Bailen, vice president of people at Lever, an applicant tracking software company in San Francisco.

Avoiding Automation Risks

Heather M. Muzumdar, counsel in Thompson Hine's Cincinnati office, regularly advises companies on technology and hiring. The risks, she said, lie in how humans instruct the technology about what it should do and in the information on which it bases those actions.

Just because an algorithm is involved in processing information doesn't mean the results won't be biased. For instance, Muzumdar said, if you're using past hiring practices to shape future ones, and in the past you've hired predominantly white men, that base sample could be fraught with bias. "The bias could be built into the model," she said, however inadvertently, which is just what happened at Amazon.

Bailen agreed. "When [technology] focuses on identifying talent and matching it to roles, it tends to propagate any bias that exists in the dataset it has been trained on," he said. This means that there is a risk of "reinforcing the unconscious bias that has existed in recruitment." 
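
The failure mode Muzumdar and Bailen describe is easy to reproduce. Below is a minimal, self-contained Python sketch with synthetic, deliberately skewed records (no real data): a naive scorer trained on past hiring outcomes learns a negative weight for a token correlated with the underrepresented group, much like the reported Amazon result.

# Hypothetical, made-up training records illustrating how historical
# bias leaks into a model: past hires skew one way, so a correlated
# token (here "womens") picks up negative weight.

from collections import defaultdict

history = [
    # (resume tokens, hired?) -- synthetic, intentionally skewed
    (["chess", "club", "captain", "java"], True),
    (["rugby", "team", "java", "sql"], True),
    (["java", "sql", "hackathon"], True),
    (["womens", "chess", "club", "java"], False),
    (["womens", "coding", "society", "sql"], False),
]

weights = defaultdict(float)
for tokens, hired in history:
    for token in set(tokens):
        weights[token] += 1.0 if hired else -1.0

def score(tokens):
    return sum(weights[token] for token in tokens)

# Two otherwise identical candidates; one resume mentions "womens".
print(score(["java", "sql"]))            # 3.0
print(score(["womens", "java", "sql"]))  # 1.0, penalized by a proxy token

Note that the scorer never sees gender directly; the penalty rides in on a correlated token, which is why simply deleting a gender field from the data does not remove the bias.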

These issues may be most prevalent in homegrown systems, Muzumdar said. But even tools purchased from third-party vendors can carry risks.

"Even though the contract, hopefully, has some indemnification language in it, at the end of the day you're still the employer. You're still the one that's going to get sued. You're the one that's going to be in the news," she said. "What are they doing to control for biases in their tools? What was their process?" Based on her own personal conversations with vendors, she said "some of them have thought through these issues more than others."

For instance, Muzumdar cautioned, language matters. "I heard one sociologist talking with a client about do we want to measure the candidate's ability to handle anxiety? No. No, we do not want to use the word 'anxiety,' because that can be a disability," she pointed out, which could trigger Americans with Disabilities Act liability. Other risky language choices may involve words that act as a proxy for protected classes, suggesting the inclusion or exclusion of candidates based on gender or age, for instance. Vendors with backgrounds in data science and programming may not be well-attuned to the risky nuances of language.
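
One modest safeguard is a plain-language audit of screening criteria before they are handed to a vendor's tool. The Python sketch below is illustrative only: the wordlist is hypothetical and nowhere near exhaustive, and anything it flags still needs human review.

# Hypothetical audit sketch: scan screening criteria for terms that may
# implicate protected classes. The wordlist is invented for illustration.

import re

RISKY_TERMS = {
    "anxiety": "may implicate a disability (ADA)",
    "young": "may act as a proxy for age",
    "energetic": "sometimes read as an age proxy",
    "digital native": "commonly cited as an age proxy",
}

def audit_criteria(text):
    """Return (term, reason) pairs for each risky term found in the text."""
    findings = []
    for term, reason in RISKY_TERMS.items():
        if re.search(r"\b" + re.escape(term) + r"\b", text, re.IGNORECASE):
            findings.append((term, reason))
    return findings

criteria = "Seeking an energetic digital native who handles anxiety well."
for term, reason in audit_criteria(criteria):
    print(f"flag: {term!r} -> {reason}")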

Human Intervention and Critical Thinking Are Must-Haves

When deciding how to use technology appropriately in the talent acquisition process, avoid "automation bias"—the tendency to believe that technology is better than humans in performing various functions. Is it OK to automate rote tasks such as scheduling interviews with candidates? Sure. Conducting initial interviews with candidates? Maybe. Making hiring decisions? Probably not.
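
One way to put that rule of thumb into practice is an explicit routing policy that defaults to humans. The Python sketch below uses assumed task names and categories, not any real product's API.

# Hypothetical policy sketch: automate transactions, keep decisions human.
# Task names and categories are assumptions for illustration.

AUTOMATION_POLICY = {
    "schedule_interview": "automate",          # rote transaction
    "initial_screening_chat": "human_review",  # maybe: a human checks output
    "hiring_decision": "human_only",           # judgment stays with people
}

def route(task, payload):
    policy = AUTOMATION_POLICY.get(task, "human_only")  # unknown -> humans
    if policy == "automate":
        return f"bot handles {task}: {payload}"
    if policy == "human_review":
        return f"bot drafts {task}, recruiter approves: {payload}"
    return f"recruiter handles {task}: {payload}"

print(route("schedule_interview", "candidate A, Tuesday 10 a.m."))
print(route("hiring_decision", "candidate A"))

The useful property is the default: any task the policy does not explicitly cover falls back to a person, which is the opposite of automation bias.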

The bottom line: The tools we use are only as good as the thought process behind what we're asking technology to do and the data we have. "Right now, I don't believe the technology or datasets are ready for us to try to judge people and flatten them into a score," cautioned Bailen. For now, transactions, not decisions, may be the best place for HR to apply technology.

Lin Grensing-Pophal is a freelance writer in Chippewa Falls, Wis.
