You apply for a job you spotted on LinkedIn, entering your information into an automated system you’ve become all too familiar with. Soon after, you receive a follow-up notification asking you to answer some (voluntary) demographic questions. You comply. Then you’re asked to complete a few skills assessments. Finally, you’re given a series of questions and asked to record your responses on video and submit the recording to the company. Within a few weeks, you receive a request to schedule an interview through a link that lets you select the dates and times most convenient for you.
You might be surprised to find that, up to this point, there have been no human HR staff members involved in the selection process.
Artificial intelligence has become pervasive in HR and other fields, and for good reason: it has the potential to significantly streamline operations and to inform decisions by analyzing massive amounts of data that humans simply cannot process on their own.
William Howard is director of HR research and advisory services with McLean & Company, an HR research firm based in London, Ontario. Howard said, “We’re seeing organizations mainly leveraging AI in the requisition and sourcing stages of recruiting as well as in candidate communications.”
During the creation of a job requisition, he said, AI can be used to automate and streamline the process through robotic process automation. In sourcing, AI can be used to quickly build a large pipeline of talent using candidate profiles on professional networking and recruiting websites. And, throughout the whole process, AI can be used to generate candidate communications to save recruiters and hiring managers time drafting and sending emails.
The use of AI in HR, and specifically in hiring, is growing, but there are some areas where Howard said his firm isn’t seeing much adoption. For smaller organizations, he said, the return on investment just isn’t there.
“Other organizations have a risk mitigation mindset or are in a heavily regulated industry or region that is preventing them from implementing AI, which sometimes drives them to a wait-and-see mindset while legislation is still being developed and launched,” he added.
Not every organization can, or should, be an early adopter of AI in hiring, Howard said. Those that are, though, can take steps to ensure that open communication and transparency are part of the process.
Importance of Transparency
AI firms recognize the need for transparency when this technology is used in the hiring process. John Pennypacker is vice president of sales and marketing at Deep Cognition, a Dallas-based company that provides businesses with next-generation AI platforms and solutions.
“In my experience as a VP at an AI company, we’ve made significant strides in integrating AI into our hiring processes, but remaining transparent with candidates has been crucial to our success. We prioritize informing applicants about our AI usage at the earliest stages, often as part of the job posting, and then again at the beginning of the application process,” Pennypacker said. This communication includes a simple explanation of how AI assists in the process, “focusing on enhancing fairness by narrowing down candidates based on skills and qualifications rather than subjective criteria.”
Pennypacker said he thinks a key practice is “to assure candidates that a human element remains integral to our decision-making processes.” AI is a tool for efficiency, not a replacement for human judgment, he said.
In these communications, Pennypacker said, it’s important “to maintain clarity and simplicity in explaining AI’s role, avoiding jargon that could confuse applicants.” It’s also important to address potential concerns such as biases in AI algorithms, he said.
“We’ve had some pushback from candidates curious about the fairness of AI assessments, but open dialogue about continuous algorithm training and bias mitigation strategies has helped alleviate these concerns,” Pennypacker said.
AI-powered resume screening, for instance, can save valuable time by quickly identifying top candidates, allowing recruiters to focus on building relationships with those individuals. AI can also help reduce bias by removing identifying information such as names and genders from resumes before they reach hiring managers.
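To make that screening step concrete, here is a minimal sketch of the kind of “blind review” redaction described above. It is illustrative only: the function name and the hard-coded term list are assumptions, and commercial tools typically rely on trained entity-recognition models rather than simple pattern matching.

```python
import re

# Illustrative list only; real systems use trained models, not hard-coded terms.
GENDERED_TERMS = ["he", "she", "his", "her", "mr", "mrs", "ms", "women's", "men's"]

def redact_identifying_info(resume_text: str, candidate_name: str) -> str:
    """Mask the candidate's name and common gendered terms before human review."""
    redacted = resume_text
    # Mask each part of the candidate's name wherever it appears.
    for part in candidate_name.split():
        redacted = re.sub(re.escape(part), "[REDACTED]", redacted, flags=re.IGNORECASE)
    # Mask gendered words and phrases, matching whole words only.
    for term in GENDERED_TERMS:
        redacted = re.sub(rf"\b{re.escape(term)}\b", "[REDACTED]", redacted, flags=re.IGNORECASE)
    return redacted

print(redact_identifying_info(
    "Jane Doe. Captain of the women's chess club. She led her team to nationals.",
    "Jane Doe",
))
# -> "[REDACTED] [REDACTED]. Captain of the [REDACTED] chess club. [REDACTED] led [REDACTED] team to nationals."
```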
Yet, despite the many benefits that AI can bring to the hiring process, there’s a balance to be struck.
Finding the Right Balance
AI-powered tools are business applications much like other tools that have become prevalent across many organizations. Do you inform candidates about the applicant tracking system you use? Or the chatbots they interact with when scheduling an interview? Likely not. Those types of interactions are generally assumed to involve some type of automation.
“Full transparency isn’t always the right answer in all cases,” Howard said. “The level of transparency depends on organizational readiness and culture. When it comes to the use of AI in the hiring process, transparency can communicate the organization’s commitment to fairness and objectivity while giving candidates an understanding of how their data will be used.”
However, Howard added, “transparency around the use of AI in hiring is increasingly being mandated by legislation, so it may be a required compliance activity.” In the meantime, there are factors HR leaders need to consider when using AI and determining what level of transparency is most appropriate.
Annie Moore is vice president of talent and operations with Inclusively, a St. Louis-based company that helps organizations streamline the accommodations process and connects employers with employees. Moore noted that certain aspects of the hiring process may be more appropriate for the use of AI than others.
“My advice to businesses—both large and small—is to use AI for things like summarizing resumes or experience, organizing interactions with candidates, streamlining any follow-up conversations, and measuring feedback analytics to continuously improve efforts,” she said. “It should never be used to take over decision-making or replace crucial human interaction necessary during the hiring and onboarding process.”
This balanced approach, she said, “optimizes the strengths of both AI and human capabilities, leading to more effective, efficient and human-centric practices.”
That’s a key point, especially because one of the main criticisms leveled against the use of AI in hiring is its potential for bias, and the concern is valid: there have been well-publicized incidents in which these tools did exhibit bias.
Reducing the Risk of Bias
Amazon generally comes to mind when the risks of AI bias are raised. It developed an AI recruiting tool that it found to be biased against women, penalizing resumes that included the word “women’s”—like “women’s chess club captain”—and resumes received from candidates who attended women’s colleges. The reason for the bias was that the AI had been trained on resumes the company had received over a 10-year period, predominantly from men. Amazon ultimately scrapped the tool.
But as this example illustrates, it isn’t the AI itself that’s biased; the bias comes from training material that is skewed or unrepresentative.
Despite concerns about the potential for bias that AI’s use in hiring might create, new research from SHL indicates that companies can decrease this risk by:
- Using diverse and representative data across different demographics to train the AI and mitigate bias (a simple representation check along these lines is sketched after this list).
- Adopting explainable AI methodologies and techniques by providing interpretable and transparent AI models.
- Ensuring a candidate’s assessment experience is as transparent as possible.
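The SHL research describes practices rather than code, but the first recommendation implies ongoing measurement of who is, and isn’t, represented in the data used to train a hiring model. The sketch below is one hypothetical way to run such a check; the field names, the 10 percent threshold, and the sample data are assumptions for illustration, not a standard.

```python
from collections import Counter

def representation_report(training_records, min_share=0.10):
    """Summarize demographic representation in a training set and flag
    groups whose share falls below a minimum threshold.

    `training_records` is a list of dicts with a "group" key -- a stand-in for
    however an organization actually labels its training data; the 10% default
    threshold is an arbitrary illustrative choice, not a regulatory standard.
    """
    counts = Counter(record["group"] for record in training_records)
    total = sum(counts.values())
    return {
        group: {"share": count / total, "underrepresented": count / total < min_share}
        for group, count in counts.items()
    }

# Illustrative data echoing the Amazon example above: a training set dominated by one group.
sample = [{"group": "men"}] * 90 + [{"group": "women"}] * 10 + [{"group": "nonbinary"}] * 2
for group, stats in representation_report(sample).items():
    flag = "flagged" if stats["underrepresented"] else "ok"
    print(group, f"{stats['share']:.1%}", flag)
```

A check like this speaks only to the first recommendation; the explainability and candidate-experience items remain questions of model and process design rather than a single calculation.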
“While AI can significantly augment and improve the efficiency of hiring processes, it’s essential to maintain a balance where AI handles the ‘automatable’ aspects of recruitment and humans focus on the interpersonal and judgment-based elements,” Moore said.
And if you’re using AI in the more human, interactive parts of the process, an important best practice, even in the absence of legislation requiring it, is full transparency: let candidates know that they’re being assessed by AI and what that means.
Lin Grensing-Pophal is a freelance writer in Chippewa Falls, Wis.