Although job applicants may be eager to use artificial intelligence to enhance their resumes and cover letters, they're not always sold on employers incorporating AI into the hiring process.
A recent study of 246 U.S. working-age adults, published in August in the journal Media Psychology, found general skepticism toward any claim by an employer that AI can be unbiased when sifting through job applications.
For the study, participants completed online job applications that resulted in either acceptance or rejection for a desired role. Following the application exercise, the researchers — former University of Kansas (KU) graduate student Rebecca Baumler and Cameron Piercy, associate professor of relationships and digital media at KU — surveyed the participants.
When the researchers claimed that a hiring manager, an algorithm, or a hiring manager aided by an algorithm was not biased, study participants generally welcomed a decision made by a hiring manager or an algorithm-equipped hiring manager. But when the researchers informed participants that an algorithm alone had made a hiring decision, they cast broad doubt on the algorithm’s ability to be unbiased.
Piercy, the founding director of KU’s Human-Machine Communication Lab, said study participants generally weren’t “crazy about the idea” that algorithms could make subjective judgments about them.
In results from a separate survey, KU students rated AI higher on fairness and objectivity than the working-age adults did. “They believed it was unbiased,” Piercy said.
Piercy explained that college students may be naive about AI’s effect on the hiring process.
“They probably haven’t gone through the job process before, so they believe you when you say it’s unbiased,” Piercy said. But, he added, working adults who may have been rejected for a job or may have undertaken a job search disagree “when you say that the algorithm can make an unbiased decision.”
How Transparent Should Employers Be About the Use of AI?
From an ethical standpoint, hiring managers should be transparent about their use of AI to review job applications and related materials, Piercy said. However, hiring managers should be wary of claiming AI is unbiased. Otherwise, he said, job applicants may think the hiring process is unfair and fails to account for an applicant’s unique attributes.
Many job recruiters are employing AI to automate tasks, such as rejection emails triggered by basic criteria such as location or experience, according to Mike Bradshaw, vice president of talent at HR software provider Pinpoint. But the type of AI that makes independent, nuanced hiring decisions remains the exception, not the norm.
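The basic-criteria automation Bradshaw describes can be thought of as a simple rule check rather than a judgment call. The sketch below is purely illustrative: the field names, criteria, and thresholds are hypothetical, not any vendor's actual logic.

```python
# Hypothetical rule-based screen: auto-reject on basic criteria
# such as location or minimum experience. Illustrative only --
# field names and thresholds are invented for this example.

def should_auto_reject(applicant, allowed_locations, min_years_experience):
    """Return True if the applicant fails a basic eligibility rule."""
    if applicant["location"] not in allowed_locations:
        return True
    if applicant["years_experience"] < min_years_experience:
        return True
    return False

applicants = [
    {"name": "A", "location": "US", "years_experience": 5},
    {"name": "B", "location": "Other", "years_experience": 2},
]
rejected = [
    a["name"]
    for a in applicants
    if should_auto_reject(a, allowed_locations={"US"}, min_years_experience=3)
]
print(rejected)  # ['B']
```

The point of the distinction in the article is that a rule like this is transparent and explainable by construction, whereas a model making "independent, nuanced" decisions is not.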
A 2024 Resume Builder survey of business leaders explored this subject: nearly 70% of respondents indicated that their organizations planned to use AI in their hiring process in 2025. Although almost all respondents said they believe AI can produce biased recommendations, 23% said their organization lets AI conduct job interviews, and 71% said their organization lets AI reject candidates without human oversight.
Most employers still aren’t as transparent as they should be about the role of AI and automation in their hiring process, Bradshaw said. At the very least, he said, employers should be upfront on their careers websites and job applications about where and how these tools are used.
Both the use of AI and transparency about AI are on a spectrum in HR circles. Some employers — especially those operating in highly regulated places like Colorado and New York City — are proactive about transparency due to regulatory requirements, according to Alan Price, global head of talent acquisition at HR services company Deel. But other employers remain in limbo.
“Lots of companies using AI don’t even have clarity among their own teams about where or how it’s being deployed,” Price said.
AI disclosure requirements for employers are likely to take effect in a growing number of cities and states in the near future, according to Price.
As employers navigate tighter rules about AI, they may be able to find a middle ground between AI’s low-level automation and high-level decision-making capabilities, Bradshaw said. For instance, some AI tools can scan resumes to identify transferable skills or rank applicants based on job profiles.
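One way to picture the "middle ground" Bradshaw mentions is a ranker that scores applicants by overlap with a job profile's skills. The sketch below assumes a toy skill-matching scheme (the skill lists and scoring are invented for illustration); real tools use far richer models, which is exactly why their recommendations can be harder to explain.

```python
# Hypothetical skill-overlap ranker: scores each applicant by the
# fraction of the job profile's skills found in their resume.
# Illustrative only -- skills and scoring are invented.

def rank_applicants(job_skills, applicants):
    """Sort applicants by share of required skills listed, best first."""
    def score(applicant):
        matched = job_skills & set(applicant["skills"])
        return len(matched) / len(job_skills)
    return sorted(applicants, key=score, reverse=True)

job_skills = {"sql", "python", "communication"}
applicants = [
    {"name": "A", "skills": ["python", "communication"]},
    {"name": "B", "skills": ["sql", "python", "communication"]},
    {"name": "C", "skills": ["design"]},
]
ranking = [a["name"] for a in rank_applicants(job_skills, applicants)]
print(ranking)  # ['B', 'A', 'C']
```

A simple overlap score like this can be explained to a rejected candidate in one sentence; the oversight concerns Bradshaw raises kick in once the scoring is no longer this legible.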
“These tools can add real value, but they also introduce risk if employers can’t explain how a model reached its recommendation or why someone was screened out,” Bradshaw said. “That’s where transparency and human oversight are essential, because hiring decisions aren’t just data points. They’re judgments about people’s potential, and that responsibility should always sit with a human.”
Bringing AI Honesty to the Application Journey
Zapier, which employs more than 800 people in 30 countries, offers an online tool to automate tasks. Tracy St.Dic, global head of talent, said the company prioritizes transparency in its hiring process, including the use of AI.
“We understand that candidates expect honesty about how AI is used in their application journey, and that means it’s not just about acknowledging AI’s presence, it’s about showcasing why we use it,” said St.Dic, adding that Zapier doesn’t entrust hiring decisions to AI.
Zapier openly shares how AI helps recruiters, such as detecting fraudulent job applications, she said. The company even sends a blog post about AI’s role in the Zapier hiring process to every job applicant.
As Zapier explores more ways that AI can enhance the hiring process, the company regularly audits and tweaks its AI systems to ensure fairness and compliance, St.Dic said. She noted that assuming a manual system is less biased than an AI-powered system overlooks AI’s value.
“AI, when used thoughtfully, can act as a tool to reduce bias rather than exacerbate it,” St.Dic said. “AI can scrutinize vast amounts of data quickly and consistently, aligning hiring materials to the same success criteria to actually minimize bias.”
Auditing AI Hiring Systems for Bias
Anat Keidar, chief people officer at DoorLoop, a provider of management software for rental properties, said her company strives to use AI thoughtfully. For instance, DoorLoop discloses to applicants when and how it uses AI and communicates how it handles applicants’ AI-generated data. Furthermore, DoorLoop performs audits of its AI-enhanced hiring systems to detect biases based on factors such as gender and age.
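The article doesn't say how DoorLoop's audits work, but one common, simple check in hiring-bias audits is the "four-fifths rule": compare each group's selection rate to the highest group's rate and flag ratios below 0.8 as possible adverse impact. The sketch below illustrates that generic check with made-up numbers; it is not DoorLoop's method.

```python
# Generic bias-audit check using the "four-fifths rule": a group whose
# selection rate is less than 80% of the best-selected group's rate is
# flagged for review. Groups and counts here are invented examples.

def selection_rates(outcomes):
    """outcomes: {group: (selected, total)} -> {group: selection rate}."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def adverse_impact_flags(outcomes, threshold=0.8):
    """Flag groups whose rate ratio to the best group falls below threshold."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: (r / best) < threshold for g, r in rates.items()}

outcomes = {"group_x": (40, 100), "group_y": (24, 100)}
flags = adverse_impact_flags(outcomes)
print(flags)  # {'group_x': False, 'group_y': True}
```

Here group_y's rate (0.24) is only 60% of group_x's (0.40), so it is flagged. Audits like the ones Keidar describes would run such checks across attributes like gender and age and investigate any flagged disparity.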
The company, which employs more than 200 people, also monitors legal and regulatory risks related to the use of AI in the hiring process, Keidar said. In part, DoorLoop does so to bolster trust among job applicants.
“We are already on the path of responsible AI use, but we must continue to raise the bar, ensuring processes reflect our values, our people feel respected, and our candidates feel seen,” she said.