Employers that use artificial intelligence (AI) tools during the hiring process should tell job applicants that these tools are being used to assess candidates and make hiring decisions. Additionally, employers should obtain consent from job applicants before using AI tools to evaluate their skills, experience, talent and qualifications.
These are two of several recommendations recently published by the Institute for Workplace Equality, a nonprofit employer association that helps companies understand their affirmative action and equal opportunity obligations.
The institute's Artificial Intelligence Technical Advisory Committee (AI TAC) prepared the report, which concludes that "the most prevalent use of AI in the employment context today is in sourcing candidates and making hiring decisions." However, the report adds, "Employers are increasingly using or seeking to use AI-enabled tools across the full employment lifecycle."
The more AI is woven into the hiring process, the more complex that process becomes, said Victoria Lipnic, a partner at consulting firm Resolution Economics and a former commissioner at the U.S. Equal Employment Opportunity Commission.
"AI has introduced more complexity in hiring," Lipnic said. "[These tools] take advantage of a lot of data, and they have to be understood for what they can do and how they operate. HR departments have to be really on top of these technologies and responsible about them."
The AI TAC report not only urges employers to be transparent about deploying AI tools to assess job candidates; it also says vendors need to tell employers about the AI tools being used for this purpose.
"AI tools have to be explained in a way that people can understand it," Lipnic said. "Some of this stuff is enormously complex, but that does not mean complexity should defeat what has to be the legal compliance."
In addition, the report recommends that employers become familiar with the recent EEOC technical guidance documents addressing the interaction of AI and the Americans with Disabilities Act, which can help employers avoid disadvantaging candidates with disabilities when AI is used in the hiring process.
The report also emphasizes that employers using AI tools in recruiting and hiring need to know the source and quality of the data they are using in the selection process. Additionally, because AI use generates high volumes of personal data about applicants and employees that needs to be shared and stored, employers should be familiar with data privacy rules.
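Neither the report nor current federal law prescribes a single audit method, but one long-standing benchmark in employment selection is the EEOC's four-fifths (80 percent) rule: if any group's selection rate falls below four-fifths of the highest group's rate, the selection procedure may indicate adverse impact and warrant review. As a minimal sketch, assuming hypothetical group names and applicant counts, such a check might look like this in Python:

```python
# A minimal sketch of a four-fifths (80 percent) rule check on the
# outcomes of an AI screening tool. Group names and counts are
# hypothetical, for illustration only.

# applicants the tool advanced vs. total applicants, per group
outcomes = {
    "Group A": (48, 100),
    "Group B": (30, 100),
    "Group C": (44, 110),
}

# selection rate = share of each group's applicants the tool advanced
rates = {g: passed / total for g, (passed, total) in outcomes.items()}
highest = max(rates.values())

for g, rate in rates.items():
    ratio = rate / highest  # impact ratio against the highest-rate group
    flag = "review for adverse impact" if ratio < 0.8 else "ok"
    print(f"{g}: selection rate {rate:.1%}, impact ratio {ratio:.2f} -> {flag}")
```

In this example, Group B's impact ratio falls below 0.8 against Group A, so the tool's outcomes for that group would merit a closer look.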
According to SHRM research, 85 percent of employers that use automation or AI say it saves them time and/or increases their efficiency. SHRM researchers also found that 64 percent of HR professionals say their organization's automation or AI tools automatically filter out unqualified applicants, and 68 percent say the volume of applications they must manually review has gotten somewhat better (44 percent) or much better (24 percent) as a result of using automation or AI.
Zachary Chertok, research manager for employee experience at IDC, a market intelligence firm in Needham, Mass., said that while he generally agrees with the institute, the report is hamstrung by the fact that not enough laws and regulations are in place to protect against the risks AI could generate.
In particular, the report noted there is no specific federal requirement for employers to provide notice or information to job applicants or to obtain consent from job seekers prior to using AI in hiring or selection.
The absence of laws that govern AI use in the employment selection process most likely had an impact on how the report was written, Chertok said.
"The entire framework is written from the viewpoint of, 'You can use AI, but here is how to protect against your worst-case scenario,' " he said.
If foundational regulations were in place to direct AI-driven change and make it manageable, Chertok added, the AI TAC would have been able to analyze what is possible within the direction the policy world is taking and help organizations embrace the possibilities of AI.
Pointing out what can be done with AI "is a very difficult narrative if the regulations are up in the air," Chertok said.
He added that AI, and machine learning in particular, is only as good as the data it receives, and the employment data that corporations hold today is inherently biased against groups that historically haven't enjoyed high-paying jobs, promotions, and certain educational and financial opportunities.
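Chertok's concern about data quality can be made concrete. The following sketch is illustrative and not drawn from the report: it builds a synthetic dataset in which two groups are equally skilled but historical hiring decisions penalized one of them, then trains a simple model with the protected attribute excluded from the inputs. Because a correlated proxy feature (a stand-in for something like a neighborhood code) remains, the model can reproduce the historical disparity; scikit-learn is assumed to be installed.

```python
# Illustrative only: historical bias in training labels can survive even
# when the protected attribute itself is dropped, because a correlated
# proxy feature carries it. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)          # protected attribute (synthetic)
skill = rng.normal(0, 1, n)            # identical skill distribution for both groups
proxy = group + rng.normal(0, 0.3, n)  # proxy feature that tracks group membership

# historical hiring labels: skill mattered, but group 1 was penalized
hired = (skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0

# train on skill and the proxy only; group itself is excluded
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)
pred = model.predict(X)

for g in (0, 1):
    rate = pred[group == g].mean()
    print(f"group {g}: model-predicted selection rate {rate:.2%}")
```

Running the sketch shows the model advancing one group at a noticeably higher rate than the other despite identical skill distributions, which is exactly why the report urges employers to know the source and quality of their selection data.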
"If Blacks, whites, Latinx, Asian or any other group were treated the same throughout history, had they had equal opportunity to everything in this country, and if everything had been done on the merits of only skills walking in the door, then you would have a precedent for equity—we could say the only reason you got hired is because you fill the skills gap," Chertok said.
He added that there has never been full equity in the corporate American workplace, but AI does provide an opportunity to create a fairer employment system—if it is used to focus on skills, isolate biases and drive fairness.
"If we can use AI as a tool to alert us to the things for which we lack, such as awareness in our individual behavior, then AI can help us ask smarter questions about who we are, how prepared we are for unfamiliar differences, and how we can best engineer resolutions to what we have never experienced. AI has the potential to help us get there," Chertok said.
Nicole Lewis is a freelance journalist based in Miami.