Facial recognition technology has been under the microscope as organizations and lawmakers re-evaluate its use in the wake of global protests about racial injustice. Technology giants Amazon, IBM and Microsoft all recently announced that they would stop selling facial recognition technology to police departments in the United States, citing the technology's potential for violating human rights and concerns about racial profiling.
Recent research has shone a light on some inherent dangers of using the technology. One study by MIT and Stanford University found that three commercially released facial analysis technologies showed skin-type and gender biases, performing best for men and lighter-skinned people and worst for darker-skinned women.
The American Civil Liberties Union (ACLU) as well as other human rights groups and privacy advocates also have raised concerns about privacy and surveillance issues tied to use of the technology.
Evaluating Job Candidates
Some vendors in the human resources industry have long used facial analysis technology to help evaluate video interviews with job candidates. These artificial intelligence (AI) tools scan facial expressions and movements, word choice, and vocal tone to generate data that help recruiters make hiring decisions. Vendors say the tools can help reduce hiring costs and improve efficiencies by speeding the screening and recruiting of new hires.
But experts say that if these facial analysis algorithms aren't trained on large, diverse-enough datasets, they're prone to consistently identifying some applicants—such as white men—as more employable than others. For example, the MIT and Stanford study found that one major U.S. technology company claimed an accuracy rate of more than 97 percent for a facial recognition algorithm it designed. Yet the dataset it was trained on was more than 77 percent male and more than 83 percent white.
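The statistical pitfall described above can be made concrete with a small sketch. The numbers below are purely hypothetical and are not drawn from the study; they only illustrate how a headline accuracy figure can mask much weaker performance for an under-represented group, because overall accuracy is dominated by whichever group makes up most of the data.

```python
# Hypothetical illustration: overall accuracy vs. per-subgroup accuracy.
# All figures below are invented for demonstration, not taken from any study.

def subgroup_accuracy(results):
    """results: list of (group, correct) pairs; returns accuracy per group."""
    totals, correct = {}, {}
    for group, ok in results:
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (1 if ok else 0)
    return {g: correct[g] / totals[g] for g in totals}

# A test set skewed 80/20 toward one group, mirroring the kind of
# imbalance described in the article (proportions are illustrative)
results = (
    [("majority group", True)] * 79 + [("majority group", False)] * 1
    + [("minority group", True)] * 13 + [("minority group", False)] * 7
)

overall = sum(ok for _, ok in results) / len(results)
by_group = subgroup_accuracy(results)

print(f"overall accuracy: {overall:.0%}")   # prints 92%
for group, acc in by_group.items():
    print(f"{group}: {acc:.0%}")            # 99% vs. 65%
```

Even though the overall figure looks strong, the minority group's error rate is dozens of times higher, which is why experts recommend testing such systems across large, representative samples before trusting them.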
Josh Bersin, a global HR industry analyst and dean of the Josh Bersin Academy in Oakland, Calif., said some HR vendors have embedded facial analysis technology into their video-interviewing tools with the goal of identifying job candidates' demonstrated stress, misrepresentations and even mood.
"These vendors have tried very hard to validate unbiased analysis, but they are taking risks by doing so," Bersin said. "The best solution is to use these tools very carefully and make sure you perform tests across very large samples before you trust these systems."
The use of facial analysis technology to evaluate job candidates is "very problematic," said Frida Polli, founder and CEO of the New York-based assessment company Pymetrics. "The science of the technology in terms of what it really says about someone is extremely new and not well-validated, and certainly not well-validated for HR uses," she said.
Results should be viewed with a skeptical eye if the technology is used for any assessment of job candidates' character or behavior, said Elaine Orler, CEO of the Talent Function, a talent acquisition consulting firm in San Diego. "The technology solutions aren't accurate in this area, and they leave too much to chance in terms of creating false positives or negatives," she explained. "To understand micro-expressions, for example, would require a deeper understanding of that one person's behaviors and not just a crowdsourced base line of everyone's expected expressions."
Some experts say facial recognition technology isn't without value in the workplace, especially in the age of COVID-19. Orler said using the technology as a biometric tool to grant access to parts of a building or as a touchless replacement for time clocks can be a good solution to reduce the spread of the coronavirus.
"Badges and other products that hold credentials often need to touch products that have been touched by others, and fingerprint scanners also have such dangers," she said.
Legal and Privacy Concerns
The use of facial recognition technology is now governed by laws in a growing number of states. Kwabena Appenteng, an attorney specializing in workplace privacy and information security with Littler in Chicago, said most employers are now aware of the landmark Illinois Biometric Information Privacy Act (BIPA) that requires companies implementing facial recognition technology in that state to obtain consent from subjects and to provide a written policy about how collected data will be stored, protected and used. Appenteng said more states—including California and Texas—also now require employers using the technology to satisfy certain compliance obligations.
Illinois and Maryland also have placed restrictions on facial analysis technology specifically for use in evaluating job candidates. California and New York have proposed similar legislation to regulate the use of artificial intelligence in assessing job applicants, said Monica Snyder, an attorney with Fisher Phillips in Boston and New York City and a member of the firm's data security and workplace privacy practice.
Illinois enacted its Artificial Intelligence Video Interview Act earlier this year, a law that requires companies using the technology to notify applicants in advance that it will be used to analyze their facial expressions, to obtain consent for its use, to explain to applicants how the AI works and to destroy video interviews within 30 days of a candidate's request, Snyder said.
"Employers need to tread carefully on how they use this technology," she said.
Appenteng said there's also the issue of getting employee buy-in for using facial recognition technology since many may consider it a risk to their privacy. "Employers may therefore want to consider providing their employees with a notice that explains facial recognition technology in easy-to-understand terms to placate any of those employee concerns," he said.
Dave Zielinski is a freelance business writer and editor in Minneapolis.