The California Civil Rights Council, part of the California Civil Rights Department (CRD), has adopted final regulations limiting employers’ use of automated-decision systems (ADSs). The regulations await approval by the state Office of Administrative Law and could take effect as early as July 1. California companies that use artificial intelligence software to make employment decisions should take note, because the regulations cover a significant number of commonly used HR technologies.
Discriminatory AI Systems
The current AI boom has raised questions among regulators and observers alike regarding the use of a computer, rather than a human, to make employment decisions.
In fact, “the CRD has been considering automated-decision system regulations for years amid concerns over employers’ increasing use of AI” to make decisions related to hiring, recruitment, and promotions, said Danielle Ochs, an attorney at Ogletree Deakins in San Francisco. “The concern often centers around the extent to which an ADS replaces human decision-making and the potential risk that it uses factors and/or delivers output that is discriminatory in some way.”
“These technologies are not new, and they are used by different entities for different purposes,” explained Richard Paul, an attorney at Quarles & Brady in San Diego. Whatever the intended use, there is no question that companies value the increased efficiency that automated systems offer. A major U.S. company reported in 2015 that it was “getting about 600,000 resumes per year, and they couldn’t just hire people to read and screen them, so they used automatic software that looked for certain things,” Paul said.
But the recent proliferation of AI technology, including systems that analyze speech and facial characteristics, has set off alarm bells. The question was, “Are the existing [regulatory] structures adequate to capture what’s going on in AI? Or is there really a need for something that is new?” said Paul. “And I think California went through that sort of dilemma.”
The AI systems at the core of the issue use machine learning and algorithms to assess skill, personality, and aptitude, among other characteristics. These tools can also analyze a person’s voice, facial expressions, and language during online interviews, aiding in the hiring process. The California regulations, however, restrict the use of such programs when they produce discriminatory results. “This includes, for example, if a hiring manager decides to hire or not hire a particular applicant based on an analysis of skills testing, voice and speech patterns, and/or facial gestures that was conducted through the use of AI,” said Melanie Ronen, an attorney at Stradley Ronon in Long Beach, Calif.
Ochs pointed out that analyzing these kinds of physical characteristics could discriminate against individuals based on national origin, race, gender, disability, or some other protected characteristic. For instance, a candidate may not smile during an interview, and the AI system could interpret this as unfriendliness. However, people in some cultures do not smile as often, regardless of their actual attitude. Similarly, ranking or rejecting job applicants based on scheduling restrictions may discriminate against people with a disability, religious obligation, or medical condition.
“In short,” Ronen said, “the regulations make it a violation of California law to use an automated decision system or selection criteria that discriminates against applicants or employees on a basis protected by the Fair Employment and Housing Act (FEHA).”
Legislation on AI and Discrimination
A ban on discriminatory hiring tools has long been on the books in California. “The final regulations confirm that those anti-discrimination laws apply to potential discrimination that is carried out by AI or automated decision-making systems,” Ochs said.
But there is talk in the state Legislature of going further. Assembly Bill 1018, which SHRM has opposed, “would amend the Business and Professions Code to add to the cacophony of AI bills attempting to regulate the development and deployment of an automated decision system used to make ‘consequential decisions,’ as opposed to employment decisions only,” Ochs explained.
Paul agreed that California lawmakers will likely come up with something to address these issues. “Had the regulations included strict liability for vendors, that would have left open the door for vendors to argue that the regulations exceeded regulatory authority in enacting that liability,” said Paul. In his view, that is reserved for state law to determine.
However, the Legislature may pause any efforts in this regard to assess the efficacy of the new regulations and see whether they are reasonable or even too narrow. Lawmakers may wish to observe over a few years to “see how they shake out,” said Paul.
Impact on Employers and Vendors
Employers would certainly do well to comply with the new regulations, but in the event a company’s AI system is determined to be discriminatory, the issue of liability becomes murky. Should the system “result in a disparate impact against a particular group, [this] may impose liability on the employer,” Paul said.
But employers may be reluctant to accept full liability when they have purchased the automated system from a third party. These third parties view their algorithms as proprietary and have not been open to explaining their systems’ processes. This has placed the compliance risk entirely on employers’ shoulders. Employers may well think: “We’re buying a tool, and it’s not a lawful tool. Why should we have to bear the burden?” Paul said.
This raises, according to Paul, “the very practical question of how you negotiate with vendors and whether you can get them to validate tests” as nondiscriminatory, in line with the new regulations. Paul predicted there may be two kinds of vendors going forward: those with validated systems and those without. Compliance-conscious employers would likely choose a vendor with a validated system. This might create a kind of market pressure that would encourage vendors to verify their systems are nondiscriminatory, he said.
It is also possible that “the employer community might advocate for something that would impose automatic liability on a test provider whose [system] proves to have a disparate impact,” Paul said. Until there is a regulatory decision about how liability may be shared between the two parties, employers should do their due diligence when purchasing an AI system from a vendor.
Opponents’ Arguments
The new regulations are not without their opponents. Some have called them redundant, Ronen said, since existing laws cover much of the same ground.
“Others highlight the challenges posed by fast-evolving technology and the digital literacy gap among employers and other stakeholders,” she said. With new AI programs emerging every day, and with many people still adapting to this digital bombardment, it may be hard for employers to ensure all their automated systems are compliant.
On a grander scale, some opponents are concerned that these regulations could “stifle innovation and increase bureaucracy and costs,” Ochs said. In particular, California Governor Gavin Newsom has voiced his concern that the regulations could result in significant costs that would threaten the state’s dominance in the tech industry. If the regulations effectively kneecap the efficient systems employers rely upon to operate smoothly and productively, California business could very well suffer competitively.
How HR Can Comply
First and foremost, Paul advised HR professionals to review the regulations, which are “a statement of the existing rules about age discrimination, religion and the like with the addition of little AI adaptations.” Understanding California’s anti-discrimination laws and how AI may inadvertently encourage bias is essential for assessing one’s own automated systems.
The next step would be for an employer to analyze the AI programs they use throughout the employment life cycle, with a view to better understanding how they make decisions. “Any such review should not be limited to technology deployed directly by the employer but also by any third parties engaged by the employer, such as recruiters, job posting platforms, etc.,” said Ronen. She also encouraged employers to routinely perform anti-bias testing to ensure their programs are not acting in a discriminatory way.
Ochs recommended that HR professionals encourage employers “to form AI governance teams responsible for vetting the company’s use of ADS before, during, and after its adoption, and responsible for developing and implementing policies and practices concerning AI use in the workplace.” This would include regular auditing of ADSs to ensure they are operating in compliance with the new regulations.
Rachel Zheliabovskii is a specialist, B2C content, at SHRM.