
New York City AI Law Is a Bust



New York City’s “AI Bias Law” held a lot of promise for advocates of regulating automated and artificial intelligence tools in hiring. Seven months after the law’s enforcement date, however, it appears that employers are largely ignoring it.

The law requires employers that use automated employment decision tools (AEDTs), which include many platforms and software used in recruitment and hiring, to audit those tools for potential race and gender bias, publish the audit results on their websites, and notify employees and job candidates that such tools are being used.
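
For context on what such an audit involves: under the city’s implementing rules, a bias audit centers on selection rates and impact ratios by demographic category. The Python sketch below shows that arithmetic in its simplest form; the group labels and counts are hypothetical, and a real audit follows the Department of Consumer and Worker Protection’s rules, which also cover scoring (non-binary) tools and intersectional categories.

```python
# Minimal sketch of the selection-rate / impact-ratio arithmetic behind
# a Local Law 144 bias audit. All counts below are hypothetical.

# (selected, total applicants) per demographic category
outcomes = {
    "Group A": (48, 120),
    "Group B": (30, 100),
    "Group C": (9, 45),
}

# Selection rate: fraction of applicants in each category who were selected
rates = {group: sel / total for group, (sel, total) in outcomes.items()}

# Impact ratio: each category's rate divided by the highest category's rate
best = max(rates.values())
for group, rate in rates.items():
    print(f"{group}: selection rate {rate:.2f}, impact ratio {rate / best:.2f}")
# Group A: selection rate 0.40, impact ratio 1.00
# Group B: selection rate 0.30, impact ratio 0.75
# Group C: selection rate 0.20, impact ratio 0.50
```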

But according to researchers at Cornell University, only 18 of 391 New York City employers analyzed in January had posted bias audit results. The study authors concluded that the law has very limited value for job seekers.

“It’s absolutely toothless,” said Hilke Schellmann, an AI expert, author of The Algorithm (Hachette Books, 2024) and assistant professor of journalism at New York University. “But I’m not surprised to see such low levels of compliance, because there’s such wide discretion for employers in how to comply.”

In the later stages of drafting, the law’s scope was narrowed from initially covering most AEDTs to covering only those used without any human oversight.

“The law very narrowly defines AEDTs,” said Guru Sethupathy, co-founder and CEO of FairNow, an AI governance platform that conducts audits, monitors algorithms and helps companies ensure compliance with AI regulations.

“If the tool is used to entirely automate a decision, it falls under the scope of the law,” he said. “But it’s very easy to get around that and very hard to dispute that a human is not in the loop somewhere.”

Amanda Blair, an attorney in the New York City office of the Fisher Phillips law firm, explained that the law’s scope was a contentious issue during its drafting, with some arguing that the original definition of AEDTs was too broad and encompassed too many tools, while others pushed back and said they wanted a more stringent definition. The New York City Department of Consumer and Worker Protection, which is tasked with overseeing the law, ultimately decided to advance the revised definition.

“There was a risk that the law would have a limited effect, but there was also a huge burden on HR departments to understand the tools they are using and whether the tools fit within the definition and how to apply the definition to the tool,” Blair said. “There’s a lot of risk that if you disclose, you are opening yourself up to scrutiny vs. waiting until the law is being enforced in earnest and you know the parameters of what they will determine is applicable—more of a wait-and-see approach.”

According to Schellmann, technology vendors and employers have said that humans are making the final decision after reviewing AI rankings, so that exempts them from having to conduct audits. But there are many automated tools being used at the early stages of the hiring process to screen applications and reject candidates, and those deserve scrutiny, she said.

Another deterrent to compliance, experts agreed, is that enforcement is complaint-driven. The agency has received no complaints since the rule went into effect, a spokesperson for the city said.

“The problem is that most job applicants would not even know AI is being used, so couldn’t complain,” Schellmann said. “Companies are not posting audit results, so people don’t know they are using AI tools.”

Blair said that “the agency feels that they have broad enforcement powers—we’ve seen that under laws under their purview—where they go in and investigate on their own initiative. But yes, you need the public to know about this law for a complaint to be made. And people—including any affected parties—are still playing catch-up to know what’s going on and know what their rights are. Unless you have an attorney out there trying to find a claimant, it will take time for a complaint to be made.”

Sethupathy said that the requirement to publish the results of an audit would further inhibit compliance. “Companies will not want to do that. Good bias audits are nuanced and can require deeper analyses; publishing raw impact ratios is not really helpful and can be misinterpreted by the public,” he said. “So, you have a very narrow scope, a strange request to publish bias audit results and very small fines. It’s a perfect combination to elicit noncompliance.”
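
Sethupathy’s point about misinterpretation is easy to illustrate: an impact ratio published without sample sizes hides how unstable the figure can be. In this hypothetical sketch (all counts invented), a single candidate’s outcome in a small applicant pool swings the ratio by roughly a third.

```python
# Hypothetical illustration of why a raw impact ratio can mislead:
# with a small category, one additional hire dramatically moves the number.

def impact_ratio(sel, n, ref_sel, ref_n):
    """Selection rate of a category divided by the reference category's rate."""
    return (sel / n) / (ref_sel / ref_n)

# Reference category: 40 of 100 applicants selected (rate 0.40)
print(round(impact_ratio(3, 8, 40, 100), 2))  # 0.94 -- looks near parity
print(round(impact_ratio(4, 8, 40, 100), 2))  # 1.25 -- one more hire, a very different story
```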

Schellmann compared the New York City law to Illinois’ Artificial Intelligence Video Interview Act, passed in 2019 and aimed at employers that use artificial intelligence to analyze video interviews of job applicants. The law’s primary requirement is that employers disclose the use of AI and obtain the applicant’s consent before proceeding.

“These laws are well intended but not very helpful, because disclosure alone is not necessarily going to solve the problem of AI bias or discrimination,” she said.

New York City’s law is a disclosure law—it doesn’t require employers to change their hiring practices.

“Job applicants are often forced consumers,” Schellmann said. “If you want the job, will you turn down the process just because there is AI or automation in it? Probably not.”

More AI Regulation Coming

Employers will have to get used to greater regulation of their automated and AI systems, experts said.

“Lawmakers are watching to see what the efficacy of the New York City law is—if people are found to be harmed by AI tools, then we will see reactionary laws as a result,” Blair said. “I would expect additional guidance this year.”

“SHRM early on expressed concerns about New York City’s law regarding regulating automated employment decision tools,” said SHRM Chief of Staff and Head of Public Affairs Emily M. Dickens. “We continue to urge policymakers to adopt a learning mindset from generative AI by analyzing data to inform decisions. It’s crucial to distinguish between ineffective legislation and a potentially overly cautious or unnecessary approach. We encourage a thorough review of New York City’s regulatory efforts, advocating for alignment with emerging best practices and established regulatory structures to avoid a disjointed set of rules that could hinder small and mid-size businesses from accessing valuable tools.”

Other states will learn from what New York City did and didn’t do, Sethupathy said. “Look for a risk management approach, not so much focused on outcomes—a model more focused on governance, policies, controls, testing and continuous monitoring. The European Union approach is focused on best practices around governance—how to monitor, and document checks and balances—and the fines are large. But you’re not asking companies to publicly share audit results.”

The European Parliament is expected to pass a landmark law, the EU AI Act, this year. Compliance grace periods may extend the enforcement date out to 2025 or 2026.

Under the act, some types of AI are prohibited, including emotion recognition systems in the workplace, facial recognition systems and biometric categorization systems that use “sensitive” characteristics. AI used in hiring and employment is considered “high risk” and must be assessed and put through data governance, bias mitigation, and risk and quality management systems.

“In the EU, hiring is a high-stakes application area of the law,” Schellmann said. “We need that attitude here in the U.S.”
