
SHRM Seeks More Guidance on NYC AI Regulations

The Society for Human Resource Management (SHRM) is seeking further clarity to help employers comply with a new law in New York City regulating the use of automated employment decision tools.

SHRM commended the New York City Department of Consumer and Worker Protection for proposing rules to clarify the first-of-its-kind law going into effect Jan. 1, 2023, but would like to see further guidance, specifically around the exact definition of "automated employment decision tool" and the provisions related to conducting bias audits.

The New York City statute, passed in December 2021, requires that AI and algorithm-based technologies for recruiting, hiring or promotion be audited for bias before being used. Employers also must notify job candidates and employees who reside in New York City about the tool's use and about the job qualifications and characteristics that the technology will use to make its assessment or evaluation.

"SHRM agrees that clearly defined terms and clarity regarding the bias audit requirements, the notice requirements, and other obligations serve to promote compliance and reduce confusion about the nuances of this law," said Emily M. Dickens, SHRM chief of staff and head of government affairs, in written comments submitted to the department in October. "SHRM also commends the department for its focus on addressing ambiguities and providing specificity around the requirements and obligations of the law while also refraining from expanding the scope of the rules to require obligations beyond those that already exist."

In particular, SHRM noted the department's efforts to clarify that while employers have certain obligations to reasonably accommodate workers on the basis of a disability, there is no obligation to provide alternative selection or evaluation processes.

'Overly Broad'

SHRM commented that the statute's definition of "automated employment decision tool" is overly broad and risks unintended application. Currently the law defines these instruments as "any computational process derived from machine learning, statistical modeling, data analytics, or artificial intelligence that issues simplified output, including a score, classification, or recommendation that is used to substantially assist or replace discretionary decision-making for making employment decisions that impact natural persons."

SHRM observed that the definition casts "a very wide net" covering tools, processes and systems, and that the ambiguity is likely to lead to employers being left to guess what might or might not be covered.

"The purpose of this law will be served best where employers have clear guidance as to when the law is triggered—which tools, processes, or systems are covered—and what, specifically, employers need to do to comply," Dickens said.

She added that SHRM encourages the final regulations to explicitly exclude automated tools that carry out human-directed assessments. "For example, some employers might use a scheduling tool that captures employee availability for purposes of both shift scheduling and candidate evaluation. With large candidate pools, employers rely on automation to screen candidates based on core job-related decision points such as educational attainment or relevant licensure. Presumably, these types of tools are not intended to be covered by this law, as subjecting this type of tool to the requirements of the law would likely create a severe burden upon employers."

Bias Audit Criteria

SHRM also is seeking additional clarification on the bias audit requirements in the law, including the scope of the candidate pool required—or permitted—to be tested; whether a technology upgrade triggers a new audit; the methodology used in the audit; and the notice requirement, which could risk disclosure of candidates' personally identifiable information.

Caution Against Stifling Innovation

Dickens pointed out that employers are exploring automation, AI and algorithm-based technologies in myriad ways, including for quickly sourcing and expanding candidate pools; identifying skills gaps; enhancing talent pipelines; reducing turnover; increasing productivity; and bolstering diversity, equity and inclusion goals.

"In light of the key benefits that [these tools] offer, SHRM submits that the requirements and obligations contained in the proposed rules should be viewed through the lens of minimizing limitations on the growth and advancements that could benefit everyone," she said. "Appropriate safeguards must be balanced against heavy-handed regulatory restrictions that will set key HR functions back and impede the ability to create and identify broader, more inclusive talent pipelines."

Prepare for Compliance Now

Amanda Blair, an attorney in the New York City office of Fisher Phillips, outlined a few steps covered employers can take to prepare for the new law taking effect next year.

"If your business uses these tools to evaluate and assess candidates, now is the time to vet and/or retain an independent auditor to conduct the bias audit," she said. "Since the auditor will need demographic and selection/evaluative information to conduct the audit, you will also need to develop policies and procedures to collect and preserve this information. You will also need to train any employees responsible for the collection and preservation of this information."

Finally, she added that employers should be prepared to review the covered technologies and tools annually.
