California's privacy agency is reviewing draft regulations on automated decision-making technology that would, among other things, govern the use of AI and automated tools in the workplace.
Artificial Intelligence in the Workplace
The California Privacy Protection Agency's proposed regulations would apply to California residents, including employees, job applicants and other people in the employment context.
The draft regulations would require employers that use automated decision-making technology to notify job applicants and employees when an employment decision is based on the technology and to inform affected individuals of their right to know how it was used.
Employees and job applicants could opt out of the use of certain technology, including tools used in the hiring process.
The five-member panel will discuss the proposed regulations on Dec. 8 and expects to finalize them and begin the formal rulemaking process next year, according to the agency.
We've rounded up articles from SHRM Online and other sources to provide more context on the news.
Employer Carve-Out
In a nod to employers, the draft regulations would allow companies to deny opt-out requests in certain situations, such as when using a nonautomated decision-making process would be "futile" or unrealistic; the regulations cite resume screening and job-matching services as examples. Controversial employee-monitoring technologies, such as keystroke loggers, productivity monitors, location trackers and social media monitoring tools, would also be covered by the forthcoming regulations, according to the agency.
(California Privacy Protection Agency)
California Agency Releases First Draft of AI Regulations
The draft regulations would put California at the forefront of regulating the use of automated technology, including artificial intelligence.
AI in the Workplace
This SHRM resource can help employers and employees successfully manage generative AI and other AI-powered systems at work.
President Biden Issues Executive Order on AI
On Oct. 30, President Joe Biden signed an executive order on the development of artificial intelligence, aiming to shape the evolution of the powerful technology in a way that maximizes its potential while limiting its risks.
New York City's AI Bias Law Explained
The first-of-its-kind law requiring employers to audit their HR technology systems for bias and publish the results took effect Jan. 1, but enforcement was delayed until July 5 while clarifications to the regulations were ironed out. A guidance document released ahead of the enforcement date provides further clarification on some of the law's provisions.
EEOC Weighs In on Use of AI
The U.S. Equal Employment Opportunity Commission (EEOC) recently issued guidance on the application of Title VII of the Civil Rights Act of 1964 to automated systems that incorporate artificial intelligence in a range of HR-related uses.
Crafting Policies to Address Generative AI
There is certainly value in generative AI tools, but there are risks as well. Whether your organization chooses to ban or embrace tools like ChatGPT, there are new policy issues to consider to ensure that employees use these tools in ways that reflect both the company's concerns and its opportunities.
An organization run by AI is not a futuristic concept. Such technology is already a part of many workplaces and will continue to shape the labor market and HR. Here's how employers and employees can successfully manage generative AI and other AI-powered systems.