On March 13, the European Parliament approved the EU AI Act, putting the European Union at the forefront of governing the burgeoning technology. The European Council is expected to soon endorse the act as well.
“SHRM will work with EU and global stakeholders to provide clarity to employers in the EU who seek to use AI safely and consistent with our shared values, and to understand their obligations under the act,” said Emily M. Dickens, SHRM chief of staff and head of public affairs, in a statement. “Here in the United States, SHRM will continue its ongoing engagement with Congress, the administration, and state and local legislatures, serving as a trusted partner to policymakers, looking to achieve consensus on AI legislation and regulation that maximizes human potential.”
We’ve gathered articles on the news from SHRM Online and other media outlets.
Categories of Risk
The EU AI Act sorts the technology into categories of risk, ranging from unacceptable down through high, medium and low. AI systems considered high risk—such as those used in critical infrastructure, education, health care, law enforcement, border management or elections—will have to comply with strict requirements. Low-risk services, such as spam filters, will face the lightest regulation. Once finalized, implementation of the law will be staggered from 2025 onward.
Banned AI Uses
Some AI uses are banned because they’re deemed to pose an unacceptable risk, like social scoring systems that govern how people behave and emotion recognition systems in workplaces. Other banned uses include AI-powered remote biometric identification systems, except for serious crimes.
Rules for general purpose AI systems like chatbots will start applying a year after the law takes effect. By mid-2026, the complete set of regulations, including requirements for high-risk systems, will be in effect. More regulation of AI in the workplace may be in the works, said Italian lawmaker Brando Benifei, co-leader of the European Parliament’s work on the EU AI Act.
(AP)
Reaction to Vote
A spokesperson for Amazon, which has begun rolling out a new AI assistant, welcomed the vote on the new law. “We are committed to collaborating with the EU and industry to support the safe, secure and responsible development of AI technology,” the company said in a statement.
Meta Platforms cautioned against any measures that could stifle innovation. “It is critical we don’t lose sight of AI’s huge potential to foster European innovation and enable competition, and openness is key here,” said Marco Pancini, Meta’s head of EU affairs in Brussels.
(Reuters)
Deal Reached Last December
EU policymakers reached a deal on the proposed EU AI Act, which includes steep penalties for violations, on Dec. 8, 2023. After the deal was reached, U.S. senators indicated they would take a far lighter approach than the EU and instead focus on incentivizing developers to build AI in the United States.
Executive Order in U.S.
In the U.S., an executive order issued in October 2023 by President Joe Biden is shaping how AI technology evolves in a way that can maximize its potential and limit its risks. The order required the tech industry to develop safety and security standards, introduced new consumer and worker protections, and assigned federal agencies a to-do list to oversee the rapidly progressing technology.