
What Recent Global Regulations of AI Will Mean for HR



HR professionals have some key takeaways on artificial intelligence policies now that the European Parliament has approved the EU AI Act and the United Nations passed its own resolution on the responsible use of AI.

The new EU regulations include hefty fines for certain uses of AI, such as deploying it as an “emotional recognition” system.

“SHRM will work with EU and global stakeholders to provide clarity to employers in the EU who seek to use AI safely and consistent with our shared values, and to understand their obligations under the act,” said Emily M. Dickens, SHRM chief of staff, head of public affairs and corporate secretary, in a statement. “Here in the United States, SHRM will continue its ongoing engagement with Congress, the administration, and state and local legislatures, serving as a trusted partner to policymakers, looking to achieve consensus on AI legislation and regulation that maximizes human potential.”

Concerns about negative effects of AI are being weighed alongside the technology’s positive impact on growth and innovation. This balancing act could lead Congress to enact legislation that’s narrowly tailored.

US Regulatory Outlook

At SHRM’s The AI+HI Project event in Mountain View, Calif., held in March, two members of the Congressional Bipartisan Task Force on AI agreed that the U.S. will take a narrower approach that’s aimed at limiting the potential negative effects of AI.

“We very much feel any U.S. action is going to be much more limited,” said Rep. Don Beyer, D-Va. “We’re going to be much more permissive. We don’t want to stifle creativity and imagination and really innovative uses [of AI]. Instead, we will limit our interactions to the evil things and the downside risks that may come out,” such as AI-generated video “deepfakes.”

The members said the serious threat of AI-fueled misinformation in an election year could lead to targeted legislative action.

“I think that we can get some broad consensus that anyone who maliciously uses AI to impersonate a political candidate and make it appear that they’re saying or doing things that they’re not actually saying or doing, that’s extremely misleading,” said Rep. Jay Obernolte, R-Calif. “I think we should all be able to agree that that’s not a good thing for our society, and there are certainly some remedies that are available to us.”

“I do think that there may be a couple of topics that might be ripe for legislation in this year,” he said.

More than 15 states have already passed AI-related laws, mostly focused on data privacy and accountability. Beyer said a key issue that needs to be addressed is where the line should be drawn on when federal AI laws pre-empt state and local laws.

Because AI technology knows no borders, Beyer said international cooperation on this issue will be essential: “Ultimately, we’re going to need something like a Geneva Convention on AI.”

What Businesses Should Know

Even if the U.S. charts a different course on AI regulation, companies that do business in Europe could still be affected by the new European rules, whether or not they have a physical presence in the 27-nation EU market. The law applies to companies with products in the EU market, as well as to AI systems used in the EU, no matter where the provider is physically based.

Three things business leaders should do in response:

  • Know when and how your business uses AI. Assessing how the European regulations affect your organization requires a thorough understanding of its European presence, as well as of how workers throughout the company are using AI tools. The law is especially relevant to organizations with Europe-based workers because it imposes fines of 35 million euros or 7 percent of global revenue, whichever is higher, for certain AI uses, including emotional recognition systems, which some companies use to track worker engagement.
  • Have a robust AI policy. Organizations need policies to govern how workers use AI. A recent SHRM survey found that 75 percent of organizations have no guidelines or policies regarding the use of generative AI, yet one-third of HR professionals (34 percent) report using ChatGPT or other generative AI programs at work, suggesting that some use these tools without their organizations’ guidance.
  • Stay continuously informed of new developments. AI is a fast-moving space, and it requires constant, careful corporate governance to employ well. Executives need to stay informed of new developments in this field. Rather than focusing on specific tools or use cases, think holistically about how organizations can use AI, what opportunities it presents and what limits organizations need to abide by.
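The “35 million euros or 7 percent of global revenue, whichever is higher” penalty structure described above can be illustrated with a short sketch. This is purely illustrative arithmetic based on the figures quoted in this article, not legal advice, and the function name and threshold constants are ours:

```python
# Illustrative sketch of the EU AI Act's penalty ceiling for
# prohibited AI practices, as described in this article:
# 35 million euros or 7% of global annual revenue, whichever is higher.

FLAT_CAP_EUR = 35_000_000.0   # fixed ceiling in euros
REVENUE_SHARE = 0.07          # 7% of global annual revenue

def fine_ceiling_eur(global_revenue_eur: float) -> float:
    """Return the maximum potential fine in euros for a company
    with the given global annual revenue (illustrative only)."""
    return max(FLAT_CAP_EUR, REVENUE_SHARE * global_revenue_eur)

# For a company with 100 million euros in revenue, 7% is 7 million,
# so the 35 million euro floor applies; at 1 billion euros in revenue,
# the 7% share (70 million euros) is the binding ceiling.
print(fine_ceiling_eur(100_000_000))    # → 35000000.0
print(fine_ceiling_eur(1_000_000_000))  # → 70000000.0
```

The takeaway for HR and compliance teams is that the exposure scales with company size: for any organization with more than 500 million euros in global revenue, the percentage prong, not the flat cap, sets the ceiling.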


An organization run by AI is not a futuristic concept. Such technology is already a part of many workplaces and will continue to shape the labor market and HR. Here's how employers and employees can successfully manage generative AI and other AI-powered systems.
