The U.S. Office of Management and Budget (OMB) published its first governmentwide AI policy on March 28, 2024, setting out how agencies should use artificial intelligence while mitigating its risks. We’ve gathered articles on the news from SHRM Online and other outlets.
Various Mandates
The OMB will require agencies to publicly report on how they’re using AI, the risks involved and how the agencies are managing those risks. Senior administration officials said the OMB’s guidance will give agency leaders, such as chief AI officers and AI governance boards, the information they need to assess their use of AI tools, identify flaws, prevent biased or discriminatory results, and suggest improvements.
Dec. 1 Deadline
Agencies will have until Dec. 1 to implement “concrete safeguards” around their use of AI tools, according to the OMB. “These safeguards include a range of mandatory actions to reliably assess, test and monitor AI’s impacts on the public, mitigate the risks of algorithmic discrimination, and provide the public with transparency into how the government uses AI,” the OMB said in a fact sheet.
Takeaways for Private-Sector Employers
While the OMB’s final guidance addresses only the federal government’s own use of AI and expressly does not apply to agencies’ regulatory activities, past experience suggests the federal government may eventually expect private-sector AI deployers to adopt the same risk management best practices it applies to itself. These governmentwide principles may also influence how the U.S. Equal Employment Opportunity Commission (EEOC) approaches AI risk and risk management. At a minimum, they will shape EEOC officials’ firsthand experience with AI risk management concepts as the agency begins using AI in its own internal processes.
(Seyfarth)
Memo Resulted from Executive Order
The 34-page memo from OMB Director Shalanda D. Young stems from President Joe Biden’s AI executive order, providing more detailed guardrails and next steps for agencies. “This policy is a major milestone for Biden’s landmark AI executive order, and it demonstrates that the federal government is leading by example in its own use of AI,” Young said.
(FedScoop)
Executive Order’s Purpose
Biden signed the executive order in October 2023 to shape how AI evolves in a way that maximizes its potential while limiting its risks. The order required the tech industry to develop safety and security standards, introduced new consumer and worker protections, and assigned federal agencies a to-do list for overseeing the rapidly progressing technology.
An organization run by AI is not a futuristic concept. Such technology is already a part of many workplaces and will continue to shape the labor market and HR. Here's how employers and employees can successfully manage generative AI and other AI-powered systems.