As organizations have increasingly adopted artificial intelligence, especially in their talent management and recruitment efforts, there has been more discussion among business leaders about how to make sure they use AI responsibly. Soon, they may have to walk the talk. A growing wave of regulations could force organizations developing and using AI to implement responsible-use policies.
One upside of AI is that it can simplify decision-making. In machine learning, for example, a computer ingests huge amounts of data and, based on the patterns it finds, derives rules that let it make decisions automatically.
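The pattern-to-rule idea can be sketched in a few lines of code. This is a toy illustration only, with invented numbers; it is not how any real HR product works, but it shows how a system can derive a decision rule from past data and then apply it automatically.

```python
# Toy illustration: a "model" ingests labeled past outcomes and derives
# a simple rule it can then apply automatically. All data is made up.

def learn_threshold(examples):
    """Find the score cutoff that best reproduces past advance/reject labels."""
    best_cutoff, best_correct = None, -1
    for cutoff in sorted(score for score, _ in examples):
        correct = sum((score >= cutoff) == label for score, label in examples)
        if correct > best_correct:
            best_cutoff, best_correct = cutoff, correct
    return best_cutoff

# Historical screening outcomes: (assessment score, was_advanced)
history = [(45, False), (52, False), (61, False),
           (70, True), (78, True), (85, True)]

cutoff = learn_threshold(history)

def decide(score):
    """The learned automatic rule."""
    return score >= cutoff

print(cutoff)       # 70
print(decide(74))   # True
print(decide(58))   # False
```

No person wrote the cutoff of 70; the system inferred it from the data, which is exactly why the quality of that data matters.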
Without proper development and use, however, AI systems can be biased—either because of bias in the data itself or in how the algorithm processes the data—and that may result in unintended actions. For example, talent acquisition software may eliminate certain candidates in discriminatory ways if no safeguards are in place.
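A toy sketch makes the data-bias failure mode concrete. The feature here (applicant ZIP code) and the numbers are invented for illustration: if historical hiring data is skewed, a rule learned from it reproduces the skew.

```python
# Toy sketch: bias in historical data becomes bias in the learned rule.
# ZIP code here is a made-up proxy feature; all numbers are invented.

from collections import defaultdict

# Past outcomes in which one ZIP code happened to dominate hires:
history = [
    ("10001", True), ("10001", True), ("10001", True),
    ("07302", False), ("07302", False), ("07302", True),
]

# Naive "learning": advance applicants whose ZIP's past hire rate exceeds 50%.
stats = defaultdict(lambda: [0, 0])  # zip -> [hires, total]
for zip_code, hired in history:
    stats[zip_code][0] += int(hired)
    stats[zip_code][1] += 1

learned_rule = {z: hires / total > 0.5 for z, (hires, total) in stats.items()}

print(learned_rule)  # {'10001': True, '07302': False}
# Every future applicant from 07302 is now auto-rejected: bias in the
# data became bias in the rule, without anyone deciding it should.
```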
To guard against such outcomes, experts say, executives who don't already have a process for regularly scrutinizing their organizations' use of AI need to create one.
AI Is Everywhere
AI is being rapidly and broadly adopted by organizations worldwide—often without adequate safeguards. Survey results published in 2022 by NewVantage Partners, a business management consulting firm specializing in data analytics and AI, found that nearly 92 percent of executives said their organizations were increasing investments in data and AI systems. Some 26 percent of companies already have AI systems in widespread production, according to the survey. Meanwhile, less than half of executives (44 percent) said their organizations have well-established policies and practices to support data responsibility and AI ethics.
HR consulting company Enspira estimates that more than 50 percent of companies in the United States either have implemented or plan to implement HR platforms that use AI, according to Aileen Baxter, vice president and head of Enspira Insights, a division of the company focused on research. Organizations are increasingly using AI-based tools in areas such as recruiting, employee performance and diversity. Baxter estimates that only 20 percent of these companies are aware that regulations are likely on the way.
"Given the legal scrutiny that is coming, companies need to make sure they are responsible in how they use AI, especially around diversity in hiring," says Baxter, whose company recently published a white paper on AI and bias in the hiring process.
Regulators Are Stirring
Pressure for oversight is coming from many directions. Over the last year, regulators, including international bodies and U.S. city governments, have begun to focus on the potential for AI to cause harm. (See sidebar on AI regulation.)
Recently, lawmakers introduced legislation in both chambers of the U.S. Congress that would require organizations using AI to perform impact assessments of AI-enabled decision processes, among other requirements. But the regulatory winds really started to blow last spring, when the European Commission, the executive branch of the EU, introduced a proposal to regulate AI using a system similar to the EU's General Data Protection Regulation (GDPR), a sweeping data protection and consumer privacy law. The AI regulation proposes fines for noncompliance ranging from 2 percent to 6 percent of a company's annual revenue.
The New York City Council in December passed a law regulating AI tools used in hiring. That statute carries fines of $500 to $1,500 for violations.
Business groups have responded with their own proposals and recommendations on the responsible use of AI. In January, the Business Roundtable, which represents over 230 CEOs, published a "Roadmap for Responsible Artificial Intelligence" and accompanying policy recommendations.
That same month, the Data & Trust Alliance, a consortium of more than 20 large companies, published its "Algorithmic Bias Safeguards for Workforce," which is designed to help HR teams evaluate vendors on their ability to detect, mitigate and monitor algorithmic bias in workforce decisions.
"This is the first time so many CEOs not only acknowledged the need for AI, but also talked about what companies should do about it," says Will Anderson, vice president of technology and innovation at the Business Roundtable.
However, there are so many gray areas when it comes to AI that it's hard for companies to implement responsible use on a practical level. What constitutes responsible use, for example? Even the definition of AI itself differs depending on whom you talk to.
Most importantly, many organizations may not even realize they are using AI.
"A lot of companies don't know what they don't know—they don't necessarily understand when they are bringing AI into the organization," says Merve Hickok, SHRM-SCP, an expert on AI ethics. She has worked with national and international organizations on AI governance methods and is founder of AIethicist.org, a website repository of research materials on AI ethics and the impact of AI on individuals and society. "Some have an inkling about potential bias and some of the risks but may not know what questions to ask or where to begin."
Thus, the first step for C-suite executives should be to identify whether their organizations use AI and, if so, where. Some uses are easy to spot. Vendors of specific types of AI-enabled software often call attention to their AI. It's part of their branding, says Ilana Golbin, global responsible AI leader at professional services company PwC.
However, AI can also sneak into your organization. A procurement team may not realize that an HR software package "has some AI capabilities buried deep inside the product," says Anand Rao, global AI lead at PwC. In large enterprisewide platforms, updates from a vendor could add AI functions without much fanfare. They might add a feature that automates e-mail management, for example. Another possible entry point is through cloud service providers, Golbin says. "It's very easy to get democratized tools from cloud providers for building machine-learning systems," she says.
Strategies and Structures
Once they know what AI their organization is using, executives need to put governance and compliance checks and balances into place. All the responsible AI use principles and guidance from various groups recommend similar types of best practices. Most of them advise, for example, getting documentation from vendors showing how they developed their AI and what data they used. The recommendations also point out that organizations need to continually monitor AI systems because the decisions coming out of AI can change as the data fed into the systems change, which can introduce unintended bias.
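The continuous-monitoring recommendation above can be sketched simply: periodically recompute outcomes per demographic group from a decision log and flag widening gaps. The log format and the gap threshold are assumptions for illustration, not a standard.

```python
# Minimal sketch of ongoing monitoring: recompute selection rates per
# group from a decision log and flag large gaps. Log format and the
# 0.2 threshold are illustrative assumptions, not a standard.

from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, selected_bool) -> selection rate per group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in decisions:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    return {g: sel / total for g, (sel, total) in counts.items()}

def flag_gaps(rates, max_gap=0.2):
    """True if any two groups' selection rates differ by more than max_gap."""
    values = list(rates.values())
    return max(values) - min(values) > max_gap

# This month's (invented) decision log:
log = [("A", True), ("A", True), ("A", False), ("A", True),
       ("B", True), ("B", False), ("B", False), ("B", False)]

rates = selection_rates(log)  # {'A': 0.75, 'B': 0.25}
print(flag_gaps(rates))       # True -- worth investigating
```

Run on a schedule, a check like this catches the drift the recommendations warn about: a system that was balanced at deployment can stop being balanced as its input data shifts.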
"You need to do periodic audits and be able to explain how algorithmic decisions are made," says Ryan McKenney, a member of the cyber, privacy and data innovation team at global law firm Orrick Herrington & Sutcliffe LLP. "Whether or not that's currently required by law, that's just good practice."
Beyond the tactical, executives need to have a risk and governance strategy and structure in place—and they need to identify the person responsible for overseeing that function. However, AI touches on so many different aspects—technology, risk, compliance, contracts, data privacy, recruitment and retention, to name a few—that organizations often struggle over where that governance responsibility belongs.
Some companies have formed AI steering committees with representatives from various departments and disciplines, notes Shannon Yavorsky, a partner at Orrick and head of its cyber, privacy and data innovation team. Other organizations put AI governance under the data privacy officer because AI depends so much on the use of data. Some companies have even created a new chief AI ethics officer role, according to Amber Grewal, managing director, partner, and chief recruiting and talent officer at Boston Consulting Group.
Meanwhile, law firms and consultants are offering specialized services for AI governance. Orrick advises clients on formulating AI compliance rules for both the development and the use of AI, Yavorsky says. PwC offers advice on and assessments of AI risk for clients, Rao says, although he's careful not to call them audits. That's another gray area. The term "audit" has not been defined in the context of AI. "There are no standards for doing an algorithmic audit," Rao says. "Such guidelines don't yet exist."