Generative AI tools such as the large language models (LLMs) GPT-4 and Bard have astonishing capabilities. But they also present potential risks, perhaps ones we haven't even conceived of yet, which has led some employers to limit their use or ban them outright.
However, experts in technology, HR and law agree that employers need to embrace this new era of digital advancement. To do so safely, they must develop policies for the use of these tools and educate workers on these guidelines and best practices.
"It's not a perfect analogy, but I liken this to when the internet came out," said Jason Gagnon, partner at Carmody Torrance Sandak and Hennessey, who is also head of its products liability group and a member of its AI working group. "Is it a potential tool or is it a minefield? And the answer is it's both. You can use it as a tool and [you need to] have the appropriate guardrails in place."
What should those safeguards look like? SHRM Online talked to a range of experts about what leaders should tell their employees about LLMs.
Give Permission to Explore AI Tools
Employers should be explicit with workers about whether they can use LLMs and, if so, which tools are acceptable. An enterprise-level LLM may present fewer concerns or have more capabilities than a consumer-level tool. But regardless of the specifics, experts encourage business leaders to give a green light.
"I don't think any company can afford to ignore this, and at some level, they have to embrace it," said Karen Conner, director of Academic Innovation at William & Mary's Raymond A. Mason School of Business in Williamsburg, Va.
Suppressing LLMs means not only missing out on the efficiency gains these tools can offer but also depriving your employees of the new skills they need to advance their careers.
"From a productivity standpoint and from an innovation standpoint, your employees do need exposure to enhance their productivity and gain competency," said Sam Shaddox, vice president of legal for SeekOut, the maker of an AI-enabled recruiting tool. "It's going to be table stakes in this new world."
Follow Guidelines
Business and HR leaders must be explicit about how employees should use these tools, preferably by establishing an official policy. A good policy pairs hard-and-fast rules that restrict usage in certain ways, such as explicit instructions on which types of information are permissible to input, with guidelines for recommended use.
"First we want to understand the boundaries, the limits on use," said Will Howard, director of HR research and advisory services at McLean & Company. "Beyond that, we need to define a set of guidelines differentiating between 'employees must do this' and 'employees should do this.' "
The case for creating such a policy goes beyond mitigating risk in your day-to-day operations. A policy also helps you prepare for a future in which integrating AI tools into your workflows is no longer optional.
"It's time for employers to get ahead of the uncertainty," Gagnon said. "The more you're comfortable in the space now, the better off you'll be when it comes time to start implementing more advanced systems."
Data Accuracy and Security
Companies educating their staff on using LLMs should focus on two concerns above all: the potential for indiscriminate data sharing and the need for vigilance in fact-checking.
Without proper guidance, employees may well expose sensitive or proprietary data. You still own the information you enter into these tools, but you lose control over how others might use it.
"Once data is uploaded, it's not like there's a 1-800 number you can call and say, 'Can I have it back?' " said Serena Huang, founder of Data with Serena and chief data officer at ABE.work. "I think there's a caution around that."
Business leaders must also make their employees aware that these platforms often produce invented answers called "hallucinations."
Employees must know that it's essential to check all of an LLM's output before using it in any way.
"An LLM is, at most, a starting point," said Gagnon, who emphasized that current LLMs may lack access to certain data, such as the latest news or articles beyond paywalls.
Howard said that it's best to treat an LLM as a low-level helper: "It's as good as an average employee or below-average employee. But it's fast. It does a kind of subpar job very quickly."
The work isn't over once you've put your policy in place and educated employees on how to use these tools. Companies' engagement with LLMs will need to be ongoing, evolving as the tools themselves change.
Phillip Wagner, clinical associate professor in organizational behavior in the Raymond A. Mason School of Business at William & Mary, said that creating dedicated policies that are living and constantly reviewed is "an HR obligation" due to the nascent and fast-moving nature of this technology. "There will be best practices that will be handed down. There's likely to be regulations coming from somewhere, but it's still emerging in this area. So is legal guidance. So is compliance guidance. It's on organizations to be excessively mindful."
One way to stay up to date on best practices for LLMs is to name a point person for generative AI knowledge and use. That person can help employees navigate this area and provide expert advice to business leaders as the field develops.
"There will always need to be a human-in-the-loop review process," Wagner said. "There has to be someone there to oversee AI content regulation or generation."
Katherine Gustafson is a freelance writer based in Portland, Ore.