Ready to Draft an Up-to-Date AI Policy? Target Top Risks

Most employers don't have policies to manage how employees use AI in the workplace, according to two recent survey reports: one from McKinsey & Company, a management consulting firm, and one from Littler, a law firm. Other employers have policies, but some of those need updating. HR can minimize AI's top risks, including inaccuracy, plagiarism and misappropriation, by drafting a policy in tandem with information technology (IT) and counsel.

"Many employers are overdue for enacting AI policies," said Mark Girouard, an attorney with Nilan Johnson Lewis in Minneapolis. As a result, "many employees are already using generative AI without a clear understanding of the flaws and risks of the technology."

Surveys’ Findings

Only 21 percent of respondents reporting AI adoption say their organizations have established AI policies, according to McKinsey's survey of 1,684 participants in a range of regions and jobs. The survey was conducted in April and published in August. Of the respondents, 913 said their organizations had adopted AI in at least one function. To adjust for differences in response rates, the data was weighted by the contribution of each respondent's nation to global GDP.
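The GDP-weighting step described above can be illustrated with a small sketch. This is a hypothetical reconstruction of how such weighting works in general, not McKinsey's actual methodology or data; the country names, response counts and GDP shares below are invented for illustration.

```python
# Hypothetical sketch of GDP-weighted survey aggregation.
# All figures are illustrative, not McKinsey's data.

def weighted_share(responses, gdp_share):
    """Weight each country's 'has policy' rate by its share of global GDP."""
    total_weight = sum(gdp_share[c] for c in responses)
    weighted = sum(
        gdp_share[c] * (responses[c]["has_policy"] / responses[c]["adopters"])
        for c in responses
    )
    return weighted / total_weight

responses = {
    "US": {"adopters": 400, "has_policy": 100},  # 25% unweighted
    "DE": {"adopters": 200, "has_policy": 30},   # 15% unweighted
}
gdp_share = {"US": 0.25, "DE": 0.04}  # illustrative GDP shares

rate = weighted_share(responses, gdp_share)
```

The point of the weighting is that a country contributing more to global GDP pulls the aggregate rate toward its own policy-adoption rate, so uneven response rates across regions distort the headline figure less.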

A survey report released by Littler in September, with the vast majority of its 399 respondents from the U.S. (96 percent), found a higher rate of AI policy adoption than McKinsey's survey did: 37 percent. Respondents to the Littler survey, conducted in July, reported that information technology, HR and legal departments share a relatively equal degree of responsibility for managing AI-driven HR tools.

"As more organizations begin implementing generative AI into their day-to-day work, it's never been more important to have a policy in place," said Asha Palmer, senior vice president at Skillsoft, an online training company in New York City.

Companies are most concerned with mitigating AI risks of inaccuracy and regulatory compliance, according to the McKinsey study. "I expect [these] concerns will grow over time as more questions surface about the security and validity of various AI sources, and new laws pass that govern their use," Girouard said.

Minimize Problems with AI Inaccuracy

The McKinsey report found that employers cited inaccuracy as their top concern with AI. However, only 32 percent of respondents said they were mitigating inaccuracy.

Employers can take several steps to minimize problems associated with AI inaccuracy, Girouard said. These include:

  • Specifying which employees can and cannot use AI in their work.
  • Clarifying which types of work AI can be applied to.
  • Strengthening fact-checking resources and diligence, particularly for public-facing information.

"Today's AI tools are notorious people pleasers, with tendencies to look for, if not outright hallucinate, answers and data that fit with what they believe people want to see," Girouard said. "It's best for information that was generated and informed by AI to be preliminarily flagged as such, and to task humans to fact-check AI-derived data before disseminating to broader audiences."

Watch Out for Plagiarism and Misappropriation

Plagiarism is another top concern with AI, according to McKinsey.

Employers should adapt and extend the language addressing plagiarism that already appears in their acceptable use policies and corporate codes of conduct and ethics, Girouard said.

Because AI is relatively new, an AI policy also should not only outline what is and isn't acceptable but also include educational wording that helps employees better understand the nature of AI and the reasons for caution, he added.

Review of the AI policy by intellectual property counsel "can help with developing language to reduce the risk that employees will infringe on other parties' intellectual property or divulge their own organization's intellectual property in their AI interactions," Girouard said.

Palmer said to guard against the misappropriation of a company's intellectual property, an AI policy might state: "What you give generative AI, you can't get back. And, in fact, it can give it to anyone and everyone … without your permission. It's theirs. So as a rule of thumb, don't give generative AI anything that is not yours. Don't give it information that is the company's, your customers' [or] something proprietary."
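One way to operationalize Palmer's "don't give generative AI anything that is not yours" rule is a pre-submission check that scans prompts for confidentiality markers before they leave the organization. The following is a minimal, hypothetical sketch; the marker list, function name and the idea of blocking on pattern matches are assumptions for illustration, not any vendor's actual API or a complete data-loss-prevention solution.

```python
import re

# Hypothetical guardrail: scan a prompt for confidentiality markers before
# it is sent to an external generative AI tool. The markers below are
# illustrative; a real policy would define its own restricted patterns.
CONFIDENTIAL_MARKERS = [
    r"\bconfidential\b",
    r"\binternal use only\b",
    r"\bproprietary\b",
    r"\b\d{3}-\d{2}-\d{4}\b",  # US SSN-like pattern
]

def is_safe_to_send(prompt: str) -> bool:
    """Return False if the prompt appears to contain restricted content."""
    lowered = prompt.lower()
    return not any(re.search(p, lowered) for p in CONFIDENTIAL_MARKERS)
```

A check like this cannot catch everything, which is why the policy language itself, and employee training on it, remain the primary control.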

Avoid Other Risks

An AI policy also should target bias caused by AI and potential breaches of privacy, Palmer said.

Employers in New York City must conduct an annual third-party AI bias audit of technology platforms they use for hiring or promotion decisions and publish the audit findings on their websites.
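Bias audits of the kind New York City requires typically report selection rates by demographic group and the ratio of each group's rate to the highest group's rate. The sketch below shows that arithmetic only; the numbers are invented, and whether a given ratio (such as the traditional four-fifths benchmark) satisfies the law's requirements is a question for the auditor and counsel.

```python
# Illustrative sketch of the selection-rate and impact-ratio arithmetic
# commonly reported in AI bias audits. All figures are invented.

def impact_ratios(selected, applicants):
    """Selection rate per group, each divided by the highest group's rate."""
    rates = {g: selected[g] / applicants[g] for g in applicants}
    top = max(rates.values())
    return {g: rates[g] / top for g in rates}

selected = {"group_a": 50, "group_b": 30}
applicants = {"group_a": 100, "group_b": 100}

ratios = impact_ratios(selected, applicants)
# group_a selection rate 0.50 (highest), group_b rate 0.30,
# so group_b's impact ratio is 0.60, below the traditional
# four-fifths (0.80) benchmark.
```

Publishing these ratios, as the city requires, is what makes the audit meaningful to candidates and regulators rather than an internal exercise.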

As for privacy, some company information may be labeled as confidential or involve consent and use restrictions, so sharing it with AI tools could undermine its confidentiality or be unlawful, Girouard explained.

Keep Policies Up-to-Date

Policies not only need to be drafted but kept up-to-date. Palmer said that because generative AI is evolving, Skillsoft includes the following statement in its AI policy: "We're committed to maintaining a responsible, sustainable GenAI policy for our team that is up-to-date, adaptable and clearly defines our ongoing expectations for the technology."

Any AI policy should not be confined to a specific AI tool. The policy "should encompass technologies more broadly—that is, generative and other AI-informed tools—and not be limited to specific tools," Girouard said. For example, a ChatGPT policy is already too narrow and out-of-date as the number of AI tools continues to proliferate, he cautioned.

HR professionals should obtain written or otherwise verified confirmation from employees that they have received the updated generative AI policy, said Elizabeth Shirley, an attorney with Burr & Forman in Birmingham, Ala.

"It is important to have a generative AI policy because without one, employees may presume that they are free to use generative AI for whatever purposes they see fit and with whatever company information they have access to," she said. "This causes great risks to the quality of work product, as well as to the confidentiality of company and personal information."
