The wealth of workforce productivity and efficiency benefits created by AI has made believers of many human resource and information technology executives. Yet one growing use of the technology, known as “shadow AI,” has drawn heightened concern and attention from leaders in those areas.
Shadow AI involves the use of unauthorized artificial intelligence tools by employees seeking ways to do their jobs faster, better, or with less friction. Frustrated by the limited functionality of, or lack of access to, their organization’s approved generative AI (GenAI) or machine learning platforms, workers are turning to AI tools they find or purchase themselves that haven’t been approved through official channels.
This practice is the newest iteration of shadow IT, in which employees bring unapproved software and other technologies into their organizations through the back door. Experts say the arrival of GenAI platforms has elevated shadow IT to new levels, creating some advantages but also a bevy of new risks that require HR and IT leaders to revamp their education and governance strategies.
A 2023 study from Gartner projected that 75% of employees will use some form of shadow IT by 2027, up from 41% in 2022. The 2024 Work Trend Index annual report from Microsoft and LinkedIn found that 78% of AI users are now bringing their own AI tools to work. In addition, Torii's SaaS Benchmark Annual Report 2025, a study that tracks the use of software-as-a-service applications across industries, found that AI-driven tools are now the most common unmanaged software applications in organizations.
To Ban or Not to Ban? Creating a Shadow AI Policy
Experts say employees typically don’t use shadow AI for malicious reasons. Most turn to these unsanctioned tools because they consider them a better option than enterprise-approved AI platforms for tasks such as data analysis and software coding, or for generating drafts of documents such as job descriptions or performance reviews. Workers purchase these niche AI tools or use freeware versions, often installing them on their personal phones.
“The use of shadow AI reflects a workforce eager to innovate, but often without the formal guidance, access, or governance needed to do so responsibly,” said Anthony Abbatiello, workforce transformation practice leader for advisory and research firm PwC.
While shadow AI can boost employee productivity or efficiency, using these unvetted tools can also place organizations at great risk. Shadow AI can expose sensitive company data to bad actors, violate data privacy laws such as the General Data Protection Regulation or the Health Insurance Portability and Accountability Act, and inadvertently introduce bias into recruiting, promotion, or compensation decisions.
Some organizations have reacted to those risks with outright bans of shadow AI. But a growing number of experts believe a better approach is to rely on employee education, detailed acceptable use policies, and governance to mitigate the risks of shadow AI while still capitalizing on its advantages.
Such an approach recognizes the inevitability of shadow AI: history shows that employees usually find ways around bans. Many experts believe it’s better to bring shadow AI out into the open and install guardrails that reduce its potential dangers.
“While formally banning shadow AI can seem like a quick fix for peace of mind, prohibition tends to breed curiosity,” said Evelyn McMullen, a research manager specializing in HR technologies with Miami-based Nucleus Research. “Without at least solving for the ‘why’ and addressing the consequences of practices like putting sensitive company data into unsanctioned AI models, the behavior will continue.”
HR’s Role in Education, Governance, and Culture Building
Deciding against bans means organizations need to take other steps to reduce the risks inherent in shadow AI. Emily Rose McRae, a senior director analyst at Gartner who advises CHROs on the future of work, said HR has a key role to play in collaborating with IT and legal departments to create the acceptable use policies, education, and governance needed to manage the accelerating use of shadow AI.
“It’s important to be very clear about what you want the organization’s story to be around the risks of AI and what you want employees to pay attention to,” McRae said. “You need the workforce to not only understand the rules and consequences regarding shadow AI use, but why those rules have been put there.”
Martin Keen, an IBM master inventor and AI expert, highlighted the importance of creating governance models that regulate AI use in organizations to ensure sensitive company data is handled appropriately and securely. This practice is particularly important regarding shadow AI, Keen said.
McRae said policies governing shadow AI should be crystal clear about which AI platforms and applications are approved, what kind of data is and isn’t OK to upload into tools such as public GenAI models, and the consequences for not following the rules.
Abbatiello said HR should play a pivotal role in helping organizations integrate AI into the workplace responsibly and strategically.
“Rather than approaching AI solely through the lens of compliance, HR should lead in developing policies that balance innovation with risk management,” he said. “HR should help employees understand the ethical, legal, and security implications of AI use.”
Clarity is important because recent research shows many employees are uncertain or even confused about their organization’s AI use policy. A 2025 study from Resume Now, for example, found that only 50% of employees believe their organization’s AI use guidelines are “very clear.”
McRae said HR should also play an integral role in educating employees about the risks of shadow AI use, collaborating with IT and legal teams to build that learning content.
She recommends that HR teams share stories throughout the organization about employees who have used company-approved GenAI platforms to boost their productivity or create new efficiencies. Such stories shouldn’t only stress the value of the tools, McRae said, but also how those employees took steps to avoid uploading confidential company data into the tools or how they validated GenAI outputs for accuracy.
“Stories that only focus on how great a GenAI tool is don’t teach the culture you want to build around responsible AI use in the organization,” she said.
Teaching the “why” behind acceptable AI use policies is important because employees will invariably encounter situations where they face unknown risks from evolving AI tools.
“At some point, you’ll need employees to be able to apply principles about acceptable AI use to situations that may not be part of your risk modeling yet,” McRae said. “A technology may develop a new capability, for example, without the organization having enough time to think through and plan for its potential risks. If there isn’t a core philosophy guiding all decisions employees make about using AI on the job, the guess they make about what you want them to do with an emerging technology might be wrong, with serious consequences.”
Abbatiello said HR teams should drive proactive AI upskilling by facilitating the design of targeted, future-focused training programs that include guidance on acceptable AI practices.
“By collaborating with IT and the business, HR can identify skill gaps and develop learning paths that not only enhance technical proficiency but also foster a deeper understanding of AI ethics and governance,” he said.
Risks of Under-the-Radar Shadow AI
Educating employees about the risks of using unauthorized software should also focus on the dangers of lesser-known shadow AI tools, said James McQuiggan, a security awareness advocate for KnowBe4, a cybersecurity training firm in Clearwater, Fla.
While much of the focus with shadow AI is on the use of tools such as personal ChatGPT or Google Gemini accounts, McQuiggan said many employees aren’t aware of the data security risks of niche tools such as AI-powered transcription services, video creation platforms, or plug-ins designed to read and respond to email.
“Things like transcription services are the type of shadow AI that people don’t often consider to be risky,” McQuiggan said. “But if an AI-driven transcription tool is plugged into a Zoom or Teams meeting, it is often transcribing and uploading — to a third-party vendor — comments and information that are likely considered confidential or private to the organization.”
Legislative and Regulatory Compliance Implications
As global legislation and regulatory requirements continue to evolve to keep pace with AI systems, experts believe the legal risks of shadow AI will only grow, raising the stakes for educating the workforce about the dangers of bring-your-own-AI practices.
“The legal landscape for AI trails behind the pace of technology development, meaning the unsanctioned use of AI could create more problems down the road without having good policies and guidelines in place for its use,” McMullen said.
Going forward, many experts believe the best approach will remain avoiding outright bans on shadow AI and instead developing strategies that limit its risks while acknowledging its potential to enhance productivity and efficiency. By understanding and addressing the trend head-on, organizations can move from reactive to proactive in building a responsible, enterprisewide approach to AI.
“Shadow AI isn’t just a compliance challenge, it’s also a window into how the workforce wants to work,” Abbatiello said. “Leaders should pay close attention to this behavior, not just to mitigate risk but to harness that energy and curiosity in a more intentional, secure way.”
Dave Zielinski is a Minneapolis-based business journalist who covers the impact of emerging technologies on the workplace. He is a frequent contributor to SHRM publications.