
The Workplace Security Risk of ‘Bring Your Own AI’

As companies use generative artificial intelligence to drive various aspects of their business, experts are giving thought to the risks that can occur when employees use generative AI tools that haven't been sanctioned by their employer.




In a recently published report, Forrester Research examined the ways in which employees can engage with the technology and its supporting software through bring your own AI (BYOAI) activities. The research company defines BYOAI as "employees using any form of external AI service to accomplish company-related business regardless of whether it's sanctioned by the business." 

In the report, researchers noted that the rapidly growing use of generative AI software such as ChatGPT, DALL-E 2 and Midjourney means employees will also use other forms of AI, such as AI-infused software, AI creation tools and cloud-based application programming interfaces (APIs). This can lead to further security risks.

"When you look at active threats, primarily it's the same challenge that people have always had with employees using what we call 'shadow IT'—that is, using applications that have not been through that entire approval process and aren't sanctioned by the business to make sure that they don't have vulnerabilities," said Andrew Hewitt, principal analyst at Forrester and a co-author of the report.

Hewitt added that data loss and potential copyright violations become major threats when employees bring their own AI into the workplace. Employers will face even more risks that they need to take into account than they did with bring your own device (BYOD).

"With BYOD, you could prevent employees from accessing corporate apps by requiring that they enroll in mobile device management software. With BYOAI, employees can go to a website and type in a URL. There's no way to control it unless you were to shut off the internet," Hewitt said.

Sam Shaddox, vice president of the legal division and interim head of people at SeekOut, a recruiting company based in Bellevue, Wash., said his company's employee guidelines for using generative AI are simple: first, use the technology cautiously; second, when in doubt, reach out to legal.

"We would much rather have a conversation about something and kind of address it in the moment than provide a scripted overly burdensome policy that either gets ignored or that people don't understand," Shaddox said.

He added that employees who are entering queries into a generative AI tool should ask themselves, "'Would I provide a stranger this information?' If the answer is no, then stop, think about what you are doing and have a conversation with the legal department before you go any further."

The buzz around using generative AI in the corporate world has primarily raised questions such as how the technology can revolutionize the way people work, how it can be used to enhance productivity and revenues, and how many employees it will replace.

However, executives increasingly recognize that, used incorrectly, the technology can open the door to phishing and malware attacks that steal sensitive company information in a data breach.

OpenAI launched ChatGPT on Nov. 30, 2022. Since then, reports that more than 100,000 ChatGPT user accounts were compromised by data breach malware sent a chilling warning to companies, and many organizations have taken action as they recognize the software's potential to be a conduit for data privacy and security violations.

Recently, the Biden administration as well as international organizations and several large corporations have developed their own policies to mitigate the security risks generative AI may pose.

In late October, President Joe Biden issued a wide-ranging executive order designed to safeguard computer networks from bad actors seeking to use AI to supercharge cyberattacks. 

A report published by the International Monetary Fund declared that while generative AI significantly improves business operations at financial institutions, there are inherent privacy concerns, unique cyber threats and "the potential for creating new sources and transmission channels of systemic risks."

Companies including Apple, Samsung, JPMorgan Chase, Bank of America, Wells Fargo and Citigroup have restricted or banned employees from using generative AI platforms at work as they seek to better control employees leaking sensitive internal information, as well as provide guardrails to protect their systems from computer hacking. 

On the HR front, Jason Walker, co-founder of Thrive HR Consulting, a San Francisco-based HR advisory company, said HR executives haven't focused enough on how to use generative AI or the security risks that it may bring.

"A lot of HR departments are really behind in generative AI. I mean, they are not even really in a place where they understand how it will impact them, how they are going to roll out generative AI, and how they are going to use it for their employees. In terms of security, that has not even been mentioned in any of the conversations that we have had because people are still behind on adopting generative AI," Walker said.

He added that because HR executives are not actively involved in the IT discussions that drive the agenda about which generative AI tools should be used inside an organization, IT departments are seizing the opportunity to decide which tools get selected.

"Because of HR's reluctance to jump in, more than likely, HR departments are going to be left with adopting whatever tools the IT department wants. In some instances, those tools are not going to be what HR would have liked to have had," Walker said.

Hewitt predicted that generative AI will force HR professionals to get more involved in communicating company values around the use of the technology as their organizations implement policies to protect sensitive company information and make sure the company isn't using generative AI text, videos or photos that perpetuate bias.  

"Displaying those up-front values around generative AI is going to be a really big area where CHROs are going to play more of a role," Hewitt said.

Nicole Lewis is a freelance journalist based in Miami.
