

Internal Generative AI Models: More Value, Less Risk for HR


When software engineers at LinkedIn decided to start experimenting with generative AI (GenAI), they wanted to do it in a way that sidestepped some of the well-known obstacles associated with large language models (LLMs) such as ChatGPT. Namely, they were concerned that using a public LLM trained on the open internet could leak proprietary LinkedIn data, violate copyright law, or produce inaccurate, biased or fabricated information.

So, like a growing number of organizations, LinkedIn decided to bring generative AI in-house, creating an internal "playground" grounded in LinkedIn's domain-specific data and knowledge. The playground lets not just engineers but also product developers, managers and others experiment with generative AI in a safe space, with few of the risks a public LLM presents.

Xavier Amatriain, vice president of engineering and product AI strategy at LinkedIn, said the development led to the creation of new products, including a messaging feature in LinkedIn Recruiter that uses generative AI to help save time and improve messaging when communicating with job candidates.

"The playground and domain-specific generative AI allows us to shorten the distance between ideas and building prototypes with large language models," Amatriain said. "It's not just developers but product managers and domain experts who can combine proprietary LinkedIn data and knowledge with generative AI in a safe space for exploration."

Technology analysts say the benefits of creating these internal LLMs can be many. Such models can perform better than public versions because they're trained for fewer overall tasks and use cases specific to an organization—including HR, recruiting and learning tasks—making their outputs more accurate and reliable.

These models also help organizations avoid the data security and legal issues that can arise when using public versions of LLMs because they operate within a company's existing security perimeter and don't use the intellectual property of others.

"We can use generative AI without exposing any of our proprietary data to risk," Amatriain said, since that data is never fed into a public LLM. "The playground enables quick exploration of various generative AI models and a lot of flexibility, and there have been hundreds of ideas created within it."

Amatriain said the playground helped facilitate one of the most successful hackathons in LinkedIn history, with thousands of employees participating in creating product ideas and innovative solutions.

Creating Domain-Specific Generative AI Models

Most organizations use one of two methods to create internal LLMs: they either "fine-tune" or "prompt-tune" an existing public LLM that has been pre-trained on the internet, adapting it for use with their own domain-specific knowledge. Either customization process avoids the exorbitant cost of building such an internal model from scratch.

A foundational public model like ChatGPT is already pre-trained to understand human speech and excels at natural language processing, so by customizing it and then placing it behind a firewall, companies can reap the benefits of generative AI while avoiding many of the risks.
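As a loose illustration of the "customize it, then keep it behind the firewall" pattern, the sketch below shows how an internal application might ground a model's prompt in proprietary documents that never leave the company's security perimeter. All function names, document text and prompt wording here are hypothetical, not LinkedIn's actual implementation:

```python
# Hypothetical sketch: grounding a prompt in internal, domain-specific
# documents so proprietary data never leaves the company's perimeter.
# The internally hosted LLM would receive this prompt; no data is sent
# to a public model. Names and policy text are illustrative only.

def build_domain_prompt(question: str, internal_docs: list[str]) -> str:
    """Assemble a prompt that injects proprietary context for an
    internally hosted LLM."""
    context = "\n---\n".join(internal_docs)
    return (
        "Answer using ONLY the internal context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

# Example: an HR benefits question grounded in fictional policy text.
docs = [
    "PTO policy: employees accrue 1.5 days per month.",
    "Parental leave: 16 weeks paid for all new parents.",
]
prompt = build_domain_prompt("How much PTO do employees accrue?", docs)
print(prompt)
```

Because the proprietary context is assembled and consumed entirely inside the firewall, this pattern delivers the "safe space for exploration" the article describes.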

"You're seeing more organizations start with open-source generative AI models and fine-tune or prompt-tune them just for use with their own data and domain knowledge," said Ravin Jesuthasan, a senior partner and global leader for transformation services at Mercer.

Fine-tuning a public LLM to add domain-specific content isn't inexpensive, experts say, and it requires having the right data science expertise on staff. Prompt-tuning, by contrast, often requires fewer computing resources, though it can present more technical challenges than fine-tuning.
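The resource gap between the two methods is easy to see in rough numbers: fine-tuning updates every weight in the model, while prompt-tuning freezes the base model and trains only a small set of learned "soft prompt" vectors. A back-of-the-envelope comparison, using hypothetical figures for a mid-sized 7-billion-parameter model (the dimensions are assumptions for illustration, not measurements of any specific model):

```python
# Back-of-the-envelope comparison of trainable parameters in
# fine-tuning vs. prompt-tuning. All figures are illustrative.

model_params = 7_000_000_000   # total weights in a hypothetical 7B model
hidden_size = 4096             # embedding dimension, typical at this scale
prompt_tokens = 100            # length of the learned soft prompt

# Fine-tuning updates every model weight.
fine_tune_trainable = model_params

# Prompt-tuning trains only the soft-prompt embeddings.
prompt_tune_trainable = prompt_tokens * hidden_size

ratio = fine_tune_trainable / prompt_tune_trainable
print(f"Fine-tuning trains   {fine_tune_trainable:,} parameters")
print(f"Prompt-tuning trains {prompt_tune_trainable:,} parameters")
print(f"Roughly {ratio:,.0f}x fewer trainable parameters with prompt-tuning")
```

Even with generous assumptions, prompt-tuning trains orders of magnitude fewer parameters, which is why it typically demands less compute and less internal data, at the cost of less control over the model's behavior.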

Greg Pridgeon, a senior analyst with Forrester who specializes in human resource technologies, said he's seen growing interest and investment in both fine-tuning and prompt-tuning public LLMs for specific organizational environments. "There also are a growing number of vendors offering those products and services," Pridgeon said. "It requires less data internally and allows you to adjust the parameters of a public model to meet your specific needs."

Samuel Hamway, a research analyst specializing in enterprise applications for Miami-based Nucleus Research, said customizing public LLMs with domain-specific knowledge solves a problem that keeps many leaders up at night: the accuracy of model outputs.

"It allows companies to be much more confident in areas of critical decision-making," Hamway said. "Achieving 90 percent accuracy isn't good enough in many organizational contexts, especially when LLMs are being used to replace the work of data analysts. Companies can have a hard time trusting the outputs and answers produced by public LLMs."

Internal LLMs also help companies avoid the legal concerns that come with using models trained on vast amounts of public information on the internet, including whether use of that data falls under the "fair use" provision of copyright law.

"Given there are still open questions about the legal risks associated with public large language models, more organizations are turning to using models in their own private clouds using only their data," Jesuthasan said.

He said he's seeing more HR use cases of domain-specific generative AI models for tasks related to employee benefits, recruiting, and learning and development.

"Internal LLMs are being used to power bots and AI that provide employees with decision support in understanding and choosing benefits," he said. "They're also being used to save recruiters time in writing job descriptions. Within learning departments, the technology is being used to create personalized learning paths as well as give employees access to chatbots that serve as intelligent tutors."

Pridgeon said he's seeing similar use cases within HR along with some emerging ones. "We're seeing these internal LLMs used to generate first-contact emails in the recruiting process, where recruiters send personalized messages as part of initial outreach through an applicant tracking system (ATS) to potential job candidates," he said.

Generative AI also has the potential to save companies vast amounts of time in creating benefits summary plan descriptions, Pridgeon said. "These are often 20 to 30 pages of content that have to be updated annually to capture policy changes as well as any new benefits an organization is offering to employees," he said. "Generative AI could make that process much faster and easier."

Dave Zielinski is principal of Skiwood Communications, a business writing and editing company in Minneapolis.
