How AI Stunts On-the-Job Learning

By Dave Zielinski July 15, 2020

The use of artificial intelligence (AI) in human resources has created new cost efficiencies, freed up professionals for more strategic work and often delivered improved service to HR's clients. But experts say AI also may have unintended consequences for a linchpin of worker productivity and performance: on-the-job learning.

New research has found that many of the tasks new employees have historically been trained on are being "automated away" by AI, ultimately reducing workers' ability to perform higher-order tasks and to think critically. Matt Beane, an assistant professor at the University of California, Santa Barbara, has conducted extensive field research into how the use of AI, machine learning and robotics affects workforce training. Speaking at the recent EmTech Next conference co-sponsored by MIT Technology Review and Harvard Business Review, he described how the growing adoption of AI is displacing on-the-job learning.

Beane found the trend in industries such as health care, law enforcement and finance. Intelligent technology often blocks traditional on-the-job learning, his research found, keeping employees from acquiring the hard-won skills and knowledge that are key to their future development.

"The amazing thing about our new technologies is that they learn by doing," Beane told the audience. "But there is no law that says that has to come at the expense of employee learning."

Risk of Displacing On-the-Job Learning

In a study about how surgeons use robotics in the operating room, Beane found that senior surgeons often limited the role of resident-trainees in learning how to use the new tools. The time-honored "see one, do one, teach one" approach often took a back seat. Across the top teaching hospitals Beane studied, the time on task for resident physicians-in-training during robotics-assisted surgery dropped from about three hours to 15 minutes. "Yet after a four- or five-year residency, these physicians are still legally empowered to use that robotics tool," Beane said.

Brian Kropp, group vice president specializing in human resources for the global research and advisory firm Gartner, said many of the tasks new employees traditionally have been trained on are no longer available for teaching purposes because AI now handles them.

"AI works best on repeatable activities that have clear outcomes," Kropp said. "But the places where on-the-job learning is the most effective also are those tasks that are repeatable and where there is a clear right or wrong answer."

Consider call center work, Kropp said, where many workers have repeatable interactions with customers. "As call center employees handle more of these calls, they are able to spot patterns about how to solve specific problems and how to address specific customers," Kropp said. "This early repetition and feedback create learning."

But the use of AI to replace human interaction for many of those call center tasks means employees have fewer early learning opportunities—and the problems that do reach them tend to be less routine and more complicated, Kropp said.

"Because of the use of AI, the first tasks those call center employees now have are the outliers, the difficult situations and challenges where there is not an obvious right answer," Kropp said. The result is that call center employees no longer get the repetition they need to do the job well, learn and evolve, he said.

The Rise of 'Shadow Learners'

While conducting his hospital research, Beane discovered some surgical residents managed to become effective at robotics-assisted surgery despite the lack of good formal training protocols. He dubbed this group "shadow learners." These residents sought deep early exposure to robotics surgery at the expense of a generalist education and regularly viewed surgical videos and simulations to build their knowledge.

"Shadow learners did these things between 50 and 100 times more than their normal resident counterparts," Beane said. "The introduction of new technologies often means workers need more new skills than ever, and the research found it's not the time to compromise on on-the-job learning."

Beane also studied the use of on-the-job learning in other industries, eventually gathering 25 different datasets. "I found this kind of shadow learning was evident across radically different kinds of work involving different kinds of intelligent technologies."

Across various industries, Beane discovered common barriers to successful on-the-job learning in the presence of AI, as well as creative solutions shadow learners used to get around those obstacles. Here are three:

AI can cause trainees to move away from their "learning edge." In an investment bank Beane studied, junior bankers used an AI-enabled system to run reports for analyzing bank deals and then handed off those reports to senior bankers for more-complex analysis. That AI-enabled system replaced a previous process where junior and senior bankers worked side by side on that higher-order analysis.

To make up for the loss of that on-the-job training, some junior and senior bankers flouted efficiency pressures to once again collaborate during later, advanced stages of analysis, Beane said. "The junior bankers in this scenario could make consequential mistakes," he said, but the senior bankers believed the risk was worth it for the teaching value.

AI often forces workers to master new and old learning methods simultaneously. University professors in Beane's research had to learn new coding and design skills when shifting to teaching in AI-enabled digital classrooms. "But at the same time, they had to keep their old-school classroom teaching skills sharp, too," Beane said.

To address that challenge, some professors curated new digital solutions. For example, some shared their curriculum with colleagues from other schools online, even though their universities tended to discourage that practice for fear it would dilute their intellectual property or brand.

Not all old methods of learning will work with new technologies. Beane's research found that police chiefs in some departments limited formal training in a predictive policing system because they believed cops would learn best on the beat.

By contrast, some "outlier" police chiefs took a different approach by redesigning job incentives and roles. For example, some chiefs rewarded cops for walking algorithmically generated beats—beats where AI is used to help predict potential crime—even if nothing happened when the cops arrived. "That ran counter to the normal rewards for good beat work, which is often tied to arrests or filling out field interview cards on suspicious people," Beane said.

Best Practices of Shadow Learning

Beane identified four best practices of shadow learning from his 25 datasets:

  • Ensuring that employees have ample opportunities to struggle with and learn from real work.
  • Tapping front-line know-how by creating channels for those workers to "teach up" in an organization.
  • Restructuring roles and incentives to help learners specialize.
  • Curating solutions by building searchable, crowdsourced online knowledge and skills repositories.

"None of these four approaches have to break any rules or seem inappropriate, meaning organizations can begin implementing them starting today," Beane said.

Dave Zielinski is a freelance business writer and editor in Minneapolis.
