Organizations are more vulnerable than ever to the increasingly sophisticated tactics of cybercriminals looking to steal sensitive company data. Deepfakes, social engineering attacks, and stolen employee identities have cost organizations millions, undermined their operations, and shaken the confidence of top leaders.
The growing volume and persistence of these attacks coincide with a continuing shortage of cybersecurity professionals in many industries, just as cybercriminals devise ever-more-innovative ways to target their favorite attack vector: human nature.
To bolster the efforts of overworked cybersecurity teams and supplement training for employees, some organizations have turned to chatbots and real-time coaching tools. These artificial intelligence tools provide learning at the moment of need, deliver fast and consistent responses to employee questions about cybersecurity issues, and collect and analyze data about areas of greatest vulnerability within organizations.
Cybercriminals regularly target weaknesses in computer system defenses or security protocols, but just as many prey on employees’ lax data security practices, emotions, and knowledge gaps. Bad actors relentlessly try to trick workers into clicking ever-more-believable phishing emails, or impersonate executives and colleagues to persuade employees to transfer money or sensitive data. Sometimes workers open themselves up to cybercrime simply by routinely sending unencrypted information over unsecured channels such as email.
A 2024 global study on cybercrime from Verizon Business found that two-thirds of data breaches (68%) involve “nonmalicious” human actions, and the top three reasons for cybersecurity incidents are miscellaneous errors, social engineering attacks, and system intrusion.
‘Shadow IT’ Nightmares
Information technology (IT) executives are also having more sleepless nights because of the explosive growth of “shadow IT” in their organizations: employees’ unauthorized use of apps or tools. Generative AI (GenAI) tools such as ChatGPT represent the fastest-growing form of shadow IT, bringing significant new security concerns even as they may boost productivity or efficiency. Using GenAI outside of IT’s governance raises the risk of employees leaking sensitive company data, because these shadow AI tools and their plug-ins often lack the needed security controls.
A 2024 study from Alteryx found that 80% of organizations said data security and privacy—not inaccuracies, “hallucinations,” or bias—are their top concerns when scaling the use of GenAI tools in the workplace.
Making GenAI Head of Cybersecurity
At the professional networking site LinkedIn, the cybersecurity team set out to improve cybersecurity education for employees, and to take some service support duties off its own plate in the process, by creating a new chatbot.
Geoff Belknap, chief information security officer for LinkedIn, said the GenAI-infused chatbot is designed to answer employees’ frequently asked questions about the company’s data security policies and practices.
“Many of those questions are about what employees are or are not allowed to do in terms of cybersecurity and how they can take actions in the most secure way possible,” Belknap said. “People here are very aware that they are the trusted custodians for the personal information of millions of people, and employees operating in a good security culture want to be clear on what the guardrails are.”
The chatbot is built on a large language model, trained on LinkedIn’s policies and standards, and has the ability to answer a range of questions. “It’s also designed to answer what some might consider very basic questions,” Belknap said. “But it’s important to us that employees don’t feel guilty, silly, or small for asking any question related to data security.”
The chatbot is used by the workforce in myriad ways. For example, a LinkedIn employee might be collecting sensitive data for performance metrics or using recently installed software for the first time and ask the chatbot, “How am I required to handle this data?” or “Am I allowed to use this kind of software, and what are the security requirements involved in that?”
Belknap said the chatbot not only answers questions around the clock but also provides faster and more consistent responses to employee queries. “What can happen when you’re not using chatbots like this is employees might get somewhat different answers from human engineers about their security questions,” he said. “But with the AI model we’re using, those answers are always stable and consistent.”
GenAI also turbocharges the capabilities of standard chatbots, Belknap said, allowing the technology to gather more contextual cues about the employees posing questions, such as what team they work on, to deliver more accurate responses.
“The use of GenAI can help identify and parse different meanings in words employees use during conversations,” he said. “It brings a more human-like response to questions than earlier chatbot versions.”
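Conceptually, the flow Belknap describes, combining an employee’s question, relevant policy excerpts, and contextual cues such as the person’s team into a single prompt for a large language model, might look something like the minimal sketch below. The names used here (EmployeeContext, build_prompt, ask_llm) are illustrative assumptions, not LinkedIn’s actual code.

from dataclasses import dataclass

@dataclass
class EmployeeContext:
    team: str   # e.g., "Sales Operations"; a contextual cue the bot can use
    role: str   # e.g., "Data Analyst"

def ask_llm(prompt: str) -> str:
    # Placeholder: route the prompt to whatever approved LLM endpoint the organization uses.
    raise NotImplementedError("Connect this to an approved large language model API")

def build_prompt(question: str, ctx: EmployeeContext, policy_excerpts: list[str]) -> str:
    """Combine the employee's question, their context, and relevant policy text."""
    policies = "\n".join(f"- {p}" for p in policy_excerpts)
    return (
        "You are an internal data-security assistant. Answer only from the policy "
        "excerpts below; if unsure, direct the employee to the security team.\n\n"
        f"Employee team: {ctx.team}\nEmployee role: {ctx.role}\n\n"
        f"Policy excerpts:\n{policies}\n\n"
        f"Question: {question}\nAnswer:"
    )

def answer_question(question: str, ctx: EmployeeContext, policy_excerpts: list[str]) -> str:
    # Contextual cues travel with the question so the model can tailor its answer.
    return ask_llm(build_prompt(question, ctx, policy_excerpts))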
Despite those advanced capabilities, quality control is still needed to ensure the chatbot is providing consistent and accurate responses that satisfy employees. To that end, Belknap said LinkedIn uses a feedback-based system with human backups.
If a worker asks the chatbot a cybersecurity-related question in a Slack or Microsoft Teams channel, for example, there is usually a human available through the channel as well to respond if needed. “The chatbot will try to answer the question first, like your front-line help desk,” Belknap said. “The employee can then give the bot’s response a thumbs-up or -down based on their satisfaction with the answer.”
If the employee isn’t satisfied, a human is notified and asked to further engage with the person to answer the question, with the chatbot interaction used to help fine-tune the AI model as needed.
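In rough terms, that feedback loop could be sketched as follows: the bot answers first, the employee rates the answer, and a thumbs-down brings in a human while the exchange is logged to help refine the model. All function names are hypothetical placeholders rather than LinkedIn’s real system.

from typing import Callable

def handle_security_question(
    question: str,
    bot_answer: Callable[[str], str],                 # the chatbot's front-line response
    get_feedback: Callable[[], str],                  # returns "up" or "down" from the employee
    notify_human: Callable[[str, str], None],         # asks an engineer to follow up
    log_for_tuning: Callable[[str, str, str], None],  # stores the exchange to refine the model
) -> str:
    answer = bot_answer(question)             # the bot tries first, like a front-line help desk
    rating = get_feedback()
    if rating == "down":
        notify_human(question, answer)        # an unsatisfied employee gets a human follow-up
    log_for_tuning(question, answer, rating)  # every interaction can help fine-tune the model
    return answer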
“It’s important to remember chatbots aren’t driverless cars,” Belknap said. “Humans still need to work with and alongside the technology.”
Real-Time Data Security Coaching
Training employees to adopt more responsible data security habits is laudable but also has limits. If that training is infrequent or not tailored to real-world, evolving cyberthreats, employees often fall victim to the “forgetting curve” and any knowledge gained through online or in-person courses quickly dissipates.
As an alternative to that scenario, some HR and IT leaders are using options such as real-time coaching tools to help change risky employee security behaviors. One such tool comes from KnowBe4, a cybersecurity education firm in Clearwater, Fla. The provider’s Security Coach tool immediately delivers tips and advice to employees when they take actions such as visiting malicious websites or clicking on links in suspicious emails or texts.
The tool works by connecting through application programming interfaces (APIs) to security products embedded in a company’s technology stack from the likes of CrowdStrike, Microsoft, Netskope, Zscaler, or other security providers. When a risky employee behavior occurs, an alert is generated and then analyzed by the Security Coach, which determines which threats provide the best opportunities to coach users on their behavior.
If coaching is warranted, the tool sends real-time security tips to the employee through email, Slack, or Microsoft Teams channels. The message might say, “This is a security risk, and here’s why,” with advice on how to handle the situation going forward.
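Based on that description, the alert-to-nudge flow might resemble the sketch below, in which a vendor alert is checked against detection rules and, if coachable, triggers a real-time tip to the employee. The event names, rules, and functions are illustrative assumptions, not KnowBe4’s actual implementation.

from dataclasses import dataclass
from typing import Callable

@dataclass
class SecurityAlert:
    user_email: str
    event_type: str        # e.g., "visited_malicious_site" or "clicked_suspicious_link"
    source_product: str    # e.g., "CrowdStrike", "Netskope", "Zscaler"

# Detection rules: which events warrant a coaching nudge, and what the tip says.
COACHING_RULES = {
    "visited_malicious_site": "This site was flagged as malicious, and here is why that is a risk...",
    "clicked_suspicious_link": "That link came from a suspicious message; here is what to check next time...",
}

def process_alert(alert: SecurityAlert, send_tip: Callable[[str, str], None]) -> bool:
    """Decide whether an alert is coachable and, if so, send a real-time tip."""
    tip = COACHING_RULES.get(alert.event_type)
    if tip is None:
        return False                    # not a coachable behavior; no nudge is sent
    send_tip(alert.user_email, tip)     # deliver via email, Slack, or Teams
    return True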
“The idea is to nudge users into the right behaviors and data security mindsets to support the security culture an organization is trying to create,” said Jeffrey Gelinas, a product manager at KnowBe4. “The detection rules built into our coaching tool help determine whether a user should receive a coaching notification for an action. In some cases, they might really be targeted by a bad actor.”
Gelinas said that, unlike the infrequent, “one-size-fits-all” cybersecurity training many organizations rely on, such performance support tools allow HR and IT to work together to create more targeted security training for employees.
“You can be more specific in the coaching employees receive because the data collected by the tools shows you, for example, which cyberthreats some departments are facing more than others, what risky behaviors some teams are using more than others, and so on,” Gelinas said.
Experts say such data gathering also can provide additional justification for investment in cybersecurity education initiatives.
Dave Zielinski is principal of Skiwood Communications, a business writing and editing company in Minneapolis.