Deepfake Scams Expose Employers to Big Risks
Protecting against AI-generated fraud begins with awareness
In one of the latest and most brazen AI-generated scams, cybercriminals tricked an unwitting employee into handing over millions of dollars in company funds.
The thieves used deepfake technology—audiovisual content created with generative artificial intelligence (GenAI) that mimics a person’s voice and likeness—to stage a video call between the duped employee and imitations of the company’s chief financial officer and several other corporate executives. The impostors told him about a secret deal and directed him to initiate a series of bank transfers to different accounts totaling more than $25 million.
The scammed employee said he recognized everyone in the video conference—their faces, their voices, their office backgrounds—but it turns out that everyone he saw was a simulation.
“One of the biggest dangers of GenAI is the ability to create deepfakes that make it appear like someone said or did something when they actually didn’t,” said Perry Carpenter, chief evangelist and strategy officer at KnowBe4, a security awareness training firm in Clearwater, Fla. “Cybercriminals can synthesize deepfakes to spread disinformation, trick employees into revealing information and granting access to sensitive systems, conduct fraud, or even extort them.”
The quality and realistic nature of the fake and altered images have improved dramatically in recent years, said Kaarel Kotkas, founder and CEO at Veriff, an identity verification company in Tallinn, Estonia. “Fraud is getting more sophisticated, and tools fueled by AI are making fraud activity more accessible to even the less sophisticated bad actors,” he said.
Kotkas noted that a few of the more common deepfake methods include:
- Superimposing the image of a person’s face onto a photo or video of another person.
- Using lip-syncing technology in modified videos to match words in an alternate audio recording.
- Deep-learning models that generate deepfake videos from a single image.

In the case of the duped finance employee above, the scammers built fabricated likenesses of real people from photos and videos found online. They also replicated the executives’ voices using publicly available audio samples.
“Deepfake technology is incredibly convincing, which means businesses and their employees need to be educated on recognizing a deepfake and defending against it by heightening existing security,” Kotkas said. “Many organizations’ current hybrid work preference provides bad actors with even more opportunities to infiltrate companies. These scams are especially effective when deepfakes are used against an enterprise with disjointed and inconsistent identity management processes and poor cybersecurity.”
How to Guard Against Deepfakes
Protecting the organization from deepfake fraud begins with awareness that the threat exists. “By simply being aware of the potential damage deepfakes can do, organizations can educate their employees and partners on what to look out for and how to protect themselves,” Kotkas said.
Dave Walton, the AI practice group chair at law firm Fisher Phillips, outlined several steps organizations can take to mitigate the risk posed by deepfakes:
- Educate your employees about the existence and potential dangers of deepfakes. “Explain how deepfakes work, their potential impact on the organization, and the importance of staying vigilant,” he said. “Provide training about the ways to spot deepfakes, and foster a culture of skepticism, similar to the way that employees are now on guard for phishing emails.”
- Develop communication channels. “Encourage employees to speak up and promote a culture that supports open communication about questionable information and activity,” Walton said.
- Establish strong IT authentication measures for access to sensitive information, systems and accounts. “This may include multi-factor authentication, biometric verification or other secure methods to minimize the risk of unauthorized access,” he said. “There must also be failsafe measures and multiple levels of approval before allowing certain actions to occur, such as transferring money above a certain threshold amount.”
- Make sure your policies prohibit your own employees from creating deepfakes involving employer resources or data.
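Walton’s point about requiring multiple levels of approval before large transfers can be sketched as a simple policy check. This is a minimal in-memory illustration, not a real payments control; the threshold amount, the number of required approvers, and the role names are all hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical policy values for illustration only; an organization
# would set these in its own financial controls.
APPROVAL_THRESHOLD = 10_000  # transfers at or above this need extra sign-off
REQUIRED_APPROVALS = 2       # distinct approvers needed above the threshold

@dataclass
class TransferRequest:
    requester: str
    amount: float
    approvals: set = field(default_factory=set)

    def approve(self, approver: str) -> None:
        # A requester can never approve their own transfer.
        if approver != self.requester:
            self.approvals.add(approver)

    def is_authorized(self) -> bool:
        # Small transfers proceed; large ones need multiple
        # independent approvers before funds can move.
        if self.amount < APPROVAL_THRESHOLD:
            return True
        return len(self.approvals) >= REQUIRED_APPROVALS

# A $25 million request, as in the scam above, would be blocked
# until two people other than the requester signed off.
req = TransferRequest(requester="finance_clerk", amount=25_000_000)
print(req.is_authorized())    # False: no approvals yet
req.approve("finance_clerk")  # ignored: self-approval
req.approve("cfo")
req.approve("controller")
print(req.is_authorized())    # True: two independent approvals
```

Even a check this simple would have forced the duped employee to involve colleagues outside the fraudulent video call, which is exactly the out-of-band friction Walton recommends.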
Kotkas added that employers could deploy emerging tools such as AI-powered deepfake detection technology to help identify and flag potential threats. He also advised conducting comprehensive checks on identity documents, a biometric analysis of supplied photographic and video images, and an examination of key device attributes.
What to Do if Victimized by a Deepfake
If your organization falls victim to a deepfake scam, act quickly and contact your data security attorneys, Walton said. “It is critical to engage counsel who have contacts with forensics experts and law enforcement to appropriately preserve and collect any information that could be used to trace the scammers,” he said. “Acting quickly is important not just for the obvious reasons but also because electronic logs may only go back a certain number of days, and every day from the date of discovery is a day of lost data.”
Walton counseled employers to resist the urge to conduct an internal investigation. “Well-intended searches of communications, emails, messages and data entry could compromise the information,” he said.
Attorneys also can help with damage control, Walton added, noting: “If you determine that protected or confidential information was accessed during the scam, you may be required to appropriately notify those affected.”