Using artificial intelligence tools to clone voices has introduced an entirely new realm of risk for both companies and individuals.
Generative AI (GAI) has become a catalyst for change, introducing new ways of conducting business, managing data, gathering insights, and collating content. As an intelligent and highly capable technology, it has become a powerful tool in the business toolbox, providing rapid analysis, support, and functionality.
Regrettably, the immense potential of GAI is being exploited by cybercriminals, who have harnessed it for malicious purposes, such as creating convincing deepfakes and perpetrating unnervingly realistic voice scams. In 2019, the technology was used to impersonate the voice of the CEO of an energy company in the UK to extort $243,000. In 2021, a company in Hong Kong was defrauded of $35 million. These attacks are not aimed solely at large corporations; individuals are also targeted. Voice-cloning scams directed at individuals, including kidnapping hoaxes, fake requests for money from friends or family, and bogus emergency calls, are notoriously difficult to detect.
“The scammers are incredibly clever,” says Stephen Osler, Co-Founder and Business Development Director at Nclose. “Using readily available tools online, scammers can create realistic conversations that mimic the voice of a specific individual using just a few seconds of recorded audio. While they have already targeted individuals making purchases on platforms like Gumtree or Bob Shop, as well as engaged in fake kidnapping scams, they are now expanding their operations to target high-level executives with C-Suite scams.”
It’s easy to recognize the potential for cybercriminals, considering the number of people who use voice notes to quickly convey instructions to team members or arrange payments. Busy executives frequently use platforms like WhatsApp to message others while driving or rushing between meetings, making it difficult, if not impossible, for employees to discern that the message is fake.
“An IT administrator might receive a voice note from their manager, requesting a password reset for their access to O365,” explains Osler. “Unaware of the malicious intent, the administrator complies, thinking it’s a legitimate instruction. However, in reality, they unintentionally provide privileged credentials to a threat actor. This information can then be exploited to gain unauthorized access to critical business infrastructure and potentially deploy ransomware.”
And where do these voice clips come from? They originate from voice notes sent via platforms like WhatsApp or Facebook Messenger, social media posts, and phone calls. Scammers can exploit various methods, such as recording calls with CEOs or extracting voice samples from videos posted on individuals’ online profiles, to create deepfakes. Cybercriminals have many techniques at their disposal to capture the distinctive voice identity of anyone who has shared their life online. They then use AI technology to manipulate these recordings, making it appear as though the person is speaking live during a call or voice note.
Deepfake technology will only become more proficient at deceiving victims and breaching organizations. To defend against this, organizations must ensure they have robust processes and procedures in place that require multiple levels of authentication, particularly for financial or credential-related transactions.
Companies should establish a clearly defined formal process for all sensitive transactions. A voice note from the CIO or CISO alone should never suffice to change a password or authorize a monetary transaction; accepting one at face value can hand attackers access to the business. It is also crucial to educate employees and end-users about the evolving risks associated with these threats. If they are aware of this type of scam, they are more likely to take a moment to verify the instruction before making a costly mistake.
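The multi-level verification policy described above can be made concrete in code. The sketch below is purely illustrative and uses hypothetical channel names ("callback" for a phone call placed to a known number, "ticket" for a signed-off service-desk entry); it is not a real Nclose tool, just a minimal model of the rule that a voice note alone never authorizes a sensitive action.

```python
from dataclasses import dataclass, field

# Policy assumption for this sketch: two independent out-of-band
# confirmations are required before any high-risk action runs.
REQUIRED_CHANNELS = {"callback", "ticket"}

@dataclass
class SensitiveRequest:
    """A high-risk action (password reset, payment) awaiting approval."""
    action: str
    origin: str                      # how the request arrived, e.g. "voice note"
    confirmations: set = field(default_factory=set)

def confirm(req: SensitiveRequest, channel: str) -> None:
    """Record a confirmation from one verification channel."""
    if channel not in REQUIRED_CHANNELS:
        raise ValueError(f"unknown verification channel: {channel}")
    req.confirmations.add(channel)

def may_execute(req: SensitiveRequest) -> bool:
    """True only when every required channel has independently confirmed.
    The origin of the request (a voice note, an email) counts for nothing."""
    return REQUIRED_CHANNELS <= req.confirmations

req = SensitiveRequest(action="reset O365 password", origin="voice note")
assert not may_execute(req)   # the voice note by itself is rejected
confirm(req, "callback")
assert not may_execute(req)   # still one channel short
confirm(req, "ticket")
assert may_execute(req)       # both independent channels confirmed
```

The point of the design is that authorization is a property of the confirmations collected, not of how convincing the original message sounded.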
“Always ensure that any voice note or instruction you receive is from a trusted source. It is important to double-check and confirm that the communication is indeed from the intended person,” concludes Osler. “Cultivate an inquisitive mindset and question the source, whether it is a call, email, or message. By doing so, both organizations and individuals can be better prepared to identify and protect themselves against potential voice-cloning scams.”
By Stephen Osler, Co-Founder and Business Development Director at Nclose