Over the last year, discussions about AI-enabled cybercrime have shifted from speculation about impacts to real-world observations. Malicious actors continue to find ways to harness AI to their advantage, increasing the volume and velocity of threats and keeping the cybersecurity community on its toes.
For defenders, awareness of AI’s impact on the threat landscape is vital, as is understanding how to counter the shifts this new technology is driving. Gaining hands-on practice mitigating AI-focused threats is the next crucial step in fighting increasingly sophisticated cybercrime operations.
Fewer barriers, more threats
By leveraging AI, cybercriminals can generate customized phishing emails with context-aware personalization, create convincingly fake voices or videos to power social engineering attacks, and streamline reconnaissance efforts. While those all represent real threat vectors, one of the most significant observations to emerge from the group’s tabletop exercise (TTX) was that AI lowers the barrier to entry for novice and experienced threat actors alike.
AI is making it easier for existing criminals to transition into cybercrime, giving individuals with little to no knowledge of coding or hacking tools the ability to craft malicious code with minimal effort. By reducing this technical barrier, AI “supercharges” cybercriminals’ capabilities and makes cybercrime more accessible.
5 AI-enabled cybercrime trends to watch
Based on these observations, here are five key trends in AI-powered cybercrime that are expected to become prominent:
1. The rise of deepfakes and social engineering:
Deepfake technology, once out of reach for inexperienced cybercriminals, is now widely accessible. For example, malicious actors can clone a voice using publicly available YouTube footage and an inexpensive subscription to a voice-cloning service. As AI-powered editing tools become broadly available, we will see the volume of impersonation attacks increase.
Additionally, we expect cybercriminals to offer “deepfake generation on demand,” turning voice and video impersonation into an as-a-service model, much as Ransomware-as-a-Service has evolved.
2. Hyper-targeted phishing:
Phishing today is increasingly localized, personalized, and persuasive. Using AI to aid their reconnaissance efforts, threat actors will create context-rich, culturally relevant phishing communications that are tailored to local languages and, in some cases, reference region-specific holidays, customs, or events. As a result, these communications often appear legitimate, potentially fooling even the most cyber-aware recipient.
3. Agentic AI for malware and reconnaissance:
The use of agentic AI among cybercriminals will evolve quickly. For example, a cybercrime group might manage multiple AI agents, each executing one stage of the cyber kill chain faster than any human could. In the future, we anticipate adversaries using AI agents for a multitude of activities, such as deploying them within botnets to actively discover Common Vulnerabilities and Exposures (CVEs).
4. AI-driven identities to augment insider threats:
During the TTX, the group discussed a scenario in which attackers create and use AI-driven identities to apply for remote jobs at technology companies, passing standard background checks with fabricated employment histories. As malicious actors explore this use of AI, organizations will need to re-examine and refresh their hiring and vetting processes.
5. Automated vulnerability scanning and exploitation:
While cybercriminals today use AI primarily for reconnaissance and initial intrusion, we anticipate that malicious actors will soon harness AI to discover and exploit vulnerabilities directly. AI-enabled tools can scan large codebases in a fraction of the time a human would need, identifying zero-day and N-day vulnerabilities and then automatically exploiting them. Defenders can apply the same kind of automation to their own code, as sketched below.
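As a defensive counterpart to this trend, a security team can run automated static analysis over its own repositories and triage the findings before attackers find the same weaknesses. The following is a minimal sketch in Python, assuming the open-source Bandit scanner is installed (pip install bandit); the repository path and the severity-ranking logic are illustrative choices, not a prescribed workflow.

```python
import json
import subprocess
import sys

# Bandit reports severities as strings; rank them for triage ordering.
SEVERITY_RANK = {"HIGH": 0, "MEDIUM": 1, "LOW": 2}

def scan_repo(path: str) -> list:
    """Run Bandit recursively over a repository and return its findings."""
    # -r: recurse into the tree; -f json: machine-readable output.
    result = subprocess.run(
        ["bandit", "-r", path, "-f", "json"],
        capture_output=True,
        text=True,
    )
    report = json.loads(result.stdout)
    return report.get("results", [])

if __name__ == "__main__":
    repo = sys.argv[1] if len(sys.argv) > 1 else "."
    findings = scan_repo(repo)
    # Surface the highest-severity issues first so they can be patched
    # before automated exploitation tools discover them.
    for issue in sorted(findings, key=lambda f: SEVERITY_RANK.get(f["issue_severity"], 3)):
        print(f'{issue["issue_severity"]:>6}  '
              f'{issue["filename"]}:{issue["line_number"]}  '
              f'{issue["issue_text"]}')
```

In practice, a scan like this would run in CI on every commit, and the same pattern extends to other analyzers or AI-assisted code-review tools.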
In response to cybercriminals embracing AI, security teams must strengthen their organizations’ defenses by implementing the appropriate technologies and processes.
Defenders can use AI to protect their enterprises in numerous ways, such as harnessing the technology to rapidly analyze large volumes of data, identify anomalous patterns, and automate select incident response actions; a simple sketch of this idea follows.
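To make the anomaly-detection idea concrete, here is a minimal sketch in Python using scikit-learn’s IsolationForest. The feature choices (bytes sent and received, session duration, failed logins) and the synthetic baseline data are illustrative assumptions, not a production detection pipeline.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-session features extracted from network logs:
# [bytes_sent, bytes_received, session_duration_s, failed_logins]
rng = np.random.default_rng(seed=7)
baseline = rng.normal(loc=[5_000, 20_000, 120, 0.1],
                      scale=[1_000, 4_000, 30, 0.3],
                      size=(1_000, 4))

# Train an unsupervised model on known-good activity; contamination
# is the expected fraction of outliers in future data.
detector = IsolationForest(contamination=0.01, random_state=7)
detector.fit(baseline)

# Score new sessions: predict() returns -1 for outliers, 1 otherwise.
new_sessions = np.array([
    [5_200, 19_500, 115, 0],    # resembles baseline traffic
    [90_000, 400, 4_000, 25],   # exfiltration-like volume plus login failures
])
for features, label in zip(new_sessions, detector.predict(new_sessions)):
    verdict = "ANOMALY -> queue for incident response" if label == -1 else "ok"
    print(features, verdict)
```

In a real deployment, scores like these would feed an alerting pipeline rather than print statements, with analysts tuning the contamination rate to their environment.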
In addition, the rise in AI-driven cybercrime makes an enterprise-wide cybersecurity training and education program a crucial component of an effective risk management strategy. Employees are often on the front lines of social engineering and phishing attacks, so it is vital that every individual in an organization knows how to spot an attempted attack. As cybercriminals deliver increasingly context-aware communications, cyber hygiene and training become imperative.
By Derek Manky, Chief Security Strategist & Global VP Threat Intelligence at Fortinet