AI Is Reshaping Online Crime
Hackers, criminals and spies are rapidly adopting Artificial Intelligence (AI), and there is growing evidence of a substantial acceleration in AI-enabled crime. This includes the use of AI tools for financial crime, phishing, distributed denial of service (DDoS) attacks, child sexual abuse material (CSAM) and romance scams.
In all these areas, criminal use of AI is already boosting revenue generation and exacerbating financial and personal harms. Scammers and social engineers, the people in hacking operations who impersonate others or write persuasive phishing emails, have been using large language models (LLMs) to appear more convincing.
In particular, Russian hackers are adding a new twist to the massive volume of phishing emails sent to Ukrainians. In one such exploit, hackers now include an attachment containing an AI-enabled program which, once installed, automatically searches the victim's computer for sensitive files that can be exfiltrated back to the sender. The campaign, first uncovered by the government of Ukraine, is the first known instance of Russian intelligence being caught building malicious code with LLMs, the technology behind the AI chatbots that have become ubiquitous in the corporate world.
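To make the mechanics concrete, the core of the document-harvesting behaviour described above is strikingly simple: recursively walk a filesystem and collect files whose extensions suggest sensitive content. The sketch below is illustrative only, reconstructed from the behaviour in public reporting rather than from the actual malware; the target directory and extension list are assumptions, and the exfiltration step is deliberately omitted. Its simplicity is the point: this is exactly the kind of routine an LLM can generate on demand, and the bulk file enumeration it performs is a behaviour defenders can monitor for.

```python
import os
from pathlib import Path

# Extensions typically associated with sensitive documents (illustrative list).
SENSITIVE_EXTENSIONS = {".pdf", ".docx", ".xlsx", ".txt"}

def find_candidate_documents(root: str) -> list[Path]:
    """Recursively enumerate files under `root` with extensions of interest.

    This mirrors only the file-discovery step described in public reporting;
    the exfiltration step is deliberately omitted.
    """
    matches = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = Path(dirpath) / name
            if path.suffix.lower() in SENSITIVE_EXTENSIONS:
                matches.append(path)
    return matches

if __name__ == "__main__":
    # Sweeping a single folder shows how quickly such enumeration runs, and
    # why sudden bulk document access is a useful detection signal.
    for doc in find_candidate_documents(os.path.expanduser("~/Documents")):
        print(doc)
```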
Hackers of seemingly every stripe, from cyber criminals and spies to researchers and corporate defenders, have started incorporating AI tools into their work.
LLMs, like ChatGPT, are still error-prone. But the technology has become remarkably adept at processing language instructions: translating plain language into computer code, or identifying and summarising documents. It has not yet revolutionised hacking by turning complete novices into experts, nor has it allowed would-be cyber terrorists to shut down the electric grid. But it is making skilled hackers better and faster. Cyber security firms and researchers are using AI now too, feeding an escalating cat-and-mouse game between offensive hackers, who find and exploit software flaws, and the defenders who try to fix them first.
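As an illustration of that augmentation, the snippet below asks a hosted LLM to summarise a document via the OpenAI Python SDK. It is a minimal sketch, not drawn from any of the reporting above: the model name is an assumption, and the client reads an OPENAI_API_KEY from the environment. The same few lines work equally well for turning a plain-language request into code, which is precisely why the technique is as useful to attackers as to defenders.

```python
from openai import OpenAI

# The client reads the OPENAI_API_KEY environment variable by default.
client = OpenAI()

def summarise(document: str) -> str:
    """Ask a hosted LLM for a short summary of `document`."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; substitute whatever is available
        messages=[
            {"role": "system", "content": "Summarise the user's document in three sentences."},
            {"role": "user", "content": document},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    with open("report.txt") as f:  # hypothetical input file
        print(summarise(f.read()))
```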
The shift is only starting to catch up with the hype that has permeated the cyber security and AI industries for years, especially since ChatGPT was introduced to the public in 2022. AI tools have not always proved effective, and some cyber security researchers have complained about would-be hackers submitting fake, AI-generated vulnerability findings.
Hackers and cyber security professionals have not settled whether AI will ultimately help attackers or defenders more. But at the moment, defence appears to be winning.
That trend may not hold as the technology evolves, however. One reason is that there is, so far, no free-to-use automated hacking tool that incorporates AI. Conventional penetration testers, programs that automatically probe systems for flaws, are already widely available online, nominally as tools for testing defences but routinely repurposed by criminal hackers.
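For context, the automated probing such tools perform can be as simple as checking which network ports on a target accept connections. The sketch below is a toy TCP connect scan, not any particular tool's implementation; the host and port list are placeholders, and it should only ever be pointed at systems you own or are authorised to test.

```python
import socket

def open_ports(host: str, ports: list[int], timeout: float = 0.5) -> list[int]:
    """Return the subset of `ports` on `host` that accept a TCP connection."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising an exception.
            if sock.connect_ex((host, port)) == 0:
                found.append(port)
    return found

if __name__ == "__main__":
    # Placeholder target: only scan hosts you control or have permission to test.
    print(open_ports("127.0.0.1", [22, 80, 443, 8080]))
```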
This is a key motivating factor behind the recommendation to establish a new AI Crime Taskforce within the British National Crime Agency's National Cyber Crime Unit to coordinate the national response to AI-enabled crime.
Collating data from across law enforcement to monitor and log criminal groups' use of AI, and mapping bottlenecks in criminal adoption of AI tools so that barriers to adoption can be raised, will be crucial to developing law enforcement's capability to respond to an evolving AI threat landscape.
NBC News | CETaS | CERT-UA | Logpoint | Ars Technica | DHS | NCA
Image: Ideogram