Understanding The Threats & Opportunities Posed By AI
It wasn’t long ago that we all dismissed the idea of artificial intelligence surpassing our own intelligence as science-fiction fantasy. But the release and widespread adoption of Large Language Models (LLMs), also known as generative AI - such as ChatGPT, Google’s PaLM and Gemini, and Meta’s LLaMA - has begun to open many eyes to the true potential of AI.
As with anything new, AI’s capabilities will take time to fully understand. For security professionals, however, keeping a finger on the pulse of new developments and getting up to speed quickly must be a priority.
AI can provide the means for security teams to operate more efficiently and effectively - an opportunity that’s become hard to ignore under current pressures.
According to one report, cyberattacks increased two- to threefold across nearly every tracked metric in 2023, as cybercriminals continued to ramp up and diversify their attacks. This included attack volumes, with encrypted threats up 117% and cryptojacking up 659% year over year.
Unfortunately, this upward trend looks unlikely to abate anytime soon. For instance, a 2024 assessment from the UK’s National Cyber Security Centre has warned that artificial intelligence will almost certainly increase the volume and impact of cyberattacks in the next two years. Further, according to data from Statista, the FBI and IMF, the total cost of cybercrime globally is expected to skyrocket to $23.84 trillion by 2027, up from $11.50 trillion in 2022.
With attack volumes on the rise, analysts are already finding themselves inundated with alerts, fighting an uphill battle. And we’ve already seen the potential for AI technologies such as deepfakes to add additional complexity, heightening these stresses further.
While a video impersonating Ukrainian President Volodymyr Zelensky, which falsely requested that the country’s forces lay down their arms and surrender, stands as a prime example, similar technologies have also been deployed against corporations. Indeed, back in 2020, one cybercriminal stole $35 million after using AI to successfully clone a company director’s voice and trick a bank manager.
As AI-led threats continue to rise, with one report suggesting that deepfake fraud attempts increased by a whopping 31 times in 2023 (a 3,000% increase year-on-year), cybersecurity professionals must ensure they are one step ahead. But are they?
Awareness Is Growing, But Understanding Is Limited
In surveying 205 IT security decision makers, we sought to explore industry attitudes towards and understanding of AI within a cybersecurity context.
Our analysis of the responses made clear that awareness of AI as a growing cyber threat exists, with the majority (59%) of respondents agreeing that AI is increasing the number of cybersecurity attacks. Further, 61% expressed apprehension over the growth of AI, indicating that this is an area of concern within the industry.
Such sentiment aligns with the uptick in threats seen during 2023 as ‘offensive AI’ becomes increasingly deployed in areas such as deepfake phishing and malware creation.
However, despite the concerns, the survey revealed that less than half (46%) of IT security decision makers believe they grasp the impact of AI on cyber security. Digging deeper, it is perhaps even more worrying to see that CIOs have the least understanding (42%) – a statistic which may have implications in terms of how proactive organisations are able to be in addressing threats, educating end users and equipping the business with AI-enabled tools.
Fortunately, this is not to say that cybersecurity teams aren’t working to overcome the gaps in understanding that exist at present. Indeed, data from the survey shows that cybersecurity professionals are seeing AI as an important addition to enhance security operations.
Perceptions of the role that AI can play in enhancing security practices were largely positive, with more than two thirds (67%) of respondents agreeing it will improve security operations by automating routine tasks. Further, 71% stated that incident response would benefit from data analysis at scale and identifying threats in real-time.
Seizing the AI-led opportunities
Without question, it is vital that cybersecurity teams quickly find and embrace the benefits that AI offers them in order to keep pace with cybercriminals.
Increasingly, we expect that conventional cyberattacks will fall by the wayside as AI technologies become more widely available, appealing and accessible, providing attackers with a means of expanding and enhancing their capabilities.
To combat these threats, security professionals should deploy AI in several ways. Such technologies can be used defensively, improving the speed and accuracy of incident response. Equally, they can be used offensively to enhance penetration tests, helping to identify potential vulnerabilities, misconfigurations and other weaknesses in networks and endpoints.
While the true extent of the impact that AI will have on cybercrime remains to be seen, cybersecurity professionals - particularly at the senior level - must work to improve their understanding and response.
Being AI-ready
Another important consideration for organisations is how to deploy AI capabilities within their business securely. There is much excitement around the creative and productivity benefits that AI will bring. However, if deployed without adequate preparation, AI can significantly increase an organisation’s exposure to threats.

Most significant in this respect is information and data security. If the organisation does not ensure proper classification and labelling of its data and related access permissions, there is a danger that unauthorised access to sensitive information becomes far easier. It is critical, therefore, that before LLMs start to interrogate large unprotected datasets and provide ready access via natural language queries, the necessary protections and controls are established.
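To make the point concrete, the control described above can be sketched in code. The following is a minimal, hypothetical illustration (all names and labels are assumptions, not any particular product’s API) of checking a document’s classification label against a user’s clearance before it is allowed into an LLM’s prompt context:

```python
# Hypothetical sketch: enforce data classification labels before
# documents reach an LLM's context window via natural language queries.
from dataclasses import dataclass

# Example classification scheme (an assumption for illustration).
CLEARANCE_LEVELS = {"public": 0, "internal": 1, "confidential": 2, "restricted": 3}

@dataclass
class Document:
    doc_id: str
    label: str   # classification label applied before AI rollout
    text: str

def filter_for_user(docs, user_clearance):
    """Return only the documents the user is cleared to see."""
    max_level = CLEARANCE_LEVELS[user_clearance]
    return [d for d in docs if CLEARANCE_LEVELS[d.label] <= max_level]

def build_prompt(question, docs, user_clearance):
    """Build an LLM prompt containing only permitted documents."""
    allowed = filter_for_user(docs, user_clearance)
    context = "\n".join(f"[{d.doc_id}] {d.text}" for d in allowed)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

docs = [
    Document("d1", "public", "Office opening hours are 9-5."),
    Document("d2", "restricted", "M&A target shortlist for Q3."),
]
prompt = build_prompt("What are the opening hours?", docs, user_clearance="internal")
```

Here, a user with "internal" clearance never has the "restricted" document placed in the prompt, so the LLM cannot leak it regardless of how the question is phrased. The essential point is that the filtering happens on classified, labelled data before the model sees anything - which is only possible if the labelling work has been done first.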
In Summary
Failing to be proactive may not only leave organisations on the back foot against evolving threats; it may also limit the efficiency and effectiveness of their operations and threat responses, as well as the organisation’s overall ability to leverage the benefits brought by generative AI solutions.
To prevent such outcomes, it is imperative that cybersecurity teams recognise and grasp the AI-driven opportunities available to them today.
Brian Martin is Director of Product Management at Integrity360