Closing The AI Divide - Mitigating Cyber Threats
As much as new technologies open the door to business innovation and growth, they also represent an opportunity for threat actors. Nowhere is this clearer than in the realm of artificial intelligence (AI). As the National Cyber Security Centre (NCSC) warned in May, the danger is that cybercriminals and state operatives will ramp up their offensive efforts faster than security teams can respond.
A digital divide looms between those capable of defending against AI-powered threats and those that can't.
The AI Threat Landscape Is Evolving Fast
The NCSC’s report makes for concerning reading for all organisations, especially those running operational technology (OT) systems that may be less well secured.
Over the next two years, threat actors will continue to use AI to help improve reconnaissance, vulnerability research and exploit development (VRED), social engineering, basic malware generation, and data exfiltration. VRED is highlighted as the “most significant” use case, with sophisticated adversaries potentially even using their own models to discover new zero-day exploits. However, all threat actors will benefit as AI-as-a-service and AI-powered pen testing tools become more widespread, driving an increase in the “volume and impact” of intrusions, the NCSC says.
AI systems themselves will also be a growing target, especially if developers rush products to market with vulnerabilities, weak encryption, poorly configured identity management and storage, and other defects.
There is a clear need for organisations to start building these scenarios into their risk planning. So how are they doing? The early evidence is not great. Our research reveals that although around a third (30%) of British firms name AI among their top three risks, a similar share (31%) have no AI governance policy in place. A further 29% say they have only just implemented their first AI risk strategy. Nearly a fifth of the US and UK firms we polled admit they are still not prepared for data poisoning attacks or deepfake incidents.
The Compliance Burden Grows
Malicious use of AI can supercharge ransomware, data breaches and other threats. Attacks on corporate AI systems may also enable data theft and extortion, and (via data poisoning) alter outputs to effectively sabotage operations. Deepfakes could enable threat actors to pass HR screening for new hires, or to trick customer-facing anti-fraud authentication mechanisms. There are also unintended data protection and privacy risks to manage, such as the inadvertent submission of sensitive data to commercial generative AI tools.
All of which could lead to financially and reputationally damaging security breaches, and create new compliance risks - especially for organisations subject to NIS2, DORA and/or GDPR. The first two in particular set a high bar for cyber-risk management best practice, and introduce new liabilities for senior managers found to be negligent.
Start With Governance
That alone should be enough to focus the minds of business leaders on the coming challenge. But where to start? Of course, organisations want to adopt AI not just in specialised functions and use cases but across all core operations, in order to harness its transformative potential. To stand the best chance of success, they should first implement an equally wide-lens AI governance strategy. This would involve mapping, monitoring and managing all AI-related risks - from a cyber, supply chain, regulatory, technological, ESG and geopolitical perspective.
Crucially, this must involve not only managing direct risks to AI systems, but also the risk to the enterprise from AI-powered attacks.
It’s becoming clear that these cannot be dealt with in isolation, just as cyber risk cannot be siloed from overall business risk. They are all deeply interconnected, and will only become more so as AI is embedded further into the fabric of enterprise operations. The good news is that all of this can be done with the help of an expert third party via a single, unified SaaS platform.
The Right Side
There’s no time to wait around. The UK’s regulatory regime continues to evolve, with forthcoming legislation set to introduce mandatory ransomware reporting, new powers for regulators, and greater alignment with NIS2. In the meantime, threat actors continue to benefit from malicious AI services which lower barriers to entry and help them create new revenue streams.
It’s time to make sure your organisation isn’t on the wrong side of a new digital divide.
Dr. Megha Kumar is Chief Product Officer & Head of Geopolitical Risk at CyXcel