Combating The Threat Of Malicious AI

A group of academics and researchers from leading universities, think-tanks and research labs, including Oxford, Yale, Cambridge and OpenAI, recently published a chilling report titled The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation.
 
The report raised the alarm about the growing possibility that rogue states, criminals, terrorists and other malefactors could soon exploit AI capabilities to cause widespread harm.
 
These risks are weighty and disturbing, albeit not surprising. Politicians and humanitarians have repeatedly called for AI to be regulated, with some describing it as humanity’s most plausible existential threat.
 
For instance, back in 2016, Barack Obama, then President of the United States, publicly admitted his fears that an AI algorithm could be unleashed against US nuclear weapons. “There could be an algorithm that said, ‘Go penetrate the nuclear codes and figure out how to launch some missiles,'” Obama cautioned.  
 
A year later, in August 2017, the charismatic Tesla and SpaceX CEO, Elon Musk, teamed up with 116 executives and scholars to sign an open letter to the UN, urging the world governing body to enact statutes banning the global use of lethal autonomous weapons, the so-called “killer robots.”
 
While AI’s ability to boost fraud detection and cyber defense is unquestionable, this vital role could soon prove to be a zero-sum game. 
 
The same technology could be exploited by malefactors to develop superior and elusive AI programs that will unleash advanced persistent threats against critical systems, manipulate stock markets, perpetrate high-value fraud or steal intellectual property. 
 
What makes this new report particularly significant is its emphasis on the immediacy of the threat. It predicts that widespread malicious use of AI, from repurposed autonomous weapons and automated hacking to target impersonation and finely tuned phishing attacks, could materialise within the next decade.
 
So, why has this malicious AI threat escalated from Hollywood fantasy to potential reality far more rapidly than many pundits anticipated? 
 
There are three primary drivers: 
  • First, cyber-threat actors are increasingly agile and inventive, spurred by a growing base of financial resources and by freedom from the regulatory constraints that often stifle innovation at legitimate enterprises.
  • Second, and perhaps most important, the rapid intersection of cyber-crime and politics, combined with deep suspicions that adversarial nations are using advanced programs to manipulate elections, spy on military programs or debilitate critical infrastructure, has further dented the prospects of meaningful international cooperation.
  • Third, advanced AI-based programs developed by nation-states may inadvertently fall into the wrong hands.
An unsettling example is the 2016 incident in which a shadowy group of hackers going by the moniker “The Shadow Brokers” reportedly infiltrated the US National Security Agency (NSA) and stole advanced cyber weapons that were allegedly used to unleash the WannaCry ransomware in May 2017.
 
As these weapons become more powerful and autonomous, the associated risks will invariably grow. The prospect of an autonomous drone armed with Hellfire missiles falling into the wrong hands, for instance, should disconcert us all.
It’s clear that addressing this grave threat will be complex and costly, but the task is pressing. As report co-author Dr. Seán Ó hÉigeartaigh stressed, “We live in a world that could become fraught with day-to-day hazards from the misuse of AI and we need to take ownership of the problems, because the risks are real.” Several strategic measures are required, but the following two are urgent:
  • There is a need for deeper, transparent and well-intentioned collaboration between academia, professional associations, the private sector, regulators and world governing bodies. This threat transcends the boundaries of any single enterprise or nation; strategic collaboration will be far more effective than unilateral responses.
  • As the report highlighted, we can learn from disciplines such as cybersecurity that have a credible history of developing best practices for handling dual-use risks.
Again, while this is an important step, much more is required. As Musk and his co-signatories wrote to the UN, addressing this risk requires binding international laws. After all, regulations and standards are only as good as their enforcement.
This is an old story; history is repeating itself. As Craig Timberg wrote in The Threatened Net: How the Web Became a Perilous Place, “When they [Internet designers] thought about security, they foresaw the need to protect the network against potential intruders and military threats, but they didn’t anticipate that the Internet’s own users would someday use the Internet to attack one another.”
 
The Internet’s rapid transformation from a safe collaboration tool into a dangerous place offers an important lesson. If we discount this looming threat, AI’s capabilities, which hold so much promise, will similarly be exploited by those with bad intentions.
 
Absent a coherent international response, the same technology that is being used to derive deep customer insights, tackle complex and chronic ailments, alleviate poverty and advance human development could be misappropriated and lead to grave consequences.
 
ISACA
 
