Humans Should Ban Artificially Intelligent Weapons


Unlike self-aware computer networks, self-driving cars tricked out with machine guns are possible right now — as are any number of AI-augmented weapons far deadlier than their human-aimed counterparts. 
    
Unfortunately, much of the recent outcry against artificial-intelligence weapons has been confused, conjuring robot takeovers of mankind. This scenario is implausible in the near term, but AI weapons actually do present a danger not posed by conventional, human-controlled weapons, and there is good reason to ban them.

We’ve already seen a glimpse of the future of artificial intelligence in Google’s self-driving cars. Now imagine that some fiendish crime syndicate were to steal such a car, strap a gun to the top, and reprogram it to shoot people. That’s an AI weapon.

The potential of these weapons has not escaped the imaginations of governments. This year the US Navy announced plans to develop autonomous drone weapons, and both the South Korean Super aEgis II automatic turret and the Russian Platform-M automatic combat machine were unveiled.

But governments aren’t the only players making AI weapons. Imagine a GoPro-bearing quadcopter drone, the kind of thing anyone can buy. Now imagine a simple piece of software that allows it to fly automatically. The same nefarious crime syndicate that can weaponise a driverless car is just inches away from attaching a gun to such a drone and programming it to kill people in a crowded public place.

This is the immediate danger with AI weapons: They are easily converted into indiscriminate death machines, far more dangerous than the same weapons with a human at the helm.

Stephen Hawking, Max Tegmark, Elon Musk, and many others have signed a Future of Life petition to ban AI weapons, hosted by the institute that received a $10 million donation from Mr. Musk in January. This followed a UN meeting on ‘killer robots’ in April that did not lead to any lasting policy decisions. The letter accompanying the Future of Life petition argues that the danger of AI weapons is immediate, and that action is needed to avoid disasters that could arrive within the next few years. Unfortunately, it doesn’t explain what sorts of AI weapons are on the immediate horizon.

Many have expressed concerns about apocalyptic, Terminator-like scenarios, in which robots develop the human-like ability to interact with the world all by themselves and attempt to conquer it. For example, the physicist and Astronomer Royal Sir Martin Rees has warned of catastrophic scenarios like “dumb robots going rogue or a network that develops a mind of its own.” His Cambridge colleague, the philosopher Huw Price, has voiced a similar concern that humans may not survive when intelligence “escapes the constraints of biology.” Together the two helped found the Centre for the Study of Existential Risk at the University of Cambridge to help avert such dramatic threats to human existence.

These scenarios are certainly worth studying. However, they are far less plausible and far less immediate than the AI-weapons danger on the horizon now.

How close are we to developing human-like artificial intelligence? By almost all standards, the answer is: not very close. The University of Reading chatbot ‘Eugene Goostman’ was reported by many media outlets to be truly intelligent because it managed to fool a few humans into thinking it was a real 13-year-old boy. However, the chatbot turned out to be miles away from real human-like intelligence, as computer scientist Scott Aaronson demonstrated by stumping Eugene with his first question: “Which is bigger, a shoebox or Mt Everest?” After completely flubbing the answer, and then stumbling on “How many legs does a camel have?”, the emperor was revealed to be without clothes.

In spite of all this, we, the authors of this article, have both signed the Future of Life petition against AI weapons. Here’s why: Unlike self-aware computer networks, self-driving cars with machine guns are possible right now. The problem with such AI weapons is not that they are on the verge of taking over the world. The problem is that they are trivially easy to reprogram, allowing anyone to create an efficient and indiscriminate killing machine at an incredibly low cost. The machines themselves aren’t what’s scary. It’s what any two-bit hacker can do with them on a relatively modest budget.

Imagine an up-and-coming despot who would like to eliminate opposition, armed with a database of citizens’ political allegiances, addresses, and photos. Yesterday’s despot would have needed an army of soldiers to accomplish this task, and those soldiers could be fooled, bribed, or made to lose their cool and shoot the wrong people.

The despots of tomorrow will just buy a few thousand automated gun drones. Thanks to Moore’s Law, which describes the exponential increase in computing power per dollar since the invention of the transistor, a drone with reasonable AI will one day become as affordable as an AK-47. Three or four sympathetic software engineers could reprogram the drones to patrol near the dissidents’ houses and workplaces and shoot them on sight. The drones would make fewer mistakes, they wouldn’t be swayed by bribes or sob stories, and above all, they’d work far more efficiently than human soldiers, allowing the ambitious despot to mop up detractors before the international community could marshal a response.
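
The Moore’s Law claim above can be made concrete with a back-of-the-envelope projection. This sketch is purely illustrative: the starting price, the two-year halving period, and the comparison price point are assumptions for the sake of the example, not figures from the article.

```python
# Hypothetical cost projection under a Moore's-Law-style decline,
# where compute-driven cost halves every `halving_period` years.
# All numbers below are invented for illustration.
def projected_price(start_price: float, years: float,
                    halving_period: float = 2.0) -> float:
    """Return the projected price after `years` of exponential decline."""
    return start_price * 0.5 ** (years / halving_period)

# Assumption: a capable autonomous drone costs $50,000 today.
# After six halvings (12 years) the projected price is $781.25,
# roughly the street price of a rifle.
print(projected_price(50_000, 12))  # 781.25
```

On these invented numbers, the price falls below that of a cheap rifle within about a decade, which is the shape of the argument this paragraph is making.
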

Because of the massive increase in efficiency brought about by automation, AI weapons will lower the barrier to entry for deranged individuals looking to perpetrate such atrocities. What was once the sole domain of dictators in control of an entire army will be brought within reach of moderately wealthy individuals.

Manufacturers and governments interested in developing such weapons may claim that they can engineer proper safeguards to ensure they cannot be reprogrammed or hacked. Such claims should be greeted with skepticism. Electronic voting machines, ATMs, Blu-ray disc players, and even cars speeding down the highway have all recently been compromised in spite of their advertised security. History demonstrates that computing devices tend eventually to yield to a motivated hacker’s attempts to repurpose them. AI weapons are unlikely to be an exception.

International treaties going back to 1925 have banned the use of chemical and biological weapons in warfare. The use of hollow-point bullets was banned even earlier, in 1899. The reasoning is that such weapons create extreme and unnecessary suffering. They are especially prone to civilian casualties, such as when people inhale poison gas, or when doctors are injured in attempting to remove a hollow-point bullet. All of these weapons are prone to generate indiscriminate suffering and death, and so they are banned.

Is there a class of AI machines that is equally worthy of a ban? The answer, unequivocally, is yes. If an AI machine can be cheaply and easily converted into an effective and indiscriminate mass-killing device, then there should be an international convention against it. Such machines are not unlike radioactive metals. They can be used for reasonable purposes. But we must control them carefully because they can be easily converted into devastating weapons. The difference is that repurposing an AI machine for destructive ends will be far easier than repurposing a nuclear reactor.

We should ban AI weapons not because they are all immoral. We should ban them because humans will transform AI weapons into hideous, bloodthirsty monsters using mods and hacks easily found online. A simple piece of code will transform many AI weapons into killing machines capable of the worst excesses of chemical weapons, biological weapons, and hollow-point bullets.

Banning certain kinds of artificial intelligence requires grappling with a number of philosophical questions. Would an AI weapons ban have prohibited the US Strategic Defense Initiative, popularly known as the Star Wars missile defense? Cars can be used as weapons, so does the petition propose to ban Google’s self-driving cars, or the self-driving cars being deployed in cities around the UK? What counts as intelligence, and what counts as a weapon?

These are difficult and important questions. However, they do not need to be answered before we agree to formulate a convention to control AI weapons. The limits of what’s acceptable must be seriously considered by the international community, with the advice of scientists, philosophers, and computer engineers. The US Department of Defense already prohibits fully autonomous weapons in some sense. It is time to refine and expand that prohibition to an international level.

Of course, no international ban will completely stop the spread of AI weapons. But this is no reason to scrap the ban. If we as a community think there is reason to ban chemical weapons, biological weapons, and hollow-point bullets, then there is reason to ban AI weapons too.

DefenseOne: http://bit.ly/1K1GFq8
