Humans Should Ban Artificially Intelligent Weapons

Unlike self-aware computer networks, self-driving cars tricked out with machine guns are possible right now, as are any number of AI-augmented weapons far deadlier than their human-aimed counterparts.
    
Unfortunately, much of the recent outcry against artificial-intelligence weapons has been confused, conjuring robot takeovers of mankind. This scenario is implausible in the near term, but AI weapons actually do present a danger not posed by conventional, human-controlled weapons, and there is good reason to ban them.

We’ve already seen a glimpse of the future of artificial intelligence in Google’s self-driving cars. Now imagine that some fiendish crime syndicate were to steal such a car, strap a gun to the top, and reprogram it to shoot people. That’s an AI weapon.

The potential of these weapons has not escaped the imaginations of governments. This year we saw the US Navy’s announcement of plans to develop autonomous-drone weapons, as well as the announcement of both the South Korean Super aEgis II automatic turret and the Russian Platform-M automatic combat machine.
But governments aren’t the only players making AI weapons. Imagine a GoPro-bearing quadcopter drone, the kind of thing anyone can buy. Now imagine a simple piece of software that allows it to fly automatically. The same nefarious crime syndicate that could weaponise a driverless car is just inches away from strapping a gun to that drone and programming it to kill people in a crowded public place.

This is the immediate danger with AI weapons: They are easily converted into indiscriminate death machines, far more dangerous than the same weapons with a human at the helm.

Stephen Hawking and Max Tegmark, alongside Elon Musk and many others, have all signed a Future of Life petition to ban AI weapons, hosted by the institute that received a $10 million donation from Mr. Musk in January. This followed a UN meeting on ‘killer robots’ in April that did not lead to any lasting policy decisions. The letter accompanying the Future of Life petition warns that the danger of AI weapons is immediate, requiring action to head off disasters that could arrive within the next few years. Unfortunately, it doesn’t explain what sorts of AI weapons are on the immediate horizon.
Many have expressed concerns about apocalyptic Terminator-like scenarios, in which robots develop the human-like ability to interact with the world all by themselves and attempt to conquer it. For example, physicist and Astronomer Royal Sir Martin Rees warned of catastrophic scenarios like “dumb robots going rogue or a network that develops a mind of its own.” His Cambridge colleague and philosopher Huw Price has voiced a similar concern that humans may not survive when intelligence “escapes the constraints of biology.” Together the two helped create the Centre for the Study of Existential Risk at the University of Cambridge to help avoid such dramatic threats to human existence.
These scenarios are certainly worth studying. However, they are far less plausible and far less immediate than the AI-weapons danger on the horizon now.

How close are we to developing human-like artificial intelligence? By almost all standards, the answer is: not very close. The University of Reading’s chatbot ‘Eugene Goostman’ was reported by many media outlets to be truly intelligent because it managed to fool a few humans into thinking it was a real 13-year-old boy. However, the chatbot turned out to be miles away from real human-like intelligence, as computer scientist Scott Aaronson demonstrated by destroying Eugene with his first question: “Which is bigger, a shoebox or Mt Everest?” After the chatbot completely flubbed the answer, and then stumbled on “How many legs does a camel have?”, the emperor was revealed to be without clothes.
In spite of all this, we, the authors of this article, have both signed the Future of Life petition against AI weapons. Here’s why: Unlike self-aware computer networks, self-driving cars with machine guns are possible right now. The problem with such AI weapons is not that they are on the verge of taking over the world. The problem is that they are trivially easy to reprogram, allowing anyone to create an efficient and indiscriminate killing machine at an incredibly low cost. The machines themselves aren’t what’s scary. It’s what any two-bit hacker can do with them on a relatively modest budget.
Imagine an up-and-coming despot who would like to eliminate opposition, armed with a database of citizens’ political allegiances, addresses and photos. Yesterday’s despot would have needed an army of soldiers to accomplish this task, and those soldiers could be fooled, bribed, or made to lose their cool and shoot the wrong people.

The despots of tomorrow will just buy a few thousand automated gun drones. Thanks to Moore’s Law, which describes the exponential increase in computing power per dollar since the invention of the transistor, the price of a drone with reasonable AI will one day become as accessible as an AK-47. Three or four sympathetic software engineers can reprogram the drones to patrol near the dissidents’ houses and workplaces and shoot them on sight. The drones would make fewer mistakes, they wouldn’t be swayed by bribes or sob stories, and above all, they’d work much more efficiently than human soldiers, allowing the ambitious despot to mop up the detractors before the international community can marshal a response.
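The cost argument above is simple exponential arithmetic. As a rough illustration (the figures here are hypothetical, not from the article), if the cost of a given level of computing capability halves roughly every two years, as Moore’s Law suggests, a capability costing $100,000 today would cost only a few thousand dollars little more than a decade from now:

```python
def projected_cost(initial_cost, years, halving_period=2.0):
    """Cost of a fixed computing capability after `years`,
    assuming the cost halves every `halving_period` years."""
    return initial_cost * 0.5 ** (years / halving_period)

# Hypothetical example: a $100,000 capability over twelve years.
for years in (0, 4, 8, 12):
    print(years, projected_cost(100_000, years))
```

After twelve years (six halvings), the cost falls by a factor of 64, to about $1,560, which is the kind of decline that could eventually put a capable autonomous drone in AK-47 price territory.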
Because of the massive increase in efficiency brought about by automation, AI weapons will lower the barrier to entry for deranged individuals looking to perpetrate such atrocities. What was once the sole domain of dictators in control of an entire army will be brought within reach of moderately wealthy individuals.
Manufacturers and governments interested in developing such weapons may claim that they can engineer proper safeguards to ensure that they cannot be reprogrammed or hacked. Such claims should be greeted with skepticism. Electronic voting machines, ATMs, Blu-ray disc players, and even cars speeding down the highway have all been compromised in recent years, in spite of their advertised security. History demonstrates that a computing device tends to eventually yield to a motivated hacker’s attempts to repurpose it. AI weapons are unlikely to be an exception.

International treaties going back to 1925 have banned the use of chemical and biological weapons in warfare. The use of hollow-point bullets was banned even earlier, in 1899. The reasoning is that such weapons create extreme and unnecessary suffering. They are especially prone to civilian casualties, such as when people inhale poison gas, or when doctors are injured in attempting to remove a hollow-point bullet. All of these weapons are prone to generate indiscriminate suffering and death, and so they are banned.

Is there a class of AI machines that is equally worthy of a ban? The answer, unequivocally, is yes. If an AI machine can be cheaply and easily converted into an effective and indiscriminate mass killing device, then there should be an international convention against it. Such machines are not unlike radioactive metals. They can be used for reasonable purposes. But we must carefully control them because they can be easily converted into devastating weapons. The difference is that repurposing an AI machine for destructive purposes will be far easier than repurposing a nuclear reactor.
We should ban AI weapons not because they are all inherently immoral. We should ban them because humans will transform AI weapons into hideous, bloodthirsty monsters using mods and hacks easily found online. A simple piece of code will transform many AI weapons into killing machines capable of the worst excesses of chemical weapons, biological weapons, and hollow-point bullets.

Banning certain kinds of artificial intelligence requires grappling with a number of philosophical questions. Would an AI weapons ban have prohibited the US Strategic Defense Initiative, popularly known as the Star Wars missile defense? Cars can be used as weapons, so does the petition propose to ban Google’s self-driving cars, or the self-driving cars being deployed in cities around the UK? What counts as intelligence, and what counts as a weapon?

These are difficult and important questions. However, they do not need to be answered before we agree to formulate a convention to control AI weapons. The limits of what’s acceptable must be seriously considered by the international community, with advice from scientists, philosophers, and computer engineers. The U.S. Department of Defense already prohibits fully autonomous weapons in some sense. It is time to refine and expand that prohibition to an international level.

Of course, no international ban will completely stop the spread of AI weapons. But this is no reason to scrap the ban. If we as a community think there is reason to ban chemical weapons, biological weapons, and hollow-point bullets, then there is reason to ban AI weapons too.

DefenseOne: http://bit.ly/1K1GFq8