Artificial Intelligence Could Be As Powerful As A Nuclear Weapon

AI has the potential to play a significantly positive role in our future, but developments in Artificial Intelligence (AI) can also have a negative impact. Like nuclear weapons, AI weaponry has the potential to inflict mass damage. The open-source development of AI, however, can enable the democratic supervision of AI, which is very different from the secrecy that concealed the development of nuclear weapons.

The connection between AI and nuclear weaponry is not new. In fact, AI has been part of the nuclear deterrence architecture for decades. 

AI refers to systems able to perform tasks that normally require human intelligence, such as visual and speech recognition, decision-making and, perhaps one day, thinking. As early as the 1960s, the United States and the Soviet Union saw that the nascent field of AI could play a role in the development and maintenance of their retaliatory capability, that is, the ability to respond to a nuclear attack, even a surprise one. “AI can be more dangerous than nuclear weapons in the future. People have suffered from the 1945 trauma and been constantly trying to ensure the regulations against nuclear weapons and the expansion of nuclear power.... the lack of regulations and difficulty to set rules for the development of artificial intelligence render AI to be more dangerous than nuclear weapons. Thus new rules and means of supervision are needed for AI development to make sure it is for the betterment of human race, rather than endangering us,” says academic researcher Xuanbing Cheng in a recent paper published by Stanford University.

Nuclear weapons ultimately led to laws and agreements to prevent their spread and use, partly through arms control, and many experts now believe AI will require a regulatory regime similar to that for nuclear weapons.

Progress in computing is making it possible for machines to accomplish many tasks that once required human effort or were considered altogether impossible, and ethical questions are now starting to arise globally in the emerging field of AI.

Self-driving cars. Automated medical diagnoses. Algorithms that decide where to deploy police officers. Robotic soldiers. AI has the unsettling potential to transform modern life over the next few decades.

New AI capabilities could spur arms races or increase the likelihood of states escalating to nuclear use, either intentionally or accidentally, during a crisis. However, incorporating AI into early warning systems could also create time efficiencies in nuclear crises: AI could improve the speed and quality of information processing, giving decision-makers more time to react.
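To make the information-processing point concrete, here is a minimal, purely illustrative sketch, assuming a hypothetical stream of sensor readings with an invented signal_strength field. It is not a model of any real early warning system; it simply shows the triage pattern in which a machine pre-filters a flood of data so that only anomalous signals reach human decision-makers.

```python
"""Purely illustrative sketch of automated alert triage.

All data, thresholds and field names here are invented for illustration;
this is NOT a representation of any real early warning system.
"""

from dataclasses import dataclass
from statistics import mean, pstdev


@dataclass
class SensorReading:
    sensor_id: str          # hypothetical sensor identifier
    signal_strength: float  # arbitrary units, invented for this example


def triage(readings: list[SensorReading], z_threshold: float = 3.0) -> list[SensorReading]:
    """Flag readings whose signal strength deviates strongly from the mean.

    The machine handles the bulk filtering; only the outliers are escalated
    to a human analyst, which is the sense in which automation could buy
    decision-makers time during a crisis.
    """
    values = [r.signal_strength for r in readings]
    mu, sigma = mean(values), pstdev(values)
    if sigma == 0:
        return []
    return [r for r in readings if abs(r.signal_strength - mu) / sigma > z_threshold]


if __name__ == "__main__":
    # Invented example data: one reading stands far outside the normal range.
    stream = [SensorReading(f"s{i}", 1.0 + 0.01 * i) for i in range(20)]
    stream.append(SensorReading("s99", 9.0))
    for alert in triage(stream):
        print(f"Escalate to human analyst: {alert.sensor_id} ({alert.signal_strength})")
```

Real early warning pipelines of course involve far more than statistical outlier detection, but the division of labour is the point: machine speed for filtering, human judgement for the decisions that matter.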

Albert Einstein described the universe as “finite but unbounded.” That description could fit AI’s future applications. Perhaps the only comparably disruptive technology was nuclear weapons, which irreversibly changed the nature, conduct and character of the politics of war.

The reason is that there are no winners: only victims and losers would emerge from a thermonuclear holocaust that killed the combatants.

Nuclear weapons provoked heated debate over their moral and legal implications, and over when or how they could or should be employed, from a counterforce first strike against military targets to “tactical” use intended to limit escalation or rectify conventional arms imbalances.

  • Nuclear weapons affected national security; it was their destructive power that made them so significant. AI will most certainly affect the broader sweep of society, much as the industrial and information revolutions did, with both positive and negative consequences. 
  • AI, however, needs an intermediary link to exercise its full disruptive power. As societies became more advanced, those two revolutions had the unintended consequence of also creating greater vulnerabilities, weaknesses and dependencies subject to major and even catastrophic disruption. 

COVID-19, massive storms, fires, droughts and cyber attacks are unmistakable symptoms of the power of the new MAD: Massive Attacks of Disruption. AI is a potential multiplier, able to exploit inherent societal weaknesses and vulnerabilities and to create new ones, but also to help prevent their harmful effects. 

Unlike nuclear weapons, AI, if used properly, will have enormous and even revolutionary benefits.

A permanent 'AI Oversight Council', with substantial research funding to examine AI’s societal implications, should be created. Membership should be drawn from the public and from the legislative and executive branches of government. Funding should go to the best research institutions, another parallel with nuclear weapons. The council would also coordinate, liaise and consult with the international community, including China, Russia, allies, friends and others, to widen the intellectual aperture and to serve as a confidence-building measure.

By employing the lessons learned from studying the nuclear balance, AI’s potentially destructive consequences can be mitigated. More importantly, if properly used, AI offers a nearly unbounded opportunity to advance the public good.

In the near future, AI could be used to conduct remote sensing operations in areas previously hard to access for manned and remotely-controlled systems, such as the deep sea. Autonomous unmanned systems such as aerial drones or unmanned underwater vehicles could also be seen by nuclear weapon states as an alternative to intercontinental ballistic missiles, manned bombers and submarines for nuclear weapon delivery. 

According to Elon Musk, AI represents a serious danger to the public and needs regulatory oversight from a public body. Touching on the nuclear weapon analogy, he said: "The danger of AI is much greater than the danger of nuclear warheads, by a lot and nobody would suggest that we allow anyone to just build nuclear warheads if they want - that would be insane."

Sources: RAND | The Hill | UNUCPR | Venturebeat | EuroLeadershipNetwork | Vassar Insider | Time | Stanford.edu | 2021.AI | TechRepublic

