AI is as Dangerous as Nuclear Weapons

Autonomous robots like Boston Dynamics' Big Dog particularly concerned Professor Russell

Artificial Intelligence has the potential to be as dangerous to mankind as nuclear weapons, a leading pioneer of the technology has claimed. Professor Stuart Russell, a computer scientist who has led research on artificial intelligence, fears humanity might be 'driving off a cliff' with the rapid development of AI. He fears the technology could too easily be exploited for military use, putting weapons under the control of AI systems.

He points to the rapid development of AI capabilities at companies such as Boston Dynamics, recently acquired by Google, which builds autonomous robots for military use.

Professor Russell, who is a researcher at the University of California, Berkeley and the Centre for the Study of Existential Risk at Cambridge University, compared the development of AI to the work that was done to develop nuclear weapons.
Google has set up an ethics board to oversee its work in artificial intelligence. The search giant has recently bought several robotics companies, along with DeepMind, a British firm creating software that tries to help computers think like humans.
One of its founders has warned that artificial intelligence is the 'number one risk for this century', and believes it could play a part in human extinction.
'Eventually, I think human extinction will probably occur, and technology will likely play a part in this,' DeepMind's Shane Legg said in a recent interview.
Among all forms of technology that could wipe out the human species, he singled out artificial intelligence as the greatest threat. The ethics board, revealed by the website The Information, is intended to ensure the projects are not abused.

Neuroscientist Demis Hassabis founded DeepMind two years ago with the aim of helping computers think like humans.
His views echo those of Elon Musk and others who have recently warned about the dangers of artificial intelligence.
Professor Stephen Hawking also joined a group of leading experts in signing an open letter warning of the need for safeguards to ensure AI has a positive impact on mankind.

In an interview with the journal Science for a special edition on Artificial Intelligence, he said: 'From the beginning, the primary interest in nuclear technology was the "inexhaustible supply of energy".
'The possibility of weapons was also obvious. I think there is a reasonable analogy between unlimited amounts of energy and unlimited amounts of intelligence… Both seem wonderful until one thinks of the possible risks. In neither case will anyone regulate the mathematics. The regulation of nuclear weapons deals with objects and materials, whereas with AI it will be a bewildering variety of software that we cannot yet describe. I'm not aware of any large movement calling for regulation either inside or outside AI, because we don't know how to write such regulation.'

Science recently published a series of papers highlighting the progress that has been made in artificial intelligence recently.
In one, researchers describe the pursuit of a computer able to make rational economic decisions independently of humans, while another outlines how machines are learning from 'big data'.

Nuclear research was conducted with the aim of producing a new energy source, but scientists also knew it could be used to create weapons of great power. Professor Russell warns AI could be put to similar use if researchers are not careful.

Professor Russell, however, cautions that this unchecked development of technology can be dangerous if the consequences are not fully explored and regulation is not put in place.
He said: 'Here's what Leo Szilard wrote in 1939 after demonstrating a nuclear chain reaction: "We switched everything off and went home. That night, there was very little doubt in my mind that the world was headed for grief." To those who say, well, we may never get to human-level or superintelligent AI, I would reply: It's like driving straight toward a cliff and saying, "Let's hope I run out of gas soon!"'

In April, Professor Russell raised concerns at a United Nations meeting in Geneva over the dangers of putting military drones and weapons under the control of AI systems. He joins a growing number of experts who have warned that scenarios like those seen in films such as Terminator, AI and 2001: A Space Odyssey are not beyond the realms of possibility.
He said: 'The routes could be varied and complex—corporations seeking a super technological advantage, countries trying to build [AI systems] before their enemies, or a slow-boiled frog kind of evolution leading to dependency and enfeeblement not unlike E.M. Forster's The Machine Stops.' Forster's short story tells of a post-apocalyptic world where humanity lives underground and relies on a giant machine to survive, which then begins to malfunction.

Professor Russell said computer scientists needed to modify the goals of their research to ensure human values and objectives remain central to the development of AI technology. He said students needed to be trained to treat these objectives in much the same way 'as containment is central to the goals of fusion research'.

In an editorial in Science, editors Jelena Stajic, Richard Stone, Gilbert Chin and Brad Wible, said: 'Triumphs in the field of AI are bringing to the fore questions that, until recently, seemed better left to science fiction than to science.
'How will we ensure that the rise of the machines is entirely under human control? And what will the world be like if truly intelligent computers come to coexist with humankind?'
Daily Mail: http://dailym.ai/1Jnebmb

