Artificial Intelligence - What We Need To Know

The trouble with the term artificial intelligence is the word intelligence. It misleads: people conjure up images of thinking machines, of Steven Spielberg films, of Arnold Schwarzenegger coming back from the future and saying: “I’ll be back.”

Who knows what the future may bring, but for now, to all intents and purposes, people may be confusing intelligence with sentience. Neither is well defined. We are supposedly sentient; machines are not and, for all we know, may never be.

Intelligence means “the ability to acquire and apply knowledge and skills”. These days machines can do that: they learn, they deep learn, they can even learn by applying neural networks. That does not make them like people; at most they possess one subset of the myriad sets of abilities that make us who we are.

The Beginning

It is never easy to find a beginning: every event in history had a cause that occurred before it; keep going back and you get to the Big Bang. So when did artificial intelligence begin?

Did it begin when Alan Turing defined what is now called the Turing Test? Back in 1950, in a paper titled Computing Machinery and Intelligence, he envisioned three rooms.

In one sits a man, in another a woman and in a third a judge. The man and the woman are asked a series of questions, and from their answers the judge has to identify which room holds the man and which the woman. One of the two people is then replaced by a computer. Will the judge draw a different conclusion?

Maybe we can go back further to Ancient Greece and the myth of Talos, a giant mechanical humanoid, perhaps the first robot equipped with what we might now call artificial intelligence.

But to find the first time the phrase was actually used, we need to wind the clock forward to 1956 and Dartmouth College, New Hampshire. This was the location of the Dartmouth Workshop, organised by John McCarthy, the famous American computer and cognitive scientist. It was at this eight-week event that McCarthy coined the phrase ‘artificial intelligence’.

Intriguingly, McCarthy initially lost his list of attendees, but the list that finally emerged read like a who’s who of the great and good in the story of AI: Marvin Minsky, Allen Newell and Herbert A. Simon. Together with McCarthy, these men are known as the ‘founding fathers of AI’.

Incidentally, John Nash, the mathematician famous for his work in game theory, a winner of the Nobel Memorial Prize in Economics and the subject of the film A Beautiful Mind, starring Russell Crowe, was also at the conference.

McCarthy later said: “Our ultimate objective is to make programs that learn from their experience as effectively as humans do.”

These days many argue that artificial intelligence and machine learning are interchangeable. That is not strictly accurate; it would be more accurate to say that machine learning is a subset of artificial intelligence, as is its super-smart sibling, deep learning.

Machine and Deep Learning

Earlier versions of AI, such as IBM’s Deep Blue, were different. Deep Blue, to a fanfare of publicity, defeated the world’s top chess player, Garry Kasparov, in a series of matches culminating in a decisive victory in 1997.

Deep Blue was mainly about muscle, the computer equivalent of horsepower: it threw processing power at the problem, analysing each move and all the potential permutations and combinations. Scientific American interviewed Murray Campbell, an IBM employee who worked on Deep Blue.

He said: “The 1997 version of Deep Blue searched between 100 million and 200 million positions per second, depending on the type of position. The system could search to a depth of between six and eight pairs of moves, one white, one black, to a maximum of 20 or even more pairs in some situations.”

Deep Blue was also built with a great deal of human involvement; chess grandmasters, for example, helped to shape the program.
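To give a flavour of that brute-force approach, here is a minimal sketch of exhaustive game-tree search, the family of techniques Deep Blue belonged to. It is an illustration only, built around an invented toy game rather than chess, and it leaves out the specialised hardware, handcrafted evaluation functions and pruning tricks the real engine relied on.

```python
# A toy illustration of exhaustive game-tree search, the brute-force family
# of techniques Deep Blue belonged to. The "game" here is invented for the
# example: each move adds or subtracts a small number and the evaluation is
# simply the running total.

def minimax(total, depth, maximising):
    """Exhaustively search every branch of the game tree to a fixed depth."""
    if depth == 0:
        return total                      # static evaluation of the position
    moves = [-2, -1, 1, 2]                # the toy game's "legal moves"
    results = [minimax(total + m, depth - 1, not maximising) for m in moves]
    return max(results) if maximising else min(results)

# Searching 6 plies deep means evaluating 4**6 = 4,096 leaf positions;
# Deep Blue did the chess equivalent at 100-200 million positions per second.
print(minimax(0, depth=6, maximising=True))
```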

By contrast, what machine learning and deep learning have in common is that they do what their names suggest: they learn. The difference largely relates to the amount of human involvement, with programmers and designers taking a more proactive role in defining parameters in machine learning. With deep learning, computers often learn from multiple data sources, extrapolating from quite unrelated areas.

Learn Like We Do

A good analogy for the difference between machine learning and older forms of AI comes from sport. If you play a sport such as tennis or squash, your brain does not calculate the trajectory the ball will travel using advanced mathematical formulae and the rules of geometry.

It learns that if you hit the ball in a certain way, it reacts in a certain way. If you hit it in a slightly different way, it reacts differently. From this it can extrapolate roughly how the ball will react if you hit it in a way that falls somewhere between those two practised shots.

In short, we learn. If a computer were to apply mathematics to calculating the trajectory of a ball, taking in the angle of the shot, its speed, the texture of the surface (to calculate bounce) and a myriad of other variables, the computing power needed would be enormous. If, instead, it learnt from studying previous shots and built a predictive model from that data, the processing power required would be far less.
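To make the analogy concrete, here is a minimal sketch of the “learn from examples” idea. The angles and distances are invented for illustration; the point is simply that the program fits a simple curve to a handful of practised shots and then predicts an unseen one, with no physics computed anywhere.

```python
# A minimal sketch of the "learn from examples" idea, using invented numbers.
# We record a few practised shots (racket angle -> where the ball lands),
# fit a simple curve to the observations, then predict a shot in between.
import numpy as np

angles    = np.array([10.0, 20.0, 30.0, 40.0])   # practised racket angles (degrees)
distances = np.array([2.1, 3.9, 5.2, 6.0])       # observed landing distances (metres)

coeffs = np.polyfit(angles, distances, deg=2)    # "learning": fit a quadratic curve

# Predict a shot we never practised, between two shots we have.
print(np.polyval(coeffs, 25.0))                  # estimated landing distance at 25 degrees
```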

Jeopardy to Go

Many years after Deep Blue defeated Garry Kasparov, IBM’s Watson beat the best players in the world at Jeopardy, the US quiz show. That was in 2011.

What made this victory impressive is that the clues in Jeopardy, to which contestants must respond in the form of a question, are often quite ambiguous. It was a famous victory for Watson (which was named after IBM’s first CEO, Thomas Watson, and not Sherlock Holmes’ sidekick).

But Watson clearly did not understand its answers; it had no common sense. To one question, for example, it answered ‘Dorothy Parker’ instead of ‘The Elements of Style’. The Elements of Style is a book of writing guidelines; Dorothy Parker, an American poet, had once recommended it.

Watson’s victory in Jeopardy was impressive, but a computer that could win the Chinese game of Go, an abstract board game with some 2×10¹⁷⁰ possible positions, significantly more than the number of atoms in the known universe, would make winning Jeopardy look like child’s play. Such was the achievement of AlphaGo, a computer program created by the Alphabet subsidiary DeepMind.

Whilst Watson applied machine learning, AlphaGo applied deep learning. In March 2016, AlphaGo defeated Lee Sedol, an 18-time world champion at Go, over a series of five games.

In achieving its victory, AlphaGo evaluated roughly a thousand times fewer positions than Deep Blue did in its victory over Kasparov. It secured its win not through brute force but by learning to evaluate game play.

Impressive though that was, in 2017 AlphaGo Zero managed a far more impressive feat. This time the program did not learn the game by evaluating the game play of human competitors; rather, it learned the game from scratch, playing against itself.

All it needed was the rules. Initial gameplay was selected at random; then, by trial and error and by selecting the fittest gameplay, within three days it could surpass AlphaGo. Within 21 days it was at the level of AlphaGo Master, a later version which had defeated 60 professionals and the world champion online, and within 40 days it had arguably become the best Go player in the world, or so DeepMind claims.
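The following is a heavily simplified sketch of learning by self-play. It is not the actual AlphaGo Zero algorithm, which combines self-play with deep neural networks and tree search; it merely illustrates the trial-and-error loop described above, using the far simpler game of Nim.

```python
# A heavily simplified sketch of learning by self-play and trial and error.
# This is NOT the AlphaGo Zero algorithm; it is a toy built around Nim:
# take 1, 2 or 3 stones, and whoever takes the last stone wins. Play starts
# out random, and the program keeps statistics on which moves win,
# gradually preferring the "fittest" ones.
import random
from collections import defaultdict

wins  = defaultdict(lambda: defaultdict(int))    # wins[stones][move]
plays = defaultdict(lambda: defaultdict(int))    # plays[stones][move]

def choose(stones, explore=0.2):
    moves = [m for m in (1, 2, 3) if m <= stones]
    if random.random() < explore or not plays[stones]:
        return random.choice(moves)              # trial and error
    # otherwise pick the move with the best empirical win rate so far
    return max(moves, key=lambda m: wins[stones][m] / max(plays[stones][m], 1))

for _ in range(20000):                           # self-play games
    stones, history, player, winner = 15, [], 0, None
    while stones > 0:
        move = choose(stones)
        history.append((player, stones, move))
        stones -= move
        if stones == 0:
            winner = player                      # took the last stone
        player = 1 - player
    for p, s, m in history:                      # update the statistics
        plays[s][m] += 1
        wins[s][m]  += (p == winner)

print(choose(15, explore=0.0))                   # the learned opening move
```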

Neural Networks

Deep learning and machine learning both apply neural networks. Neural networks have gone in and out of fashion over the years. Warren McCulloch and Walter Pitts, of the University of Chicago, first proposed the concept in 1943. Neural networks came into vogue in the 1980s, fell out of fashion in the noughties, but are now back.

Much of the recent resurgence in AI can be credited to Alex Krizhevsky, who designed the artificial neural network AlexNet, which won the ImageNet Challenge in 2012.

It may be that, until recently, computing power was insufficient to do neural networks justice; it may be that we simply did not know enough about how to make them work.

Like the human brain, neural networks consist of thousands, maybe millions, of simple processing units, or nodes (although the human brain has over 100 billion neurons). The nodes in a neural network tend to be organised in layers, with each layer of nodes given a specific task.
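As a rough illustration of that layered structure, here is a minimal sketch of a tiny feedforward network written with NumPy. The weights are random and untrained; in a real system they would be learned from data, typically by backpropagation.

```python
# A minimal sketch of the layered structure described above: a tiny
# feedforward network with one hidden layer. The weights are random and
# untrained; in practice they would be learned from data.
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, weights, biases):
    """One layer of nodes: a weighted sum of the inputs, then a non-linearity."""
    return np.maximum(0.0, inputs @ weights + biases)     # ReLU activation

x  = rng.normal(size=4)                          # an input with 4 features
w1 = rng.normal(size=(4, 8)); b1 = np.zeros(8)   # hidden layer of 8 nodes
w2 = rng.normal(size=(8, 2)); b2 = np.zeros(2)   # output layer of 2 nodes

hidden = layer(x, w1, b1)                        # each layer transforms the one before it
output = hidden @ w2 + b2                        # raw scores for two possible classes
print(output)
```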

Applications of AI

AI has applications in many guises, including autonomous cars, voice assistants and voice recognition, image and face recognition, personalised health monitoring, advertising and online shopping (for example, identifying products likely to interest customers based on analysis of their data), search, financial trading and the fight against cyber crime.

More broadly, AI has applications in:

• AI analytics
• AI business processes
• AI data management.

According to a recent report from McKinsey, AI has the potential to deliver additional global economic activity of around $13 trillion by 2030, or about 16% higher cumulative GDP compared with today.

And Science Fiction

The realms of science fiction and AI seem to meet in the imagination of the media. But do we need to worry about the scarier predictions of science fiction, of AI gaining awareness? Is AI a threat to humanity, as the late Stephen Hawking and Elon Musk have suggested?

Speak to most experts in AI, and they laugh at the idea. But maybe, in asking whether AI could one day become sentient and more intelligent than humans, we ask the wrong question. 

Instead we should ask: could AI become more intelligent and sentient than an amoeba? Remember, it was from an organism not that dissimilar to an amoeba that all complex life, humanity included, evolved.

Okay, it took several billion years, but in a digital environment evolution could work several orders of magnitude faster.
