A Brief History Of Artificial Intelligence & Its Potential Future


Artificial Intelligence (AI) technology is developing at high speed and is changing many aspects of contemporary life, just as previous industrial revolutions completely altered the way we work. Humans and technology have always been intertwined; this is part of what makes us unique among species. We create and use tools to enhance our lives, from fire to the steam engine to AI. 

One main aspect of AI is its ability to rationalise information and thought, and take the necessary actions to achieve a specific goal. A subset of AI is Machine Learning (ML), which refers to the concept that computer programs can automatically learn from and adapt to new data without being assisted by humans. 
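To make the machine learning idea concrete, here is a minimal sketch in Python, assuming the widely used scikit-learn library and an invented toy weather dataset; it is purely illustrative rather than a description of any real system.

```python
# A minimal sketch of machine learning: instead of hand-coded rules, a model
# infers a rule from example data. Uses scikit-learn; the weather data below
# is invented purely for illustration.
from sklearn.tree import DecisionTreeClassifier

# Toy training examples: [hours of cloud cover, humidity %] -> rained? (1 = yes)
features = [[8, 90], [7, 85], [2, 40], [1, 30], [6, 75], [0, 20]]
labels = [1, 1, 0, 0, 1, 0]

model = DecisionTreeClassifier()
model.fit(features, labels)          # the program "learns" from the data

# The learned model can now respond to a case it has never seen before
print(model.predict([[5, 80]]))      # e.g. [1] -> rain likely
```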

These automatic learning technologies work by absorbing huge amounts of unstructured data such as text, images or video, and this data allows a computer to think, act and respond more like a human. One example is Siri, which can tell you when it will rain in your city, town or village today or, if you are somewhere you are not fluent in the local language, can quickly translate a phrase for you. However, some experts fear that AI technology may threaten many jobs, and that it can also be used for malicious purposes.

This is because AI can process large amounts of data in ways that humans cannot. The goal for AI is to be able to do things like recognise patterns, make decisions, and judge like humans. 

AI is at the very foundation of tasks like image recognition and classification. It is also changing how we make decisions; for example, it can be used to manage traffic light systems or to predict when you will get your morning coffee. AI can be classified into analytical, human-inspired and humanised AI depending on the types of intelligence it exhibits. Computers can be fed huge amounts of information and trained to identify the patterns in that data, which can then be used to make predictions, solve problems and even learn from their own mistakes.
As well as data, AI relies on algorithms: lists of rules which must be followed in the correct order to complete a task. This is the technology behind voice-controlled virtual assistants such as Siri and Alexa. It lets Spotify, YouTube and BBC iPlayer suggest what you might want to play next, and helps Facebook and Twitter decide which social media posts to show users.
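As a rough illustration of how a "what to play next" rule might work, the toy sketch below recommends the track most often played alongside the current one. The data and logic are invented for illustration and are not the actual algorithms used by the services mentioned above.

```python
# Toy "play next" suggestion based on simple co-occurrence counting.
# The listening histories below are invented for illustration.
from collections import Counter

listening_history = [
    ["song_a", "song_b", "song_c"],
    ["song_a", "song_b"],
    ["song_b", "song_c"],
]

def suggest_next(current_song, history):
    """Return the track most often played in the same session as current_song."""
    co_plays = Counter()
    for session in history:
        if current_song in session:
            co_plays.update(track for track in session if track != current_song)
    return co_plays.most_common(1)[0][0] if co_plays else None

print(suggest_next("song_a", listening_history))  # -> "song_b"
```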

Concerns have also been raised about the possibility of AI being employed with malicious intent, for example in cyber attacks or disinformation campaigns. As a result, many researchers and decision-makers are attempting to ensure that AI is created and applied ethically and responsibly.

AI Spring: The Birth of Artificial Intelligence

Although it is difficult to pinpoint, the roots of AI can probably be traced back to the 1940s, specifically 1942, when the American science fiction writer Isaac Asimov published his short story Runaround. The plot of Runaround, a story about a robot developed by the engineers Gregory Powell and Mike Donovan, revolves around the Three Laws of Robotics: 

1.   A robot may not injure a human being or, through inaction, allow a human being to come to harm; 

2.   A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law;  

3.   A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. 

Asimov’s work inspired generations of scientists in the fields of robotics, AI and computer science, among them the American cognitive scientist Marvin Minsky, who later co-founded the MIT AI laboratory. At roughly the same time, but over 3,000 miles away, the English mathematician Alan Turing was working on much less fictional problems, developing a code-breaking machine called The Bombe for the British government with the purpose of deciphering the Enigma code used by the German army in the Second World War. The Bombe, which measured roughly 7 by 6 by 2 feet and weighed about a ton, is generally considered the first working electro-mechanical computer. 

The powerful way in which The Bombe was able to break the Enigma code, a task previously impossible for even the best human mathematicians, made Turing wonder about the intelligence of such machines. In 1950, he published his seminal article “Computing Machinery and Intelligence”, in which he described how to create intelligent machines and, in particular, how to test their intelligence. 

This Turing Test is still considered today a benchmark for identifying the intelligence of an artificial system: if a human interacting with both another human and a machine is unable to distinguish the machine from the human, then the machine is said to be intelligent. 

The words Artificial Intelligence were then officially coined about six years later, when in 1956 Marvin Minsky and John McCarthy (a computer scientist then at Dartmouth, who later moved to Stanford) hosted the approximately eight-week-long Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI) at Dartmouth College in New Hampshire. This workshop, which marks the beginning of the AI Spring and was funded by the Rockefeller Foundation, brought together those who would later be considered the founding fathers of AI. 

Participants included the computer scientist Nathaniel Rochester, who designed the IBM 701, the first commercial scientific computer, and the mathematician Claude Shannon, who founded information theory. The objective of DSRPAI was to bring together researchers from various fields in order to create a new research area aimed at building machines able to simulate human intelligence. 

History Of AI

But the idea of “artificial intelligence” goes back thousands of years, to ancient philosophers considering questions of life and death. In ancient times, inventors made things called “automatons”, which were mechanical and moved independently of human intervention. The word automaton comes from ancient Greek and means “acting of one’s own will.”

One of the earliest records of an automaton comes from 400 BCE and refers to a mechanical pigeon created by a friend of the philosopher Plato. Many years later, one of the most famous automatons was created by Leonardo Da Vinci in 1495. Leonardo Da Vinci wrote extensively about automatons, and his personal notebooks are littered with ideas for mechanical creations ranging from a hydraulic water clock to a robotic lion. Perhaps most extraordinary of all is his plan for an artificial man in the form of an armoured Germanic knight. 

Indeed, the history of AI dates back to antiquity with philosophers mulling over the idea that artificial beings, mechanical men, and other automatons had existed or could exist in some fashion. The idea of inanimate objects coming to life as intelligent beings has been around for a long time. The ancient Greeks had myths about robots, and Chinese and Egyptian engineers built automatons. 

Between 380 BC and the late 1600s:    Various mathematicians, theologians, philosophers, professors, and authors mused about mechanical techniques, calculating machines, and numeral systems that all eventually led to the concept of mechanised “human” thought in non-human beings.

Early 1700s:    Depictions of all-knowing machines akin to computers were more widely discussed in popular literature. Jonathan Swift’s novel Gulliver’s Travels mentioned a device called the engine, which is one of the earliest references to modern-day technology, specifically a computer. This device’s intended purpose was to improve knowledge and mechanical operations to a point where even the least talented person would seem to be skilled, all with the assistance and knowledge of a non-human mind mimicking artificial intelligence. 

And so the beginnings of modern AI can be traced to classical philosophers' attempts to describe human thinking as a symbolic system. But the field of AI wasn't formally founded until 1956, at a conference at Dartmouth College, in Hanover, New Hampshire, where the term Artificial Intelligence was coined.

AI is based on the assumption that the process of human thought can be mechanised. The study of mechanical, or formal, reasoning has a long history. 

Chinese, Indian and Greek philosophers all developed structured methods of formal deduction in the first millennium BC. Their ideas were developed over the centuries by philosophers such as Aristotle, who gave a formal analysis of the syllogism; Euclid, whose Elements was a model of formal reasoning; al-Khwarizmi, who developed algebra and gave his name to the algorithm; and European philosophers such as William of Ockham and Duns Scotus. The Spanish philosopher Ramon Llull (1232–1315) developed several logical machines devoted to the production of knowledge by logical means. Llull described his machines as mechanical entities that could combine basic and undeniable truths by simple logical operations, produced by the machine by mechanical means, in such a way as to produce all possible knowledge. His work had a great influence on Gottfried Leibniz, who redeveloped his ideas. 

In the 17th century, Leibniz, Thomas Hobbes and René Descartes explored the possibility that all rational thought could be made as systematic as algebra or geometry. Hobbes wrote in Leviathan: "reason is nothing but reckoning". Leibniz envisioned a universal language of reasoning, the characteristica universalis, which would reduce argumentation to calculation so that "there would be no more need of disputation between two philosophers than between two accountants. For it would suffice to take their pencils in hand, sit down to their slates, and say to each other (with a friend as witness, if they liked): Let us calculate." 

These philosophers had begun to articulate the physical symbol system hypothesis that would become the guiding faith of AI research. A physical symbol system takes physical patterns or symbols, combining them into structures or expressions and manipulating them, using processes to produce new expressions.
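A small sketch, assuming an invented toy rule set, may help illustrate the physical symbol system idea: symbols are combined into simple "if A then B" expressions, and a mechanical process applies them repeatedly to produce new expressions, that is, new facts.

```python
# Toy physical symbol system: symbols, "if A then B" expressions, and a
# process (forward chaining) that manipulates them to derive new expressions.
# The symbols and rules are invented for illustration.
facts = {"it_is_raining"}
rules = [
    ("it_is_raining", "the_ground_is_wet"),   # if A then B
    ("the_ground_is_wet", "shoes_get_muddy"),
]

# Keep applying the rules until no new symbols can be derived
changed = True
while changed:
    changed = False
    for antecedent, consequent in rules:
        if antecedent in facts and consequent not in facts:
            facts.add(consequent)
            changed = True

print(facts)  # {'it_is_raining', 'the_ground_is_wet', 'shoes_get_muddy'}
```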

In the 20th century:   The study of mathematical logic provided the essential breakthrough that made artificial intelligence seem plausible. The foundations had been set by such works as Boole’s The Laws of Thought and Frege’s Begriffsschrift. Building on Frege’s system, Russell and Whitehead presented a formal treatment of the foundations of mathematics in their masterpiece, the Principia Mathematica, in 1913. In the 1940s and 50s, a handful of scientists from a variety of fields (mathematics, psychology, engineering, economics and political science) began to discuss the possibility of creating an artificial brain. The field of AI research was founded as an academic discipline in 1956.

Turing's Test

Alan Turing was a young British polymath who explored the mathematical possibility of artificial intelligence. Turing suggested that humans use available information as well as reason in order to solve problems and make decisions, so why can’t machines do the same thing? In 1935 Turing described an abstract computing machine consisting of a limitless memory and a scanner that moves back and forth through the memory, symbol by symbol, reading what it finds and writing further symbols. The actions of the scanner are dictated by a program of instructions that is also stored in the memory in the form of symbols. This is Turing’s stored-program concept, and implicit in it is the possibility of the machine operating on, and so modifying or improving, its own program. 

Turing’s conception is now known simply as the universal Turing machine. All modern computers are in essence universal Turing machines.

In 1950 Turing published a landmark paper in which he speculated about the possibility of creating machines that think. He noted that "thinking" is difficult to define and devised his famous Turing Test. If a machine could carry on a conversation that was indistinguishable from a conversation with a human being, then it was reasonable to say that the machine was "thinking". This simplified version of the problem allowed Turing to argue convincingly that a "thinking machine" was at least plausible and the paper answered all the most common objections to the proposition.

AI maturation: 1957-1979:   The time between when the phrase “artificial intelligence” was coined and the 1980s was a period of both rapid growth and struggle for AI research. The late 1950s through the 1960s was a time of creation. From programming languages that are still in use to this day to books and films that explored the idea of robots, AI quickly became a mainstream idea. The 1970s showed similar progress, from the first anthropomorphic robot being built in Japan to the first example of an autonomous vehicle being built by an engineering graduate student. However, it was also a time of struggle for AI research, as the U.S. government showed little interest in continuing to fund it.

Notable dates include:

  • 1958: John McCarthy created LISP (acronym for List Processing), the first programming language for AI research, which is still in popular use to this day.
  • 1959: Arthur Samuel coined the term ‘machine learning’ in a speech about teaching machines to play checkers better than the humans who programmed them.
  • 1961: The first industrial robot, Unimate, started working on an assembly line at General Motors in New Jersey, tasked with transporting die castings and welding parts onto cars, work that was considered too dangerous for humans.
  • 1965: Edward Feigenbaum and Joshua Lederberg created the first ‘expert system’ which was a form of AI programmed to replicate the thinking and decision-making abilities of human experts.
  • 1966: Joseph Weizenbaum created the first “chatterbot” that used natural language processing (NLP) to converse with humans.
  • 1968: Soviet mathematician Alexey Ivakhnenko published “Group Method of Data Handling” in the journal “Avtomatika,” which proposed a new approach to AI that would later become what we now know as “Deep Learning.”
  • 1973: An applied mathematician named James Lighthill gave a report to the British Science Research Council, underlining that strides in AI research were not as impressive as scientists had promised, which led to much-reduced support and funding for AI research from the British government.
  • 1979: James L. Adams created the Stanford Cart in 1961, which became one of the first examples of an autonomous vehicle. In 1979, it successfully navigated a room full of chairs without human intervention.
  • 1979: The American Association for Artificial Intelligence, which is now known as the Association for the Advancement of Artificial Intelligence (AAAI), was founded.

Artificial Intelligence Is Everywhere

We now live in the age of ‘big data’, an age in which we have the capacity to collect huge amounts of information too cumbersome for a person to process. The application of AI in this regard has already been quite fruitful in several industries such as technology, banking, marketing and entertainment. We’ve seen that even if algorithms don’t improve much, big data and massive computing simply allow artificial intelligence to learn through brute force. Breakthroughs in computer science, mathematics or neuroscience all serve as potential routes through the ceiling of Moore’s Law.

AI’s Future

Looking to the future, AI is likely to play an increasingly important role in solving some of the biggest challenges facing society, such as climate change, healthcare and cyber security. However, there are concerns about AI’s ethical and social implications, particularly as the technology becomes more advanced and autonomous.

Moreover, as AI continues to evolve, it will profoundly impact virtually every aspect of our lives, from how we work and communicate, to how we learn and make decisions.  

We can also expect to see driverless cars being used on the road in the next twenty years. In the long term, the goal is general intelligence, which is a machine that surpasses human cognitive abilities in all tasks. This is along the lines of the sentient robot we are used to seeing in movies. 

Even if the capability is there, the ethical questions would serve as a strong barrier against fruition. When that time comes, we will need to have a serious conversation about machine policy and ethics, but for now, we’ll allow AI to steadily improve and run amok in society.

AI has the potential to revolutionise the world of work, but this raises questions about which roles it might displace. 
A recent report by investment bank Goldman Sachs suggested that AI could replace 300 million jobs around the world as certain tasks and job functions become automated. That equates to a quarter of all the work humans currently do in the US and Europe. 

The report said that the jobs that might be affected include administration, legal, architecture and management. But it also identified huge potential benefits for many sectors and predicted that AI could lead to a 7% increase in global GDP. Some areas of medicine and science are already taking advantage of AI, with doctors applying AI to help recognise breast cancers, and scientists using it to develop new antibiotics. 

AI will transform the scientific method:   We can expect to see orders of magnitude of improvement in what can be accomplished. There's a certain set of ideas that humans can computationally explore. There’s a broader set of ideas that humans with computers can address. And there’s a much bigger set of ideas that humans with computers, plus AI, can successfully tackle. AI enables an unprecedented ability to analyse enormous data sets and computationally discover complex relationships and patterns. 

AI, augmenting human intelligence, is primed to transform the scientific research process, unleashing a new golden age of scientific discovery in the coming years.

AI will become part of Foreign Policy:    The National Security Commission on Artificial Intelligence has created important recommendations, concluding that the US government needs to accelerate AI innovation. There’s little doubt that AI will be imperative to the continuing economic resilience and geopolitical leadership of the United States.

AI will enable next-generation consumer experiences:   Next-generation consumer experiences like the metaverse and cryptocurrencies have garnered much buzz. These experiences and others like them will be critically enabled by AI. The metaverse is inherently an AI problem because humans lack the sort of perception needed to overlay digital objects on physical contexts or to understand the range of human actions and their corresponding effects in a metaverse setting.

More and more of our life takes place at the intersection of the world of bits and the world of atoms. AI algorithms have the potential to learn much more quickly in a digital world (e.g., virtual driving to train autonomous vehicles). These are natural catalysts for AI to bridge the feedback loops between the digital and physical realms. For instance, blockchain, cryptocurrency and distributed finance, at their core, are all about integrating frictionless capitalism into the economy. 

To make this vision real, distributed applications and smart contracts will require a deeper understanding of how capital activities interact with the real world.  

Addressing the climate crisis will require AI:   As a society we have much to do in mitigating the socioeconomic threats posed by climate change, and carbon pricing policies are still in their infancy. Many promising emerging ideas require AI to be feasible. One potential new approach involves prediction markets powered by AI that can tie policy to impact, taking a holistic view of environmental information and interdependence. 

This would likely be powered by digital "twin Earth" simulations that would require staggering amounts of real-time data and computation to detect nuanced trends imperceptible to human senses. 

Other new technologies such as carbon dioxide sequestration cannot succeed without AI-powered risk modeling, downstream effect prediction and the ability to anticipate unintended consequences.

AI will create truly personalised medicine:   Personalised medicine has been an aspiration since the decoding of the human genome. But tragically it remains an aspiration. One compelling emerging application of AI involves synthesising individualised therapies for patients. Moreover, AI has the potential to one day synthesise and predict personalised treatment modalities in near real-time, no clinical trials required.

Simply put, AI is uniquely suited to construct and analyse "digital twin" rubrics of individual biology and is able to do so in the context of the communities an individual lives in. Without AI, it is impossible to make sense of the massive datasets from an individual’s physiology, let alone the effects on individual health outcomes from environment, lifestyle and diet. AI solutions have the potential not only to improve the state of the art in healthcare, but also to play a major role in reducing persistent health inequities.

Philosophy of Artificial Intelligence

The philosophy of AI is a branch of the philosophy of mind and the philosophy of computer science that explores AI and its implications for knowledge and understanding of intelligence, ethics, consciousness, epistemology and free will. Furthermore, the field is concerned with the creation of artificial animals or artificial people, so the discipline is of considerable interest to philosophers. 

These factors contributed to the emergence of the philosophy of artificial intelligence. Some scholars argue that the AI community's dismissal of philosophy is detrimental. The philosophy of artificial intelligence attempts to answer questions such as the following: 

  • Can a machine act intelligently? Can it solve any problem that a person would solve by thinking?
  • Are human intelligence and machine intelligence the same? Is the human brain essentially a computer?
  • Can a machine have a mind, mental states, and consciousness in the same sense that a human being can? Can it feel how things are?

Questions like these reflect the divergent interests of AI researchers, cognitive scientists and philosophers. The scientific answers to these questions depend on the definition of "intelligence" and "consciousness" and exactly which "machines" are under discussion. Important propositions in the philosophy of AI include the following: 

  • If a machine behaves as intelligently as a human being, then it is as intelligent as a human being. 
  • The Dartmouth proposal: "Every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it." 
  •  A physical symbol system has the necessary and sufficient means of general intelligent action. 
  • The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds. 
  • For 'reason' ... is nothing but 'reckoning,' that is adding and subtracting, of the consequences of general names agreed upon for the 'marking' and 'signifying' of our thoughts. 

Conclusions

Nobody knows whether AI will allow us to enhance our own intelligence, as Raymond Kurzweil from Google thinks, or whether it will eventually lead us into World War III, a concern raised by Elon Musk. However, everyone agrees that it will result in unique ethical, legal, and philosophical challenges that will need to be addressed. 

For decades, ethics has dealt with the Trolley Problem, a thought experiment in which an imaginary person needs to choose between inactivity which leads to the death of many and activity which leads to the death of few. In a world of self-driving cars, these issues will become actual choices that machines, and, by extension, their human programmers will need to make. In response, calls for regulation have been numerous, including by major actors such as Mark Zuckerberg. 

But how do we regulate a technology that is constantly evolving by itself, and one that few experts, let alone politicians, fully understand? How do we overcome the challenge of being sufficiently broad to allow for future evolutions in this fast-moving world and sufficiently precise to avoid everything being considered as AI? 

There are today dozens of different apps that allow a user to play chess against her phone. Playing chess against a machine, and losing with near certainty, has become a thing not even worth mentioning. The applications of AI are likely to impact critical facets of our economy and society over the coming decade. We are in the early innings of what many credible experts view as the most promising era in technology innovation and value creation for the foreseeable future.

The AI revolution is upon us, and companies must prepare to adapt to this change. It is important to make an inventory of the current skills within the organisation to identify which additional skills the employees need to learn. 

Organisations will need to develop an AI strategy and identify those areas where AI is most effective, whether in a product or a service. Failing to act inevitably means falling behind. 

Given how AI has been presented in the media, particularly in some of our favorite science fiction films, it is obvious that the development of this technology has raised concerns about the possibility that humans could one day become redundant in the workplace. After all, many jobs formerly carried out by human hands have been mechanised as technology has improved. It makes sense to worry that the development of clever computers may spell the beginning of the end for much current employment as we know it. 

But jobs will still be available, just as they were when previous industrial revolutions displaced older occupations and humans gradually replaced the old jobs with new ones; that is the basic answer to the question of the future of AI. The productivity of AI may boost our workplaces, which will benefit people by enabling them to get more done. 

As the future of AI replaces tedious or dangerous tasks, the human workforce is freed to focus on tasks for which it is better equipped, such as those requiring creativity and empathy.

References: LiveScience, LinkedIn Pulse, Goldman Sachs, Livingetc, Investopedia, BBC, Forbes, Haenlein & Kaplan, Tableau, Harvard SITN, G2, Coin Telegraph, SimpliLearn
