Artificial Intelligence Today - How AI Works




Artificial Intelligence (AI) is a revolutionary field of computer science that is becoming a core component of a number of emerging technologies, such as big data, robotics, and the Internet of Things (IoT). It will continue to act as a technological innovator in the coming years, just as earlier technologies changed the course of history.

AI has moved decisively from fiction to reality: machines that assist humans with intelligent behaviour are no longer confined to sci-fi movies but exist in the real world. From steam power and electricity to computers and the Internet, technological advancements have always disrupted labour markets, pushing out some jobs while creating new ones, and AI will do the same.

One of the most significant areas of development in AI is in the field of machine learning (ML). This technology allows machines to learn and improve their performance without being explicitly programmed.

ML algorithms are used in a variety of applications, including image and speech recognition, Natural Language Processing (NLP) and predictive analytics.

These algorithms have seen significant improvements in recent years, thanks to the availability of large amounts of data and advances in computing power.
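
As a minimal sketch of what ‘learning without being explicitly programmed’ means in practice, the Python snippet below fits a classifier to a handful of labelled examples instead of hand-coding a rule. The library (scikit-learn) and the toy data are our own assumptions for illustration, not something drawn from this article.

    # A model 'learns' a rule from labelled examples; no rule is written by hand.
    # Assumes scikit-learn is installed; the data is invented for illustration.
    from sklearn.tree import DecisionTreeClassifier

    # Each example: [hours_of_daylight, temperature_c]; label: 1 = summer, 0 = winter
    X = [[16, 25], [15, 22], [8, 2], [7, -1], [14, 19], [9, 4]]
    y = [1, 1, 0, 0, 1, 0]

    model = DecisionTreeClassifier()
    model.fit(X, y)                  # the learning step
    print(model.predict([[10, 5]]))  # infers a label for an unseen input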

AI has become a prevalent part of modern life. When you search for something on Google, you are dealing with its Multitask Unified Model (MUM), the latest in a series of AI algorithms at the core of Google’s search engine.
If you own Amazon’s Alexa or a similar home virtual assistant, you’ve brought an AI into your home.

Popular misconceptions still tend to place AI on an island with robots and self-driving cars. However, this approach fails to recognise AI’s major practical application: processing the vast amounts of data generated daily.
By strategically applying AI to certain processes, insight gathering and task automation occur at an otherwise unimaginable rate and scale.

  • Parsing through the mountains of data created by humans, AI systems perform intelligent searches, interpreting both text and images to discover patterns in complex data, and then act on those learnings.
  • Many of AI’s revolutionary technologies are common buzzwords, like “natural language processing,” “deep learning,” and “predictive analytics”: cutting-edge technologies that enable computer systems to understand the meaning of human language, learn from experience, and make predictions, respectively.

Understanding AI jargon is the key to facilitating discussion about the real-world applications of this technology.

The various AI technologies are disruptive, revolutionising the way humans interact with data and make decisions, and should be understood in basic terms by all of us. 

  • AI enables machines and computer applications to mimic human intelligence, learning from experience via iterative processing and algorithmic training in order to perform human-like tasks.
  • AI systems work by combining large sets of data with intelligent, iterative processing algorithms to learn from patterns and features in the data that they analyse.
  • Each time an AI system runs a round of data processing, it tests and measures its own performance and develops additional expertise, as the sketch below illustrates.
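
Taken together, the points above describe a loop: process data, measure performance, adjust, repeat. A minimal sketch of that loop in plain Python follows; the task (learning the weight of a simple linear rule) and all the numbers are our own invented illustration, not a real AI system.

    # Each round, the system measures its own error and adjusts, developing
    # 'expertise' at the task. The target rule here is y = 2x.
    data = [(1, 2), (2, 4), (3, 6), (4, 8)]
    w = 0.0                                   # the parameter being learned
    for round_no in range(20):
        error = sum((w * x - y) ** 2 for x, y in data) / len(data)
        gradient = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= 0.01 * gradient                  # adjust to reduce the measured error
        print(f"round {round_no}: error={error:.4f}, w={w:.3f}")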

Because AI never needs a break, it can run through hundreds, thousands, or even millions of tasks extremely quickly, learning a great deal in very little time and becoming extremely capable at whatever it’s being trained to accomplish. But the key to understanding how AI truly works is recognising that AI isn’t just a single computer programme or application, but an entire discipline, or science.

The goal of AI science is to build a computer system that is capable of modelling human behaviour so that it can use human-like thinking processes to solve complex problems. 

To accomplish this objective, AI systems use a whole series of techniques and processes, as well as a vast array of different technologies.

History of AI

The notion of artificial intelligence dates back as far as ancient Greece, with Aristotle’s development of the syllogism and deductive reasoning. The concept of intelligent beings is equally old: the ancient Greeks had myths about robots, and Chinese and Egyptian engineers built automatons. However, AI as we understand it now is less than a century old.

Between the 1940s and 50s, a handful of scientists from various fields discussed the possibility of creating an artificial brain. In 1943, Warren McCulloch and Walter Pitts published the paper ‘A Logical Calculus of the Ideas Immanent in Nervous Activity,’ which proposed the first mathematical model for building a neural network.

This idea was expanded upon in 1949 with the publication of Donald Hebb’s book, ‘The Organisation of Behaviour: A Neuropsychological Theory.’ Hebb proposed that neural pathways are created from experience, becoming stronger the more frequently they are used. This led to the rise of the field of AI research, which was founded as an academic discipline in 1956 at a conference at Dartmouth College, in Hanover, New Hampshire. The term was coined by John McCarthy, who is now considered the father of Artificial Intelligence.

These ideas were taken to the realm of machines in 1950 when Alan Turing published ‘Computing Machinery and Intelligence,’ which set forth what is now known as the Turing Test for determining whether a machine is actually intelligent. Around the same time, Marvin Minsky and Dean Edmonds built the Stochastic Neural Analog Reinforcement Calculator (SNARC), the first neural network computer, and Claude Shannon published the paper ‘Programming a Computer for Playing Chess.’ Science fiction author Isaac Asimov also published his ‘Three Laws of Robotics’ in 1950, setting out a basic blueprint for AI’s interaction with humanity. In 1952, Arthur Samuel created a self-learning computer programme to play draughts, and in 1954 sixty Russian sentences were translated into English by the Georgetown-IBM machine translation experiment.

Less than 10 years after helping the Allied forces win World War II by breaking the Nazi encryption machine Enigma, the mathematician Alan Turing asked the question: “Can machines think?” His 1950 paper ‘Computing Machinery and Intelligence’ and the Turing Test it proposed established the fundamental goal and vision of AI.

At its core, AI is the branch of computer science that aims to answer Turing’s question in the affirmative. It is the endeavour to replicate or simulate human intelligence in machines.

The term artificial intelligence was coined in 1956 at the ‘Dartmouth Summer Research Project on Artificial Intelligence.’ This conference, led by John McCarthy, defined the scope and goals of AI, and the same year saw Allen Newell and Herbert Simon demonstrate the Logic Theorist, the first automated reasoning programme.

John McCarthy continued his work in AI in 1958 by developing the AI programming language Lisp and publishing a paper ‘Programs with Common Sense,’ which proposed a hypothetical complete AI system that was able to learn from experience as effectively as humans do.

In the 1960s, the US Department of Defence took interest in this type of work and began training computers to mimic basic human reasoning. For example, the Defence Advanced Research Projects Agency (DARPA) completed street mapping projects in the 1970s. And DARPA produced intelligent personal assistants in 2003, long before Siri, Alexa or Cortana were household names.

Japan entered the AI arena in 1982 with the Fifth Generation Computer Systems project, leading to the U.S. government restarting funding with the launch of the Strategic Computing Initiative. By 1985, AI development was increasing once more as over a billion dollars were invested in the industry and specialised companies sprang up to build systems based on the Lisp programming language.

Defining AI

Artificial Intelligence is the simulation of human intelligence processes by machines, especially computer systems. Specific applications of AI include expert systems, natural language processing, speech recognition and machine vision.

AI involves using computers to do things that traditionally require human intelligence. AI can process large amounts of data in ways that humans cannot. The goal for AI is to be able to do things like recognise patterns, make decisions, and judge like humans.

The major limitation in defining AI as simply “building machines that are intelligent” is that it doesn’t actually explain what AI is or what makes a machine intelligent.

AI is an inter-disciplinary science with multiple approaches, and advancements in machine learning and deep learning are creating a paradigm shift in virtually every area of the technology industry. Various new definitions and tests of machine intelligence have been proposed recently and well received, including one in a 2019 research paper entitled ‘On the Measure of Intelligence.’

In the paper, veteran deep learning researcher and Google engineer François Chollet argues that intelligence is the “rate at which a learner turns its experience and priors into new skills at valuable tasks that involve uncertainty and adaptation.” In other words: the most intelligent systems can take just a small amount of experience and go on to predict the outcome in many varied situations.

Meanwhile, in their book Artificial Intelligence: A Modern Approach, authors Stuart Russell and Peter Norvig approach the concept of AI by unifying their work around the theme of intelligent agents in machines. With this in mind, AI is “the study of agents that receive percepts from the environment and perform actions.” Norvig and Russell focus particularly on rational agents that act to achieve the best outcome, noting “all the skills needed for the Turing Test also allow an agent to act rationally.”

Former MIT professor of AI and computer science Patrick Winston defined AI as “algorithms enabled by constraints, exposed by representations that support models targeted at loops that tie thinking, perception and action together.”

While these definitions may seem abstract to the average person, they help focus the field as an area of computer science and provide a blueprint for infusing machines and programmes with machine learning.

Types of AI

Narrow AI:   This form of AI focuses on performing a single task well. Examples include Google search, image recognition software, personal assistants such as Siri and Alexa, and self-driving cars.

This narrow reactive form of AI uses algorithms to optimise outputs based on a set of inputs. Chess-playing AIs, for example, are reactive systems that optimise the best strategy to win the game. Reactive AI tends to be fairly static, unable to learn or adapt to novel situations. Thus, it will produce the same output given identical inputs.
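
To make the ‘same output given identical inputs’ point concrete, here is a minimal sketch of a reactive agent as a fixed mapping from the current state to an action, with no memory and no learning. The game states and moves are hypothetical, invented purely for illustration.

    # A reactive policy: a pure function of the current state. Identical
    # inputs always produce identical outputs; nothing is remembered.
    def reactive_policy(board_state: str) -> str:
        rules = {
            "winning_move_available": "take_win",
            "opponent_threatens_win": "block",
        }
        return rules.get(board_state, "develop_centre")

    print(reactive_policy("opponent_threatens_win"))  # 'block'
    print(reactive_policy("opponent_threatens_win"))  # same input, same output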

These computer systems all perform specific tasks and are powered by advances in machine learning and deep learning. Machine learning takes computer data and uses statistical techniques to allow the AI system to ‘learn’ and get better at performing a task.

Artificial General Intelligence (AGI):   This form of AI is the type that has been seen in science fiction books, TV programmes and movies. It is a more intelligent system than narrow AI, one that uses general intelligence, like a human being, to solve problems. However, truly achieving this level of AI has proven difficult.

AI researchers have struggled to create a system that can learn and act in any environment, with a full set of cognitive abilities, as a human would. Theory-of-mind AIs are fully adaptive and have an extensive ability to learn and retain past experiences. These types of AI include advanced chatbots that could pass the Turing Test, fooling a person into believing the AI was a human being. While advanced and impressive, these AIs are not self-aware.

Self-aware AI, as the name suggests, would be sentient and aware of its own existence. It is still in the realm of science fiction, and some experts believe that an AI will never become conscious or "alive". This is the type of AI seen in movies such as The Terminator, where super-intelligent robots become an independent danger to humanity. However, experts agree that this is not something we need to worry about at any point soon.

Background To How AI Works

Building an AI system is a careful process of reverse-engineering human traits and capabilities in a machine, then using its computational prowess to surpass what we are capable of. To understand how AI actually works, one needs to take a deep dive into its various sub-domains and examine how each can be applied across industry.

Machine Learning:    ML teaches a machine how to make inferences and decisions based on past experience. It identifies patterns in past data and infers their meaning to reach a possible conclusion without involving human experience. This automated, data-driven route to conclusions saves businesses time and helps them make better decisions.

Deep Learning:    Deep Learning is an ML technique. It teaches a machine to process inputs through layers in order to classify, infer and predict the outcome.
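
A minimal sketch of ‘processing inputs through layers’ is shown below: a tiny two-layer network forward pass in NumPy. The weights are random here, where a trained network would have learned them; the sizes and shapes are our own invented illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=4)                           # input features
    W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)    # layer 1 parameters
    W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)    # layer 2 parameters

    hidden = np.maximum(0, W1 @ x + b1)              # layer 1: linear map + ReLU
    logits = W2 @ hidden + b2                        # layer 2: map to 3 classes
    probs = np.exp(logits) / np.exp(logits).sum()    # softmax: class probabilities
    print(probs)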

Neural Networks:   Neural networks work on principles similar to those of human neural cells. They are a series of algorithms that capture the relationships between underlying variables and process the data as a human brain does.

Natural Language Processing:    NLP is the science of a machine reading, understanding and interpreting language. Once a machine understands what the user intends to communicate, it responds accordingly.
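
Below is a minimal sketch of that read-understand-respond loop, with hand-written keyword rules standing in for what real NLP systems learn from data; the intents and replies are invented for illustration.

    # Map what the user says to an intent, then respond accordingly.
    def respond(utterance: str) -> str:
        tokens = set(utterance.lower().replace("?", "").split())
        if tokens & {"weather", "rain", "forecast"}:
            return "Here is today's forecast."
        if tokens & {"hello", "hi"}:
            return "Hello! How can I help?"
        return "Sorry, I didn't understand that."

    print(respond("Hi there"))
    print(respond("Will it rain today?"))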

Computer Vision:   Computer vision algorithms try to understand an image by breaking it down and studying the different parts of the objects in it. This helps the machine classify and learn from a set of images, making better output decisions based on previous observations.
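
The sketch below shows the ‘studying different parts of an image’ idea at its smallest: sliding a 3x3 filter over a toy greyscale image to highlight vertical edges. NumPy only; the image and filter values are invented for illustration.

    import numpy as np

    image = np.array([[0, 0, 1, 1, 1]] * 5, dtype=float)  # dark left, bright right
    kernel = np.array([[-1, 0, 1]] * 3, dtype=float)      # responds to left-right change

    h, w = image.shape
    edges = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            # examine one 3x3 part of the image at a time
            edges[i, j] = (image[i:i+3, j:j+3] * kernel).sum()
    print(edges)  # large values mark where dark meets bright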

Cognitive Computing:   Cognitive computing algorithms try to mimic a human brain by analysing text, speech, images and objects in the manner a human does, aiming to give the desired output.

Artificial Intelligence can be built from a diverse set of components and will function as an amalgamation of the following fields:

Philosophy

The purpose of philosophy for humans is to help us understand our actions, their consequences, and how we can make better decisions. Modern intelligent systems can be built by following the different approaches of philosophy that will enable these systems to make the right decisions, mirroring the way that an ideal human being would think and behave.

Philosophy would help these machines think about and understand the nature of knowledge itself. It would also help them make the connection between knowledge and action through goal-based analysis to achieve desirable outcomes.

Mathematics

Mathematics is the language of the universe, and a system built to solve universal problems needs to be proficient in it. For machines, an understanding of logic, computation and probability is necessary. The earliest algorithms were just mathematical pathways to make calculations easy, soon followed by theorems, hypotheses and more, all of which followed a pre-defined logic to arrive at a computational output.

The third mathematical application, probability, makes for accurate predictions of future outcomes, on which AI algorithms base their decision-making.
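
As a worked example of prediction from probability, the snippet below applies Bayes’ rule to a toy spam filter; every number is invented for illustration.

    # Bayes' rule: update the probability of 'spam' after seeing the word 'free'.
    p_spam = 0.4                 # prior: 40% of mail is spam
    p_word_given_spam = 0.7      # 'free' appears in 70% of spam
    p_word_given_ham = 0.1       # and in 10% of legitimate mail

    p_word = p_word_given_spam * p_spam + p_word_given_ham * (1 - p_spam)
    p_spam_given_word = p_word_given_spam * p_spam / p_word
    print(f"P(spam | 'free') = {p_spam_given_word:.2f}")  # about 0.82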

Economics

Economics is the study of how people make choices according to their preferred outcomes. It’s not just about money, although money is the medium through which people’s preferences are manifested in the real world.

There are many important concepts in economics, such as decision theory, operations research and Markov decision processes. All of these have contributed to our understanding of ‘rational agents’ and laws of thought by using mathematics to show how such decisions are made at large scale and what their collective outcomes are.

These types of decision-theoretic techniques help build these intelligent systems.

Neuroscience

Since neuroscience studies how the brain functions and Artificial Intelligence is trying to replicate the same, there’s an obvious overlap here.

The biggest difference between human brains and machines is that computers are millions of times faster than the human brain, while the human brain still has the advantage in storage capacity and interconnections. This gap is slowly being closed by advances in computer hardware and more sophisticated software, but a big challenge remains: we still do not know how to use computer resources to achieve the brain’s level of intelligence.

Psychology

Psychology can be viewed as the middle point between neuroscience and philosophy. It tries to understand how our specially configured and developed brain reacts to stimuli and responds to its environment, both of which are important in building an intelligent system. Cognitive psychology views the brain as an information-processing device, operating on the basis of beliefs and goals, much as we would build an intelligent machine of our own.

Many cognitive theories have already been codified to build algorithms that power the chatbots of today.

Computer Engineering

This is the most obvious application here, but we have put it at the end to help you understand what all this computer engineering is going to be based on.

Computer engineering will translate all our theories and concepts into a machine-readable language so that it can make its computations to produce an output that we can understand. Each advance in computer engineering has opened up more possibilities to build even more powerful AI systems, which are based on advanced operating systems, programming languages, information management systems, tools, and state-of-the-art hardware.

Control Theory & Cybernetics

To be truly intelligent, a system needs to be able to control and modify its actions to produce the desired output.

The desired output is defined as an objective function, towards which the system continually moves by modifying its actions in response to changes in its environment, using mathematical computation and logic to measure and optimise its behaviour.
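
A minimal sketch of that feedback loop is a proportional controller: measure the gap between the current state and the objective, then act to shrink it. The thermostat target and gain below are invented for illustration.

    target = 21.0                    # the objective the system moves towards
    temperature = 15.0               # current state of the environment
    gain = 0.3                       # how strongly to correct each step

    for step in range(10):
        error = target - temperature # measure deviation from the objective
        temperature += gain * error  # modify the action to reduce it
        print(f"step {step}: temperature={temperature:.2f}")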

Linguistics

All thought is expressed in some language, and language is the most understandable representation of thought. Linguistics has led to the formation of natural language processing, which helps machines understand our syntactic language and produce output in a manner that is understandable to almost anyone.

Understanding a language is more than just learning how sentences are structured; it also requires knowledge of the subject matter and context, which has given rise to the knowledge representation branch of linguistics.

The Future of AI

When one considers the computational costs and the technical data infrastructure running behind artificial intelligence, actually executing on AI is a complex and costly business. Fortunately, there have been massive advancements in computing technology, as indicated by Moore’s Law, which states that the number of transistors on a microchip doubles about every two years while the cost of computers is halved.
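
As a quick worked illustration of that doubling, the loop below compounds a transistor count every two years from a 1971-era starting point; the figure of roughly 2,300 transistors is the commonly cited count for an early microprocessor, and the output should be read as an order-of-magnitude sketch, not a forecast.

    count = 2_300            # circa-1971 chip, illustrative starting figure
    year = 1971
    while year < 2021:
        year += 2
        count *= 2           # one doubling per two-year period
    print(f"{year}: ~{count:,} transistors")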

By that logic, the advancements AI has made across a variety of industries have been major over the last several years, and the potential for an even greater impact over the next several decades seems inevitable. With artificial intelligence technology constantly developing, we will soon rely heavily on it for our daily tasks. Several everyday tasks, such as contacting friends, using an email service, or renting a car, are now made easier with the help of AI.

There are increasing challenges, including figuring out who is at fault when an autonomous vehicle hits a pedestrian and managing a global autonomous weapons arms race.

There is no doubt about the transformational impact of artificial intelligence on the economy, legal system, political system and regulatory system. However, attaining all the benefits of AI at global scale will have far-reaching implications that demand discussion and preparation.

Many people believe machines will inevitably become super-intelligent and that humans will eventually lose control. The likelihood of this scenario is debated, but we know new technology has always had unintended consequences. We will likely face challenges related to artificial intelligence’s unintended outcomes, but AI will significantly shape our future.

Today, AI is painting artwork, making music, completely changing how movies are made, preparing food, and forging recipes for everything from spices to whiskey to seeds, in innumerable ways.

Deep fakes are lifelike and could easily disrupt world events. AI-based robots can read text, charts, and faces/emotions better than humans. Robots are permeating all aspects of manufacturing and are beginning to become companions to the elderly, infirm or lonely.

We're moving to a world of personalised medicine based on your sequenced genome. It’s all there; it’s who you are. Once you, and the millions of others whose data will train the algorithms, are sequenced, healthcare will become predictive.

Healthcare Goes Genomic & Predictive

AI will be used to treat, and eliminate, neurological disorders like Alzheimer's, Parkinson’s, most birth defects, and spinal cord injuries as well as blindness and deafness. By 2050 robotic prosthetics may be stronger and more advanced than our own biological ones and they will be controlled by our minds.

AI will be able to do the initial examination, take tests, do X-rays and MRIs, and make a primary diagnosis and even treatment. Most necessary doctor interactions will be by videoconference, while robots will be on hand for assistance with everything, even surgery.

Quantum Computing

We are moving beyond the era of bits and bytes as our computing frame of reference. The future is about quantum computing. Quantum computing uses qubits, which can exist in a superposition of the 0 and 1 states. Early testing has shown dramatic, exponential speed-ups on certain queries, because a quantum machine can, in effect, explore many computational paths simultaneously.
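
A one-qubit superposition can be simulated in a few lines of NumPy, which makes the ‘mix of 0 and 1’ idea concrete; this is our own toy sketch, not a real quantum computation.

    import numpy as np

    ket0 = np.array([1.0, 0.0])                   # the classical 0 state
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
    state = H @ ket0                              # equal superposition of 0 and 1
    probabilities = np.abs(state) ** 2            # what measurement would observe
    print(probabilities)                          # [0.5 0.5]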

Searching large data sets, a task that will still matter greatly in 2050, will only be done with quantum computing. Quantum computing may usher in a whole new wave of relevant technology companies by 2050.

Transportation

The increased population of 2050, though mainly working from home, will still need to get around. In recent years, the technology behemoths have all acquired autonomous driving technology.

The Society of Automotive Engineers (SAE) defines six levels of vehicle driving automation systems. We’re now at level two, with cars able to control steering, acceleration, and braking, while still requiring steering wheels and drivers to remain engaged.
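
For reference, the six levels can be written out as a simple lookup; the short labels below paraphrase the commonly cited SAE J3016 names.

    SAE_LEVELS = {
        0: "No automation",
        1: "Driver assistance",
        2: "Partial automation (steering + speed; driver stays engaged)",
        3: "Conditional automation",
        4: "High automation",
        5: "Full automation",
    }
    print(SAE_LEVELS[2])  # where we are today, per the paragraph above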

The Metaverse

The metaverse is about simulation. Avatars can act, within tightly defined parameters, as our agents and our companions, and some may even be considered co-workers. By 2050, we will be unable to tell the difference between a virtualised real person and an AI-driven avatar. Avatars will be as ubiquitous as the cell phone is today.

As metaverse function grows, by 2050 it will be extraordinary by today’s standards. We will virtually be able to travel the world and innumerable custom planets, all from home. It will be our parallel life, connected via multiple devices, wearables and even our brain. It will be a mixed reality.

Many people will opt to spend much of their day in virtual worlds where they can become whoever they want in a “life” with apparently no limits. The metaverse will also drive the growth of NFTs and cryptocurrencies. We will look back and call today the early days.

Relationship to Work

Changes in how we work will be AI’s most profound impact for most people. Administrative, sales, food service, transportation and manufacturing jobs, among many others, will see massive disruption. The need to work in order to maintain what we now consider a basic life will be reduced. This transition may be handled delicately, or it may bring decades of pain for those committed to the present work ethos. There will always be more to do, but the skills gap will be profound.

There are many benefits of Artificial Intelligence, but the big plus to using AI in the workplace is that it does repetitive and mundane tasks that no one really wants to do.

This can make work easier for humans, making us more productive with less effort. For example, AI can input data into spreadsheets, help online customers return purchases, or fill out forms. The bottom line is that we are at the start of general AI, where machines have the capacity to understand or learn any intellectual task that a human being can.

We have forged a new path with machine learning that is able to be “general” in its approach. Neural networks used to be built with a specific task in mind and trained on curated datasets, but both constraints are increasingly unnecessary.

AI is now able to have higher level goals that are removed from specific tasks and datasets. AI can find the right data, make the calls, and execute. The tech industry as a whole is constantly pushing for progress, and artificial intelligence has been one of the pillars of that progress throughout the 21st century.

As advances are made and research conducted, AI’s influence over industry and the world will likely only grow. By 2050, AI will become generalised and an inseparable part of life.

Conclusion

The development of AI is at a point where the technology is being integrated into a wide range of industries and applications, with the potential to greatly improve efficiency and productivity. Advances in machine learning, particularly deep learning, and in robotics and autonomous systems have been especially successful.

However, there are still many challenges that need to be addressed, including the lack of interpretability and the lack of diversity in the data used to train AI models. It is important for ongoing research and development to address these challenges to ensure the continued success and responsible use of AI.

