From Machine Learning To Machine Reasoning

The conversation around Artificial Intelligence usually revolves around technology-focused topics: machine learning, conversational interfaces, autonomous agents, and other aspects of data science, math, and implementation. 

However, the history and evolution of AI are also inextricably linked with waves of innovation and research breakthroughs that run headfirst into economic and technological roadblocks. 

There seems to be a recurring pattern of discovery, innovation, interest, investment, cautious optimism, boundless enthusiasm, realisation of limitations, technological roadblocks, withdrawal of interest, and retreat of AI research back to academic settings. These waves of advance and retreat appear to be as consistent as sea waves on the shore.

This pattern is vexing to technologists and investors because it doesn’t follow the usual technology adoption lifecycle. As popularised by Geoffrey Moore in his book “Crossing the Chasm”, technology adoption usually follows a well-defined path. 
A technology is developed and finds early interest from innovators and then early adopters, and if it can make the leap across the “chasm”, it gets adopted by the early majority market. From there it’s off to the races, with demand from the late majority and finally the technology laggards.

If it can’t cross the chasm, then it ends up in the dustbin of history. However, what makes AI distinct is that it doesn’t fit the technology adoption lifecycle pattern.

AI isn’t a discrete technology. Rather, it’s a quest… a quest for the intelligent machine. This quest inspires academics and researchers to come up with theories of how the brain and intelligence work, and with concepts for how to mimic these aspects with technology. 

AI is a generator of technologies, each of which individually goes through the technology lifecycle. Investors aren’t investing in AI; they’re investing in the outputs of AI research. As researchers discover new insights that help them surmount previous challenges, or as technology infrastructure finally catches up with concepts that were previously infeasible, new technology implementations are spawned and the cycle of investment renews.

The Need for Understanding
It’s clear that intelligence is like an onion: it has many layers. Once we understand one layer, we find that it only explains a limited amount of what intelligence is about. We discover there’s another layer underneath, and back to our research institutions we go to figure out how it works. In our recent exploration of the intelligence of voice assistants, we’re teasing at one of those next layers: understanding. 

That is, recognising an image among a category of trained concepts, converting audio waveforms into words, identifying patterns in a collection of data, or even playing games at advanced levels is different from actually understanding what those things are. 

This lack of understanding is why we get hilarious results in our Voice Assistant Benchmark, but it’s also why we can’t achieve truly autonomous machine capabilities in a wide range of situations. Without understanding, there’s no common sense. Without common sense and understanding, machine learning is just a bunch of learned patterns that can’t adapt to the constantly evolving changes of the real world.

While this description conveniently skips over the understanding step, we believe that understanding is the next logical threshold of AI capability. And like all the previous layers of this AI onion, tackling it will require new research breakthroughs, dramatic increases in compute capability, and volumes of data.

What? Don’t we have almost limitless data and boundless computing power? Not quite. Read on.

The Quest for Common Sense: Machine Reasoning
Early in the development of artificial intelligence, researchers realised that for machines to successfully navigate the real world, they would have to gain an understanding of how the world works and how different things relate to each other. 

In 1984, the world’s longest-lived AI project started. The Cyc project is focused on building a comprehensive “ontology” and knowledge base of common sense: basic concepts and “rules of thumb” about how the world works. The Cyc ontology uses a knowledge graph to structure how different concepts relate to each other, together with an inference engine that allows systems to reason about those facts.
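
To make the “knowledge graph plus inference engine” idea concrete, here is a minimal sketch in Python. It is emphatically not Cyc or its CycL representation language; the facts and the two “rules of thumb” are invented for illustration. The point is that a system can derive a fact it was never explicitly told.

    # Toy illustration of "knowledge graph + inference engine".
    # Not Cyc or CycL; the facts and rules below are invented examples.

    facts = {
        ("dog", "is_a", "mammal"),
        ("mammal", "is_a", "animal"),
        ("animal", "needs", "water"),
        ("rain", "provides", "water"),
    }

    def infer(facts):
        """Apply two hand-written rules of thumb until no new facts appear:
        is_a is transitive, and needs is inherited down is_a links."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            snapshot = list(derived)
            for (a, r1, b) in snapshot:
                for (c, r2, d) in snapshot:
                    if b != c:
                        continue
                    if r1 == "is_a" and r2 == "is_a":
                        new = (a, "is_a", d)      # e.g. dog is_a animal
                    elif r1 == "is_a" and r2 == "needs":
                        new = (a, "needs", d)     # e.g. dog needs water
                    else:
                        continue
                    if new not in derived:
                        derived.add(new)
                        changed = True
        return derived

    print(("dog", "needs", "water") in infer(facts))  # True, though never stated

The conclusion here comes from explicitly encoded relationships and rules, not from patterns learned from data.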

The main idea behind Cyc and other understanding-building knowledge encodings is the realisation that systems can’t be truly intelligent if they don’t understand what the things they are recognising or classifying actually are. This means we have to dig deeper than machine learning for intelligence.

We need to peel this onion one level deeper and scoop out another tasty parfait layer. We need more than machine learning; we need machine reasoning. 

Machine reasoning is the concept of giving machines the power to make connections between facts, observations, and all the magical things we can train machines to do with machine learning. Machine learning has enabled a wide range of capabilities and opened up a world of possibility that simply didn’t exist before we could train machines to identify and recognise patterns in data. 

However, this power is crippled by the fact that these systems can’t really use that information for higher ends, or apply what they learn in one domain to another, without human involvement. Even transfer learning is limited in application.
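
As a hypothetical sketch of the gap being described, consider what it takes to connect a learned pattern recogniser to even a few commonsense conclusions. The classify_image function and the rules below are invented placeholders, not any real system’s API; the reasoning step has to be supplied from outside the trained model.

    # Hypothetical sketch: bridging a learned recogniser and hand-written rules.
    # classify_image() stands in for any trained model; the rules are invented.

    def classify_image(image_path):
        """Placeholder for a trained classifier that returns a single label."""
        return "umbrella"  # pretend the model recognised an umbrella

    COMMONSENSE_RULES = {
        "umbrella": ["it may be raining", "the ground may be wet"],
        "snow": ["it is probably cold", "roads may be slippery"],
    }

    def reason_about(label):
        """Draw conclusions the classifier itself knows nothing about."""
        return COMMONSENSE_RULES.get(label, [])

    label = classify_image("street_scene.jpg")
    print(label, "->", reason_about(label))
    # umbrella -> ['it may be raining', 'the ground may be wet']

Today, the connective tissue between recognition and conclusion is still largely written, curated, or supervised by humans.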

Indeed, we’re rapidly facing the reality that machine learning-focused AI will soon hit the wall of its current capabilities. To get to the next level we need to break through this wall and shift from machine learning-centric AI to machine reasoning-centric AI. However, that is going to require research breakthroughs we haven’t achieved yet.

Are we Still Limited by Data and Compute Power?
The fact that the Cyc project holds the distinction of being the longest-lived AI project is a bit of a back-handed compliment. The Cyc project is long-lived because, after all these decades, the quest for common-sense knowledge is proving elusive. 
Codifying common sense into a machine-processable form is a tremendous challenge. Not only do you need to encode the entities themselves in a way that a machine knows what you’re talking about, you also need to encode all the inter-relationships between those entities. 

There are millions, if not billions, of “things” that a machine needs to know. Some of these things are tangible, like “rain”, but others are intangible, such as “thirst”. The work of encoding these relationships is being partially automated, but it still requires humans to verify the accuracy of the connections… because, after all, if machines could do this we would have solved the machine recognition challenge. It’s a bit of a chicken-and-egg problem. 

You can’t solve machine recognition without some way to codify the relationships between pieces of information. But you can’t scalably codify all the relationships that machines would need to know without some form of automation.
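
A rough, hypothetical back-of-envelope sketch shows why hand verification doesn’t scale: even a tiny vocabulary of entities and relation types produces a large space of candidate connections for a human to confirm or reject. The entities and relations below are invented for illustration.

    # Hypothetical sketch of the scaling problem: every new entity multiplies
    # the number of candidate relationships a human would have to verify.
    from itertools import permutations

    entities = ["rain", "water", "thirst", "umbrella", "cloud"]  # tangible and intangible
    relations = ["causes", "relieves", "is_used_in", "part_of"]

    # Candidate (subject, relation, object) triples awaiting human verification.
    candidates = [(s, r, o) for s, o in permutations(entities, 2) for r in relations]
    print(len(candidates))  # 5 entities x 4 relation types -> 80 candidates

    # Growth is roughly quadratic in entities and linear in relation types:
    # a million entities and a hundred relation types give a candidate space
    # on the order of 10**14, far beyond any manual verification effort.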

Machine learning has proven to be very data-hungry and compute-intensive. Over the past decade, many iterative enhancements have lessened the compute load and helped make data use more efficient. GPUs, TPUs, and emerging FPGAs are helping to provide the raw compute horsepower needed. Yet, despite these advances, complicated machine learning models with many dimensions and parameters still require intense amounts of compute and data. Machine reasoning is easily an order of magnitude or more beyond machine learning in complexity. Accomplishing the task of reasoning out the complicated relationships between things, and truly understanding those things, might be beyond today’s compute and data resources.
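
To give a sense of scale, here is a rough, back-of-envelope estimate using the commonly cited rule of thumb that training a dense neural network costs roughly six floating-point operations per parameter per training example. All of the specific numbers are illustrative assumptions, not measurements of any particular system.

    # Back-of-envelope estimate of training compute. The 6 * params * examples
    # rule of thumb is a rough approximation for dense networks; the specific
    # numbers below are illustrative assumptions, not measurements.

    params = 1e9              # a 1-billion-parameter model (assumption)
    examples = 1e11           # 100 billion training examples/tokens (assumption)
    flops_needed = 6 * params * examples      # ~6e20 floating-point operations

    gpu_flops_per_sec = 1e14  # ~100 TFLOP/s sustained on one accelerator (assumption)
    seconds = flops_needed / gpu_flops_per_sec
    print(f"{seconds / 86400:.0f} accelerator-days")  # roughly 69 days on one device

And that is for pattern learning alone, before any reasoning over the learned representations is attempted.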

Onward Progress
The current wave of interest and investment in AI doesn’t show any signs of slowing or stopping any time soon, but it’s inevitable it will slow at some point for one simple reason: we still don’t understand intelligence and how it works. Despite the amazing work of researchers and technologists, we’re still guessing in the dark about the mysterious nature of cognition, intelligence, and consciousness. 

At some point we will be faced with the limitations of our assumptions and implementations, and we’ll work to peel the onion one more layer and tackle the next set of challenges. Machine reasoning is quickly approaching as the next challenge we must surmount in the quest for artificial intelligence.

If we can apply our research talent and investment to tackling this next layer, we can keep the momentum of AI research and investment going. If not, the pattern will repeat itself and the current wave will crest. It might not happen now, or even within the next few years, but the ebb and flow of AI is as inevitable as the waves upon the shore.
