Artificial Intelligence Will Change The Future Of The World



Many experts predict that machines in the form of Artificial Intelligence (AI) may outperform humans at every task within the next 45 years. A fundamental and measurable difference between now and the past is that machine learning has helped us come a long way towards solving perception.

Until recently, machines could not read, hear, or see, so input typically had to be curated for them. Modern systems can take visual, auditory, or language input directly. This development enables a machine to take inputs straight from the world, without human involvement, and create its own internal representation for further processing.

Big data is another difference, along with powerful tools, especially in the area of supervised learning, where the machine learns from data represented as input and output pairs.

Many problems are amenable to this type of formulation, and we have witnessed a mushrooming of machine learning systems in virtually every domain where large data sets have become available. It is becoming more common for computers to perform tasks better than the best humans can. Interaction between human and artificial intelligence already presents prodigious and exciting opportunities for mutual development, but the real potential lies in the near future and beyond, where it could be overwhelming.

With the continued rapid development not only of conventional technology but also of quantum research, the potential for Artificial Intelligence to evolve at a frightening speed is within our grasp.

Imagine the progress of AI if the power of Quantum Computing were readily available, with processing power potentially a million times greater than that of today's classical computers, and with each manufacturer striving not just to reach Quantum Supremacy but to go far beyond that revolutionary breakthrough.

Humanity can harness this amazing technology, with AI and Brain-Computer Interfaces (BCI) generating a technological revolution. This is the future, and that future is fast approaching.

Quantum Computing will bring the ability to process big data in abundance, which in turn will accelerate AI development and drive the advance of BCIs. AI will have the ability not just to pinpoint but to map and enhance, for example through DNA mapping that identifies the genes behind strength and intelligence.

The possibility of super-strong and super-intelligent humans able to match the huge advances in AI is attainable.

Current trends in AI are nothing if not remarkable. Day after day, we hear stories about systems and machines taking on tasks that, until very recently, seemed the permanent preserve of humankind. AI has become an important aspect of the future. This applies as much to Information Technology (IT) as it does to the many other industries that rely on it. Just a decade ago, AI technology seemed like something straight out of science fiction; today, we use it in everyday life without realising it, from intelligence research to facial recognition, and from speech recognition to automation.

AI and Machine Learning (ML) have taken over from traditional computing methods, changing how many industries perform and conduct their day-to-day operations. From research and manufacturing to modernising finance and healthcare, AI has changed everything in a relatively short amount of time. The digital transformation and adoption of AI technologies by industries has given rise to new advancements that solve and optimise many core challenges in the IT industry. Among all tech applications, AI sits at the core of development for almost every industry, with Information Technology among the first. The integration of AI systems can help reduce the burden on developers by improving efficiency, enhancing productivity, and assuring quality. The development and deployment of IT systems at large scale, once next to impossible, is now achievable thanks to AI's advanced algorithmic functions.

The very way we as humans work, both physically and mentally, could be changed and potentially improved by AI.

However, concerns are already emerging: the prospect of high-level machine intelligence systems that outperform human beings at every task is no longer considered science fiction. AI is one of the most important technologies in the world today. The United States and China compete for dominance in its development. CEOs believe it will significantly change the way they do business. And it has helped companies such as Facebook, Google, and Apple become among the largest in the world.

But how will this technology affect work in the future? Will it lead to a permanent underclass of people who are no longer employable because computers are doing their jobs? Will super intelligent computers someday take over the world, finding little use for the humans who created them? Or will robotic servants usher in a golden age of human leisure and prosperity?

Experts predict that networked artificial intelligence will amplify human effectiveness but also threaten human autonomy, agency, and capabilities.

  • They spoke of the wide-ranging possibilities: that computers might match or even exceed human intelligence and capabilities on tasks such as complex decision-making, reasoning and learning, sophisticated analytics and pattern recognition, visual acuity, speech recognition and language translation. 
  • They said “smart” systems in communities, in vehicles, in buildings and utilities, on farms and in business processes will save time, money and lives and offer opportunities for individuals to enjoy a more-customised future.
  • Many focused their optimistic remarks on health care and the many possible applications of AI in diagnosing and treating patients or helping senior citizens live fuller and healthier lives. They were also enthusiastic about AI’s role in contributing to broad public-health programmes built around massive amounts of data that may be captured in the coming years about everything from personal genomes to nutrition. 
  • Additionally, a number of these experts predicted that AI would abet long-anticipated changes in formal and informal education systems.

While there are many parallels between human intelligence and AI, there are stark differences too. Every autonomous system that interacts in a dynamic environment must construct a world model and continually update that model.

This means that the world must be perceived (or sensed through cameras, microphones and/or tactile sensors) and then reconstructed in such a way that the computer ‘brain’ has an effective and updated model of the world it is in before it can make decisions. The fidelity of the world model and the timeliness of its updates are the keys to an effective autonomous system. 
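To make this concrete, here is a minimal, hypothetical sketch in Python of the sense, reconstruct and decide loop described above. The class names, sensor inputs and the five-metre threshold are illustrative assumptions, not a reference to any particular robotics framework.

```python
# A minimal sketch of the sense -> update world model -> decide loop.
from dataclasses import dataclass, field
import time


@dataclass
class WorldModel:
    """The system's internal representation of its surroundings."""
    obstacles: list = field(default_factory=list)   # latest detected obstacles
    last_updated: float = 0.0                        # timeliness of the model

    def update(self, camera_frame, lidar_scan):
        # Fuse raw sensor input into an updated picture of the world.
        self.obstacles = detect_obstacles(camera_frame, lidar_scan)
        self.last_updated = time.time()


def detect_obstacles(camera_frame, lidar_scan):
    # Placeholder perception step: a real system would run computer-vision
    # and point-cloud models here.
    return [point for point in lidar_scan if point["distance_m"] < 5.0]


def decide(model: WorldModel):
    # Decisions are only as good as the fidelity and freshness of the model.
    if time.time() - model.last_updated > 0.5:
        return "stop"                 # stale model: act conservatively
    return "stop" if model.obstacles else "proceed"


model = WorldModel()
model.update(camera_frame=None, lidar_scan=[{"distance_m": 2.5}])
print(decide(model))                  # an obstacle is within 5 metres, so "stop"
```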

History

AI traces its beginnings to 1943, when Warren McCulloch and Walter Pitts produced the first work now recognised as AI: a model of artificial neurons. In 1949, Donald Hebb demonstrated an updating rule for modifying the connection strength between neurons; his rule is now called Hebbian learning.
Then came Alan Turing, who in his 1950 paper, Computing Machinery and Intelligence, imagined a machine that could communicate, via an exchange of typed messages, so capably that people conversing with it could not tell whether they were interacting with a machine or another person. “The idea of a digital computer is an old one. Charles Babbage, Lucasian Professor of Mathematics at Cambridge from 1828 to 1839, planned such a machine, called the Analytical Engine, but it was never completed,” says Turing. “Although Babbage had all the essential ideas, his machine was not at that time such a very attractive prospect. The speed which would have been available would be definitely faster than a human computer, but something like 100 times slower than the Manchester machine, itself one of the slower of the modern machines. The storage was to be purely mechanical, using wheels and cards... The fact that Babbage's Analytical Engine was to be entirely mechanical will help us to rid ourselves of a superstition. Importance is often attached to the fact that modern digital computers are electrical, and that the nervous system also is electrical... Since Babbage's machine was not electrical, and since all digital computers are in a sense equivalent, we see that this use of electricity cannot be of theoretical importance... Of course electricity usually comes in where fast signalling is concerned, so that it is not surprising that we find it in both these connections. In the nervous system chemical phenomena are at least as important as electrical... In certain computers the storage system is mainly acoustic. The feature of using electricity is thus seen to be only a very superficial similarity. If we wish to find such similarities we should look rather for mathematical analogies of function,” Turing writes in the paper.

A little later, in mid-1956, the term Artificial Intelligence was coined by a group of computer scientists, including Marvin Minsky of MIT and John McCarthy, who held a workshop at Dartmouth College. The goal of the workshop was “to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves”. The workshop was held between June and August 1956; many consider the event to mark the birth of the field of AI.

Progress in the field has since accelerated, notably over the past decade or so. This has been enabled by the convergence of advances in three areas: the explosion of data, ever-increasing computing power, and new algorithms (especially algorithms called “neural networks” and “deep learning”). 

Together, these advances have created a technological tidal wave. The first sign appeared in 2011, when IBM's Watson program beat the best human players of the TV game show Jeopardy. Other advances followed in quick succession. In 2015, for example, Google's AlphaGo beat a professional Go champion, and in 2017 AlphaGo beat the number one-ranked human player of the game, a feat that many experts had expected to be at least a decade away, since Go is far more complex than chess.

Historically, humans have worked in concert with machines, using them to make us more productive. Industrial machines boosted labour efficiency. Office machines have made individuals and commerce more productive and created opportunities and demand for new types of work. Big data and powerful algorithms have changed this landscape by providing machines with the grist from which to learn and improve autonomously. Improvements in decision making have led to the proliferation of learning machines in several areas. 

  • For example, systematic decision making at scale by algorithms has proliferated in industries such as finance and advertising, where volume and complexity make inclusion of a human in the loop infeasible, and the cost of mistakes is quite small relative to the benefit from automation. 
  • In contrast, in domains where these costs are high and/or not easily definable, automation becomes risky. For driverless cars to become widespread, for example, we would need to have good estimates of the costs and consequences of errors. 

Even more importantly, we must address the moral issues involved in algorithm design when critical trade-offs must be made, such as those associated with life-and-death decisions. As AI systems begin to take on more of the tasks formerly the purview of humans, a related question is whether they will ultimately create more jobs than they will destroy. 

AI Systems

Data security is of critical importance when it comes to securing personal, financial, or otherwise confidential data. Government and private organisations store large amounts of customer and strategic data that needs to be secure at all times. Artificial Intelligence also offers a series of algorithms that can be applied directly to help programmers detect and overcome software bugs, as well as to write code.

Some forms of Artificial Intelligence have been developed to provide coding suggestions, which in turn helps to increase efficiency and productivity and gives developers cleaner, less bug-prone code. By looking at the structure of the code, an AI system can offer useful suggestions, not only improving overall productivity but also helping to cut downtime during the production process.

AI systems aim to perform complex, problem-solving tasks in a way that is similar to what humans do to solve problems. Efforts include developing and implementing algorithms for playing games, planning and executing movement in space, representing knowledge and reasoning, making decisions, adapting actions, perceiving the world, communicating using natural languages, and learning. 

Summarised below are the objectives of subfields of AI with recent advances that impact the workforce: 

  • Machine Learning 
  • Robotics 
  • Computer Vision 
  • Natural Language Processing 

Machine Learning

Arguably the most difficult work in computing is telling the machines, in painstaking detail, exactly what they need to do. This is usually done by professional programmers who write software programs with these very detailed instructions. Machine learning is an alternative, powerful approach. With machine learning, human programmers don’t need to write detailed instructions for solving every different kind of problem. Instead, they can write very general programmes with instructions that enable machines to learn from experience, often by analysing large amounts of data. 

More specifically, machine learning refers to a process that starts with a body of data and then tries to derive rules or procedures to explain the data or predict future data. 

The function of a machine learning system can be descriptive, meaning that the system uses the data to explain what happened; predictive, meaning the system uses the data to predict what will happen; or prescriptive, meaning the system will use the data to make suggestions about what action to take. The output of a machine learning system is a model that can be thought of as an algorithm for future computations. The more data the system is presented with, the more refined the model. The quality of the learned model is also dependent on the quality of the data used to train it. If the data is biased, the output of the model will also be biased. 
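As a small illustration of this learning step, the sketch below fits a predictive model from input and output pairs and then applies it to unseen input. It assumes the scikit-learn library purely for convenience, and the data values are invented for the example.

```python
# Supervised learning in miniature: derive a model from input/output pairs,
# then use it predictively on new data.
from sklearn.linear_model import LinearRegression

# Training data: inputs (e.g. hours of machine use) paired with outputs
# (e.g. observed wear). More and better data generally yields a better model.
X = [[1.0], [2.0], [3.0], [4.0], [5.0]]
y = [1.1, 2.0, 2.9, 4.2, 5.1]

model = LinearRegression().fit(X, y)   # the "learning" step
print(model.predict([[6.0]]))          # predictive use on an unseen input
```

If the training pairs were biased or unrepresentative, the fitted model, and every prediction it makes, would inherit that bias, which is the point made above about data quality.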

AI Career Prospects

The field of Artificial Intelligence has a tremendous career outlook, with the US Bureau of Labor Statistics predicting a 31.4 percent increase by 2030 in jobs for data scientists and mathematical science professionals, roles that are crucial to AI.

AI In Industry & Business

One of the most common reasons businesses adopt AI is to optimise their processes. For instance, AI can be used to send out automatic reminders to departments, team members, and customers. It can also be used to monitor network traffic, as well as to handle a wide variety of mundane and repetitive tasks that would otherwise eat up a lot of people's time. This, in turn, frees people up to focus their time and energy on more critical aspects of the business.

As AI continues to develop over the coming years, it will disrupt more operations across more sectors, leading to increased efficiency and decreased strain on workers going forward. 

The biggest impact from AI is set to come from those companies that can move their models into production most efficiently and find ways to integrate those models best with their existing business processes. Manufacturing will see great potential for innovation through an emerging framework called machine health. This capability uses the Internet of Things (IoT) and AI to predict and prevent industrial machine failures and improve machine performance via analytics. AI can also be used by businesses to bring together large amounts of data, which can lead to strategic insights and business intelligence that would otherwise not have been discovered.

In fact, some 84% of businesses say that AI will help them obtain and/or maintain a competitive advantage. Likewise, some 75% of companies believe that this technology will allow them to move into new businesses and ventures. 

Film

In the future, you could sit on the couch and order up a custom movie featuring virtual actors of your choice. 
Meanwhile, film studios may have a future without flops: Sophisticated predictive programmes will analyse a film script’s storyline and forecast its box office potential.

Health Care

AI algorithms will enable doctors and hospitals to better analyse data and customise their health care to the genes, environment and lifestyle of each patient. From diagnosing brain tumours to deciding which cancer treatment will work best for an individual, AI will drive the personalised medicine revolution.

Cyber Security

Around 707 million data records were compromised in cyber security breaches in 2015, and 554 million in the first half of 2016 alone. Companies are struggling to stay one step ahead of hackers. Experts say the self-learning and automation capabilities enabled by AI can protect data more systematically and affordably, keeping people safer from terrorism or even smaller-scale identity theft. AI-based tools look for patterns associated with malicious computer viruses and programmes before they can steal massive amounts of information or cause havoc.
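As a hedged illustration of this kind of pattern-based detection, the sketch below trains an unsupervised anomaly detector on simple outbound-connection features and flags outliers. The features, figures and contamination setting are invented for the example and do not describe any real security product.

```python
# Toy anomaly detection over connection features [bytes_sent_kb, duration_s].
from sklearn.ensemble import IsolationForest

# Historical, presumed-benign outbound connections used for training.
normal_traffic = [[12, 3], [15, 4], [11, 2], [14, 5], [13, 3], [16, 4]]
new_traffic    = [[14, 4], [900, 120]]   # the second row is suspiciously large

detector = IsolationForest(contamination=0.1, random_state=0).fit(normal_traffic)
print(detector.predict(new_traffic))     # -1 flags a likely anomaly, 1 is normal
```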

Transportation

The place where AI may have the biggest impact in the near future is self-driving cars. Unlike humans, AI drivers never look down at the radio, put on mascara or argue with their children. Thanks to Google, autonomous cars are already here, but watch for them to be ubiquitous by 2030. Driverless trains already rule the rails in European cities, and Boeing is building an autonomous jetliner, but pilots will still be required to put info into the system.

Military

Autonomous UAV navigation, for example, is relatively straightforward, since the world model according to which it operates consists simply of maps that indicate preferred routes, height obstacles and no-fly zones.  Radars augment this model in real time by indicating which altitudes are clear of obstacles. GPS coordinates convey to the UAV where it needs to go, with the overarching goal of the GPS coordinate plan being not to take the aircraft into a no-fly zone or cause it to collide with an obstacle. 
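A simplified, hypothetical sketch of such a world model is shown below: a planned set of GPS waypoints is checked against static no-fly zones and a radar-reported obstacle-clearance altitude. The zone shapes, coordinates and altitudes are invented for illustration.

```python
# Route validation against a simple map-based world model.
from dataclasses import dataclass


@dataclass
class NoFlyZone:
    lat_min: float
    lat_max: float
    lon_min: float
    lon_max: float

    def contains(self, lat, lon):
        return (self.lat_min <= lat <= self.lat_max
                and self.lon_min <= lon <= self.lon_max)


def route_is_safe(waypoints, zones, min_clear_altitude_m):
    # Reject a plan if any waypoint enters a no-fly zone or drops below the
    # altitude that radar reports as clear of obstacles.
    for lat, lon, alt in waypoints:
        if alt <= min_clear_altitude_m:
            return False
        if any(zone.contains(lat, lon) for zone in zones):
            return False
    return True


zones = [NoFlyZone(51.49, 51.51, -0.13, -0.11)]
plan = [(51.48, -0.12, 400.0), (51.52, -0.12, 400.0)]
print(route_is_safe(plan, zones, min_clear_altitude_m=300.0))
```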

The future of AI in military systems is directly tied to the ability of engineers to design autonomous systems that demonstrate independent capacity for knowledge- and expert-based reasoning. 

There are no such autonomous systems currently in operation. Most ground robots are teleoperated, essentially meaning that a human is still directly controlling a robot from some distance away as though via a virtual extension cord. Most military UAVs are only slightly more sophisticated: they have some low-level autonomy that allows them to navigate, and in some cases land, without human intervention, but almost all require significant human intervention to execute their missions. 

Even those that take off, fly over a target to capture images, and then return home still operate at an automated and not autonomous level, and do not reason on the fly as true autonomous systems would. Given the current extent of commercial development of drones and other robotic systems, there are other important considerations such as the possible latent consequences of companies and countries that rush AI technologies to market, as against nation states that tend to take more conservative approaches. 

Fielding nascent technologies without comprehensive testing could put both military personnel and civilians at undue risk. However, the rapid development of commercial autonomous systems could normalise the acceptance of autonomous systems for the military and the public, and this could encourage state militaries to fund the development of such systems at a level that better matches investment in manned systems. 

Meanwhile, it remains unclear how the rise of autonomous drones for civilian use could influence popular attitudes and perceptions concerning autonomous military platforms, including weapons. Although it is not in doubt that AI is going to be part of the future of militaries around the world, the landscape is changing quickly and in potentially disruptive ways. 

AI is advancing, but given the current struggle to imbue computers with true knowledge and expert-based behaviours, as well as limitations in perception sensors, it will be many years before AI will be able to approximate human intelligence in high-uncertainty settings, as epitomised by the fog of war. 

Given the present inability of AI to reason in such high-stakes settings, it is understandable that many people want to ban autonomous weapons, but the complexity of the field means that prohibition must be carefully scoped. 
Fundamentally, for instance, does the term autonomous weapon describe the actual weapon such as a missile on a drone, or the drone itself? Autonomous guidance systems for missiles on drones will likely be strikingly similar to those that deliver packages, so banning one could affect the other. 

Machines, computers and robots are getting ‘smarter’ primarily because roboticists and related engineers are getting smarter, so this relatively small group of expert humans is becoming a critical commodity. Universities have been slow to respond to this demand, and governments and industry have also lagged behind in providing scholarship mechanisms to incentivise students in the field of AI. 

Ultimately, the growth in the commercial information technology and automotive sectors, in terms of both attracting top talent and expanding autonomous systems capabilities in everyday commercial products, could be a double-edged sword that will undoubtedly affect militaries around the world in as-yet-unimagined ways. 

Driverless Cars

Navigation for driverless cars is much more difficult. Cars not only need similar mapping abilities, but they must also understand where all nearby vehicles, pedestrians and cyclists are, and where all these are going in the next few seconds. Driverless cars (and some drones) do this through a combination of sensors like LIDAR (Light Detection And Ranging), traditional radars, and stereoscopic computer vision.

Thus, the world model of a driverless car is much more advanced than that of a typical UAV, reflecting the complexity of the operating environment. 

A driverless car computer is required to track all the dynamics of all nearby vehicles and obstacles, constantly compute all possible points of intersection, and then estimate how it thinks traffic is going to behave in order to make a decision to act. Indeed, this form of estimating or guessing what other drivers will do is a key component of how humans drive, but humans do this with little cognitive effort. 

It takes a computer significant computation power to keep track of all these variables while also trying to maintain and update its current world model. 

Given this immense computational problem, in order to maintain safe execution times for action a driverless car will make best guesses based on probability distributions. In effect, therefore, the car is guessing which path or action is best, given some sort of confidence interval. The best operating conditions for autonomous systems are those that promote a high-fidelity world model with low environmental uncertainty, a concept discussed further in the next section. 
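The toy sketch below illustrates this kind of probabilistic best guess: each tracked road user is projected forward under a constant-velocity assumption, and the car only commits to proceeding when the estimated probability of a safe gap clears a confidence threshold. The motion model, noise figure and threshold are illustrative assumptions only.

```python
# Probabilistic "best guess" about nearby road users, one second ahead.
import math


def predict_position(x, y, vx, vy, dt):
    # Constant-velocity projection of a tracked road user dt seconds ahead.
    return x + vx * dt, y + vy * dt


def clearance_probability(own_pos, other_pos, sigma_m=1.5, safe_gap_m=4.0):
    # Treat prediction error as Gaussian and estimate the probability that
    # the true gap is at least safe_gap_m (a crude confidence measure).
    gap = math.dist(own_pos, other_pos)
    z = (gap - safe_gap_m) / sigma_m
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))


own_future = predict_position(0.0, 0.0, 0.0, 13.0, dt=1.0)       # our car
cyclist_future = predict_position(3.0, 10.0, 0.0, 4.0, dt=1.0)   # tracked cyclist
p = clearance_probability(own_future, cyclist_future)
print("proceed" if p > 0.95 else "yield", round(p, 3))
```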

Balancing Tasks Between Humans & Robots 

Given this understanding of the basics of autonomous robot reasoning, how should we think about the design of autonomous systems, particularly in terms of the extent to which humans should be involved?  It is important first to understand when systems can and should be supervised by humans. This is a decision involving clear technical questions (such as whether a computer vision system can generate an image of sufficient resolution to make an accurate decision) as well as ethical and policy considerations (such as whether a robot should be allowed to take the life of a human being). 

Detailed understanding of the construction and capabilities of such military AI systems is needed in order to form cogent arguments around this polarising issue. In future, AI chatbots will interact with humans on a worldwide scale, and solicitors, accountants and midwives could all potentially be robots. This continued interaction with humans will enable advancement and development on a huge scale, on both sides. Robot midwives may one day deliver humans with superhuman traits compared with today's humans, thanks to DNA mapping. 
With brain-computer interface (BCI) technology, AI could identify, enhance and implant these genes, making humans faster, stronger and more intelligent.

The Internet of Things (IoT) will provide vast amounts of new big data, and with future chatbot technology available to every household there will be constant AI and human interaction. 

With Quantum Computing processing huge amounts of big data, ‘Wizard of Oz’ experimentation and Artificial General Intelligence (AGI) technology will be able to thrive, with AI learning from human characteristics. Used in conjunction with BCI technology, this could advance AI and human interaction to an unprecedented level in the future.
Humans must embrace and evolve alongside AI to continue to develop as a species; the potential of BCI technologies to advance rapidly will bring about the evolution of a new breed of human and AI interaction. 

The commercial viability of Quantum Computing is the key to huge advancements in AI and human interaction. The future of humanity is linked to AI and must be faced concomitantly: commercially viable Quantum Computing will change future AI and human interaction immeasurably.

References: Chatham House | USC | MIT | Pew Research | WEF | Information Age | My Computer Career | GISReportsOnline | Economist Impact | A.M. Turing / UMBC | Journal of Database Management | Springer | Liebert Publisher
