Can Ethical AI Become A Reality?

Rapid developments in Artificial Intelligence (AI) carry huge potential benefits, but it is necessary to explore the full ethical, social and legal implications of AI systems if we are to avoid the negative consequences and risks arising from AI's implementation in society.

AI will have a significant impact on the development of humanity in the near future, and it has raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control them.

Companies are leveraging data and AI to create scalable solutions, but they are also scaling their reputational, regulatory, and legal risks. For decades, AI was the engine of high-level STEM research: learning and development that integrates aspects of science, technology, engineering and mathematics. Most consumers became aware of the technology's power and potential through Internet platforms like Google and Facebook, and the retailer Amazon.

Today, AI is essential across a vast array of industries, including health care, banking, retail, and manufacturing. 

When we consider the term AI, it is easy to imagine a time when humans become enslaved by machines. While the standards of AI technology are ever-improving, the idea that machines can achieve a state of human consciousness remains far-fetched and is better left to Hollywood's imagination. Many common applications of AI are comparatively mundane, but AI will increasingly augment our day-to-day lives.

Examples include the technology embedded in virtual assistants such as Amazon's Alexa or Google Home; natural language processing (NLP) is used in these platforms to improve the quality of communication with users. And with AI-powered software pulling information from a business's bank account, taxes, and online bookkeeping records and comparing it with data from thousands of similar businesses, even small community banks will be able to make informed assessments in minutes, without the agony of paperwork and delays.
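To make the NLP point less abstract, here is a minimal sketch of the kind of intent classification that sits behind a voice assistant, assuming the speech has already been transcribed to text. The training phrases, intent labels and use of scikit-learn are illustrative assumptions, not details of any real assistant:

```python
# Minimal sketch: mapping a transcribed user request onto an "intent".
# The phrases and labels below are invented for illustration; a real
# assistant trains on vastly more data with far richer models.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

training_phrases = [
    "play some jazz music", "put on my workout playlist",
    "what's the weather like today", "will it rain tomorrow",
    "set a timer for ten minutes", "wake me up at seven",
]
intents = ["play_music", "play_music",
           "get_weather", "get_weather",
           "set_alarm", "set_alarm"]

# TF-IDF turns each phrase into a weighted bag-of-words vector;
# logistic regression then learns to separate the intents.
model = Pipeline([
    ("tfidf", TfidfVectorizer()),
    ("clf", LogisticRegression()),
])
model.fit(training_phrases, intents)

print(model.predict(["will it rain this afternoon"])[0])
# -> "get_weather" (on this toy data)
```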

Firms now use AI to manage the sourcing of materials and products from suppliers and to integrate vast troves of information to aid strategic decision-making. And because of its capacity to process data so quickly, AI tooling is helping to cut the time spent on the pricey trial-and-error of product development.

As the AI landscape continues to expand and evolve, however, it's critical that ethics sits at the centre of discussions about applications that may threaten to infringe on our essential data protection and privacy rights. Facial recognition is the perfect example of this, with its use by law enforcement deemed highly controversial.

One of the most widely publicised issues is the lack of visibility over how algorithms arrive at the conclusions they do. It's also difficult to know whether these results are skewed by underlying biases embedded within the datasets fed into these systems. There may be a conscious effort to develop AI that renders human-like results, but it remains to be seen whether these systems can factor in the ethical issues that we deliberate over when making decisions ourselves.
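One way to probe such skew, at least on a labelled dataset, is to measure whether a model's outcomes differ across groups. A minimal sketch of one common fairness check, demographic parity, with invented predictions and group labels:

```python
# Minimal sketch: "demographic parity" - does the model's rate of
# positive outcomes differ between two groups? All data is invented.
predictions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]   # model's yes/no decisions
groups      = ["a", "a", "a", "a", "a",
               "b", "b", "b", "b", "b"]          # protected attribute

def positive_rate(preds, grps, group):
    members = [p for p, g in zip(preds, grps) if g == group]
    return sum(members) / len(members)

rate_a = positive_rate(predictions, groups, "a")
rate_b = positive_rate(predictions, groups, "b")
print(f"group a: {rate_a:.0%}, group b: {rate_b:.0%}, "
      f"gap: {abs(rate_a - rate_b):.0%}")
# A large gap is a red flag that the data or the model may encode an
# underlying bias - though by itself it proves neither cause nor fix.
```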

Facial recognition in particular is deemed a contentious application of AI technology, and it is because of questions like these that we arrive at the idea of ethics: the moral principles that govern the actions of an individual or group, or, in this case, a machine.

This is to say that AI ethics does not simply concern the application of the technology; the results and predictions AI produces are just as important.

Let's consider the example of a system designed to establish how happy a person is based on their facial characteristics. Such a system would need to be trained on a wide variety of demographics to account for every possible combination of race, age and gender. What's more, even if we were to assume the system could account for all of that, how do we establish beyond doubt what happiness looks like?
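The demographic half of that question is at least measurable: a standard sanity check is to break a classifier's accuracy out by group rather than reporting a single headline number. A minimal sketch, with entirely invented labels and a hypothetical "happiness" classifier:

```python
# Minimal sketch: per-demographic evaluation. A single overall accuracy
# can hide the fact that a model fails badly on groups that were
# under-represented in training. All data here is invented.
from collections import defaultdict

# (true_label, predicted_label, demographic_group)
results = [
    (1, 1, "group_x"), (0, 0, "group_x"), (1, 1, "group_x"), (0, 0, "group_x"),
    (1, 0, "group_y"), (0, 1, "group_y"), (1, 1, "group_y"), (0, 1, "group_y"),
]

correct = defaultdict(int)
total = defaultdict(int)
for truth, pred, group in results:
    total[group] += 1
    correct[group] += (truth == pred)

for group in total:
    print(f"{group}: accuracy {correct[group] / total[group]:.0%}")
# group_x: 100%, group_y: 25% - an overall accuracy of ~63% would
# have masked the disparity entirely.
```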

Bias is one of the major problems with AI, as its development is always shaped by the choices of the researchers involved. This effectively makes it impossible to create a system that's entirely neutral, and it is why the field of AI ethics is so important.

Roboethics, or robot ethics, is the principle of designing artificially intelligent systems using codes of conduct that ensure an automated system is able to respond to situations in an ethical way. That is, ensuring that a robot behaves in a way that fits the ethical framework of the society it's operating in. Like traditional ethics, roboethics involves ensuring that when a system capable of making its own decisions comes into contact with humans, it prioritises the health and wellbeing of the human above all else, while also behaving in a way that's appropriate to the situation.
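In software terms, one way to express that "above all else" ordering is as a hard constraint applied before any task optimisation, rather than as just another weighted objective. A minimal sketch, where the actions, risk estimates and threshold are all invented for illustration:

```python
# Minimal sketch: a hard safety constraint in action selection.
# Actions that risk human harm are filtered out *before* the system
# optimises for its task goal, so no amount of task reward can
# outweigh safety. Actions and scores are invented for illustration.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    task_reward: float   # how well the action serves the robot's goal
    harm_risk: float     # estimated probability of harming a human

HARM_THRESHOLD = 0.01    # maximum tolerated risk; a design choice

def choose_action(candidates):
    safe = [a for a in candidates if a.harm_risk <= HARM_THRESHOLD]
    if not safe:
        return None      # refuse to act rather than accept the risk
    return max(safe, key=lambda a: a.task_reward)

actions = [
    Action("push_through_crowd", task_reward=9.0, harm_risk=0.30),
    Action("wait_for_clear_path", task_reward=4.0, harm_risk=0.00),
    Action("reroute_long_way",    task_reward=6.0, harm_risk=0.005),
]
print(choose_action(actions).name)  # -> "reroute_long_way"
```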

Roboethics often features heavily in discussions around the use of artificial intelligence in combat situations, a popular school of thought being that robots should never be built to explicitly harm or kill human beings.

While roboethics usually focuses on the resulting actions of a robot, the field is really concerned with the thoughts and actions of the human developers behind it, rather than with the robot itself. For the machine's side of the equation, we turn to machine ethics, which is concerned with the process of adding moral behaviours to AI machines.

Some industry thinkers have, however, attacked ethical AI, saying it's not possible to treat robots and artificial intelligence as if they were their human counterparts. The renowned computer scientist Joseph Weizenbaum argued that non-human beings shouldn't be used in roles that rely on human interaction or relationship building. He said that roles of responsibility such as customer service agents, therapists, carers for the elderly, police officers, soldiers and judges should never be filled by AI, whether robots or other systems, as doing so would go against human intuition.

In these roles, humans need to experience empathy, and however human-like interactions with artificial intelligence become, they will never replace the emotions experienced in the scenarios where these job roles exist. Relatedly, the European Commission has published a set of guidelines for the ethical development of artificial intelligence, chief among these being the need for consistent human oversight.

Google was one of the first companies to vow that its AI will only ever be used ethically. The company's chief executive, Sundar Pichai, said Google won't engage in AI-powered surveillance. Google published its own ethical code of practice in June 2018 in response to widespread criticism over its relationship with the US government's weapons programme. The company has since said it will no longer cooperate with the US government on projects intended to weaponise algorithms.

Amazon, Google, Facebook, IBM, and Microsoft have joined forces to develop best practice for AI, with a big part of that examining how AI should be, and can be, used ethically as well as sharing ideas on educating the public about the uses of AI and other issues surrounding the technology. The consortium explained: "This partnership on AI will conduct research, organise discussions, provide thought leadership, consult with relevant third parties, respond to questions from the public and media, and create educational material that advance the understanding of AI technologies including machine perception, learning, and automated reasoning."

Microsoft has also cooperated with the European Union on an AI law framework, a draft version of which was published on 21st April 2021. Under the proposed regulations, EU citizens would be protected from the use of AI for mass surveillance by law enforcement. Companies that break the rules would face fines of up to 6% of their global turnover or €30 million, whichever is the higher figure, slightly higher than the already steep fines imposed under the GDPR.
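The "whichever is higher" rule is simple to state precisely: the maximum fine is max(6% of global turnover, €30 million). A quick illustration, using invented turnover figures:

```python
# Minimal sketch of the proposed penalty rule: the fine cap is the
# larger of 6% of global annual turnover or EUR 30 million.
# The turnover figures below are invented for illustration.
def max_fine_eur(global_turnover_eur: float) -> float:
    return max(0.06 * global_turnover_eur, 30_000_000)

for turnover in (100_000_000, 500_000_000, 2_000_000_000):
    print(f"turnover EUR {turnover:>13,} -> "
          f"max fine EUR {max_fine_eur(turnover):,.0f}")
# EUR 100m -> 30,000,000   (the EUR 30m floor dominates)
# EUR 500m -> 30,000,000   (6% equals the floor exactly)
# EUR 2bn  -> 120,000,000  (6% of turnover dominates)
```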

The new interconnected digital world powered by 5G technology is delivering great potential and rapid gains in the power of Artificial Intelligence to better society. With rapid advancements in computing power and access to vast amounts of big data, Artificial Intelligence and Machine Learning systems will continue to improve and evolve. Within just a few years, AI systems will be able to process and use data not only with even greater speed but also with greater accuracy.

Despite the advantages and benefits that technologies such as Artificial Intelligence bring to the world, they may cause irreparable harm to humans and society if they are misused or poorly designed. AI systems must always be developed responsibly, toward sustainable outcomes that serve the public benefit.

Today the biggest tech companies in the world are putting together fast-growing teams to tackle the ethical problems that arise from the widespread collection, analysis, and use of massive troves of data, particularly when that data is used to train machine learning models.

AI technology also poses questions for both civil and criminal law, particularly whether existing legal frameworks apply to decisions taken by AIs. Pressing legal issues include liability for tortious, criminal and contractual misconduct involving AI.

While it may seem unlikely that AIs will be deemed to have sufficient autonomy and moral sense to be held liable themselves, they do raise questions about who is liable for which crime, and indeed whether human agents can avoid liability by claiming they did not know the AI could or would do such a thing. In addition to challenging questions around liability, AI could abet criminal activities such as smuggling using unmanned vehicles, as well as harassment, torture, sexual offences, theft and fraud.
