The Human Factor Is Essential To Eliminating Bias in Artificial Intelligence

It is not enough to open the ‘black box’ of machine learning. Direct human evaluation is the only way to ensure biases are not perpetuated through AI.

More and more technology and digital services are built upon, and driven by, AI and machine learning. But as we are beginning to see, these programmes are starting to replicate the biases that are fed into them, notably biases around gender. It is therefore imperative that the machine learning process is managed from input to output – including data, algorithms, models, training, testing and predictions – to ensure that this bias is not perpetuated.

Bahar Gholipour links this bias to AI’s so-called ‘black box’ problem: our inability to see inside an algorithm and therefore understand how it arrives at a decision. He claims that, ‘left unsolved, it can devastate our societies by ensuring that historical discrimination, which many have worked hard to leave behind, is hard-coded into our future.’

Technological expertise is not enough to scrutinize, monitor and safeguard each stage of the machine learning process. The experience and perspectives of people of all ages and all walks of life are needed to identify both obvious and subliminal social and linguistic biases, and to make recommendations for adjustments that build accuracy and trust. Even more important than having an opportunity to evaluate gender bias in the ‘black box’ is having the freedom to correct the biases discovered.

The first step is to open the ‘black box’. Users are increasingly demanding that AI be honest, fair, transparent, accountable and human-centric, but proprietary interests and security concerns have too often precluded transparency. Positive initiatives are now being developed to accelerate the open-sourcing of code and to create transparency standards. AI Now, a nonprofit at New York University advocating for algorithmic fairness, has a simple principle worth following: ‘When it comes to services for people, if designers can’t explain an algorithm’s decision, you shouldn’t be able to use it.’

A number of public and private organizations are now beginning to take this seriously. Google AI has several projects that push the business world, and society, to consider the biases in AI, including GlassBox, Active Question Answering and its PAIR (People + AI Research) initiative, which add manual restrictions to machine learning systems to make their outputs more accurate and understandable.

The US Defense Advanced Research Projects Agency is also funding a big effort called XAI (Explainable AI) to make systems controlled by artificial intelligence more accountable to their users.

Microsoft CEO Satya Nadella has also gone on the record defending the need for ‘algorithmic accountability’ so that humans can undo any unintended harm.

But laudable as these efforts are, opening the box and establishing regulations and policies to ensure transparency is of little value until a human agent examines what’s inside to evaluate whether the data is fair and unbiased. Automated natural language processing alone cannot do this, because language is historically biased – not just the basic vocabulary, but the associations between words, and the relationships between words and images.
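To make the point concrete, here is a minimal, purely illustrative sketch of how gendered associations surface in word embeddings. The three-dimensional vectors are invented for illustration; real embeddings (word2vec, GloVe and the like) have hundreds of dimensions learned from historical text, which is precisely where the bias enters.

```python
# Illustrative sketch only: the vectors below are hypothetical toy values,
# not taken from any real model.
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embedding table.
vectors = {
    "he":       np.array([ 0.9, 0.1, 0.2]),
    "she":      np.array([-0.9, 0.1, 0.2]),
    "engineer": np.array([ 0.7, 0.6, 0.1]),
    "nurse":    np.array([-0.7, 0.6, 0.1]),
}

# A simple question a human reviewer might ask of any embedding:
# does an occupation sit closer to one gendered word than the other?
for word in ("engineer", "nurse"):
    to_he = cosine(vectors[word], vectors["he"])
    to_she = cosine(vectors[word], vectors["she"])
    print(f"{word}: similarity to 'he' = {to_he:.2f}, to 'she' = {to_she:.2f}")
```

On learned embeddings the same check routinely shows occupations leaning towards one gendered word, which is the kind of association a purely automated pipeline will pass along unless a person asks the question.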

Semantics matter. Casey Miller and Kate Swift, two women who in 1980 wrote The Handbook of Nonsexist Writing – the first handbook of its kind – dedicated their lives to promoting gender equity in language. That was almost 40 years ago and, while technology has advanced exponentially since then, we have made little progress in removing gender bias from our lexicon.

The challenge for AI is in programming a changing vocabulary into a binary numerical system. Human intervention is necessary to adjudicate the bias in the programmer, the context and the language itself. But gender bias is not just in the algorithms. It lies within the outcomes – predictions and recommendations – powered by the algorithms.

Common stereotypes are even being reinforced by AI's virtual assistants: those tasked with addressing simple questions (e.g. Apple’s Siri and Amazon’s Alexa) have female voices, while more sophisticated problem-solving bots (e.g. IBM’s Watson and Salesforce’s Einstein) have male voices.

Gender bias is further exacerbated by the paucity of women working in the field. AI Now’s 2017 report identifies the lack of women and ethnic minorities working in AI as a foundational problem, one that is most likely having a material impact on AI systems and shaping their effects on society.

Human agents must question each stage of the process. Every question requires the perspective of a diverse, cross-disciplinary team – representing both the public and private sectors, and inclusive of race, gender, culture, education, age and socioeconomic status – to audit and monitor the system and what it generates. They don't need to know the answers, just how to ask the questions.

In some ways, 21st-century machine learning needs to circle back to the ancient Socratic method of learning, based on asking and answering questions to stimulate critical thinking, draw out ideas and challenge underlying presumptions. Developers should understand that this scrutiny and reformulation helps them clean identified biases from their training data, run ongoing simulations based on empirical evidence and fine-tune their algorithms accordingly. Such a human audit would strengthen the reliability and accountability of AI and, ultimately, people’s trust in it.
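One way a cross-disciplinary team can turn those Socratic questions into routine practice is to compare a system’s outcomes across groups each time the model or its training data changes. The sketch below is a simplified illustration with hypothetical records; it is not a substitute for the broader audit the author describes, only an example of the kind of evidence such a team might put in front of developers.

```python
# Illustrative audit sketch: compare a model's positive-outcome rate by gender.
# The records are hypothetical (e.g. whether a CV was shortlisted).
from collections import defaultdict

predictions = [
    ("female", True), ("female", False), ("female", False), ("female", True),
    ("male", True),   ("male", True),    ("male", False),   ("male", True),
]

totals, positives = defaultdict(int), defaultdict(int)
for gender, recommended in predictions:
    totals[gender] += 1
    positives[gender] += int(recommended)

rates = {g: positives[g] / totals[g] for g in totals}
for gender, rate in rates.items():
    print(f"{gender}: recommended in {rate:.0%} of cases")

# A large gap is not a verdict; it is a prompt for the human team to ask why -
# the training data, the features, or the world the data reflects?
gap = max(rates.values()) - min(rates.values())
print(f"gap between groups: {gap:.0%}")
```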

By Elizabeth Isele, Associate Fellow, Global Economy and Finance, Royal Institute of International Affairs (Chatham House)
