Can We Stop Algorithms Telling Lies?

Algorithms are formal rules, usually written in computer code, that make predictions about future events based on historical patterns. To train an algorithm, you need to provide historical data as well as a definition of success.

We’ve seen finance taken over by algorithms in the past few decades. Trading algorithms use historical data to predict movements in the market. Success for such an algorithm is a predictable market move, and the algorithm watches for patterns that have historically occurred just before that move.
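
To make that recipe concrete, here is a minimal sketch of the two ingredients: historical data and a definition of success. Everything in it, the patterns, the numbers and the column meanings, is invented purely for illustration; it is not any real trading system.

```python
# Minimal sketch of "historical data + a definition of success".
# All patterns, values and labels below are invented for illustration.
from sklearn.linear_model import LogisticRegression

# Historical data: each row records whether certain patterns were present
# just before a moment in the market we care about.
history = [
    # [volume_spike, price_dip_yesterday, rival_reported_losses]
    [1, 0, 1],
    [1, 1, 1],
    [0, 1, 0],
    [0, 0, 0],
]

# Definition of success: did the market move we were hoping for follow? (1 = yes)
moved = [1, 1, 0, 0]

# "Training" is just fitting the model to that past.
model = LogisticRegression().fit(history, moved)

# The trained algorithm now scores today's conditions.
today = [[1, 0, 1]]
print(model.predict_proba(today)[0][1])  # estimated probability of the move
```
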
Since 2008, we’ve heard less about algorithms in finance, and much more about big data algorithms. The target of this new generation of algorithms has shifted from abstract markets to individuals. But the underlying functionality is the same: collect historical data about people, profiling their online behaviour, location, or answers to questionnaires, and use that massive dataset to predict their future purchases, voting behaviour, or work ethic.

The recent proliferation of big data models has gone largely unnoticed by the average person, but it’s safe to say that most of the important moments where people interact with large bureaucratic systems now involve an algorithm in the form of a scoring system.

Getting into college, getting a job, being assessed as a worker, getting a credit card or insurance, voting, and even policing are in many cases done algorithmically. Moreover, the technology introduced into these systematic decisions is largely opaque, even to their creators, and has so far largely escaped meaningful regulation, even when it fails. That makes the question of which of these algorithms are working on our behalf even more important and urgent.

Think of bad algorithms as a four-layer hierarchy. At the top are the unintentional problems that reflect cultural biases. For example, when Harvard professor Latanya Sweeney found that Google searches for names perceived to be black generated ads associated with criminal activity, we can assume that no Google engineer had written racist code.

In fact, the ads were trained to be bad by previous users of Google search, who were more likely to click on a criminal-records ad when they searched for a black-sounding name. Another example: the Google image search results for “unprofessional hair”, which returned almost exclusively black women, were similarly trained by the people posting or clicking on search results over time.
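
A hypothetical sketch of how that happens without anyone writing a biased rule: an ad ranker that learns only from click-through rates will faithfully reproduce whatever its users’ clicks contain. The function and field names here are invented; this is not Google’s code.

```python
# Hypothetical sketch of a click-trained ad ranker; not Google's actual system.
# The only "training signal" is what past users clicked, so their biases flow
# straight into future results.
from collections import defaultdict

impressions = defaultdict(int)  # times an ad was shown for a query
clicks = defaultdict(int)       # times that ad was clicked for that query

def record_impression(query, ad, clicked):
    """Log one search: the ad shown and whether the user clicked it."""
    impressions[(query, ad)] += 1
    if clicked:
        clicks[(query, ad)] += 1

def rank_ads(query, candidate_ads):
    """Order ads by observed click-through rate for this query."""
    def ctr(ad):
        shown = impressions[(query, ad)]
        return clicks[(query, ad)] / shown if shown else 0.0
    return sorted(candidate_ads, key=ctr, reverse=True)

# If past users clicked criminal-records ads more often when searching
# black-sounding names, those ads now rank first for similar names.
```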

One layer down we come to algorithms that go bad through neglect. These would include the scheduling programs that prevent people working minimum-wage jobs from leading decent lives. The algorithms treat them like cogs in a machine, sending them to work at different times of the day and on different days each week, preventing them from arranging regular childcare, holding a second job, or going to night school. These systems are brutally efficient, hugely scaled, and largely legal, collecting pennies on the backs of workers.

Or consider Google’s system for automatically tagging photos. It had a consistent problem whereby black people were labelled as gorillas. This represents neglect of a different kind, namely in quality assessment of the product itself: they didn’t check that it worked on a wide variety of test cases before releasing the code.

The third layer consists of nasty but legal algorithms. For example, Facebook executives in Australia showed advertisers ways to find and target vulnerable teenagers. Awful, but probably not explicitly illegal.

Indeed, online advertising in general can be seen as a spectrum: at one end, the wealthy are presented with luxury goods to buy; at the other, the poor and desperate are preyed upon by online payday lenders. Algorithms charge people more for car insurance if they don’t seem likely to comparison shop, and Uber recently halted an algorithm it was using to predict how low an offer of pay could be, which had the effect of reinforcing the gender pay gap.

Finally, there’s the bottom layer, which consists of intentionally nefarious and sometimes outright illegal algorithms. There are hundreds of private companies, including dozens in the UK, that offer mass surveillance tools. They are marketed as a way of locating terrorists or criminals, but they can be used to target and root out citizen activists. And because they collect massive amounts of data, predictive algorithms and scoring systems are used to separate the signal from the noise.

Whether this industry is illegal is still under debate, but a recent undercover operation by journalists at Al Jazeera exposed the relative ease with which middlemen representing repressive regimes in Iran and South Sudan were able to buy such systems. Observers have also criticised China’s social credit scoring system. Called “Sesame Credit”, it is billed mostly as a credit score, but it may also function as a way of keeping tabs on an individual’s political opinions and, for that matter, as a way of nudging people towards compliance.

Closer to home, there’s Uber’s “Greyball”, an algorithm invented specifically to avoid detection when the taxi service was operating illegally in a city. It used data to predict which riders were violating Uber’s terms of service, or which riders were undercover government officials. Telltale signs that Greyball picked up included repeated use of the app in a single day and use of a credit card tied to a police union.
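
Based only on those two reported telltale signs, a Greyball-style screen might look like the hypothetical sketch below; the field names, threshold and ghost-car behaviour are assumptions, not Uber’s actual code.

```python
# Hypothetical reconstruction of a Greyball-style screen, using only the two
# reported telltale signs; field names and the threshold are invented.
def flagged_as_likely_official(rider):
    signals = 0
    if rider.get("app_opens_today", 0) >= 5:           # repeated use of the app in one day
        signals += 1
    if rider.get("card_tied_to_police_union", False):  # payment card linked to a police union
        signals += 1
    return signals >= 1

def cars_shown_to(rider, nearby_cars):
    # A flagged rider sees an empty map; no real driver is ever dispatched.
    return [] if flagged_as_likely_official(rider) else nearby_cars

# Example: an account that opened the app eight times today and pays with a
# police-union card would quietly be shown no cars at all.
```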

The most famous malicious and illegal algorithm we’ve discovered so far is the one used by Volkswagen in 11 million vehicles worldwide to cheat emissions tests, and in particular to hide the fact that the vehicles were emitting nitrogen oxide at up to 35 times the levels permitted by law. And although it might look like simply a devious device, it qualifies as an algorithm as well. It was trained to identify and predict testing conditions versus road conditions, and to function differently depending on the result. And, like Greyball, it was designed to deceive.
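
The defeat device follows the same pattern as Greyball: detect the observer, then change behaviour. The sketch below shows the shape of that logic; the sensor checks are an illustrative guess at how a dynamometer test might be recognised, not Volkswagen’s actual code.

```python
# Sketch of the defeat-device pattern: recognise "being tested" and switch modes.
# The sensor readings and thresholds are illustrative guesses, not VW's logic.
def looks_like_emissions_test(sensors):
    # On a test dynamometer the wheels turn while the steering wheel stays still.
    return (
        sensors["wheel_speed_kph"] > 0
        and abs(sensors["steering_angle_deg"]) < 1.0
    )

def emissions_mode(sensors):
    if looks_like_emissions_test(sensors):
        return "full emissions controls"   # clean enough to pass the test
    return "reduced emissions controls"    # on the road: far more nitrogen oxide

# Example readings from a lab test vs. a motorway drive:
print(emissions_mode({"wheel_speed_kph": 50, "steering_angle_deg": 0.2}))   # test mode
print(emissions_mode({"wheel_speed_kph": 110, "steering_angle_deg": 7.5}))  # road mode
```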

So, what can we learn from the current, mature world of car makers in the context of illegal software?

First, similar types of software, which switch off emissions controls in certain settings, are being deployed by other car manufacturers. In other words, this was not a situation with only one bad actor; it was standard operating procedure.

Next, the VW cheating started in 2009, which means it went undetected for five years. What else has been going on for five years? This line of thinking makes us start looking around, wondering which companies are currently hoodwinking regulators, evading privacy laws, or committing algorithmic fraud with impunity.

To put it another way: we’re all expecting cars to be self-driving within a few years, or a couple of decades at most. When that happens, can we expect international agreements on what the embedded self-driving car ethics will look like? Or will pedestrians be at the mercy of car manufacturers to decide what happens in the case of an unexpected pothole? If we get rules, will they differ by country, or even by the country of the manufacturer?

If this sounds confusing for something as easy to observe as car crashes, imagine what’s going on under the hood, in the relatively obscure world of complex “deep learning” models.

It’s time to gird ourselves for a fight. It will eventually be a technological arms race, but it starts, now, as a political fight. We need to demand that algorithms with the potential to harm us be shown to be acting fairly, legally, and consistently.

When we find problems, we need to enforce our laws with sufficiently hefty fines that companies don’t find it profitable to cheat in the first place. This is the time to start demanding that the machines work for us, and not the other way around.

Guardian:
