Can We Stop Algorithms Telling Lies?

Algorithms are formal rules, usually written in computer code, that make predictions about future events based on historical patterns. To train an algorithm, you need to provide historical data as well as a definition of success.

We’ve seen finance get taken over by algorithms in the past few decades. Trading algorithms use historical data to predict movements in the market. Success for such an algorithm is a correctly predicted market move, and the algorithm watches for patterns that have historically appeared just before that move.
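To make that concrete, here is a minimal sketch of what “historical data plus a definition of success” can look like in practice. It is written in Python; the feature names, numbers, and the choice of scikit-learn’s LogisticRegression are illustrative assumptions, not a description of any real trading system.

```python
# Minimal sketch: "training" means fitting a model to historical examples,
# each labelled by our chosen definition of success (did the target market
# move happen next?). Feature names and values are invented for illustration.
from sklearn.linear_model import LogisticRegression

# Each row is a snapshot of signals observed shortly before a possible move:
# [recent_volatility, volume_ratio, momentum]
X = [
    [0.8, 1.2, 0.1],
    [0.2, 0.9, -0.3],
    [1.1, 1.5, 0.4],
    [0.3, 0.8, -0.1],
]
# The definition of success: 1 if the move we care about followed, 0 if not.
y = [1, 0, 1, 0]

model = LogisticRegression().fit(X, y)

# Given a fresh snapshot, the model estimates how likely that move is now.
print(model.predict_proba([[0.9, 1.3, 0.2]])[0][1])
```

The same shape of pipeline, with different features and a different definition of success, underlies the people-scoring systems discussed below.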
Since 2008, we’ve heard less about algorithms in finance and much more about big data algorithms. The target of this new generation of algorithms has shifted from abstract markets to individuals. But the underlying functionality is the same: collect historical data about people, profiling their online behaviour, location, or answers to questionnaires, and use that massive dataset to predict their future purchases, voting behaviour, or work ethic.

The recent proliferation of big data models has gone largely unnoticed by the average person, but it’s safe to say that most important moments where people interact with large bureaucratic systems now involve an algorithm in the form of a scoring system.

Getting into college, getting a job, being assessed as a worker, getting a credit card or insurance, voting, and even policing are in many cases done algorithmically. Moreover, the technology introduced into these systematic decisions is largely opaque, even to their creators, and has so far largely escaped meaningful regulation, even when it fails. That makes the question of which of these algorithms are working on our behalf even more important and urgent.

Consider a four-layer hierarchy of bad algorithms. At the top are the unintentional problems that reflect cultural biases. For example, when Harvard professor Latanya Sweeney found that Google searches for names perceived to be black generated ads associated with criminal activity, we can assume that there was no Google engineer writing racist code.

In fact, the ads were trained to be bad by previous users of Google search, who were more likely to click on a criminal-records ad when they searched for a black-sounding name. Another example: the Google image search result for “unprofessional hair”, which returned almost exclusively black women, was similarly trained by the people posting or clicking on search results over time.

One layer down we come to algorithms that go bad through neglect. These would include scheduling programs that prevent people who work minimum wage jobs from leading decent lives. The algorithms treat them like cogs in a machine, sending them to work at different times of the day and on different days each week, preventing them from having regular childcare, a second job, or going to night school. They are brutally efficient, hugely scaled, and largely legal, collecting pennies on the backs of workers.

Or consider Google’s system for automatically tagging photos. It had a consistent problem whereby photos of black people were being labelled as gorillas. This represents neglect of a different kind, namely quality assessment of the product itself: they didn’t check that it worked on a wide variety of test cases before releasing the code.

The third layer consists of nasty but legal algorithms. For example, there were Facebook executives in Australia showing advertisers ways to find and target vulnerable teenagers. Awful but probably not explicitly illegal.

Indeed, online advertising in general can be seen as a spectrum: at one end the wealthy are presented with luxury goods to buy, while at the other the poor and desperate are preyed upon by online payday lenders. Algorithms charge people more for car insurance if they don’t seem likely to comparison shop, and Uber just halted an algorithm it was using to predict how low an offer of pay could be, thereby reinforcing the gender pay gap.

Finally, there’s the bottom layer, which consists of intentionally nefarious and sometimes outright illegal algorithms. There are hundreds of private companies, including dozens in the UK, that offer mass surveillance tools. They are marketed as a way of locating terrorists or criminals, but they can be used to target and root out citizen activists. And because they collect massive amounts of data, predictive algorithms and scoring systems are used to separate the signal from the noise.

The legality of this industry is under debate, but a recent undercover operation by journalists at Al Jazeera exposed the relative ease with which middlemen representing repressive regimes in Iran and South Sudan have been able to buy such systems. Observers have likewise criticised China’s social credit scoring system. Called “Sesame Credit,” it’s billed as mostly a credit score, but it may also function as a way of keeping tabs on an individual’s political opinions, and for that matter as a way of nudging people towards compliance.

Closer to home, there’s Uber’s “Greyball,” an algorithm invented specifically to avoid detection when the taxi service is operating illegally in a city. It used data to predict which riders were violating Uber’s terms of service, or which riders were undercover government officials. Telltale signs that Greyball picked up included multiple uses of the app in a single day and using a credit card tied to a police union.
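The reported behaviour amounts to scoring riders on a handful of crude signals. Below is a purely hypothetical sketch of that kind of rule-based scoring in Python; the field names, thresholds, and weights are invented and do not describe Uber’s actual Greyball code.

```python
# Hypothetical illustration of rule-based rider scoring of the kind described
# above. This is not Uber's actual Greyball logic; every signal, weight, and
# threshold here is invented for illustration.
from dataclasses import dataclass

@dataclass
class Rider:
    app_opens_today: int        # how often the app was opened today
    card_issuer: str            # institution that issued the payment card
    near_government_office: bool

def enforcement_risk_score(rider: Rider) -> int:
    """Sum up crude signals that a rider might be an enforcement officer."""
    score = 0
    if rider.app_opens_today > 10:               # repeated use in a single day
        score += 1
    if "police" in rider.card_issuer.lower():    # card tied to a police organisation
        score += 2
    if rider.near_government_office:
        score += 1
    return score

# Riders above some threshold would be shown fake cars, or none at all.
suspect = Rider(app_opens_today=15,
                card_issuer="Metro Police Credit Union",
                near_government_office=True)
print(enforcement_risk_score(suspect) >= 3)  # True: this rider would be screened out
```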

The most famous malicious and illegal algorithm we’ve discovered so far is the one used by Volkswagen in 11 million vehicles worldwide to deceive emissions tests, and in particular to hide the fact that the vehicles were emitting nitrogen oxide at up to 35 times the levels permitted by law. And although it seemed like simply a devious device, it qualifies as an algorithm as well. It was trained to identify and predict testing conditions versus road conditions, and to function differently depending on that result. And, like Greyball, it was designed to deceive.
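As a rough illustration of that “behave differently under test conditions” logic, here is a hypothetical sketch; the signals, thresholds, and function names are invented and do not describe Volkswagen’s actual software.

```python
# Hypothetical sketch of defeat-device-style logic: infer "am I being tested?"
# from driving conditions and switch behaviour accordingly. Invented for
# illustration; not Volkswagen's actual code.

def looks_like_emissions_test(steering_angle: float,
                              speed_matches_test_cycle: bool) -> bool:
    # A test bench typically shows no steering input and a scripted speed profile.
    return steering_angle == 0.0 and speed_matches_test_cycle

def emissions_control_mode(on_test: bool) -> str:
    # Full NOx controls only when the software believes it is being tested.
    return "full_nox_controls" if on_test else "reduced_controls"

print(emissions_control_mode(looks_like_emissions_test(0.0, True)))    # full_nox_controls
print(emissions_control_mode(looks_like_emissions_test(12.5, False)))  # reduced_controls
```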

So, what can we learn from the current, mature world of car makers in the context of illegal software?

First, other car manufacturers have deployed similar software that turns off emissions controls in certain settings. In other words, this was not a situation with a single bad actor, but rather a standard operating procedure.

Next, the VW cheating started in 2009, which means it went undetected for five years. What else has been going on for five years? This line of thinking makes us start looking around, wondering which companies are currently hoodwinking regulators, evading privacy laws, or committing algorithmic fraud with impunity.

To put it another way: we’re all expecting cars to be self-driving in a few years, or a couple of decades at most. When that happens, can we expect international agreements on what embedded self-driving car ethics will look like? Or will pedestrians be at the mercy of the car manufacturers to decide what happens in the case of an unexpected pothole? If we get rules, will the rules differ by country, or even by the country of the manufacturer?

If this sounds confusing for something as easy to observe as car crashes, imagine what’s going on under the hood, in the relatively obscure world of complex “deep learning” models.

It’s time to gird ourselves for a fight. It will eventually be a technological arms race, but it starts, now, as a political fight. We need to demand that algorithms with the potential to harm us be shown to be acting fairly, legally, and consistently.

When we find problems, we need to enforce our laws with sufficiently hefty fines that companies don’t find it profitable to cheat in the first place. This is the time to start demanding that the machines work for us, and not the other way around.

Guardian:
