Algorithms, Lies & Social Media

Achieving a more transparent and less manipulative online media may well be the defining political battle of the 21st century. The Internet has evolved into a ubiquitous and indispensable digital environment in which people communicate, seek information, and make decisions.

Despite offering various benefits, online environments are also replete with smart, highly adaptive choice architectures designed primarily to maximise commercial interests, capture and sustain users’ attention, monetise user data, and predict and influence future behaviour.

This online landscape has multiple negative consequences for society, such as a decline in human autonomy, rising incivility in online conversation, the facilitation of political extremism, and the spread of disinformation. The Internet and the devices people use to access it represent not just new technological achievements but also entirely new artificial environments. Much like people’s physical surroundings, these are environments in which people spend time, communicate with each other, search for information, and make decisions. Yet the digital world is a recent phenomenon:

The Internet is 50 years old, the Web is 30 years old, and the advanced social Web is merely 15 years old.

New adjustments and features are added to these environments on a continuous basis, making it nearly impossible for most users, let alone regulators, to keep abreast of the inner workings of their digital surroundings. There was a time when the Internet was seen as an unequivocal force for social good. It propelled progressive social movements from Black Lives Matter to the Arab Spring; it set information free and flew the flag of democracy worldwide.

But today, democracy appears to be in decline, and the Internet’s role as a driver of that decline is palpably clear. From fake news bots to misinformation to conspiracy theories, social media has commandeered mindsets, evoking the sense of a dark force that must be countered by authoritarian, top-down controls.

This paradox, that the Internet is both saviour and executioner of democracy, can be understood through the lenses of classical economics and cognitive science. In traditional markets, firms manufacture goods, such as cars or toasters, that satisfy consumers’ preferences.

Markets on social media and the Internet are radically different because the platforms exist to sell information about their users to advertisers, thus serving the needs of advertisers rather than consumers.

On social media and parts of the Internet, users “pay” for free services by relinquishing their data to unknown third parties who then expose them to ads targeting their preferences and personal attributes. This economic model has driven online and social media platforms to exploit the cognitive limitations and vulnerabilities of their users. For instance, human attention has adapted to focus on cues that signal emotion or surprise.

Paying attention to emotionally charged or surprising information makes sense in most social and uncertain environments and was critical within the close-knit groups in which early humans lived. In this way, information about the surrounding world and social partners could be quickly updated and acted on. But when the interests of the platform do not align with the interests of the user, these strategies become maladaptive. Platforms know how to capitalise on this: To maximise advertising revenue, they present users with content that captures their attention and keeps them engaged.

For example, YouTube’s recommendations amplify increasingly sensational content with the goal of keeping people’s eyes on the screen. Research by the free software organisation Mozilla confirms that YouTube not only hosts but actively recommends videos that violate its own policies concerning political and medical misinformation, hate speech, and inappropriate content.

YouTube is the second-most visited website in the world, and its algorithm drives 70% of watch time on the platform, an estimated 700 million hours every single day.
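Those two figures together give a sense of scale. A simple back-of-the-envelope calculation, using only the numbers quoted above, shows the implied total daily watch time on the platform:

```python
# Back-of-the-envelope arithmetic using only the figures quoted above.
recommended_share = 0.70            # ~70% of watch time is driven by recommendations
recommended_hours_per_day = 700e6   # ~700 million hours per day

total_hours_per_day = recommended_hours_per_day / recommended_share
print(f"Implied total daily watch time: ~{total_hours_per_day / 1e9:.0f} billion hours")
# -> roughly 1 billion hours of video watched every day
```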

For years, that recommendation algorithm has helped spread health misinformation, political disinformation, hateful diatribes, and other regrettable content to people around the globe. YouTube’s enormous influence means these videos reach a huge audience, with deep impacts on countless lives, from radicalisation to polarisation.

There is a common tendency for humans to react more strongly to negative than to positive information. “Negativity biases” in human cognition and behaviour are well documented, but existing research is based on small Anglo-American samples and stimuli that are only tangentially related to our political world.

In pursuit of our attention, digital platforms have become paved with misinformation, particularly the kind that feeds outrage and anger. Following recent revelations by a whistle-blower, we now know that Facebook’s newsfeed curation algorithm gave content eliciting anger five times as much weight as content evoking happiness. It has also been reported that political parties in Europe began running more negative ads because they were favoured by Facebook’s algorithm.
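That reported weighting is easiest to picture as a scoring rule. The sketch below is a hypothetical illustration rather than Facebook’s actual code: reaction counts are combined into an engagement score, with the “angry” reaction counted five times as heavily as other reactions, and the feed is sorted by that score.

```python
# Hypothetical illustration of reaction-weighted feed ranking.
# Only the 5x weight on "angry" reflects the reported weighting;
# every other detail here is an assumption made for the example.
REACTION_WEIGHTS = {"like": 1, "love": 1, "haha": 1, "angry": 5}

def engagement_score(reactions: dict) -> int:
    """Combine a post's reaction counts into a single ranking score."""
    return sum(REACTION_WEIGHTS.get(kind, 1) * count
               for kind, count in reactions.items())

posts = [
    {"id": "calm post",    "reactions": {"like": 100, "love": 20}},  # score 120
    {"id": "outrage post", "reactions": {"like": 10, "angry": 30}},  # score 160
]

# Rank the feed by engagement: the anger-heavy post rises to the top.
feed = sorted(posts, key=lambda p: engagement_score(p["reactions"]), reverse=True)
print([p["id"] for p in feed])  # ['outrage post', 'calm post']
```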

Besides selecting information on the basis of its personalised relevance, algorithms can also filter out information considered harmful or illegal, for instance by automatically removing hate speech and violent content. Until recently, however, these moderation efforts went only so far. During the pandemic, the same platforms took a more interventionist approach to false information and vowed to remove or limit Covid-19 misinformation and conspiracy theories. Here, too, they relied on automated tools to remove content without human review.
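In practice, removal without human review comes down to comparing a classifier’s score against a fixed threshold. The sketch below is purely illustrative: the toy classifier, the threshold, and the example phrases are assumptions, not any platform’s real system.

```python
# Illustrative sketch of fully automated moderation: a (toy) classifier
# assigns each post a policy-violation score, and anything above a fixed
# threshold is removed with no human in the loop.
REMOVAL_THRESHOLD = 0.9   # assumed cut-off; real systems tune this per policy area

def violation_score(text: str) -> float:
    """Stand-in for a trained model; returns a policy-violation probability."""
    flagged_phrases = ["miracle cure", "5g causes covid"]   # toy examples
    return 0.95 if any(p in text.lower() for p in flagged_phrases) else 0.05

def moderate(post: str) -> str:
    return "removed" if violation_score(post) >= REMOVAL_THRESHOLD else "kept"

print(moderate("This miracle cure ends the pandemic!"))            # removed
print(moderate("Vaccines are available at your local pharmacy."))  # kept
```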

None of this is transparent to consumers, because Internet and social media platforms lack the basic signals that characterise conventional commercial transactions. When people buy a car, they know they are buying a car. If that car fails to meet their expectations, consumers have a clear signal of the damage done because they no longer have money in their pocket. When people use social media, by contrast, they are not always aware of being the passive subjects of commercial transactions between the platform and advertisers involving their own personal data.

Users are also often unaware of how their news feed on social media is curated, and even people who are aware of algorithmic curation tend not to have an accurate understanding of what that involves. A Pew Research report found that 74% of Americans did not know that Facebook maintained data about their interests and traits.

Most commercial sites, from social media platforms to news outlets to online retailers, collect a wide variety of data about their users’ behaviours. Platforms use this data to deliver content and recommendations based on users’ interests and traits, and to allow advertisers to target advertising at relatively precise segments of the public.
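Conceptually, that pipeline amounts to grouping users into interest segments inferred from their logged behaviour and letting advertisers buy access to a segment. The minimal sketch below uses invented users, topics, and a made-up interest taxonomy purely to show the idea.

```python
# Minimal sketch of interest-based audience segmentation (all data invented).
from collections import defaultdict

# Behavioural logs a platform might collect: user -> topics they interacted with.
user_activity = {
    "alice": ["running shoes", "marathon training", "fitness watch"],
    "bob":   ["mortgage rates", "home insurance"],
    "carol": ["marathon training", "protein powder"],
}

# Assumed mapping from observed topics to inferred interest segments.
TOPIC_TO_SEGMENT = {
    "running shoes": "fitness", "marathon training": "fitness",
    "fitness watch": "fitness", "protein powder": "fitness",
    "mortgage rates": "home finance", "home insurance": "home finance",
}

segments = defaultdict(set)
for user, topics in user_activity.items():
    for topic in topics:
        segments[TOPIC_TO_SEGMENT[topic]].add(user)

# An advertiser can then target a relatively precise slice of the audience.
print(sorted(segments["fitness"]))  # ['alice', 'carol']
```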

Many users are unaware that the information they consume and produce is curated by algorithms. And hardly anyone understands that algorithms will present them with information that is curated to provoke outrage or anger, attributes that fit hand in glove with political misinformation. People cannot be held responsible for their lack of awareness. They were neither consulted on the design of online architectures nor considered as partners in the construction of the rules of online governance.

Several legislative proposals in Europe suggest a way forward, but it remains to be seen whether any of these laws will be passed. In the US there is considerable public and political scepticism about regulations and about governments stepping in to regulate social media content in particular. This scepticism is at least partially justified because paternalistic interventions may, if done improperly, result in censorship.

In March 2022, the Russian parliament approved jail terms of up to 15 years for sharing “fake” information about the war against Ukraine, meaning any information that contradicts the official government position. This has caused many foreign and local journalists and news organisations to limit their coverage of the invasion or to withdraw from the country entirely.

In liberal democracies, regulations must not only be proportionate to the threat of harmful misinformation but also respectful of fundamental human rights. Fears of authoritarian government control must be weighed against the dangers of the status quo.

Achieving a more transparent and less manipulative media may well be the defining political battle of the 21st century.

Sources: Nieman Lab | Mozilla | Pew Research | Freedom House | SagePub | PNAS | Ahmed Al-Rawi / ResearchGate | Emerald Insight
