Algorithms, Lies & Social Media

Achieving a more transparent and less manipulative online media may well be the defining political battle of the 21st century. The Internet has evolved into a ubiquitous and indispensable digital environment in which people communicate, seek information, and make decisions.

Despite offering various benefits, online environments are also replete with smart, highly adaptive choice architectures designed primarily to maximise commercial interests, capture and sustain users’ attention, monetise user data, and predict and influence future behaviour.

This online landscape carries multiple negative consequences for society, such as a decline in human autonomy, rising incivility in online conversation, the facilitation of political extremism, and the spread of disinformation. The Internet and the devices people use to access it represent not just new technological achievements but also entirely new artificial environments. Much like people’s physical surroundings, these are environments in which people spend time, communicate with each other, search for information, and make decisions. Yet the digital world is a recent phenomenon:

The Internet is 50 years old, the Web is 30 years old, and the advanced social Web is merely 15 years old.

New adjustments and features are added to these environments on a continuous basis, making it nearly impossible for most users, let alone regulators, to keep abreast of the inner workings of their digital surroundings. There was a time when the Internet was seen as an unequivocal force for social good. It propelled progressive social movements from Black Lives Matter to the Arab Spring; it set information free and flew the flag of democracy worldwide.

But today, democracy appears to be in decline, and the Internet’s role as a driver of that decline is palpably clear. From fake news bots to misinformation to conspiracy theories, social media has commandeered mindsets, evoking the sense of a dark force that must be countered by authoritarian, top-down controls.

This paradox, that the Internet is both saviour and executioner of democracy, can be understood through the lenses of classical economics and cognitive science. In traditional markets, firms manufacture goods, such as cars or toasters, that satisfy consumers’ preferences.

Markets on social media and the Internet are radically different because the platforms exist to sell information about their users to advertisers, thus serving the needs of advertisers rather than consumers.

On social media and parts of the Internet, users “pay” for free services by relinquishing their data to unknown third parties who then expose them to ads targeting their preferences and personal attributes. This economic model has driven online and social media platforms to exploit the cognitive limitations and vulnerabilities of their users. For instance, human attention has adapted to focus on cues that signal emotion or surprise.

Paying attention to emotionally charged or surprising information makes sense in most social and uncertain environments and was critical within the close-knit groups in which early humans lived. In this way, information about the surrounding world and social partners could be quickly updated and acted on. But when the interests of the platform do not align with the interests of the user, these strategies become maladaptive. Platforms know how to capitalise on this: To maximise advertising revenue, they present users with content that captures their attention and keeps them engaged.

For example, YouTube’s recommendations amplify increasingly sensational content with the goal of keeping people’s eyes on the screen. Research by the free software organisation Mozilla confirms that YouTube not only hosts but actively recommends videos that violate its own policies concerning political and medical misinformation, hate speech, and inappropriate content.

YouTube is the second-most visited website in the world, and its algorithm drives 70% of watch time on the platform, an estimated 700 million hours every single day.

For years, that recommendation algorithm has helped spread health misinformation, political disinformation, hateful diatribes, and other regrettable content to people around the globe. YouTube’s enormous influence means these videos reach a huge audience and have a deep impact on countless lives, driving both radicalisation and polarisation.

There is a common tendency for humans to react more strongly to negative than to positive information. “Negativity biases” in human cognition and behaviour are well documented, but existing research is based on small Anglo-American samples and stimuli that are only tangentially related to our political world.

In pursuit of our attention, digital platforms have become paved with misinformation, particularly the kind that feeds outrage and anger. Following recent revelations by a whistle-blower, we now know that Facebook’s newsfeed curation algorithm gave content eliciting anger five times as much weight as content evoking happiness. It has also been reported that political parties in Europe began running more negative ads because they were favoured by Facebook’s algorithm.
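To make that weighting mechanism concrete, here is a minimal sketch of reaction-weighted feed ranking in Python. It assumes a simple linear score over reaction counts; the field names, weights, and scoring function are illustrative stand-ins built only around the reported 5:1 anger-to-happiness ratio, not Facebook’s actual implementation.

```python
# Illustrative sketch of reaction-weighted feed ranking.
# The weights mirror the reported 5:1 ratio of "angry" to ordinary reactions;
# all names and numbers here are assumptions, not Facebook's code.

from dataclasses import dataclass

# Hypothetical weights: emotionally charged reactions count far more
# than a plain "like" when ranking posts for the news feed.
REACTION_WEIGHTS = {
    "like": 1,
    "love": 1,
    "haha": 1,
    "angry": 5,   # anger reportedly weighted five times higher
}

@dataclass
class Post:
    post_id: str
    reactions: dict  # reaction name -> count

def engagement_score(post: Post) -> float:
    """Sum each reaction count multiplied by its weight."""
    return sum(REACTION_WEIGHTS.get(name, 1) * count
               for name, count in post.reactions.items())

def rank_feed(posts: list[Post]) -> list[Post]:
    """Order posts so the highest-scoring (often angriest) content comes first."""
    return sorted(posts, key=engagement_score, reverse=True)

if __name__ == "__main__":
    feed = [
        Post("calm-news", {"like": 900, "love": 100}),
        Post("outrage-bait", {"like": 100, "angry": 300}),
    ]
    for post in rank_feed(feed):
        print(post.post_id, engagement_score(post))
```

Under these assumed weights, the outrage post (score 1,600) outranks the calmer post (score 1,000) even though it received far fewer total reactions, which is how anger-weighted ranking can tilt a feed towards divisive content.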

Besides selecting information on the basis of its personalised relevance, algorithms can also filter out information considered harmful or illegal, for instance by automatically removing hate speech and violent content. Until recently, these algorithms went only so far. During the pandemic, however, the same platforms took a more interventionist approach to false information and vowed to remove or limit Covid-19 misinformation and conspiracy theories. Here, too, the platforms relied on automated tools to remove content without human review.
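As a rough sketch of what automated removal without human review can look like, the Python example below applies a fixed confidence threshold to the output of a stand-in classifier. The policy labels, the threshold, and the classifier itself are hypothetical; real moderation pipelines are far more elaborate.

```python
# Minimal sketch of automated moderation without human review, assuming a
# hypothetical classifier that returns a probability that a post violates
# a policy category. Labels, thresholds, and the classifier are illustrative.

REMOVAL_THRESHOLD = 0.90  # assumed confidence above which content is auto-removed

def classify(text: str) -> dict:
    """Stand-in for a trained model; returns per-policy violation scores."""
    # A real system would call a machine-learning model here.
    scores = {"hate_speech": 0.0, "covid_misinformation": 0.0}
    if "miracle cure" in text.lower():
        scores["covid_misinformation"] = 0.95
    return scores

def moderate(text: str) -> str:
    """Remove content automatically when any score crosses the threshold."""
    scores = classify(text)
    if any(score >= REMOVAL_THRESHOLD for score in scores.values()):
        return "removed"      # no human ever reviews this decision
    return "published"

print(moderate("This miracle cure ends the pandemic!"))   # removed
print(moderate("Vaccines reduce severe illness."))        # published
```

The design choice to skip human review keeps moderation cheap and fast at platform scale, but it also means a misfiring classifier can silently remove legitimate speech, which is exactly the trade-off the regulatory debate turns on.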

None of this is transparent to consumers, because Internet and social media platforms lack the basic signals that characterise conventional commercial transactions. When people buy a car, they know they are buying a car. If that car fails to meet their expectations, consumers have a clear signal of the damage done because they no longer have money in their pocket. When people use social media, by contrast, they are not always aware of being the passive subjects of commercial transactions between the platform and advertisers involving their own personal data.

Users are also often unaware of how their news feed on social media is curated, and even people who are aware of algorithmic curation tend not to have an accurate understanding of what that involves. A Pew Research Center study found that 74% of American Facebook users did not know that the platform maintained data about their interests and traits.

Most commercial sites, from social media platforms to news outlets to online retailers, collect a wide variety of data about their users’ behaviours. Platforms use this data to deliver content and recommendations based on users’ interests and traits, and to let advertisers target advertising at relatively precise segments of the public.
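As a rough illustration of how such segmentation works, the following Python sketch filters a set of hypothetical user profiles against an advertiser-defined segment. The profile fields, segment criteria, and matching rule are all assumptions made for illustration, not any platform’s real targeting system.

```python
# Illustrative sketch of interest-based ad targeting: an advertiser defines a
# segment by interests and traits, and the platform selects matching users.
# The profile fields and segment definition are assumptions for illustration.

users = [
    {"id": "u1", "age": 34, "interests": {"cycling", "politics"}, "region": "UK"},
    {"id": "u2", "age": 52, "interests": {"gardening"}, "region": "UK"},
    {"id": "u3", "age": 29, "interests": {"politics", "gaming"}, "region": "US"},
]

# Hypothetical advertiser-defined segment: UK users aged 25-45 interested in politics.
segment = {"min_age": 25, "max_age": 45, "interest": "politics", "region": "UK"}

def matches(user: dict, seg: dict) -> bool:
    """Return True if the user's inferred profile fits the advertiser's segment."""
    return (seg["min_age"] <= user["age"] <= seg["max_age"]
            and seg["interest"] in user["interests"]
            and user["region"] == seg["region"])

audience = [u["id"] for u in users if matches(u, segment)]
print(audience)  # ['u1'] - only this user is shown the targeted ad
```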

Most users are unaware that the information they consume and produce is curated by algorithms, and hardly anyone understands that those algorithms favour content designed to provoke outrage or anger, attributes that fit hand in glove with political misinformation. People cannot be held responsible for this lack of awareness: they were neither consulted on the design of online architectures nor treated as partners in drawing up the rules of online governance.

Several legislative proposals in Europe suggest a way forward, but it remains to be seen whether any of these laws will be passed. In the US there is considerable public and political scepticism about regulations and about governments stepping in to regulate social media content in particular. This scepticism is at least partially justified because paternalistic interventions may, if done improperly, result in censorship.

In March 2022, the Russian parliament approved jail terms of up to 15 years for sharing “fake” information about the war against Ukraine, meaning anything that contradicts the official government position. This has caused many foreign and local journalists and news organisations to limit their coverage of the invasion or to withdraw from the country entirely.

In liberal democracies, regulations must not only be proportionate to the threat of harmful misinformation but also respectful of fundamental human rights. Fears of authoritarian government control must be weighed against the dangers of the status quo.

Achieving a more transparent and less manipulative media may well be the defining political battle of the 21st century.

Sources: Nieman Lab | Mozilla | Pew Research | Freedom House | SagePub | PNAS | Ahmed Al-Rawi / ResearchGate | Emerald Insight
