Elections 2024 - Fake News & Misinformation  

While the US and UK 2024 elections are several months away, the US Congress has been prioritising the security and integrity of the vote. At the centre of these concerns is generative Artificial Intelligence (AI) and the potential impact it could have on misinformation, fraud, and other digital campaigns leading up to the elections.

What must be taken into account is the power of this technology to create realistic, but entirely fake, news stories, soundbites and even videos of candidates.

The distributed and decentralised nature of elections is both good and bad for cyber security. Fortunately, decentralisation makes it hard, though not impossible, for a single cyber operation to compromise multiple jurisdictions. 

Now, OpenAI has declared it is going to introduce new tools to combat disinformation, just in time for the many elections being held in some of the world’s leading countries. It is clear by now that AI’s technological advances and the assistance it provides come at the heavy price of flooding the internet with disinformation.
With elections looming in countries like the US, India and the UK, OpenAI declared that it will not allow its tech (like the chatbot ChatGPT and image generator DALL-E 3) to be used for political campaigns.

According to Techxplore, OpenAI said in a blog post that it wants to make sure its technology is not used in a way that could undermine the democratic process. “We’re still working to understand how effective our tools might be for personalised persuasion. Until we know more, we don’t allow people to build applications for political campaigning and lobbying.”

While fears over election disinformation are nothing new, the sheer availability of AI text and image generators has greatly increased this threat, especially as many users cannot easily distinguish fake or manipulated content. However, disparities in cyber security resources and experience across jurisdictions create vulnerabilities. Smaller jurisdictions with fewer resources may be seen as more vulnerable targets by adversaries.

With the coming US election on everybody’s mind, government and local officials now realise that they are more likely than ever to be targeted by malicious cyber actors utilising generative AI to fake information. Local leaders should be carefully monitoring social media in their region to dispel any possible misinformation campaigns before they spread.

With the evolution of generative AI, malicious actors can conjure up fake news articles and social media accounts quicker than ever, easily sowing confusion or swaying public opinion before, during, or after an election.

Recently, a former Federal Bureau of Investigation (FBI) special agent published an article in which he states that misinformation, not tampering with election systems, remains the primary threat against US elections. This concern is magnified when AI is added into the equation, as generative AI programs could be leveraged to mass-produce misinformation or “other digital campaigns” during election season.

After all, generative AI has already been exploited by cyber threat actors to aid their criminal campaigns, whether to perfect the social engineering content used in phishing campaigns or to create malware that alters its behaviour in response to security measures, enhancing the effectiveness of their attacks.

For the former FBI agent, the threat is clear: foreign entities capitalising on misinformation could interfere in elections by stoking the misinformation fire targeted at electors to sow discord and discontent. There is also something to be said about the difficulty of relying on hacking to interfere in the US election process. According to a US Intelligence Community Report, Russian intelligence hackers did not breach voting machines or the computers that tallied election results, even though they were involved in other hacking activities. Whether this was due to the difficulty of doing so, or to the hackers’ intent not to change votes, remains unclear.

Nevertheless, as one think tank report on election security pointed out, the most significant concern is that states do not have enough resources to secure the election process, especially in smaller counties. Even so, the decentralised nature of the system means it would be extremely difficult to alter an election through hacking alone, although such activities could still have an impact in isolated incidents. That is some reassurance, given a recent report citing that only 4% are fully prepared to address cyber attacks come election time. Still, this does not take into account that a successful hack does not actually have to occur; merely the appearance that one has occurred may be enough.

The 2016 Mueller Report revealed that it may not be necessary to corrupt or manipulate the systems or data; the intimation that something has occurred may be enough to achieve the attacker’s objective of casting doubt in voters’ minds.

Fake news is not new, but the rate at which it can spread is. Many people have a hard time sorting real news from fake news on the internet, causing confusion. One example of how quickly disinformation can spread is the conflict in Ukraine. As part of its war effort, Russia deployed another powerful weapon: disinformation. Indeed, Russia built a digital barricade to prevent its citizens from accessing information, cutting them off from the rest of the world. Instead, Russian citizens must rely on the information their authorities permit. The free and open internet does not exist in Russia.

One of the main problems with this digital barricade is the spreading of disinformation. Russians receive false information, such as the assertion that Ukraine is the aggressor in this conflict. 

This digital isolation enables Russia to clamp down on information that does not follow the government line. Russia recently passed a censorship law preventing journalists, websites and other sources from publishing what government authorities deem to be disinformation.

Social media is becoming a more common way for readers to get their news and information. However, not all information on these sites can be trusted. Disinformation can cause mistrust, as its main goal is deception. Disinformation can spread through bots, bias, sharing and hackers.

What Is Fake News?

Fake news consists of articles that are intentionally false and designed to manipulate readers' perceptions of events, facts, news and statements. The information looks like news but either cannot be verified or describes things that did not happen. This fabricated content often mimics real news media, but without its credibility or accuracy.
Elements that make a News Story Fake include:

  • Unverifiable information
  • Pieces written by non-experts
  • Information not found on other sites
  • Information that comes from a fake site
  • Stories that appeal to emotions instead of stating facts

Categories of Fake News Include:  

Clickbait:   This uses exaggerated, questionable or misleading headlines, images or social media descriptions to generate web traffic. These stories are deliberately fabricated to attract readers.

Propaganda:   This spreads information, rumours or ideas to harm an institution, country, group of people or individual, typically for political gain.

Imposter content:  This impersonates genuine news sites, carrying made-up stories designed to deceive readers.

Biased/slanted news:   This attracts readers by confirming their own biases and beliefs. 

Satire:   This creates fake news stories for parody and entertainment.

State-sponsored news:   This operates under government control to spread disinformation to residents.   

Misleading headlines:   These stories may not be completely false but are distorted with misleading headlines and small snippets displayed in newsfeeds.

Fake news is harmful because it can create misunderstanding and confusion on important issues. Spreading false information can intensify social conflict and stir up controversy. These stories can also cause mistrust.

What Contributes To Disinformation?

Fake news spreads more rapidly than other news because it appeals to the emotions, grabbing attention. Here are some ways disinformation spreads on social media:

Continuous sharing:  It's easy to share and "like" content on social media. The number of people that see this content increases each time a user shares it with their social network. 

Recommendation engines:  Social media platforms and search engines also provide readers with personalised recommendations based on past preferences and search history. This further shapes who sees fake news.

Engagement metrics:  Social media feeds prioritise content using engagement metrics, including how often readers share or like stories. However, accuracy is not a factor.

Artificial Intelligence:   AI can create fake information tailored to the target audience. An AI engine can generate messages and immediately test them for effectiveness at swaying targeted demographics. It can also use bots to impersonate human users and spread disinformation.

Hackers:   These people can plant stories in real news media outlets, making them appear as though they come from reliable sources. For example, the Ukrainian government reported that hackers had broken into government websites and posted false news about a peace treaty.

Trolls:   Fake news can also appear in the comments of reputable articles.  Trolls deliberately post to upset and start arguments with other readers. They are sometimes paid for political reasons, which can play a part in spreading fake news.

Misinformation Versus Disinformation

Misinformation and disinformation are two terms that are often used interchangeably; however, they have different meanings and intent. Misinformation is inaccurate information shared without any intention to cause harm. It can be shared unintentionally, due to a lack of knowledge or understanding of the topic. Typically, people spread misinformation unknowingly because they believe it to be true.

Disinformation is spread deliberately to deceive, and there is typically an objective behind it. For example, some of the most prominent disinformation posts revolve around governments, such as the Russian government’s disinformation campaigns about its war with Ukraine, designed to win public support. These posts contain information the authors want people to believe, even though it is not true.

How To Spot Disinformation on Social Media

The first step of fighting the spread of disinformation on social media is to identify fake news. It's best to double-check before sharing with others. Here are Ten Tips to recognise fake news and identify disinformation.

Check other reliable sources:  Search other reputable news sites and outlets to see if they are reporting on the story. Check for credible sources cited within the story. Credible, professional news agencies have strict editorial guidelines for fact-checking an article.

Check the source of the information:  If the story is from an unknown source, do some research. Examine the web address of the page and look for unusual domains other than ".com", such as ".infonet" or ".offer". Check for any spelling errors of the company name in the URL. Consider the reputation of the source and their expertise on the matter. Bad actors may create webpages that mimic professional sites to spread fake news. When in doubt, go to the home page of the organisation and check for the same information. For example, if a story looks like it is from the U.S. Centers for Disease Control and Prevention (CDC), go to the CDC's secure website and search for that information to verify it.
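
For readers comfortable with a little scripting, part of this domain check can be automated. The Python sketch below flags web addresses whose domain closely resembles, but does not match, a small allow-list of trusted domains; the allow-list, the similarity threshold and the example URLs are illustrative assumptions, not a definitive detector.

    # Illustrative sketch only: flag URLs whose domain is a near-miss of a trusted domain.
    # The allow-list and the 0.8 similarity threshold are assumptions for this example.
    from difflib import SequenceMatcher
    from urllib.parse import urlparse

    TRUSTED_DOMAINS = ["bbc.co.uk", "reuters.com", "cdc.gov"]  # hypothetical allow-list

    def suspicious_domain(url, threshold=0.8):
        domain = urlparse(url).netloc.lower().removeprefix("www.")
        if domain in TRUSTED_DOMAINS:
            return False  # exact match with a trusted source
        # A domain that closely resembles, but does not equal, a trusted one
        # (for example a one-character misspelling) is a classic imposter-site sign.
        return any(SequenceMatcher(None, domain, trusted).ratio() >= threshold
                   for trusted in TRUSTED_DOMAINS)

    print(suspicious_domain("https://www.bbc.c0.uk/news/article"))  # True: near-miss of bbc.co.uk
    print(suspicious_domain("https://www.reuters.com/world"))       # False: exact trusted match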

Look at the Author:  Perform a search on the author. Check for credibility, how many followers they have and how long the account has been active. Scan other posts to determine if they have bot behaviours, such as posting at all times of the day and from various parts of the world. Check for qualities such as a username with numbers and suspicious links in the author's bio. If the content is retweeted from other accounts and has highly polarised political content, it is likely a fake bot account.
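
One of the bot signals mentioned above, posting at all hours of the day, is easy to check roughly if you can export an account's post timestamps. The Python sketch below is a crude heuristic rather than a real bot detector; the 20-distinct-hours threshold and the sample data are assumptions for illustration.

    # Illustrative heuristic only: a single human author rarely posts in almost every
    # hour of the day; the 20-distinct-hours threshold is an assumption.
    from datetime import datetime

    def posts_around_the_clock(post_times, min_distinct_hours=20):
        hours_seen = {t.hour for t in post_times}
        return len(hours_seen) >= min_distinct_hours

    # Hypothetical sample: one post in every hour of a single day.
    sample = [datetime(2024, 1, 1, hour, 5) for hour in range(24)]
    print(posts_around_the_clock(sample))  # True, so the account is worth a closer look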

Search the Profile Photo:  In addition to looking at the author's information and credibility, check their profile picture. Complete a reverse image search of the profile photo using Google Reverse Image Search. Check that the image is not a stock photo or a picture of a celebrity. If the image does not appear to be original, the article is likely unreliable because its author is effectively anonymous.

Read Beyond the Headline:  Think about whether the story sounds unrealistic or too good to be true. A credible story has plenty of facts, conveyed with expert quotes, official statistics and survey data. It can also have eyewitness accounts. If there are no detailed or consistent facts beyond the headline, question the information. Look for evidence that the event really happened, and make sure facts are not used solely to back up a particular viewpoint.

Develop a Critical Mindset:  Don't let personal beliefs cloud judgement. Biases can influence how someone responds to an article. Social media platforms suggest stories that match a person's interests, opinions and browsing habits. Don't let emotions influence views on the story. Look at a story critically and rationally. If the story is trying to persuade the reader or send readers to another site, it is probably fake news.

Determine if it is a Joke:  Satirical websites make the story a parody or a joke. Check the website to see if they consistently post funny stories and if they are known for satire.  

Watch for Sponsored Content:  Look at the top of the content for "sponsored content" or a similar designation. These stories often have catchy photos and appear to link to other news stories. They are ads designed to reach the reader's emotions. Check the page for labels such as "paid sponsor" or "advertisement." These articles are baiting readers into buying something, whether they are legitimate or deceitful. Some may also take users to malicious sites that install malware, which can steal data from devices, cause hardware failure, or make a computer or network inoperable.

Use a Fact-Checking Site:  Fact-checking sites can also help determine if the news is credible or fake. These sites use independent fact checkers to review and research the accuracy of the information by checking reputable media sources. They are often part of larger news outlets that identify incorrect facts and statements. Popular Fact-Checking Sites Include:

  • PolitiFact: This Pulitzer Prize-winning site researches claims from politicians to check their accuracy.
  • FactCheck.org: This site from the Annenberg Public Policy Center also checks the accuracy of political claims.
  • Snopes: This is one of the oldest and most popular debunking sites on the internet, focusing on news stories, urban legends and memes. Its independent fact-checkers cite all sources at the end of each debunking.
  • BBC Reality Check: This site, part of the British Broadcasting Corporation (BBC), checks facts for news stories.
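
Some fact-checking services also expose programmatic interfaces, so a claim can be looked up from a script. The Python sketch below assumes the Google Fact Check Tools API (its claims:search endpoint), the third-party requests package and a valid API key; the endpoint, parameters and response fields are stated here as assumptions, so check the current documentation before relying on them.

    # Illustrative sketch: look up published fact-checks for a claim.
    # Assumes the Google Fact Check Tools API and the third-party requests package.
    import requests

    API_KEY = "YOUR_API_KEY"  # placeholder, not a real key
    ENDPOINT = "https://factchecktools.googleapis.com/v1alpha1/claims:search"

    def lookup_claim(text):
        response = requests.get(ENDPOINT, params={"query": text, "key": API_KEY}, timeout=10)
        response.raise_for_status()
        return response.json().get("claims", [])

    for claim in lookup_claim("example claim text"):
        for review in claim.get("claimReview", []):
            publisher = review.get("publisher", {}).get("name", "unknown")
            print(publisher, "-", review.get("textualRating"))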

Check Image Authenticity:  Modern editing software makes it easy to create fake images that look real. Look for shadows or jagged edges in the photo. Google Reverse Image Search is another way to check the image to see where it originated and if it's altered.
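
For a rough automated check of whether a viral image is simply a reused copy of a known original, a perceptual hash comparison can help. The Python sketch below assumes the third-party Pillow and imagehash packages and uses hypothetical file names; it only spots near-copies, not sophisticated edits.

    # Illustrative sketch: perceptual hashes stay similar under resizing or recompression,
    # so a small Hamming distance suggests the suspect image is a copy of the original.
    # Assumes the Pillow and imagehash packages; the distance threshold is an assumption.
    from PIL import Image
    import imagehash

    def likely_same_image(original_path, suspect_path, max_distance=8):
        original = imagehash.phash(Image.open(original_path))
        suspect = imagehash.phash(Image.open(suspect_path))
        return original - suspect <= max_distance  # ImageHash subtraction gives Hamming distance

    # Hypothetical file names for illustration only.
    print(likely_same_image("press_photo_original.jpg", "viral_post_photo.jpg"))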

What are Social Networks doing to Combat Disinformation?

Social media platforms are cracking down on false information. In October 2023, the Israel-Hamas war took centre stage on social media as disinformation began to spread quickly, and social media platforms took precautions.

Israel-Hamas War:  Platforms issued statements about how they are handling disinformation on the war, which may be used to incite hate and violence. Here is what some social platforms released:

TikTok:   TikTok said it launched a command centre to manage safety globally. The company plans to improve its software to detect and remove graphic or violent content, and it has also hired Arabic and Hebrew linguists to moderate content.

Facebook and Instagram:   Facebook and Instagram parent company Meta said it launched a special operations centre with experts who speak Arabic and Hebrew to monitor content. It also lowered its thresholds for restricting posts in order to prevent questionable content from spreading.

Twitter / X:   X has said it increased resources for the crisis and is monitoring content around the clock, especially content about hostages.

YouTube:   YouTube has removed videos since the attack and says it continues to monitor for hate speech, graphic images and extremism.

Telegram:   The messaging app Telegram restricted Hamas-operated channels and channels closely associated with the militant group. These channels are no longer accessible to Telegram users.

Regular Moderation to Prevent Disinformation

Facebook runs two initiatives to address the general rise of disinformation, the News Integrity Initiative and the Facebook Journalism Project, which highlight problems with fake news and spread awareness. The company also takes action against pages and individuals that share fake news, removing them from the site.

  • Instagram and Facebook have a new "false information" label to combat disinformation. Third-party fact checkers review and identify potential false claims and posts. If this team determines this information is untrue, they flag it with a label to notify social media users it contains misinformation. When readers want to view a post with this label, they must click an acknowledgement that says the information is not true. If they try to share this information, they get a warning they are about to share false information.
  • Twitter said in a statement released in May 2020 that it does not tolerate disinformation, and it has suspended accounts for manipulative or spam activity.
  • LinkedIn says it does not tolerate misinformation and asks users to report any disinformation they see. If a review deems the information false, LinkedIn will remove the post. LinkedIn also has a strict user agreement, and users who do not comply will be removed.

To fight fake news on social media, users must first recognise what is false. If the user deems the information as fake news, it's best to report it to the platform.

Organised social media manipulation campaigns were found in each of the 81 surveyed countries, up 15% in one year from 70 countries in 2019. Governments, public relations firms and political parties are producing misinformation on an industrial scale, according to a University of Oxford report. It shows disinformation has become a common strategy, with more than 93% of the countries surveyed seeing disinformation deployed as part of political communication.


References:

  • University of Oxford
  • I-HLS
  • Pagefreezer
  • Springer / Olan et al
  • TechTarget
  • Oodaloop
  • US DNI
  • US Dept. of Justice
  • Belfer Centre
  • Fortune
  • US State Dept
  • LinkedIn
  • Reuters
  • TikTok
