Facebook Delivers AI To Detect Suicidal Posts

This is software to save lives. Facebook’s new “proactive detection” artificial intelligence technology will scan all posts for patterns of suicidal thoughts, and when necessary send mental health resources to the user at risk or their friends, or contact local first-responders. 

By using AI to flag worrisome posts to human moderators instead of waiting for user reports, Facebook can decrease how long it takes to send help.

Facebook previously tested using AI to detect troubling posts and more prominently surface suicide reporting options to friends in the US. 

Now Facebook will scour all types of content around the world with this AI, except in the European Union, where General Data Protection Regulation privacy laws on profiling users based on sensitive information complicate the use of this technology.

Facebook will also use AI to prioritise particularly risky or urgent user reports so moderators can address them more quickly, along with tools that instantly surface local-language resources and first-responder contact information. It is also dedicating more moderators to suicide prevention, training them to handle cases 24/7, and now has 80 local partners, such as Save.org, the National Suicide Prevention Lifeline and Forefront, from which to provide resources to at-risk users and their networks.

“This is about shaving off minutes at every single step of the process, especially in Facebook Live,” says VP of product management Guy Rosen. Over the past month of testing, Facebook has initiated more than 100 “wellness checks” with first-responders visiting affected users. “There have been cases where the first-responder has arrived and the person is still broadcasting.”

The idea of Facebook proactively scanning the content of people’s posts could trigger some dystopian fears about how else the technology could be applied. Facebook didn’t have answers about how it would avoid scanning for political dissent or petty crime, with Rosen merely saying “we have an opportunity to help here so we’re going to invest in that.” 

The technology certainly has massive potential benefits, but it's another space where we have little choice but to hope Facebook doesn't go too far.

Facebook CEO Mark Zuckerberg praised the product update in a post today, writing that “In the future, AI will be able to understand more of the subtle nuances of language, and will be able to identify different issues beyond suicide as well, including quickly spotting more kinds of bullying and hate.”

According to TechCrunch there is no way for Facebook users to opt out of having their posts scanned. A Facebook spokesperson noted that the feature is designed to enhance user safety and that support resources offered by Facebook can be quickly dismissed if a user doesn’t want to see them.

Facebook trained the AI by finding patterns in the words and imagery used in posts that have been manually reported for suicide risk in the past. It also looks for comments like “are you OK?” and “Do you need help?”
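
The article doesn't detail the model itself, but a minimal sketch of that general approach, learning patterns from previously reported posts and adding a signal from concerned comments, might look like the Python below. Everything in it (the sample data, the phrase list, the weighting) is hypothetical and purely illustrative, not Facebook's actual system.

# Illustrative sketch: a tiny text classifier trained on posts previously
# reported for suicide risk, plus a score boost when friends leave
# concerned comments. All data and weights are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

train_posts = [
    "i can't do this anymore, nothing matters",
    "saying goodbye to everyone tonight",
    "great day at the beach with the kids",
    "new job starts monday, so excited",
]
train_labels = [1, 1, 0, 0]  # 1 = previously reported for suicide risk

vectorizer = TfidfVectorizer(ngram_range=(1, 2))
model = LogisticRegression().fit(vectorizer.fit_transform(train_posts), train_labels)

CONCERNED_PHRASES = ("are you ok", "do you need help", "please call me")

def risk_score(post_text, comments):
    base = model.predict_proba(vectorizer.transform([post_text]))[0][1]
    concerned = sum(any(p in c.lower() for p in CONCERNED_PHRASES) for c in comments)
    return min(1.0, base + 0.1 * concerned)  # hypothetical weighting

print(risk_score("i just want it all to end", ["Are you OK? Please call me"]))
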
“We’ve talked to mental health experts, and one of the best ways to help prevent suicide is for people in need to hear from friends or family that care about them,” Rosen says. “This puts Facebook in a really unique position. We can help connect people who are in distress to friends and to organizations that can help them.”

How suicide reporting works on Facebook now

Through the combination of AI, human moderators and crowdsourced reports, Facebook could try to prevent tragedies like the case last month in which a father killed himself on Facebook Live.

Live broadcasts in particular have the power to wrongly glorify suicide, hence the necessary new precautions. They also affect a large audience all at once, since everyone sees the content simultaneously, unlike recorded Facebook videos, which can be flagged and taken down before many people view them.

Now, if someone expresses thoughts of suicide in any type of Facebook post, Facebook's AI will proactively detect it, flag it to prevention-trained human moderators and make reporting options more accessible to viewers.

When a report comes in, Facebook’s tech can highlight the part of the post or video that matches suicide-risk patterns or that’s receiving concerned comments. That avoids moderators having to skim through a whole video themselves. 
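
As a rough, hypothetical illustration of that highlighting step, one simple heuristic would be to point moderators at the stretch of a broadcast drawing the most concerned comments; the window size and data below are invented for the example.

# Illustrative sketch: find the minute of a live video that attracted the
# most concerned comments, so moderators can jump straight to it.
from collections import Counter

def busiest_window(comment_timestamps, window_seconds=60):
    # Bucket comment times into fixed windows and return the busiest one.
    buckets = Counter(int(t // window_seconds) for t in comment_timestamps)
    start_bucket, _ = buckets.most_common(1)[0]
    return start_bucket * window_seconds

# Seconds into the broadcast when comments like "are you OK?" appeared.
print(busiest_window([12, 305, 310, 322, 900]))  # -> 300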

AI prioritises user reports about suicide risk as more urgent than other types of content-policy violations, like depictions of violence or nudity. Facebook says these accelerated reports get escalated to local authorities twice as fast as unprioritised reports.
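
A rough sketch of that kind of triage, with invented categories and priorities rather than Facebook's real ones, could be a simple priority queue in which suicide-risk reports jump ahead of other report types:

# Illustrative sketch: a moderation queue that pulls suicide-risk reports
# ahead of other content-policy reports. Categories and priorities are invented.
import heapq
import itertools

PRIORITY = {"suicide_risk": 0, "violence": 1, "nudity": 2}
_arrival = itertools.count()  # tie-breaker keeps arrival order within a priority

queue = []

def submit_report(report_id, category):
    heapq.heappush(queue, (PRIORITY[category], next(_arrival), report_id))

def next_report():
    _, _, report_id = heapq.heappop(queue)
    return report_id

submit_report("r1", "nudity")
submit_report("r2", "suicide_risk")
submit_report("r3", "violence")
print(next_report())  # -> "r2": the suicide-risk report is reviewed first
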
Facebook’s tools then bring up local language resources from its partners, including telephone hotlines for suicide prevention and nearby authorities. 

The moderator can then contact the responders and try to send them to the at-risk user’s location, surface the mental health resources to the at-risk user themselves or send them to friends who can talk to the user. “One of our goals is to ensure that our team can respond worldwide in any language we support,” says Rosen.

Back in February, Zuckerberg wrote that “There have been terribly tragic events, like suicides, some live streamed, that perhaps could have been prevented if someone had realised what was happening and reported them sooner. Artificial intelligence can help provide a better approach.”

With more than 2 billion users, it's good to see Facebook stepping up here. Facebook has created a way for users to get in touch with and care for each other, but it has also, unfortunately, created an unmediated real-time distribution channel in Facebook Live that can appeal to people who want an audience for violence they inflict on themselves or others.

Creating a ubiquitous global communication utility comes with responsibilities beyond those of most tech companies, which Facebook seems to be coming to terms with.

TechCrunch:
