Facebook said Monday that it’s using artificial intelligence to detect whether someone is expressing thoughts of suicide in a post or live video.
The company has been grappling with how to respond to suicides streamed live on the world’s largest social network. Family and friends of some suicide victims have criticized Facebook for not pulling down videos depicting self-harm quickly enough to stop them from going viral.
Now the company says it’s stepping up its efforts to prevent people from killing themselves. More workers are also reviewing reports of suicide and self-harm, and the company said it’s improving how it identifies first responders.
“With all the fear about how AI may be harmful in the future, it’s good to remind ourselves how AI is actually helping save people’s lives today,” wrote Facebook CEO and co-founder Mark Zuckerberg in a post.
Through the use of AI, Facebook is looking for patterns that signal a user might be suicidal. Comments from friends such as “Are you OK?” and “Can I help?” can indicate that someone is in danger.
This helps the company, which has 2 billion users, figure out which reports should be prioritized.
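The article doesn’t describe Facebook’s actual model, but the general idea — scoring reports by the presence of concerned-comment phrases and reviewing the highest-scoring ones first — can be illustrated with a minimal sketch. The phrases, weights, and `Report` structure below are hypothetical:

```python
# Minimal sketch of pattern-based report prioritization. The phrases,
# weights, and data model are hypothetical illustrations, not
# Facebook's actual system.

from dataclasses import dataclass, field

# Hypothetical comment phrases that, per the article, can signal
# that a poster may be in danger.
CONCERN_PHRASES = {
    "are you ok": 2.0,
    "can i help": 2.0,
    "please don't": 1.5,
}

@dataclass
class Report:
    post_id: str
    post_text: str
    comments: list = field(default_factory=list)

def concern_score(report: Report) -> float:
    """Score a report by how many concerned-comment patterns appear."""
    score = 0.0
    for comment in report.comments:
        lowered = comment.lower()
        for phrase, weight in CONCERN_PHRASES.items():
            if phrase in lowered:
                score += weight
    return score

def prioritize(reports: list) -> list:
    """Order reports so the most urgent are reviewed first."""
    return sorted(reports, key=concern_score, reverse=True)

if __name__ == "__main__":
    reports = [
        Report("a", "long day...", ["nice pic!"]),
        Report("b", "i can't do this anymore", ["Are you OK?", "Can I help?"]),
    ]
    for r in prioritize(reports):
        print(r.post_id, concern_score(r))
```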
Facebook said that in the past month it has worked with first responders on more than 100 “wellness checks,” carried out after the system detected users who may have been expressing suicidal thoughts.
Facebook said it’s rolling out this new AI tool outside the United States first, but plans to make it available worldwide except in the European Union.
Zuckerberg signaled that Facebook is also turning to AI to help flag other types of posts that run afoul of its online rules.
“In the future, AI will be able to understand more of the subtle nuances of language, and will be able to identify different issues beyond suicide as well, including quickly spotting more kinds of bullying and hate,” he wrote.
———
© 2017 The Mercury News (San Jose, Calif.)
Distributed by Tribune Content Agency, LLC.