University of Oregon associate professor of computer science Daniel Lowd is getting attention for his article on Facebook’s use of artificial intelligence to control spam and curate unauthorized content.
The article has been published by The Conversation, The Associated Press, the Los Angeles Times, the Seattle Post-Intelligencer and other outlets.
Lowd’s article discusses the issues and complexities of using AI to moderate content on social media sites, using Facebook as a recent example of these issues in action.
“Facebook has released statistics on abusive behavior on its social media network, deleting more than 22 million posts for violating its rules against pornography and hate speech — and deleting or adding warnings about violence to another 3.5 million posts,” the article says.
Using AI to detect content such as hate speech and pornography is difficult because internet content changes rapidly, and so do human judgments about what counts as abuse.
“Even Facebook’s human moderators have trouble defining hate speech, inconsistently applying the company’s guidelines and even reversing their decisions (especially when they make headlines),” Lowd writes.
Lowd concludes the article by noting the broader role AI should play in these efforts.
“Of course, no machine learning system will ever be perfect,” he wrote. “Like humans, computers should be used as part of a larger effort to fight abuse.”
To read the full article, see “Can Facebook use AI to fight online abuse?”