Facebook founder Mark Zuckerberg said Wednesday that identifying hate speech on the platform through artificial intelligence is a far more difficult task than doing the same for a nipple.
The tech executive’s candid comments came after his company revealed its first-quarter earnings report, during a conference call with analysts, according to Business Insider. Results for the first portion of the year were fairly positive: the company posted revenue of $12 billion, a substantial jump over the same quarter last year, and its number of users grew by 3.5 percent (49 million).
Like the updated statistics, Zuckerberg’s remarks offer a glimpse into the corporation’s wellbeing and mindset.
“It’s easier to build an AI system to detect a nipple than what is hate speech,” he said, BI reports.
The wunderkind’s line of thinking suggests he has reservations about the company’s ability to fairly police content on the platform, since determining what counts as hate speech is subjective and thus inherently difficult. Identifying offensive subject matter and communications is subject to the differing judgments of individual moderators or groups of them, leaving AI to deal with the purported problem.
But just as it is tough for humans to impartially monitor the billions of pieces of content on the massive social media network, so too is it hard to create an AI system that doesn’t reflect the innate biases of its creators. Nevertheless, Zuckerberg said or implied multiple times during his highly anticipated congressional hearings that AI is really the only choice for the ultimate goal of making the platform as squeaky clean as possible. (RELATED: Facebook Looks To Ramp Up Outreach With Conservatives And Libertarians To Help Combat Privacy Rules)
Since those hearings, which followed a number of recent events and ostensibly new revelations, Facebook has tried to be more transparent about how it operates. For example, it revealed its once-secret censorship rules Tuesday, detailing how it cracks down on content like “graphic violence,” “cruel and insensitive” behavior such as bullying, “adult nudity and sexual activity,” and “hate speech.” Each category provides criteria that can be interpreted, and therefore enforced, in many different ways. The same holds when an AI system is the entity doing the deciphering and subsequent enforcement, apparently corroborating Zuckerberg’s claim that nipple spotting is far less complex than policing ambiguously defined hate speech.
But even drawing the line for nipples isn’t easy.
“We restrict the display of nudity or sexual activity because some people in our community may be sensitive to this type of content,” the company’s community standards read. “Our nudity policies have become more nuanced over time. For example, while we restrict some images of female breasts that include the nipple, we allow other images, including those depicting acts of protest, women actively engaged in breast-feeding, and photos of post-mastectomy scarring.”
As calls from portions of the public to do more have grown over the years, Facebook has been repeatedly accused of inappropriate censorship — including of content in some way displaying a female body part. On separate occasions in recent years, the company rejected images of a woman’s bare back, blocked an instructional breast examination video that showed animated breasts, and restricted the social media capabilities of a photographer who documents mothers giving birth.
Send tips to email@example.com.
Content created by The Daily Caller News Foundation is available without charge to any eligible news publisher that can provide a large audience. For licensing opportunities of our original content, please contact firstname.lastname@example.org.