
Facebook says it removed hateful anti-Muslim posts when it hadn’t

Facebook CEO Mark Zuckerberg. Robert Galbraith/Reuters


  • Facebook sent messages to a user telling her that it had
    removed hate speech she had reported, but it hadn’t, the BBC
    reports.
  • Facebook said this was caused by a glitch, which sent
    automated messages telling users that reported content had been
    taken down when it was still live.
  • The company could not comment on how many users the glitch may
    have affected.

Facebook told a user that it had removed hate speech she
reported, when it hadn’t, according to the BBC.

Facebook told user Janet (a name given to the user by the BBC to
protect her identity) that it had removed hateful anti-Muslim
posts when they actually remained live on the social network.

After reporting the posts, she received a message saying: “We
removed both the group and all its posts, including the one you
reported.” But this was not the case.

Facebook told the BBC that it is looking into a possible glitch
in its content moderation system. The glitch reportedly sends a
message telling users that content they’ve reported has been
taken down, when in fact Facebook’s moderators have deemed it
permissible to stay online.

“We are investigating this issue, and will share more information
as soon as we can,” Facebook said. Business Insider contacted
Facebook to ask whether the glitch had been fixed, what caused it,
and how many users it may have affected.

Janet shared examples of content that stayed up after she was
told it had been removed, including posts from a group with
upwards of 54,000 members named “LARGEST GROUP EVER! We need
10000000 members to Make America Great Again!” Janet reported the
group for anti-Muslim and anti-immigrant rhetoric.

“[Facebook] has been promoting themselves in my Newsfeed saying
they are trying to keep our democracy safe by eliminating content
that is false and divisive,” Janet said.

“If they are sending me notices they removed the content and
offensive groups but in reality are not, doesn’t this go against
what they say in public or to Congress?”

How Facebook goes about removing content that is false or
divisive was a key talking point when COO Sheryl Sandberg gave
testimony to Congress earlier this month. Facebook also admitted
in August that it had been “too slow” to act on hate speech in
Myanmar.

“Facebook claims to be removing this content but obviously they
are not,” Janet said. “I can’t be the only one.”

Facebook has been under the microscope for how it polices its
platform recently, as some critics feel it hasn’t invested enough
in employing people to moderate content that gets reported.
Sandberg told Congress that Facebook will be doubling the number
of people it employs in safety and security to 20,000.
