LinkedIn formally joins EU Code on hate speech takedowns


Microsoft-owned LinkedIn has committed to doing more to quickly purge illegal hate speech from its platform in the European Union by formally signing up to a self-regulatory initiative that seeks to tackle the issue through a voluntary Code of Conduct.

In a statement today, the European Commission announced that the professional social network has joined the EU’s Code of Conduct on Countering Illegal Hate Speech Online, with justice commissioner Didier Reynders welcoming LinkedIn’s (albeit tardy) participation and adding that the code “is and will remain an important tool in the fight against hate speech, including within the framework established by digital services legislation”.

“I invite more businesses to join, so that the online world is free from hate,” Reynders added.

While LinkedIn’s name wasn’t formally associated with the voluntary Code before now, the company said it has “supported” the effort via parent company Microsoft, which was already signed up.

In a statement on its decision to formally join now, it also said:

“LinkedIn is a place for professional conversations where people come to connect, learn and find new opportunities. Given the current economic climate and the increased reliance jobseekers and professionals everywhere are placing on LinkedIn, our responsibility is to help create safe experiences for our members. We couldn’t be clearer that hate speech is not tolerated on our platform. LinkedIn is a strong part of our members’ professional identities for the entirety of their career — it can be seen by their employer, colleagues and potential business partners.”

In the EU, ‘illegal hate speech’ can mean content that espouses racist or xenophobic views, or that seeks to incite violence or hatred against groups of people because of their race, skin color, religion or ethnic origin, etc.

A number of Member States have national laws on the issue — and some have passed their own legislation specifically targeted at the digital sphere. So the EU Code is supplementary to any actual hate speech legislation, and it is not legally binding.

The initiative kicked off back in 2016 — when a handful of tech giants (Facebook, Twitter, YouTube and Microsoft) agreed to accelerate takedowns of illegal speech (or well, attach their brand names to the PR opportunity associated with saying they would).

Since the Code became operational, a handful of other tech platforms have joined — with video sharing platform TikTok signing up last October, for example.

But plenty of digital services (notably messaging platforms) still aren’t participating. Hence the Commission’s call for more digital services companies to get on board.

At the same time, the EU is in the process of firming up hard rules in the area of illegal content.

Last year the Commission proposed broad updates (aka the Digital Services Act) to existing ecommerce rules, setting operational ground rules that it said are intended to bring online laws in line with offline legal requirements — in areas such as illegal content and, indeed, illegal goods. So, in the coming years, the bloc will get a legal framework that tackles — at least at a high level — the hate speech issue, not merely a voluntary Code.

The EU also recently adopted legislation on terrorist content takedowns (this April) — which is set to start applying to online platforms from next year.

But it’s interesting to note that, on the perhaps more controversial issue of hate speech (which can deeply intersect with freedom of expression), the Commission wants to maintain a self-regulatory channel alongside incoming legislation — as Reynders’ remarks underline.

Brussels evidently sees value in having a mixture of ‘carrots and sticks’ where hot button digital regulation issues are concerned. Especially in the controversial ‘danger zone’ of speech regulation.

So, while the DSA is set to bake in standardized ‘notice and response’ procedures to help digital players swiftly respond to illegal content, keeping the hate speech Code around means there’s a parallel conduit where key platforms could be encouraged by the Commission to commit to going further than the letter of the law (thereby enabling lawmakers to sidestep any controversy that might arise if they tried to push more expansive speech moderation measures into legislation).

The EU has — for several years — had a voluntary Code of Practice on Online Disinformation too. (And a spokeswoman for LinkedIn confirmed it has been signed up to that since its inception, also through its parent company Microsoft.)

And while lawmakers recently announced a plan to beef that Code up — to make it “more binding”, as they oxymoronically put it — the Commission certainly isn’t planning to legislate on that (even fuzzier) speech issue.

In further public remarks today on the hate speech Code, the Commission said that a fifth monitoring exercise in June 2020 showed that on average companies reviewed 90% of reported content within 24 hours and removed 71% of content that was considered to be illegal hate speech.

It added that it welcomed the results — but also called for signatories to redouble their efforts, especially around providing feedback to users and in how they approach transparency around reporting and removals.

The Commission has also repeatedly called for platforms signed up to the disinformation Code to do more to tackle the tsunami of ‘fake news’ being fenced on their platforms, including — on the public health front — what it last year dubbed a coronavirus ‘infodemic’.

The COVID-19 crisis has undoubtedly contributed to concentrating lawmakers’ minds on the complex issue of how to effectively regulate the digital sphere and likely accelerated a number of EU efforts.
