Websites will face fines from the European Commission if extremist content remains on their sites for over an hour.

The regulation would affect Twitter, Facebook and YouTube, among others.

By MNN Newsroom, Monday, August 20, 2018

Brussels plans to force companies including Facebook, YouTube and Twitter to identify and delete online terrorist propaganda and extremist violent content, or face the threat of fines.

The European Commission has decided to abandon a voluntary approach to get big internet platforms to remove terror-related videos, posts and audio clips from their websites, in favour of tougher draft regulation due to be published next month.

The shake-up comes in the wake of high-profile terror attacks in London, Paris and Berlin over the past two years.

Julian King, the EU’s commissioner for security, told the Financial Times that Brussels had “not seen enough progress” on the removal of terrorist material from technology companies and would “take stronger action in order to better protect its citizens”.

In March, the EU’s civil service published details of the current voluntary arrangement, which noted that “terrorist content is most harmful in the first hours of its appearance online”.

At the time, it said there was “significant scope for more effective action”.

Mr King told the FT that the law would apply to small social media apps as well as the bigger players.

“Platforms have differing capabilities to act against terrorist content and their policies for doing so are not always transparent,” he added.

A study published last month by the not-for-profit Counter Extremism Project said that between March and June, 1,348 videos related to the Islamic State group were uploaded on to YouTube, via 278 separate accounts, garnering more than 163,000 views.

The report said that 24 per cent of the videos had remained online for more than two hours.

The move to draw up legislation has nonetheless been contested inside parts of the commission, where some believe self-regulation has been a success on the biggest platforms, those most used by terrorist groups.

Google said more than 90 per cent of the terrorist material removed from YouTube was flagged automatically, with half of the videos having fewer than 10 views. Facebook said it had removed the vast majority of the 1.9m pieces of Isis and al-Qaeda content detected on the site in the first three months of this year.

If approved, the proposed regulation would mark the first time the European Commission has explicitly targeted tech firms' handling of illegal content.