New EU law imposes a time limit on tech giants to remove terrorist content
New EU law imposes a time limit of one hour on tech giants to remove terrorist content.
Last month, the European Parliament adopted a new EU law forcing online platforms to remove terrorist content within one hour of receiving a removal order from a competent authority. According to a report from Euractiv, the regulation on preventing the dissemination of terrorist content online has faced opposition and has been called controversial. The European Commission drafted the law in response to several terror attacks across the bloc. Considered a necessary step in combating the spread of terrorist content online, the regulation was formally adopted on April 28th, after being approved by the Committee on Civil Liberties, Justice and Home Affairs in January.
The proposed legislation was adopted without a vote, after approval from the Committee on Civil Liberties, Justice and Home Affairs.
On January 11, the Committee on Civil Liberties, Justice and Home Affairs (LIBE) approved the proposed legislation, with 52 votes in favor and 14 against. A decision was made to forgo a new debate in the chamber, and the legislation was approved without being put to a vote in the plenary. Since then, the law has drawn critical scrutiny, and some have expressed discomfort that it was pushed through without sufficient opportunity for debate. Critics fear that the law could be abused to silence non-terrorist speech that may be considered controversial, or that tech giants may begin preemptively monitoring posts themselves using algorithms.
Critics claim that such a short deadline could push tech giants to rely more heavily on automated moderation algorithms.
Some critics have called the law ‘anti-free speech’ and urged MEPs to reject the Commission’s proposed legislation. Prior to the April 28th meeting, 61 organisations signed an open letter to EU lawmakers asking that the proposal be rejected. While the Commission has sought to calm many of these fears, some criticism of the new EU law lingers. Critics fear that the short deadline imposed on digital platforms to remove terrorist content may result in platforms deploying automated content moderation tools. They also note that the law could potentially be used to unfairly target and silence non-terrorist groups. The letter’s signatories argued that “only courts or independent administrative authorities subject to judicial review should have the power to issue deletion orders”.
Provisions have been added to the new EU law to take these criticisms into account.
In the face of criticism of the new EU law, lawmakers appear to be taking the feedback seriously and have added a number of safeguards to the legislation. It has been specifically clarified that the law is not to target “material disseminated for educational, journalistic, artistic or research purposes or for awareness-raising purposes against terrorist activity”. This is intended to curb opportunistic attempts to use the law to silence non-terrorist groups over disagreements or misunderstandings. In addition, the regulation now states that “any requirement to take specific measures shall not include an obligation to use automated tools by the hosting service provider”, addressing the concern that platforms would feel compelled to use automated filters to monitor posts themselves. Transparency obligations have also been added to the legislation; however, many critics remain dissatisfied with the modifications.