New EU law gives tech giants one hour to remove terrorist content.
Last month, the European Parliament adopted a new EU law forcing online platforms to remove terrorist content within an hour of receiving a removal order from a competent authority. According to a report from Euractiv, this regulation on preventing the dissemination of terrorist content online has faced opposition and has been called controversial. The European Commission drafted the law in response to several terror attacks across the bloc. Considered a necessary step in combating the dissemination of terrorist content online, the regulation came into effect on April 28th, after being approved by the Committee on Civil Liberties, Justice and Home Affairs in January.
The proposed legislation was adopted without a vote, after approval from the Committee on Civil Liberties, Justice and Home Affairs.
On January 11, the Committee on Civil Liberties, Justice and Home Affairs (LIBE) approved the proposed legislation, with 52 votes in favor and 14 against. A decision was made to forgo a new debate in the chamber, and the proposal was approved without being put to a vote in the plenary. Since then, the law has come under scrutiny, and some have expressed discomfort with the implementation of this new EU law without sufficient opportunity for debate. There are fears that the law could be abused to silence non-terrorist speech that is merely considered controversial, or that tech giants may begin preemptively monitoring posts themselves using algorithms.
Critics claim that such a short deadline could push tech giants towards greater reliance on algorithms.
This law has been called ‘anti-free speech’ by some critics, and MEPs were urged to reject the Commission’s proposed legislation. Prior to the April 28th meeting, 61 organisations collaborated on an open letter to EU lawmakers asking that the proposal be rejected. While the Commission has sought to calm many of those fears and worries, some criticism of the new EU law lingers. Critics fear that the short deadline imposed on digital platforms to remove terrorist content may result in platforms deploying automated content moderation tools. They also note that the law could potentially be used to unfairly target and silence non-terrorist groups. The critics further stated that “only courts or independent administrative authorities subject to judicial review should have the power to issue deletion orders”.
Provisions have been added to the new EU law taking criticisms into account.
In the face of criticism of the new EU law, lawmakers seem to be taking the feedback seriously and have added a number of safeguards to the proposed legislation. It has been specifically clarified that this law is not to target “material disseminated for educational, journalistic, artistic or research purposes or for awareness-raising purposes against terrorist activity”. This was done in an effort to curb opportunistic attempts to use the law to target and silence non-terrorist groups due to disagreements or misunderstandings. In addition, the regulation now states that “any requirement to take specific measures shall not include an obligation to use automated tools by the hosting service provider”, in an effort to address the possibility of platforms feeling compelled to use automated filters to monitor posts themselves. Transparency obligations have also been added to the proposed legislation; however, many critics remain dissatisfied with the modifications.
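Why the exemption for journalistic and educational material matters can be shown with a toy example. The sketch below is purely hypothetical (the term list and posts are invented, not any platform's real moderation logic); it shows how a naive keyword filter, of the kind critics fear platforms might deploy under a one-hour deadline, cannot distinguish reporting about terrorism from terrorist content itself:

```python
# Hypothetical keyword-based filter; terms and posts are illustrative only.
FLAGGED_TERMS = {"terrorist", "attack", "bomb"}

def naive_flag(post: str) -> bool:
    """Flag a post if it contains any watched term, regardless of context."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return not FLAGGED_TERMS.isdisjoint(words)

# A journalistic report about terrorism is flagged just like actual propaganda.
report = "Our investigation documents how the attack was planned"
assert naive_flag(report)                      # false positive on journalism
assert not naive_flag("Lovely weather today")  # unrelated content passes
```

Context-blind matching like this is exactly the failure mode the added safeguards try to rule out: the filter has no way of knowing whether a flagged word appears in propaganda, a news report, or a research paper.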
Social Media (SM) is here to stay, with increasing importance in our day-to-day lives; 45% of the world’s population used social networks in 2020. Consequently, there is a pressing need to strengthen privacy practices and thereby limit the possibility of negative impacts on users. Popular SM networks include Facebook, Twitter, Snapchat, YouTube and, most recently, Clubhouse. As with all Social Media platforms, common privacy concerns include the extensive use of data by advertising companies and third-party advertising services, the dangers of location-based services, personal data theft and identity theft.
The line between effective marketing and privacy intrusion on Social Media has become progressively blurred. Information gathering for targeted marketing is a guaranteed way for Social Media platforms to monetize their services, with paying advertising customers incentivizing data sharing to the detriment of SM users. This is a form of data mining: the creation of a new SM account and the provision of personal data grant access to companies, who then collect data on user behaviour for targeted advertising or, worse, sale to third-party entities without the knowledge or consent of users.
When allowing access to their geolocation, SM users risk revealing their current location to everyone within their social networks. Furthermore, the average smartphone will automatically collect location data on a continuous basis, without the knowledge of the owner. Ironically, Social Media applications are the primary users of location data. Aside from the obvious threat of such information being used by malicious actors to stalk or track the user’s movements, it may also provide an open invitation to burglars in instances where the user is abroad on holiday.
Data and Identity Theft
Instances of account hacking and impersonation are fast becoming the norm. Online criminals, hackers and spammers target social networks due to the copious amounts of personal data available, which allow for an almost instant impersonation of the user. Replicating an individual online through the personal data listed on their SM profiles can lead to online fraud, stolen information and forced shares that direct their followers to malware. The appeal of SM as a cyber-attack vector stems from the ease of spreading viruses and malware compared with conventional email spam scams: one is much more likely to trust messages from friends and family on Social Media, clicking on links that then infect one’s device.
Another prevalent threat to the ‘safe space’ of Social Media is the vast spread of Fake News. Examples of this disinformation war have been seen in the U.S. Presidential elections and the U.K.’s Brexit movement, where bot accounts shared polarizing information with targeted audiences with the aim of driving action, in these examples influencing votes.
The new Clubhouse social networking trend and how it works
Clubhouse rocketed to global fame seemingly overnight, despite having been around since March 2020, when it had a mere 1,500 users. The app’s notoriety stemmed from a live audio-chat hosted by Elon Musk, which was live-streamed to YouTube. The Clubhouse app takes a slightly different spin on social networking, as it is based on audio-chat, with about 3.6 million users worldwide (February 2021), and is only available on iPhone. The app is an amalgamation of talkback radio, conference call and houseparty features, meaning users engage solely through audio, either privately or publicly shared. Upon joining, members select topics of interest to engage in live conversations, interviews and discussions via a conference call set-up, with the “rooms” closing once the conversation is over. Naturally, the more information you give about your preferences, the more conversations and individuals the application recommends you join and/or follow.

Profiles on the app are fully visible to all users, with as much information available as members choose to provide. Perhaps most worrying is that the name of whoever invited you to join Clubhouse remains a permanent fixture on your profile. Clubhouse also differentiates itself from other social networking platforms through its exclusive “invite only” characteristic, meaning users cannot simply download it from the app store and create an account. Only existing members can send out invites, which then allow new users to tune in to discussions and interviews on a range of eclectic topics.
With Clubhouse being an invite-only app, what are the specific privacy concerns?
When granted illustrious membership, you are gifted two free invites. This is where the privacy concern begins, as users are pressed to allow the app to access their phone contacts for easy connectivity with other users. As seen from the image above, Clubhouse knows who your friends are before you’ve even joined! Furthermore, the app manages to identify the number of friends your contacts already have on the platform, invoking the Fear Of Missing Out (FOMO) syndrome. Upon joining the app, users can see who invited them, with this information staying on their profile forever. The issue of lack of consent arises because Clubhouse uses the information gleaned from existing members’ contact lists to create profiles of people who are yet to become members. This probably occurs by cross-referencing other Clubhouse members’ shared address books, in a bid to encourage members to share the app with those who would already have friends on the platform. Under the GDPR, consent is defined as “any freely given, specific, informed and unambiguous indication of the data subject’s wishes by which he or she…signifies agreement to the processing of personal data relating to him or her”. Since EU law requires a person’s consent before their personal data is shared with a third-party entity, Clubhouse may be using personal data provided by third parties unlawfully. For people who have no desire to join the platform, their name, mobile number and number of friends on Clubhouse are personal data the app might already have access to.
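The cross-referencing described above can be sketched in a few lines. Everything in this example is an invented assumption (the member names, phone numbers and `shadow_profiles` function are illustrative, not Clubhouse's actual code); it only shows how uploaded address books are enough to profile people who never consented:

```python
from collections import Counter

# Hypothetical address books uploaded by existing members (phone numbers only).
uploaded_contacts = {
    "member_a": {"+100", "+101", "+102"},
    "member_b": {"+101", "+103"},
    "member_c": {"+101", "+102"},
}

registered_numbers = {"+100"}  # numbers that already belong to members

def shadow_profiles(contacts_by_member, registered):
    """Count, for each non-member number, how many members know it.

    This is the 'friends already on the platform' figure that can exist
    for a person before they ever agree to join."""
    counts = Counter()
    for book in contacts_by_member.values():
        for number in book:
            if number not in registered:
                counts[number] += 1
    return counts

print(shadow_profiles(uploaded_contacts, registered_numbers))
# +101 is known to three members despite never having signed up
```

The point of the sketch is that no action by the data subject is involved anywhere: the profile emerges entirely from other people's uploads, which is precisely where the GDPR consent question arises.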
How you can stay protected on Social Media
First and foremost, users are encouraged to check and update privacy settings on both their devices and Social Media networks on a periodic basis, limiting access to personal data such as location and microphone, which may be used for targeted marketing. Next, avoid using Social Media on public devices; where this is unavoidable, be sure to log out afterwards. To avoid your accounts being infiltrated by malicious actors, create strong passwords: the stronger the password, the harder it is to guess. The use of symbols, capital letters and numbers, teamed with the avoidance of common or repeated passwords (birthday, spouse’s name etc.), creates an additional layer of defence. Similarly, two-factor authentication should be employed for all accounts (including email) to make it that much harder for hackers to gain access. From a cybersecurity perspective, users can install antivirus and anti-spyware software on their devices and keep it up to date in order for it to remain effective. However, all of these protective measures are rendered useless if you post sensitive personal data online, as you (or your contacts) may be inadvertently leaking your own data. Once information is posted online, it is effectively public, with the inherent possibility of it falling into the wrong hands, with or without stringent security measures. As such, the strongest recommendation is to take stock of what you post online and be careful with the amount of personal data you reveal, keeping it to a minimum.
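The password advice above can be made concrete in code. This is a minimal sketch assuming one reasonable rule set (twelve or more characters with mixed case, digits and symbols), not an official standard; the rule thresholds are our own choice:

```python
import re
import secrets
import string

def is_strong(password: str) -> bool:
    """Apply the layered checks described above: length, case mix, digits, symbols."""
    return (
        len(password) >= 12
        and re.search(r"[a-z]", password) is not None
        and re.search(r"[A-Z]", password) is not None
        and re.search(r"\d", password) is not None
        and re.search(r"[^A-Za-z0-9]", password) is not None
    )

def generate_password(length: int = 16) -> str:
    """Draw cryptographically secure candidates until one passes every check."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        candidate = "".join(secrets.choice(alphabet) for _ in range(length))
        if is_strong(candidate):
            return candidate

assert not is_strong("birthday1990")   # common pattern: no uppercase, no symbols
assert is_strong(generate_password())  # generated passwords satisfy every rule
```

Note the use of the standard-library `secrets` module rather than `random`: password material should always come from a cryptographically secure source.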
From regulation of Big Tech to the upcoming legislative framework for AI, I have identified the key areas to be on the lookout for in 2021 when it comes to regulating ICT.
Regulating digital gatekeepers
Big Tech has been on the regulators’ and legislators’ radar for a while, with earlier EU antitrust fines imposed on Google and the introduction of more serious fines by the GDPR. A number of antitrust cases were filed against Facebook in the US in late 2020, but the latest European Commission proposal signifies a more considerable innovation when it comes to regulating Big Tech.
The proposed Digital Markets Act creates something new: asymmetric, market power-based regulation of digital gatekeepers. Whereas the new Digital Services Act continues to build on the existing consumer protection philosophy, which only acknowledges the asymmetry between the consumer and the business, the Digital Markets Act imposes remedies only on those digital platforms that have an entrenched, durable intermediary position. This is akin to asymmetric regulation of telecoms operators with Significant Market Power.
In a manner that resembles telecoms infrastructure regulation, the Digital Markets Act seeks to grant access to the Big Tech platforms by means of ‘unbundling’ some of their features. Considering the regulators’ and legislators’ reluctance to regulate Internet ‘content’ since the late 1990s, such ex ante measures can be seen as truly historic.
Legislative framework for AI
Following the introduction of GDPR rules on profiling and human intervention, the Ethics Guidelines for Trustworthy AI prepared by the EU High-Level Expert Group on Artificial Intelligence (AI HLEG) have provided a strong hint that we can expect horizontal legislative action in the area of AI.
Following a public consultation in 2020, the European Commission expects to unveil a legislative proposal regulating AI in the first quarter of 2021. It remains to be seen to what extent European legislators will transpose the ethical principles identified by the AI HLEG, such as human agency and oversight or technical robustness and safety, into mandatory legal obligations.
European Electronic Communications Code (EECC)
The EECC, a new Directive incorporating most of the EU electronic communications legislation, was due for implementation on 21st December 2020. With a few Member States still lagging behind schedule, often due to COVID-19, the European Commission has already adopted a Delegated Regulation on EU-wide voice-call termination rates for both fixed and mobile calls. This further reduces the wholesale prices of voice calls within the EU, with the aim of further reducing retail prices.
The impact of the EECC on telecoms markets remains to be seen. The new Directive gives national regulatory authorities more options to tackle market failure through commitments of dominant players. It modernises and further harmonises the rules on spectrum with a view to 5G and the technologies to follow, and introduces basic regulatory protection for customers using number-independent OTT communications services. The latter have until now largely been excluded from the regulation of the telecoms sector.
Practical effects will largely depend on the implementation in the Member States. For example, will the regulators be able to leverage regulatory sticks and carrots to foster the emergence of wholesale-only broadband infrastructure players? Implementation at the national level will also be crucial to reap the full benefits of the new spectrum management rules, according to Vesna Prodnik of Vafer, a specialised mobile telecoms consultancy: “Member States still have a wide discretion as to the exact rules on simplifying small-area wireless cells placement, which are crucial for the necessary 5G network density.”
Electronic identity
As an even larger share of business and private life moves online because of the COVID-19 pandemic, online identity fraud has become even more rampant. Should governments centralise the approach to e-identity? Or should one rely on decentralised, commercially offered identity solutions? Should everyone receive an ID certificate, or another means of verifying who they are, valid in all online environments?
The EU eIDAS Regulation has introduced an interoperability framework for EU citizens using their own national electronic identification schemes (eIDs) to access public services in other Member States. It has further created an internal market for trust services – namely electronic signatures, electronic seals, time stamps, electronic delivery services and website authentication – by ensuring that they work across borders and have the same legal status as their traditional paper-based equivalents.
Despite these developments, we still seem to be far from a uniform and universally accepted electronic ID, especially at the international level. A further push may come from the European Commission’s review of the eIDAS Regulation, which is currently underway following an open public consultation.
Next steps for ICT businesses
Check how your online operations might be affected in the future by the additional obligations proposed by the EU Digital Services Act
If you develop or deploy AI solutions, consider doing an AI ethics impact assessment to ensure their long-term viability
Check if any of the services you provide online might be classified as interpersonal communications services and therefore subject to EECC regulation
At Aphaia, we will continue to keep up with these developments. Please reach out if your business requires assistance with any of them. You can visit us at https://aphaia.consulting to explore our full array of services.
Amazon is facing a lawsuit in Germany after being accused of breaking the EU’s privacy laws by continuing to use the invalidated EU-US Privacy Shield.
The global giant Amazon is currently facing a lawsuit, accused of breaking privacy law in Europe, according to this recent article from Politico. The company has been accused of continuing to use the infamous Privacy Shield despite its invalidation in Europe, which has led to this lawsuit. The basis is that the Court of Justice of the European Union (CJEU) made clear that transferring data under the Privacy Shield was no longer allowed following July’s Schrems II judgment, which invalidated the EU-US Privacy Shield. The reason for the invalidation was that, in the CJEU’s view, transferring data outside of the EU put it at risk: US surveillance practices are more intrusive than they should be and go beyond what is acceptable for privacy. While Amazon understands that the Privacy Shield is invalid, it appears that the company has continued to use this invalidated transfer mechanism.
Standard Contractual Clauses are still a viable option for companies needing to transfer data.
Standard Contractual Clauses (SCCs) are another option for the technological giants and are used by the likes of Google and Facebook. The difference is that exporting data from the EU using SCCs requires more supervision and better ensures the safety of the data. While SCCs give these companies an alternative, the clauses come with caveats and are not entirely free of problems. Right now, Facebook is in a standoff with the Irish data regulator over its use of the clauses.
EuGD takes legal action against Amazon.
EuGD (Europäische Gesellschaft für Datenschutz) decided to take action, putting forth the formal legal complaint that escalated the conflict. The recent article by Vincent Manancourt features a statement from Johann Hermann, the head of EuGD, the group behind the legal complaint. “The [Court of Justice of the European Union] has made it clear that data transfers to the U.S. on the basis of the Privacy Shield are no longer permitted. If the world’s leading cloud company and largest e-commerce provider remains inactive for more than two months and ignores consumer rights, that is unacceptable,” said Mr Hermann. Moreover, the founder of EuGD, Thomas Bindl, said that taking the legal route was a decision made in light of similar conflicts.
Despite the noise and controversy surrounding the conflict and impending lawsuit, it is still necessary to wait and see the developments in court. However, regardless of the result in the ruling, this will likely inspire greater vigilance and compliance on the part of companies who may also be transferring data out of Europe.