New EU law

New EU law imposes a time limit on tech giants to remove content

New EU law imposes a time limit of one hour on tech giants to remove terrorist content. 


Last month, the European Parliament adopted a new EU law forcing online platforms to remove terrorist content within an hour of receiving a removal order from a competent authority. According to a report from Euractiv, this regulation on preventing the dissemination of terrorist content online has faced opposition and has been called controversial. The European Commission drafted the law in response to several terror attacks across the bloc. Considered a necessary step in combating the dissemination of terrorist content online, it came into effect on April 28th, after being approved by the Committee on Civil Liberties, Justice and Home Affairs in January.


The proposed legislation was adopted without a vote, after approval from the Committee on Civil Liberties, Justice and Home Affairs. 


On January 11, the Committee on Civil Liberties, Justice and Home Affairs (LIBE) approved the proposed legislation, with 52 votes in favor and 14 against. A decision was made to forgo a new debate in the chamber, and the proposal was approved without being put to a vote in the plenary. Since then, the law has come under scrutiny, and some have expressed discomfort at the implementation of this new EU law without sufficient opportunity for debate. There are fears that the law could be abused to silence non-terrorist speech deemed controversial, or that tech giants may begin preemptively monitoring posts themselves using algorithms.


Critics claim that such a short deadline placed on tech giants could encourage them to rely more heavily on automated moderation algorithms.


This law has been called ‘anti-free speech’ by some critics, and MEPs were urged to reject the Commission’s proposed legislation. Prior to the April 28th meeting, 61 organisations collaborated on an open letter to EU lawmakers asking that the proposal be rejected. While the Commission has sought to calm many of those fears and worries, some criticism of this new EU law lingers. Critics fear that the short deadline imposed on digital platforms to remove terrorist content may result in platforms deploying automated content moderation tools. They also note that this law could potentially be used to unfairly target and silence non-terrorist groups. The critics of this law also stated that “only courts or independent administrative authorities subject to judicial review should have the power to issue deletion orders”.


Provisions taking these criticisms into account have been added to the new EU law.


In the face of criticism of the new EU law, lawmakers seem to be taking the feedback seriously and have added a number of safeguards to the proposed legislation. It has been specifically clarified that this law is not to target “material disseminated for educational, journalistic, artistic or research purposes or for awareness-raising purposes against terrorist activity”. This was done in an effort to curb opportunistic attempts to use the law to target and silence non-terrorist groups over disagreements or misunderstandings. In addition, the regulation now states that “any requirement to take specific measures shall not include an obligation to use automated tools by the hosting service provider”, addressing the possibility of platforms feeling compelled to use automated filters to monitor posts themselves. Transparency obligations have also been added to the proposed legislation; however, many critics remain dissatisfied with the modifications.


Does your company have all of the mandated safeguards in place to ensure compliance with the GDPR and Data Protection Act 2018 in handling customer data? Aphaia provides both GDPR and Data Protection Act 2018 consultancy services, including data protection impact assessments, and Data Protection Officer outsourcing. We can help your company get on track towards full compliance.

Social Media platforms

Social Media platforms and inherent privacy concerns

Is Social Media a safe space?


Social Media (SM) is here to stay and plays an increasingly important role in our day-to-day lives; 45% of the world population used social networks in 2020. Consequently, there is a pressing need to strengthen privacy practices and limit the potential for negative impact on users. Popular SM networks include Facebook, Twitter, Snapchat, YouTube and, most recently, Clubhouse. Common privacy concerns across Social Media platforms include the extensive use of data by advertising companies and third-party advertising services, the dangers of location-based services, personal data theft and identity theft.


The line between effective marketing and privacy intrusion on Social Media has become progressively thinner. Information gathering for targeted marketing is a guaranteed way for Social Media platforms to monetize their services, with paying advertising customers incentivizing data sharing to the detriment of SM users. This is a form of data mining: creating a new SM account and providing personal data grants access to companies, which then collect data on user behaviour for targeted advertising or, worse, sell it to third-party entities without the knowledge or consent of users.


When allowing access to their geolocation, SM users risk revealing their current location to everyone within their social networks. Furthermore, the average smartphone automatically collects location data on a continuous basis, without the owner’s knowledge. Ironically, Social Media applications are the primary users of location data. Aside from the obvious threat of such information being used by malicious actors to stalk or track a user’s movements, it may also provide an open invitation to burglars when the user is abroad on holiday.

Data and Identity Theft

Instances of account hacking and impersonation are fast becoming the norm. Online criminals, hackers and spammers target social networks due to the copious amounts of personal data available, which allow for an almost instant impersonation of the user. Replicating an individual online through the personal data listed on their SM profiles can lead to online fraud, stolen information and forced shares that direct their followers to viruses. The appeal of SM as a cyber-attack vector stems from the ease of spreading viruses and malware compared with conventional email spam scams: one is much more likely to trust messages from friends and family on Social Media, clicking on links that will infect their device.

Fake News

Another prevalent threat to the ‘safe space’ of Social Media is the vast spread of Fake News. Examples of this disinformation war were seen in the U.S. Presidential elections and the U.K.’s Brexit movement, where bot accounts shared polarizing information with targeted audiences with the aim of driving action – in these examples, influencing votes.


The new Clubhouse social networking trend and how it works


Clubhouse recently rocketed to global fame overnight, despite being around since March 2020, when it had a mere 1,500 users. The app’s sudden prominence stemmed from a live audio-chat hosted by Elon Musk, which was live-streamed to YouTube. Clubhouse takes a slightly different spin on social networking: it is based on audio-chat, had about 3.6 million users worldwide as of February 2021, and is only available on iPhone.

The app is an amalgamation of talkback radio, conference call and houseparty features, meaning users engage solely through audio – either privately or publicly shared. Upon joining, members select topics of interest and engage in live conversations, interviews and discussions via a conference-call set-up, with the “rooms” closing once the conversation is over. Naturally, the more information you give about your preferences, the more conversations and individuals the application recommends you join and/or follow. Profiles on the app are fully visible to all users, with as much information available as members choose to provide. Perhaps most worrying is that the name of whoever invited you to join Clubhouse is a permanent fixture on your profile.

Clubhouse also differentiates itself from other social networking platforms through its exclusive “invite only” model, meaning users cannot simply download it from the app store and create an account. Only existing members can send out invites, which then allow new users to tune in to interesting discussions and interviews on a range of eclectic topics.



With Clubhouse being an invite-only app, what are the specific privacy concerns? 


When granted illustrious membership, you are gifted two free invites. This is where the privacy concern begins, as users are pressed to allow the app to access their phone contacts for easy connectivity with other users. Clubhouse knows who your friends are before you’ve even joined! Furthermore, the app manages to identify the number of friends each of your contacts already has on the platform, invoking the Fear Of Missing Out (FOMO) syndrome. Upon joining the app, users can see who invited them, with this information staying on their profile forever. The issue of lack of consent arises because Clubhouse uses information gleaned from existing members’ contact lists to create profiles of people who are yet to become members. This probably occurs by cross-referencing other Clubhouse members’ shared address books, in a bid to encourage members to share the app with those who would already have friends on the platform. Under the GDPR, consent is defined as “any freely given, specific, informed and unambiguous indication of the data subject’s wishes by which he or she…signifies agreement to the processing of personal data relating to him or her”. Since EU law requires the friend’s consent before their personal data is shared with a third-party entity, Clubhouse may be using personal data provided by third parties unlawfully. For people who have no desire to join the platform, their name, mobile number and number of friends on Clubhouse is personal data the app might already have access to.
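The cross-referencing described above – which the article notes is only a probable mechanism – can be illustrated with a short sketch. All names, numbers and field names below are invented for illustration; this is not Clubhouse’s actual implementation.

```python
# Speculative sketch: building a "shadow profile" for a non-member from the
# address books that existing members have shared. All data is invented.

from collections import defaultdict

# Contact lists uploaded by two existing members (illustrative data only).
uploaded_contacts = {
    "member_alice": [{"name": "Dana", "phone": "+441111111111"}],
    "member_bob":   [{"name": "Dana S.", "phone": "+441111111111"}],
}

# Key contacts by phone number so entries referring to the same person merge.
shadow_profiles = defaultdict(lambda: {"names": set(), "known_by": set()})
for member, contacts in uploaded_contacts.items():
    for contact in contacts:
        profile = shadow_profiles[contact["phone"]]
        profile["names"].add(contact["name"])
        profile["known_by"].add(member)

# Dana has never joined, yet the platform now holds her phone number,
# possible names, and the count of friends she already has on the app.
dana = shadow_profiles["+441111111111"]
friend_count = len(dana["known_by"])
```

Even this toy version shows why consent is the sticking point: the person being profiled never interacted with the app, so she could not have agreed to any of this processing.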


How you can stay protected on Social Media


First and foremost, users are encouraged to check and update the privacy settings on both their devices and their Social Media networks on a periodic basis, limiting access to personal data such as location services and the microphone, which may be used for targeted marketing. Next, avoid using Social Media on public devices; if you must, be sure to log out afterwards. To avoid your accounts being infiltrated by malicious actors, create strong passwords: the stronger the password, the harder it is to guess. Using symbols, capital letters and numbers, and avoiding common or repeated passwords (a birthday, a spouse’s name, etc.), creates an additional layer of defence. Similarly, two-factor authentication should be enabled for all accounts (including email) to make it that much harder for hackers to gain access. From a cybersecurity perspective, users can install antivirus and anti-spyware software on their devices and keep it up to date so that it remains effective.

However, all of these protective measures are undermined if you post sensitive personal data online, as you (or your contacts) may be inadvertently leaking your own data. Once information is posted online, it is effectively public, with the inherent possibility of it falling into the wrong hands – with or without stringent security measures. As such, the strongest recommendation is to take stock of what you post online and be careful about how much personal data you reveal, keeping it to a minimum.



Privacy and ethical concerns of social media

Privacy and ethical concerns have become more relevant in social media due to the prevalence of “explore”, “discover” or “for you” tabs and pages.


“Discover” pages on social media deliver content that the app predicts the user is likely to be interested in. This prediction is based on several factors, including user interactions, video information, account settings and device settings, each weighted individually by the platform’s algorithms. This has raised concerns regarding profiling and related privacy issues, particularly with regard to the processing of the personal data of minors.
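The weighting idea can be made concrete with a toy ranking sketch. The signal names and weights below are hypothetical – real platforms do not publish their algorithms – but the structure (a weighted combination of profiling signals, sorted to build a feed) is the pattern the paragraph describes.

```python
# Illustrative sketch of a "discover" feed ranking content by combining
# weighted profiling signals. Signal names and weights are invented.

WEIGHTS = {
    "user_interactions": 0.5,   # e.g. likes, shares, watch time
    "video_information": 0.3,   # e.g. captions, sounds, hashtags
    "account_settings":  0.1,   # e.g. language, country
    "device_settings":   0.1,   # e.g. device type, operating system
}

def relevance_score(signals: dict) -> float:
    """Weighted sum of normalised signals (each assumed to lie in [0, 1])."""
    return sum(WEIGHTS[name] * signals.get(name, 0.0) for name in WEIGHTS)

posts = [
    {"id": "a", "signals": {"user_interactions": 0.9, "video_information": 0.2}},
    {"id": "b", "signals": {"user_interactions": 0.4, "video_information": 0.8}},
]
# Highest-scoring content surfaces first on the "discover" page.
ranked = sorted(posts, key=lambda p: relevance_score(p["signals"]), reverse=True)
```

Note that every input to `relevance_score` is derived from observing the individual – which is exactly why this kind of ranking constitutes profiling under the GDPR.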


While automated decisions are allowed as long as they have no legal or similarly significant effects, specific care and attention needs to be applied to the use of the personal data of minors.


The decisions that cause specific content to show up on “explore” and “discover” pages are by and large automated decisions based on profiling of individuals’ personal data. While this may benefit organizations and individuals by allowing large volumes of data to be analyzed and decisions made very quickly, showing only what is considered the most relevant content to the individual, there are certain risks involved. Much of the profiling which occurs is inconspicuous to the individual and may well have adverse effects. GDPR Article 22 does not prohibit automated decisions, even regarding minors, as long as those decisions do not have any legal or similarly significant effect on the individual. The Article 29 Working Party, whose role is now performed by the EDPB, states that “solely automated decision making which influences a child’s choices and behavior, could potentially have a legal or similarly significant effect on them, depending upon the nature of the choices and behaviors in question.” The GDPR requires specific protection to be applied to the use of personal data when creating personality or user profiles specifically for children or to be used by children.


Much of the data processed by social media apps requires consent; however, most minors are not able to provide their own consent.


According to the latest updates to the EU ePrivacy rules, much of the data processed by social media apps and websites may require consent. In many parts of the world, most minors are not legally able to provide their own consent. The digital age of consent varies around the world, and in some countries it is as high as 16. In the UK specifically, however, children aged 13 or over can provide their own consent; for younger children, a parent or guardian must provide consent on their behalf. As a data controller, it is important to know which data requires consent, from whom, how that consent will be collected, and which data can be processed on a lawful basis other than consent.
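For a controller, that rule set boils down to a per-jurisdiction lookup: given the user’s age and country, determine whose consent is needed. The sketch below is a simplified illustration, not a legal reference; the UK threshold of 13 and the GDPR Article 8 default of 16 come from the text above and the Regulation, while the helper name and the other country entries are examples.

```python
# Illustrative sketch: determining whose consent is required for a minor's
# data under the digital age of consent. Mapping is simplified, not complete.

DIGITAL_AGE_OF_CONSENT = {
    "UK": 13,  # Data Protection Act 2018
    "IE": 16,
    "DE": 16,
    "ES": 14,
}
DEFAULT_AGE = 16  # GDPR Article 8 default where no lower national age applies

def consent_required_from(user_age: int, country: str) -> str:
    """Return who must provide consent for processing this user's data."""
    threshold = DIGITAL_AGE_OF_CONSENT.get(country, DEFAULT_AGE)
    return "user" if user_age >= threshold else "parent_or_guardian"
```

The same 13-year-old would therefore be able to consent in the UK but not in Germany – which is why a controller operating across borders cannot hard-code a single age.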


In developing social media apps and features it is important to consider several ethical principles. 


Trustworthy AI should be lawful, ethical and robust. In developing social media apps and features, it is important to ensure that the data is kept secure, the algorithms are explainable, and the content delivered to the user does not include any biases. Ethical principles like technical robustness, privacy, transparency and non-discrimination are considered paramount. Because social media algorithms serve up content to users on explore and discover pages, it is imperative that the decisions made by these AI systems are transparent and that attention is paid to whether, and how, these systems may be discriminatory. An AI ethics assessment can provide valuable insight into how fair these AI decisions actually are, and into how to develop the algorithms for social media apps and platforms ethically and lawfully.


We recently published a short vlog on our YouTube channel exploring the privacy and ethical concerns in social media. Be sure to check it out, like, comment and subscribe to our channel for more AI ethics and privacy content. 

Does your company have all of the mandated safeguards in place to ensure compliance with the ePrivacy, GDPR and Data Protection Act 2018 in handling customer data? Aphaia provides ePrivacy, GDPR and Data Protection Act 2018 consultancy services, including data protection impact assessments, EU AI Ethics Assessments and Data Protection Officer outsourcing. We can help your company get on track towards full compliance.

GDPR and social media

GDPR and social media: EU Court on fan pages on Facebook

Earlier this month the ECJ published a preliminary ruling finding a fan page admin jointly responsible with Facebook for the personal data of the page’s visitors. Although the decision refers to the previously applicable EU Data Protection Directive, the ruling paves the way for GDPR and social media practice, since the definition of the controller has not been altered.


The dispute had arisen in 2011, when the data-protection authority of Schleswig-Holstein ordered an educational academy, Wirtschaftsakademie, to delete its Facebook fan page because it had failed to inform visitors that personal data was being collected and processed via cookies. In particular, Wirtschaftsakademie used the Insights tool provided by Facebook, which supplied demographic data on its audience derived from the processing of personal information such as age, sex, relationships, occupation, lifestyle and centres of interest. Based on the anonymised demographic data, the admin is able to customise its Facebook content to target the relevant audience.

Wirtschaftsakademie argued before the German administrative courts that it was not responsible for the data collected by Facebook without its instructions. However, the ECJ, after being asked by the national court, decided that the fan page admin and Facebook are jointly responsible as controllers of the personal data. The fact that the platform used to process the personal data was provided by Facebook cannot justify an exemption from joint liability.

Nonetheless, in this dispute with crucial GDPR and social media implications, the European Court clarified that the responsibility of the two controllers, who are involved in different stages of the process, may not be equal. Therefore the level of responsibility of each operator should be assessed after taking all relevant circumstances of the case into consideration.

Do you require assistance understanding GDPR and social media? Aphaia provides both GDPR adaptation consultancy services and Data Protection Officer outsourcing.