
LinkedIn users’ data for sale on hacking forum – 700 million affected

The details of 700 million LinkedIn users were recently posted for sale on a notorious hacking forum. 


The details of 700 million LinkedIn users were recently posted for sale on a popular hacking forum. Last month, a user put the information up for sale on RaidForums, where it was spotted by Privacy Sharks, a news site. The seller provided a sample of 1 million records, which Privacy Sharks viewed and investigated, confirming the validity of the records, which included names, gender, phone numbers, email addresses and work information. This is the second instance this year of LinkedIn user information being scraped and posted for sale online. In April, a total of 500 million LinkedIn users were affected in a similar event. 


LinkedIn’s investigation revealed that the data was scraped from LinkedIn as well as other sources. 


LinkedIn maintains that this compilation of information on 700 million users was not the result of a data breach, and that the information is all publicly available. The company reported that no private LinkedIn member data was exposed. An initial analysis from the ongoing investigation indicates that the data includes information scraped from LinkedIn as well as other sources. In a statement, LinkedIn said it determined that the information posted for sale was “an aggregation of data from a number of websites and companies.” The company also states that scraping and other misuse of members’ data violates its terms of service, and that it will work to stop any entities misusing LinkedIn members’ data and hold them accountable. 


LinkedIn has taken legal action in the past over data scraping that violated its terms of service. 


While no one has been named as responsible in this case, LinkedIn is currently nearly two years into a legal battle over data scraping, seeking to protect its user data and enforce its terms of service. In September 2019, LinkedIn brought a case against the data analytics organization hiQ Labs before the United States Court of Appeals for the Ninth Circuit. hiQ Labs had been found using automated bots to scrape information from public LinkedIn profiles, and LinkedIn had served it with a cease and desist, claiming that this violated its terms of service. In that case the court ruled that the data scraping was legal, since the information being collected by the data analytics organization was all publicly available. However, LinkedIn brought the case before the courts again last month, this time before the Supreme Court. The Supreme Court threw out the lower court’s original ruling, giving LinkedIn another opportunity to plead its case in the Ninth Circuit. No statement has been made as to whether legal action will also be taken in this instance. 

Does your company have all of the mandated safeguards in place to ensure compliance with the GDPR, Law Enforcement Directive and Data Protection Act 2018? Aphaia provides both GDPR and Data Protection Act 2018 consultancy services, including data protection impact assessments, and Data Protection Officer outsourcing. We can help your company get on track towards full compliance. Contact us today.


EU approves emergency measures for children’s protection

Temporary emergency measures for children’s protection have just been adopted by the European Parliament.


Temporary emergency measures for children’s protection were adopted by the European Parliament on July 6th. This regulation will allow electronic communication service providers to scan private online messages for any display of child sex abuse. The European Commission reported that almost 4 million visual media files containing child abuse were reported last year, along with 1,500 reports of grooming of minors by sexual predators. Over the past 15 years, reports of this kind have increased by 15,000%. 


This new regulation, which is intended to be executed using AI, has raised some questions regarding privacy. 


Electronic communication service providers are being given the green light to voluntarily scan private conversations and flag content which may contain any display of child sex abuse. This scanning procedure will use AI to detect content for flagging, under human supervision. Providers will also be able to utilize anti-grooming technologies once consultations with data protection authorities are complete. These mechanisms have received some pushback due to privacy concerns. Last year, the EDPB published a non-binding opinion which questioned whether these measures would threaten the fundamental right to privacy. 


Critics argue that this law will not prevent child abuse but will rather make it more difficult to detect and potentially expose legitimate communication between adults. 


This controversial legislation, drafted in September 2020 at the peak of the global pandemic, which saw a spike in reports of minors being targeted by predators online, enables companies to voluntarily monitor material related to child sexual abuse; it does not require companies to take action. Still, several privacy concerns were raised regarding its implementation, particularly around exposing legitimate conversations between adults which may contain nude material, violating their privacy and potentially opening them up to some form of abuse. During the negotiations, changes were made to require informing users of the possibility that their communications may be scanned, as well as setting data retention periods and limitations on the use of this technology. Despite this, the initiative was criticized on the grounds that automated tools flag non-relevant material in the majority of cases. Concerns were also raised about the possible effect on channels for confidential counseling. Ultimately, critics believe that this will not prevent child abuse, but will instead make it harder to discover, as it would encourage more hidden tactics. 


This new EU law for children’s protection is a temporary solution for dealing with the ongoing problem of child sexual abuse. 


At the start of 2021, the definition of electronic communications was changed under EU law to include messaging services. As a result, private messaging, which was previously regulated by the GDPR, is now regulated by the ePrivacy Directive. Unlike the GDPR, the ePrivacy Directive did not include measures to detect child sexual abuse, and voluntary reporting by online providers fell dramatically after the change. Negotiations on revising the ePrivacy Directive to include protection against child sexual abuse have stalled for several years. This new EU law for children’s protection is but a temporary measure, intended to last until December 2025 or until the revised ePrivacy Directive enters into force. 


Does your company have all of the mandated safeguards in place to ensure compliance with the GDPR, Law Enforcement Directive and Data Protection Act 2018? Aphaia provides both GDPR and Data Protection Act 2018 consultancy services, including data protection impact assessments, and Data Protection Officer outsourcing. We can help your company get on track towards full compliance. Contact us today.


New EU law imposes a time limit on tech giants to remove content

New EU law imposes a time limit of one hour on tech giants to remove terrorist content. 


Last month, a new EU law was adopted by the European Parliament, forcing online platforms to remove terrorist content within an hour of receiving a removal order from a competent authority. According to a report from Euractiv, this regulation on preventing the dissemination of terrorist content online has faced some opposition and has been called controversial. The European Commission drafted the law in response to several terror attacks across the bloc. The regulation, considered a necessary step in combating the dissemination of terrorist content online, came into effect on April 28th, after being approved by the Committee on Civil Liberties, Justice and Home Affairs in January. 


The proposed legislation was adopted without a vote, after approval from the Committee on Civil Liberties, Justice and Home Affairs. 


On January 11, the Committee on Civil Liberties, Justice and Home Affairs (LIBE) approved this proposed legislation, with 52 votes in favor and 14 against. A decision was made to forgo a new debate in the chamber, and the proposed legislation was approved without being put to a vote in the plenary. Since then, the law has come under scrutiny, and some have expressed discomfort with the implementation of this new EU law without sufficient opportunity for debate. There are fears that the law could be abused to silence non-terrorist speech which may be considered controversial, or that tech giants may begin preemptively monitoring posts themselves using algorithms. 


Critics claim that such a short deadline placed on tech giants could encourage them to use more algorithms. 


This law has been called ‘anti-free speech’ by some critics, and MEPs were urged to reject the Commission’s proposed legislation. Prior to the April 28th meeting, 61 organisations collaborated on an open letter to EU lawmakers asking that the proposal be rejected. While the Commission has sought to calm many of those fears and worries, some criticism of this new EU law lingers. Critics fear that the short deadline imposed on digital platforms to remove terrorist content may result in platforms deploying automated content moderation tools. They also note that the law could potentially be used to unfairly target and silence non-terrorist groups. The law’s critics also stated that “only courts or independent administrative authorities subject to judicial review should have the power to issue deletion orders”. 


Provisions have been added to the new EU law taking criticisms into account. 


In the face of criticism of the new EU law, lawmakers appear to be taking the feedback seriously and have added a number of safeguards to the proposed legislation. It has been specifically clarified that the law is not to target “material disseminated for educational, journalistic, artistic or research purposes or for awareness-raising purposes against terrorist activity”. This was done in an effort to curb opportunistic attempts to use the law to target and silence non-terrorist groups over disagreements or misunderstandings. In addition, the regulation now states that “any requirement to take specific measures shall not include an obligation to use automated tools by the hosting service provider”, in an effort to address the possibility of platforms feeling compelled to use automated filters to monitor posts themselves. Transparency obligations have also been added to the proposed legislation; however, many critics remain dissatisfied with the modifications. 


Does your company have all of the mandated safeguards in place to ensure compliance with the GDPR and Data Protection Act 2018 in handling customer data? Aphaia provides both GDPR and Data Protection Act 2018 consultancy services, including data protection impact assessments, and Data Protection Officer outsourcing. We can help your company get on track towards full compliance.


Social Media platforms and inherent privacy concerns

Is Social Media a safe space?


Social Media (SM) is here to stay, playing an increasing role in our day-to-day lives; 45% of the world population uses social networks (2020). Consequently, there is a pressing need to strengthen privacy practices and limit the possibility of negative impacts on users. Popular SM networks include Facebook, Twitter, Snapchat, YouTube and, most recently, Clubhouse. Across Social Media platforms, common privacy concerns include the extensive use of data by advertising companies and third-party advertising services, the dangers of location-based services, personal data theft and identity theft. 


Targeted Marketing

The line between effective marketing and privacy intrusion on Social Media has become progressively blurred. Information gathering for targeted marketing is a guaranteed way for Social Media platforms to monetize their services, with paying advertising customers incentivizing data sharing to the detriment of SM users. This is a form of data mining: creating a new SM account and providing personal data grants access to companies, which then collect data on user behaviour for targeted advertising, or worse, sell it to third-party entities without the knowledge or consent of users. 


Location-Based Services

When allowing access to their geolocation, SM users risk revealing their current location to everyone within their social networks. Furthermore, the average smartphone will automatically collect location data on a continuous basis, without the owner’s knowledge. Ironically, Social Media applications are the primary users of location data. Aside from the obvious threat of such information being used by malicious actors to stalk or track a user’s movements, it may also provide an open invitation to burglars when the user is abroad on holiday. 

Data and Identity Theft

Instances of account hacking and impersonation are fast becoming the norm. Online criminals, hackers and spammers target social networks due to the copious amounts of personal data available, which allow for an almost instant impersonation of the user. Replicating an individual online through the personal data listed on their SM profiles can lead to online fraud, stolen information and forced shares directing their followers to viruses. The appeal of SM as a cyber-attack vector stems from the ease of spreading viruses and malware compared with conventional email spam scams. People are much more likely to trust messages from friends and family on Social Media, and to click on links that will infect their devices. 

Fake News

Another prevalent threat to the ‘safe space’ of Social Media is the vast spread of Fake News. Examples of this disinformation war have been seen in the U.S. Presidential elections and the U.K.’s Brexit movement. Bot accounts shared specific and polarizing information with targeted audiences with the aim of driving action, in these examples influencing votes. 


The new Clubhouse social networking trend and how it works


Clubhouse recently rocketed to global fame overnight, despite being around since March 2020, when it had a mere 1,500 users. The app’s notoriety stemmed from a live audio-chat hosted by Elon Musk, which was live-streamed to YouTube. The Clubhouse app puts a slightly different spin on social networking: it is based on audio-chat, has about 3.6 million users worldwide (February 2021) and is only available on iPhone. The app is an amalgamation of talkback radio, conference call and houseparty features, meaning users engage solely through audio, either privately or publicly shared.

Upon joining, members select topics of interest to engage in live conversations, interviews and discussions via a conference-call set-up, with the “rooms” closing once the conversation is over. Naturally, the more information you give about your preferences, the more conversations and individuals the application recommends you join and/or follow. Profiles on the app are fully visible to all users, with as much information available as members choose to provide. Perhaps most worrying is that the name of whoever invited you to join Clubhouse remains a permanent fixture on your profile.

Clubhouse also differentiates itself from other social networking platforms through its exclusive “invite only” characteristic, meaning users cannot simply download it from the app store and create an account. Only existing members can send out invites, which then allow new users to tune in to interesting discussions and interviews on a range of eclectic topics.



With Clubhouse being an invite-only app, what are the specific privacy concerns? 


When granted illustrious membership, you are gifted two free invites. This is where the privacy concern begins, as users are pressed to allow the app to access their phone contacts for easy connectivity with other users. As seen from the image above, Clubhouse knows who your friends are before you’ve even joined! Furthermore, the app manages to identify the number of friends your contacts already have on the platform, invoking Fear Of Missing Out (FOMO). Upon joining the app, users can see who invited them, with this information staying on their profile forever. The issue of lack of consent arises because Clubhouse uses information gleaned from existing members’ contact lists to create profiles of people who are yet to become members. This probably occurs by cross-referencing other Clubhouse members’ shared address books, in a bid to encourage members to share the app with those who would already have friends on the platform. Under the GDPR, consent is defined as “any freely given, specific, informed and unambiguous indication of the data subject’s wishes by which he or she…signifies agreement to the processing of personal data relating to him or her”. Since EU law requires the friend’s consent as a prerequisite to sharing personal data with a third-party entity, Clubhouse may be unlawfully using personal data provided by third parties. For people who have no desire to join the platform, their name, mobile number and number of friends on Clubhouse is personal data the app might already have access to.


How you can stay protected on Social Media


First and foremost, users are encouraged to check and update the privacy settings on both their devices and their Social Media networks on a periodic basis, to limit access to personal data such as location services and the microphone, which may be used for targeted marketing. Next, avoid using Social Media on public devices; if you must, be sure to log out afterwards. To avoid your accounts being infiltrated by malicious actors, create strong passwords: the stronger the password, the harder it is to guess. Using symbols, capital letters and numbers, and avoiding common or repeated passwords (a birthday, a spouse’s name etc.), creates an additional layer of defence. Similarly, two-factor authentication should be enabled for all accounts (including email) to make it that much harder for hackers to gain access. From a cybersecurity perspective, users can install antivirus and anti-spyware software on their devices and keep it up to date in order for it to remain effective. However, all of these protective measures are rendered useless if you post sensitive personal data online, as you (or your contacts) may be inadvertently leaking your own data. Once information is posted online, it is effectively public, with the inherent possibility of falling into the wrong hands, with or without stringent security measures. As such, the strongest recommendation is to take stock of what you post online and be careful with the amount of personal data you reveal, keeping it to a minimum. 
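The password guidance above can be sketched as a simple check. This is a minimal illustration only, not a substitute for a password manager or a breached-password lookup; the specific thresholds (minimum length of 12, the small list of common passwords) are assumptions chosen for the example, not requirements stated above.

```python
import re

# Illustrative list of commonly used passwords to reject (assumption:
# a real check would use a much larger breached-password list).
COMMON = {"password", "123456", "qwerty", "letmein"}

def is_strong(password: str, personal_terms: set = frozenset()) -> bool:
    """Rough strength check following the advice above: sufficient length,
    mixed character classes (symbols, capitals, numbers), and no common
    or personal terms (birthday, spouse's name, etc.)."""
    if len(password) < 12:  # assumed minimum length
        return False
    lowered = password.lower()
    # Reject passwords containing common or personal words.
    if any(term.lower() in lowered for term in COMMON | set(personal_terms)):
        return False
    checks = [
        re.search(r"[a-z]", password),    # lowercase letter
        re.search(r"[A-Z]", password),    # capital letter
        re.search(r"\d", password),       # number
        re.search(r"[^\w\s]", password),  # symbol
    ]
    return all(checks)

print(is_strong("Summer2024"))     # too short, no symbol -> False
print(is_strong("v9#Kq!mE2x$Lw"))  # long, mixed classes  -> True
```

Passing a user's own details as `personal_terms` (e.g. `{"smith", "1990"}`) also rejects passwords built from easily guessable personal data, mirroring the advice to avoid birthdays and names.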


Does your company have all of the mandated safeguards in place to ensure compliance with the GDPR and Data Protection Act 2018 in handling customer data? Aphaia provides both GDPR and Data Protection Act 2018 consultancy services, including data protection impact assessments, and Data Protection Officer outsourcing. We can help your company get on track towards full compliance.