AEPD published guidelines

AEPD published guidelines on data protection and labor relations

AEPD published guidelines on data protection and labor relations in collaboration with the Ministry of Labor and Social Economy and the employers' and trade union organizations.

The AEPD recently published these guidelines, aiming to offer a practical tool to help public and private organizations uphold their compliance with the legislation in place. The agency prepared the guide in collaboration with the Ministry of Labor and Social Economy and the employers' and trade union organizations. The guide is centered on compliance with the GDPR and the DPA, with a specific focus on updates regarding the rights of workers and the collection and use of their data by employers. It covers a wide range of issues, including employee data protection within the organization, employer access to social media profiles, internal whistleblowing, and privacy for victims and alleged harassers in the workplace.

The guidelines outline the general bases of legitimate data processing by employers.

The guidelines from the AEPD set out the data protection rights that must be upheld in a working environment. In the document, the AEPD addresses the importance of applying the principle of data minimisation. An employment contract does not automatically give employers access to any and all personal information of employees; the guidelines therefore outline what information may or may not be necessary. The document sets the limits for the processing of data in the hiring process, as well as throughout the course of the employment contract. The AEPD explains that, owing to the duties of secrecy and security, personal data should only be known by the affected party and by those users within the organization who are authorized to use, consult or modify the data.

The AEPD suggests using the least invasive system possible for tracking employee working days. 

According to the guidelines published by the AEPD, the least invasive system possible should be adopted for tracking employee workdays. This information must not be publicly accessible or located in a visible place. In addition, the data registered by these systems must not be used for any purposes other than the tracking of the working day. For example, for a worker who travels to perform their role, a working day tracker should be used for the sole purpose of recording when their workday begins and ends, and not to constantly monitor their location. The processing of geolocation data requires a specific legal basis.
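
As a rough illustration of that data minimisation principle, the sketch below (written in Python, with hypothetical field names; the guidelines do not prescribe any implementation) records only the start and end of the workday and deliberately stores no location data:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class WorkdayRecord:
    """Stores only what tracking the working day requires: no location fields."""
    employee_id: str
    started_at: datetime
    ended_at: Optional[datetime] = None

def start_workday(employee_id: str) -> WorkdayRecord:
    # Only a timestamp is captured; geolocation would need its own legal basis.
    return WorkdayRecord(employee_id=employee_id, started_at=datetime.now())

def end_workday(record: WorkdayRecord) -> None:
    record.ended_at = datetime.now()
```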

The guidelines cover access by employers to social media profiles and data from wearable technology like smart watches. 

The AEPD explains that employees are not obligated to allow their employer to access or inquire into their social media profiles, either during the hiring process or for the execution of the employment contract. Even where a candidate for employment has a publicly accessible social media profile, an employer may not process data obtained in that way unless there is a valid legal basis for it. In that case, the employer must inform the worker and demonstrate that legal basis, including its relevance to the performance of the role.

The guidelines also address wearable devices, particularly the monitoring of health data through devices like smart watches. In general, this type of monitoring is prohibited for several reasons: it violates the principle of proportionality, as it involves the constant monitoring of special category data (health), and it could allow employers to access data on specific health conditions rather than exclusively the data assessing an individual’s ability to perform their job.

The AEPD published guidelines specific to internal whistleblowing and privacy for victims and alleged perpetrators. 

In instances of gender-based violence or harassment, personal data, particularly identity, is generally considered to be special category data. Sensitive data of this nature requires enhanced protection. According to the guidelines, an identification code should be assigned to both the alleged victim and the alleged perpetrator in these cases. Where processing is necessary for compliance with legal obligations, an employer may process a worker's data regarding their status as a victim of gender-based violence or harassment. In cases of harassment at work, the identities of both the alleged harasser and the alleged victim must be protected.
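
By way of illustration only, since the guidelines do not prescribe any particular mechanism, the following Python sketch shows how such identification codes might work: the case record refers to both parties only by opaque codes, while the code-to-identity mapping is kept separately under restricted access.

```python
import secrets

# Code-to-identity mapping, to be kept separately under restricted access.
_identity_vault = {}

def assign_code(identity: str) -> str:
    """Replace a person's identity with an opaque identification code."""
    code = secrets.token_hex(4).upper()  # e.g. 'A3F09B12'
    _identity_vault[code] = identity
    return code

# The case file itself references both parties only by their codes.
case_record = {
    "alleged_victim": assign_code("Worker A"),
    "alleged_perpetrator": assign_code("Worker B"),
    "status": "under investigation",
}
```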

The guidelines state that the works council now has the right to information on the parameters of a company’s algorithms and artificial intelligence systems.

As the use of artificial intelligence becomes more prevalent, the guide includes groundbreaking information on the right of the works council to be informed by companies of the framework for any algorithms or AI systems used within the company. This includes explanations of any profiling that could possibly affect access to employment, its conditions, or its continuation. This requirement was newly introduced into law by Royal Decree-Law 9/2021, which modifies the Workers’ Statute and introduces an additional level of transparency to the process.

Does your company have all of the mandated safeguards in place to ensure compliance with the ePrivacy, GDPR and national data protection legislation in handling employee data? Aphaia provides ePrivacy, GDPR and data protection consultancy services, including data protection impact assessments, EU AI Ethics Assessments and Data Protection Officer outsourcing. We can help your company get on track towards full compliance.

Facebook loses challenge

Facebook loses challenge as court rules in favor of DPC

Facebook loses challenge as court rules in favor of DPC’s draft decision for an inquiry and suspension of Facebook’s data transfers to the US. 

Following the Schrems II judgement of last July, the Irish Data Protection Commission launched an inquiry into Facebook Ireland Ltd and reached a preliminary decision to suspend the company’s EU-US data flows. Facebook disagreed with, and decided to challenge, the decision, asserting that the DPC’s decision and the procedures subsequently adopted were susceptible to judicial review. This long-standing legal battle over Facebook Ireland’s right to continue making data transfers to the US has now come to an end. The ruling, affirming the Irish lead regulator’s draft decision to suspend the company’s EU-US data flows, is likely to have major effects on Facebook’s operations.

This decision is the culmination of an eight-year battle, initiated by a 2013 complaint from Mr. Max Schrems.

Facebook Ireland, a subsidiary of the US company Facebook Inc, provides the social networks Facebook and Instagram to the European region, and houses its central administration and European headquarters in Dublin. In June 2013, Mr Maximilian Schrems filed a complaint with the DPC regarding the transfer of his personal data to the US by Facebook Ireland, claiming that it was unlawful under national and EU law; in October 2013, the DPC stated that the matter would be “investigated promptly with all due diligence and speed”. In May 2016, the DPC wrote to Facebook Ireland and Mr Schrems with a draft decision that Standard Contractual Clauses could not lawfully be relied upon in respect of transfers of EU citizens’ personal data to the US. That draft decision ultimately led to proceedings before the CJEU, which delivered its judgment in July 2020. The court ruled that, under the GDPR, EU residents whose personal data is transferred to a third country using Standard Contractual Clauses must be afforded a level of protection equivalent to that guaranteed within the European Union by the GDPR. Since the authorities in the United States cannot be bound by Standard Contractual Clauses, data transferred there may not be effectively protected. As a result of last year’s judgment, the Irish DPC launched an inquiry and came to a preliminary decision to halt Facebook’s data transfers to the US, a decision that was subsequently challenged by Facebook.

Facebook challenged the draft decision by the DPC, claiming that the Commission should have awaited guidance from the EDPB.

Facebook challenged the draft decision, as well as the inquiry, claiming that the Data Protection Commission should have waited for guidance from the European Data Protection Board before proceeding with an inquiry and ordering the suspension of its data transfers. The company asserted that, as a member of the EDPB, the DPC would have received imminent guidance from the EDPB and should not have acted prior to receiving it. This guidance was eventually published in November 2020, and on May 14th 2021 the High Court ruled that Facebook Ireland “has not established any basis for impugning the DPC decision or the PDD [preliminary draft decision] or the procedures for the inquiry adopted by the DPC.” The judge rejected claims by Facebook that the DPC was in breach of its duty in how the case was handled. Mr Justice David Barniville also stated, however, that the DPC should have responded to certain questions that Facebook raised in its October 2020 correspondence.

Facebook loses challenge as the High Court ruling gives the Irish DPC the right to open a second “own volition” investigation against Facebook.

This long-standing battle has now come to an end, resulting in an inevitable suspension of Facebook’s data transfers to the US. A second, “own volition” investigation has also been opened and is running simultaneously with the original complaint dating back to 2013, which led to the CJEU’s “Schrems II” decision. Regarding Facebook’s appeal of the DPC’s decision, the High Court, in its 127-page judgment outlining its judicial review of the case, rejected Facebook’s claims against the DPC. Eight years after the initial complaint, it is now clear that the DPC will have to act to stop Facebook‘s EU-US data transfers. This decision is likely to heavily impact Facebook’s operations. Regardless, the company said it looked forward to defending its compliance before the Data Protection Commission.

Do you make international data transfers to third countries? We can help you. Aphaia provides both GDPR and Data Protection Act 2018 consultancy services, including data protection impact assessments, transfer impact assessments and Data Protection Officer outsourcing. Contact us today.

The CNIL issues its opinion

The CNIL issues its opinion on vaccine passes for mass gatherings

The CNIL issues its opinion on the implementation and use of vaccine passes for admittance to mass crowd events in France.

As the world aims to resume somewhat normal activity during the global COVID-19 pandemic, France is considering the use of vaccine passes, or “green passes”, for admission to mass gatherings of at least 1,000 persons. This suggestion comes in an effort to re-open certain establishments and resume certain activities while minimizing the risk of contamination from the virus. These green passes, like those used for travel, will include information related to the COVID-19 vaccine, a negative COVID-19 test, or proof of recovery from the virus. While they were originally developed to facilitate travel during the pandemic, the Government of France seeks to take the opportunity to use them for access to mass crowd events, in an effort to resume those activities much sooner.

The CNIL makes it clear that these passes are not to be used beyond the health crisis.

The CNIL wishes to make it clear that these passes are intended only for use during the pandemic and are strictly temporary in nature. Acknowledging the unprecedented nature of an initiative like this and the implications it may have for the lives of individuals, the Authority stresses that this measure is meant for the specific purpose of dealing with the current health crisis and should only be used for as long as its purpose is applicable to the COVID-19 pandemic. In addition, the CNIL requests that the impact of this system on the health situation be monitored, studied and documented at regular intervals and on the basis of objective data, in order to determine whether public authorities should continue its use.

The CNIL would like guarantees that the use of these passes is limited to mass crowd events.

While the Authority acknowledges the functionality of these passes for admittance into mass crowd events, the CNIL would like to make it clear that, in the interest of respect for the fundamental rights and freedoms of persons, these passes should be limited to those mass crowd events for which they are intended. The Authority wants to ensure that the use of these passes excludes places that relate to the daily activities of the population, like restaurants, workplaces and shops. In addition, these passes should not be used for admission to any venue linked to the usual exercise of fundamental freedoms (in particular the freedom to demonstrate, to organize politically or within trade unions, and the freedom of religion). The CNIL notes that excluding these passes and prohibiting their use in these spheres is likely to minimize the implications of the system for the rights and freedoms of individuals. The CNIL also believes that there should be further clarification and transparency on the qualification of the events where the use of these passes would be considered appropriate, as well as measures ensuring that the passes are not used in places and at events which do not meet those qualifications.

The CNIL would like to ensure that the use of these passes does not result in discrimination, and protects the personal data of individuals.

In order to avoid discrimination, the CNIL stresses the need for these passes to be accessible to all. This includes ensuring that passes are available on paper as well as in digital format. It is also important to ensure that there is no discrimination based on the type of evidence presented in these passes, whether it be evidence of vaccination, a negative COVID-19 test, or recovery from the virus. Due to the sensitive nature of the information used for these passes, it is very important to make special provisions for limiting the disclosure of individuals’ health information. The CNIL therefore suggests the implementation of a solution which would limit access to the persons authorized to verify the certificates. In addition, the Authority believes that these verifications should result only in a color code (green or red), along with the identity of the holder, so as not to reveal whether the individual has been vaccinated, tested negative, or recovered from a previous COVID-19 infection.
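
A minimal sketch of that data-minimising verification flow in Python (the pass payload and its field names are invented for illustration) shows how the verifier learns only the holder's identity and a green or red result, never which type of evidence produced it:

```python
def verify_pass(pass_payload: dict) -> dict:
    """Return only the holder's identity and a colour code; the underlying
    evidence (vaccination, test or recovery) is never exposed."""
    valid = any([
        pass_payload.get("vaccinated", False),
        pass_payload.get("negative_test", False),
        pass_payload.get("recovered", False),
    ])
    return {"holder": pass_payload["holder"], "status": "green" if valid else "red"}

# The person checking passes at the gate sees only this minimal result:
print(verify_pass({"holder": "Jane Doe", "vaccinated": True}))
# -> {'holder': 'Jane Doe', 'status': 'green'}
```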

Does your company have all of the mandated safeguards in place to ensure compliance with the ePrivacy, GDPR and Data Protection Act 2018 in handling customer data? Aphaia provides ePrivacy, GDPR and Data Protection Act 2018 consultancy services, including data protection impact assessments, EU AI Ethics Assessments and Data Protection Officer outsourcing. We can help your company get on track towards full compliance.

Fintech and AI Ethics

As the world of Fintech evolves, the need for governance and ethics in that arena becomes particularly important.

Financial Technology, or “Fintech”, refers to new technology that seeks to improve and automate financial services. This technology aids the smooth running of business and personal finances through the integration of AI systems. Broadly speaking, the term covers any innovation through which people can transact business, from keeping track of finances to digital currencies and cryptocurrencies. With crypto-trading and digital platforms for wealth management more popular than ever before, an increasing number of consumers are seeing the practical application and value of Fintech in their lives. As with any application of AI and technology, however, certain measures should be in place for the smooth and, more importantly, safe integration of this technology into our daily lives, allowing the everyday user to feel more secure in using it.

Legislation and guidance have been implemented and communicated to guide Fintech and AI ethics.

Some pieces of legislation already target Fintech, such as the Payment Services Directive 2 (PSD2), an EU directive governing electronic payment services. PSD2 harmonizes two services which have both become increasingly widespread in recent times: Payment Initiation Services (PIS) and Account Information Services (AIS). PIS providers facilitate the use of online banking to make online payments, while AIS providers facilitate the collection and storage of information from a customer’s different bank accounts in a single place. With the increasing popularity of these innovations and other forms of Fintech, and as experience provides further insight into the true impact of their use, new regulations are expected in the future.
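
To give a flavour of what an AIS provider does, here is a purely illustrative Python sketch; the bank endpoints, consent token and response shape are invented for the example and do not correspond to any real PSD2 interface:

```python
import requests

# Illustrative endpoints only; real PSD2 interfaces differ per bank.
BANK_APIS = {
    "bank_a": "https://api.bank-a.example/accounts",
    "bank_b": "https://api.bank-b.example/accounts",
}

def aggregate_accounts(consent_token: str) -> list:
    """Collect account information from each bank the customer has consented to,
    presenting it in a single place, as an AIS provider does."""
    accounts = []
    for bank, url in BANK_APIS.items():
        response = requests.get(url, headers={"Authorization": f"Bearer {consent_token}"})
        response.raise_for_status()
        accounts.extend(response.json()["accounts"])
    return accounts
```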

Most people consider their financial data to be among their most sensitive and valuable, and are accordingly keen to ensure its safety. Legislation and guidance have been implemented and communicated to aid in the pursuit of principles like technical robustness and safety, privacy and data governance, transparency, diversity, non-discrimination and fairness. These are all imperative to ensuring that the use of Fintech is safe and beneficial for everyone involved.

Technical robustness and safety

The safety of one’s personal and financial information is, simply put, of the utmost importance when deciding which tools to use to manage one’s finances. A personal data breach involving financial information could be very harmful for the affected data subjects due to its sensitive nature. Financial institutions and Fintech companies put several measures in place to ensure safe and secure money management through tech. Security measures such as, inter alia, data encryption, role-based access control, penetration testing, tokenization, 2FA, multi-step approval or verification processes and backup policies can and should all be applied where necessary and feasible. These measures help users feel more secure, but ultimately they protect users from threats including malware attacks, data breaches and digital identity risks.
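
To make one of those measures concrete, here is a minimal Python sketch of role-based access control; the roles and permissions are invented for illustration:

```python
from functools import wraps

# Illustrative roles and the permissions they grant.
ROLE_PERMISSIONS = {
    "analyst": {"read_transactions"},
    "account_manager": {"read_transactions", "update_account"},
}

def require_permission(permission: str):
    """Decorator that blocks calls unless the caller's role grants the permission."""
    def decorator(func):
        @wraps(func)
        def wrapper(user_role, *args, **kwargs):
            if permission not in ROLE_PERMISSIONS.get(user_role, set()):
                raise PermissionError(f"Role '{user_role}' lacks '{permission}'")
            return func(user_role, *args, **kwargs)
        return wrapper
    return decorator

@require_permission("update_account")
def update_account(user_role, account_id, changes):
    print(f"Account {account_id} updated: {changes}")

update_account("account_manager", "ACC-42", {"limit": 5000})  # allowed
# update_account("analyst", "ACC-42", {"limit": 5000})        # raises PermissionError
```

Keeping the permission check in one place, rather than scattered through the business logic, also makes the access policy easier to audit.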

Privacy and data governance

Article 22 of the EU GDPR establishes the right of a data subject not to be subject to a decision based solely on automated processing, except where certain circumstances apply. Automated decisions in the Fintech industry may produce legal effects concerning individuals or similarly significantly affect them. Any decision with legal or similar effects needs special consideration in order to comply with the GDPR’s requirements. A data protection impact assessment may be necessary to identify the risks to individuals and determine how best to deal with them. For special categories of data, automated processing can only be carried out with the individual’s explicit consent or where necessary for reasons of substantial public interest. Robotic process automation (RPA) can be very useful to businesses, helping to increase revenue and save money; however, it is imperative to ensure compliance with the GDPR and that automated decision making does not result in dangerous profiling practices.
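
As a simplified sketch of one common safeguard pattern (illustrative only, and not a prescribed method or legal advice), a decision with significant effects on the individual can be referred to a human reviewer rather than issued fully automatically:

```python
def decide_credit_application(application: dict) -> str:
    """Hypothetical scoring step followed by an Article 22-style safeguard."""
    score = application["income"] / max(application["requested_amount"], 1)
    automated_outcome = "approve" if score > 0.5 else "decline"

    # A decline has significant effects on the individual, so it is not
    # issued automatically: a human reviews it before it becomes final.
    if automated_outcome == "decline":
        return "referred_for_human_review"
    return automated_outcome

print(decide_credit_application({"income": 30_000, "requested_amount": 100_000}))
# -> 'referred_for_human_review'
```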

Diversity, non-discrimination and fairness

Several studies have explored the overall fairness of current Fintech, including possible discrimination in consumer lending and other aspects of the industry. Algorithms can either perpetuate widespread human biases or develop their own. Common biases in the financial sector arise around gender, ethnicity and age. AI technology must prevent discrimination and protect diversity, especially in Fintech, where biases can affect an individual’s access to credit and the opportunities it affords. The use of quality training data, choosing the right learning model and working with an interdisciplinary team may help reduce bias and maintain a sense of fairness in the world of Fintech and AI in general.
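
One simple way to surface such bias is to compare outcome rates across groups. The Python sketch below, using made-up lending decisions and group labels, computes per-group approval rates; a large gap between groups would flag the model for review:

```python
from collections import defaultdict

def approval_rates(decisions):
    """Approval rate per group; a large gap between groups flags potential bias."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    return {group: approvals[group] / totals[group] for group in totals}

# Made-up lending decisions: (applicant group, loan approved?)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]
print(approval_rates(decisions))
# -> {'group_a': 0.666..., 'group_b': 0.333...}: a gap worth investigating
```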

Transparency

While the use of AI has brought much positive transformation to the financial industry, the question of AI ethics in everything that we do is unavoidable. Transparency provides an opportunity for introspection on ethical and regulatory issues, allowing them to be addressed. Algorithms used in Fintech should be transparent and explainable. The ICO and The Alan Turing Institute have produced their guidance “Explaining decisions made with AI” to help businesses with this. They suggest developing a ‘transparency matrix’ to map the different categories of information against the relevant stakeholders. Transparency enables and empowers businesses to demonstrate trustworthiness, and trustworthy AI is AI that will be more easily adopted and accepted by individuals. Transparency into the models and processes of Fintech and other AI allows biases and other concerns to be raised and addressed.
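
A transparency matrix of the kind that guidance suggests can be sketched as a simple mapping from categories of information to the stakeholders who should receive them; the categories and stakeholders below are illustrative only:

```python
# Illustrative transparency matrix: information category -> intended audience.
TRANSPARENCY_MATRIX = {
    "rationale of the decision": ["customers", "regulators"],
    "data used by the model": ["customers", "regulators", "auditors"],
    "fairness testing results": ["regulators", "auditors"],
    "technical model documentation": ["auditors", "developers"],
}

def information_for(stakeholder: str) -> list:
    """List the categories of explanation a given stakeholder should be offered."""
    return [category for category, audience in TRANSPARENCY_MATRIX.items()
            if stakeholder in audience]

print(information_for("customers"))
# -> ['rationale of the decision', 'data used by the model']
```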

Check out our vlog exploring Fintech and AI Ethics:

https://youtu.be/7nj2616bq1s

You can learn more about AI ethics and regulation in our YouTube channel.

Do you have questions about how AI works in Fintech and the related guidance and laws? We can help you. Aphaia provides both GDPR and Data Protection Act 2018 consultancy services, including Data Protection Impact Assessments, AI Ethics Assessments and Data Protection Officer outsourcing. We can help your company get on track towards full compliance. Contact us today.