Self-Driving Cars and GDPR

Driverless cars are on track to become a reality, but what of privacy and data protection? In this blog we explore self-driving cars and GDPR.

When you think of self-driving cars, what comes to mind? I’m willing to go out on a limb here and say that it’s not just Knight Rider’s KITT, the Batmobile or other cool movie-screen concepts. After all, in today’s technological era, driverless cars are no longer confined to our favourite TV shows and movies.

Tesla already asserts that all of its new cars come standard with advanced hardware capable of providing Autopilot features today, and full self-driving capabilities in the future through software updates designed to improve functionality over time. Meanwhile, reports indicated that earlier this year Waymo, a self-driving technology development company, signed a deal to make self-driving cars for use in France and Japan.

This certainly gives new meaning to the phrase “the future is now”, doesn’t it? Yet as exciting as it all is, there is an undeniable dark side which cannot be ignored: data protection concerns, privacy risks and ethical issues.

In one of our recent YouTube vlogs we explored some of the potential ethical concerns, and it really gets one thinking.

Self-driving cars and GDPR

As for the data protection concerns and privacy risks, here’s the skinny. Self-driving cars, much like today’s connected cars, will rely heavily on data collection, analysis and sharing. Since this data will revolve around our individual lives, some of it will fall within the purview of personal and sensitive data. Understandably, considering the cloud connectedness of it all, hacking will also be a security risk. All of this renders self-driving cars and their technology subject to compliance with the GDPR and the Data Protection Act 2018.

Driverless cars are also a good example of IoT, Big Data and AI being used together.

“While self-driving cars and GDPR is a concern in itself due to its implications for privacy, one should also consider ethical and telecoms issues,” comments Aphaia Partner Cristina Contero. “On one hand because of the implementation of AI and, on the other, because self-driving cars will be among the interconnected devices in the smart network of the IoT. This also means that they will be part of smart cities, so distinguishing between personal data and non-personal data becomes essential too. It might be clear that data gathered from audio or GPS falls within the self-driving cars and GDPR context, but this is not so clear when it comes to other data like weather or road-conditions information.”

If your organisation is moving in the direction of autonomous technology like self-driving cars or related artificial intelligence, conducting a data protection impact assessment is key to ensuring GDPR and Data Protection Act 2018 compliance. Additionally, Aphaia provides GDPR adaptation consultancy services, AI ethics assessments and Data Protection Officer outsourcing.

Data sharing code

ICO launches new Data Sharing Code in line with GDPR and DPA 2018

The ICO’s updated Data Sharing Code will provide companies with practical guidelines about how to share personal data in compliance with data protection legislation.

In today’s highly digital, efficiency-focused era, data sharing undoubtedly plays a significant role. Indeed, major technological shifts in how organisations do business present pretty persuasive arguments for the need for data sharing. Just as prevalent, however, are the related privacy concerns.

For public and private organizations alike, the balancing act of sharing data without compromising sensitive personal information is vital. Not to mention the need to ensure compliance with GDPR and the Data Protection Act 2018.

The good news is that the update to the ICO data sharing code of practice is well on its way to being finalised.

Prepared under section 121 of the Data Protection Act 2018, the updated ICO data sharing code—currently in draft—will serve as a practical guide for organisations about how to share personal data in compliance with data protection legislation.

As noted in the draft code summary, the code explains the law and provides good practice recommendations. As such, “following it along with other ICO guidance will help companies manage risks; meet high standards; clarify any misconceptions organisations may have about data sharing; and give confidence to share data appropriately and correctly.”

According to the ICO, the code will also address many aspects of the new legislation, including transparency, lawful bases for processing, the new accountability principle and the requirement to record processing activities.

It is also important to note that in accordance with section 127 of the DPA the ICO will take the code into account when considering whether organisations have complied with their data protection obligations in relation to data sharing. In particular, the Commissioner will take the code into account when considering questions of fairness, lawfulness, transparency and accountability under the GDPR or the DPA. The code can also be used in evidence in court proceedings, and the courts must take its provisions into account wherever relevant.

Public consultation on the draft data sharing code was launched in July and came to an end on September 9th. The draft code is now expected to be approved by Parliament before it becomes a statutory code of practice.


Do you require assistance with GDPR and Data Protection Act 2018 compliance? Aphaia provides both GDPR adaptation consultancy services, including data protection impact assessments, and Data Protection Officer outsourcing.

Voiceprinting Privacy

Voiceprinting is becoming a widespread identification and authentication tool for banks and even public authorities. Voiceprinting privacy concerns and GDPR compliance need to be discussed too.

As technology advances and the world shifts more and more towards electronic and online platforms, new means of digitally identifying individuals are constantly being introduced.

One such digital identification tool which has been experiencing a surge of use over the last three to five years is voiceprinting: technology which authenticates individuals by voice alone.

In fact, across the globe, several organisations, including banks, credit unions and government agencies, are already making use of this technology.

In 2016, for instance, Citi was reported to have launched a project to automatically verify a customer’s identity by voice within the first few seconds of a conversation. Citi’s adoption of voiceprinting was presented as a means of reducing time to service by eliminating the manual authentication process, potentially cutting a typical call-centre call by a minute or more.

Yet while voiceprint technology is being lauded as a security game-changer and a customer-service home run, there are undoubtedly privacy and data protection concerns.

Just four months ago the Information Commissioner’s Office (ICO) issued a final enforcement notice to HM Revenue & Customs (HMRC) to delete millions of unlawfully collected voiceprints, after an investigation revealed that the UK tax office had collected biometric data without giving customers sufficient information about how it would be processed, and had also failed to give them the chance to give or withhold consent. The May 2019 final enforcement notice gave HMRC 28 days to complete the deletion of all biometric data held under the Voice ID system for which it does not have explicit consent.

“Since a voiceprint is regularly used to re-identify a person, it needs to be processed based on a lawful processing basis, just like any other personal data. This basis may be the individual’s consent or legitimate interest, subject to legitimate interest assessment in line with GDPR,” comments Dr Bostjan Makarovic, Aphaia managing partner.


Do you require assistance with GDPR and Data Protection Act 2018 compliance? Aphaia provides both GDPR adaptation consultancy services, including data protection impact assessments, and Data Protection Officer outsourcing.

GDPR Challenges For Artificial Intelligence

Data protection in algorithms

Technological development is enabling the automation of all processes, much as Henry Ford did in 1914; the difference is that the production line now turns out decisions about privacy instead of cars. Since the GDPR came into force on 25th May 2018, many questions have arisen about whether the Regulation might block data-based projects.

In this article, we aim to clarify some of the main GDPR concepts that may apply to the processing of large amounts of data and to algorithmic decision-making. It was inspired by the report the Norwegian Data Protection Authority (Datatilsynet) published in January this year: “Artificial Intelligence and Privacy”.

Artificial intelligence and the elements it comprises, such as algorithms and machine/deep learning, are affected by the GDPR for three main reasons: the huge volume of data involved, the need for a training dataset, and the use of automated decision-making without human intervention. These three ideas engage four GDPR principles: fairness of processing, purpose limitation, data minimisation and transparency. We briefly explain each of them in the following paragraphs: the first paragraph of each concept sets out the issue, and the second describes how to address it under the GDPR.

One should take into account that without a lawful basis for automated decision-making (contract or consent), such processing cannot take place.

Fairness of processing: A discriminatory result after automated data processing can derive both from the way the training data has been classified (supervised learning) and from the characteristics of the dataset itself (unsupervised learning). In the first case, the algorithm will produce a result that corresponds with the labels used in training, so if the training data was biased, the output will be too. In the second scenario, where the training dataset comprises two categories of data with different weights and the algorithm is risk-averse, it will tend to favour the group with the higher weight.

GDPR compliance at this point would require implementing regular tests in order to monitor distortion of the dataset and keep the risk of error to a minimum.
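To make that concrete, here is a minimal Python sketch of one such recurring test: comparing the rate of positive outcomes across groups (a demographic-parity check). The column names, the toy data and the 0.8 threshold are illustrative assumptions, not anything the GDPR prescribes.

```python
# Minimal sketch of a recurring bias test, assuming a pandas DataFrame with a
# (hypothetical) protected-attribute column "group" and a model-output column
# "approved".
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Share of positive outcomes per group."""
    return df.groupby(group_col)[outcome_col].mean()

def disparity_ratio(rates: pd.Series) -> float:
    """Ratio of the lowest to the highest selection rate; 1.0 means parity."""
    return rates.min() / rates.max()

# Toy data standing in for a batch of recent automated decisions.
df = pd.DataFrame({
    "group":    ["a", "a", "b", "b", "b"],
    "approved": [1,   0,   1,   1,   1],
})

rates = selection_rates(df, "group", "approved")
if disparity_ratio(rates) < 0.8:  # threshold is an assumption, not a legal figure
    print("Selection rates diverge across groups; review the training data:", dict(rates))
```

Run on a schedule against recent decisions, a test like this flags when the model starts favouring one group, prompting a review of the training data before the distortion compounds.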

Purpose limitation: In cases where previously retrieved personal data is to be re-used, the controller must consider whether the new purpose is compatible with the original one. If it is not, new consent is required or the basis for processing must be changed. This principle applies both to the internal re-use of data and to the selling of data to other companies. The only exceptions to the principle relate to scientific or historical research, or to statistical or archival purposes in the public interest. The GDPR states that scientific research should be interpreted broadly and include technological development and demonstration, basic research, and applied and privately financed research. These elements would indicate that, in some cases, the development of artificial intelligence may be considered to constitute scientific research. However, when a model develops on a continuous basis, it is difficult to differentiate between development and use, and hence where research stops and usage begins. It is therefore difficult to reach a conclusion on the extent to which the development and use of these models constitute scientific research.

Using personal data to train algorithms should be done with a dataset originally collected for that purpose, either with the consent of the parties concerned or, failing that, after anonymisation.
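As a rough illustration of how a controller might operationalise this check before re-using a dataset, here is a hedged Python sketch of a purpose-compatibility gate. The purpose names, the compatibility map and the consent records are invented for the example; they are not a legal test of compatibility.

```python
# Sketch of a purpose-compatibility gate run before a dataset is re-used for
# training. All purpose names and mappings below are hypothetical.
from dataclasses import dataclass

@dataclass
class Dataset:
    name: str
    original_purpose: str
    consented_purposes: set  # purposes the data subjects agreed to

# Purposes treated as compatible with the original one; a controller would
# have to justify each entry case by case.
COMPATIBLE = {
    "fraud-detection": {"fraud-detection", "security-monitoring"},
}

def may_reuse(ds: Dataset, new_purpose: str) -> bool:
    """Allow re-use only if the new purpose is compatible or consented to."""
    if new_purpose in COMPATIBLE.get(ds.original_purpose, {ds.original_purpose}):
        return True
    # Otherwise: obtain new consent, change the basis, or anonymise first.
    return new_purpose in ds.consented_purposes

ds = Dataset("transactions-2019", "fraud-detection", {"fraud-detection"})
print(may_reuse(ds, "marketing"))  # False: new consent or anonymisation needed
```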

Data minimisation: The need to collect and maintain only the data that is strictly necessary, without duplication, requires pre-planning and a detailed study before the algorithm is developed, so that its purpose and usefulness are well explained and defined.

This may be achieved by making it difficult to identify individuals from the data held. The degree of identification is restricted by both the amount and the nature of the information used, as some details reveal more about a person than others. While deleting information is not feasible in this type of application because of continuous learning, privacy by default and by design must govern any machine learning process, applying encryption or anonymised data wherever possible. The use of pseudonymisation or encryption techniques protects the data subject’s identity and helps limit the extent of the intervention.
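By way of illustration, the short Python sketch below pseudonymises a direct identifier with a keyed hash (HMAC-SHA256) before the record enters a training set, so the model never sees the raw identifier. The field names and the in-code key are assumptions for the example; in practice the key would live in a secrets manager.

```python
# Minimal pseudonymisation sketch: replace a direct identifier with a stable
# keyed hash before training. Field names and key handling are illustrative.
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-managed-secret"  # assumption: stored securely elsewhere

def pseudonymise(identifier: str) -> str:
    """Map an identifier to a stable pseudonym without storing the original."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"customer_id": "C-1042", "speed_kmh": 87}
training_record = {**record, "customer_id": pseudonymise(record["customer_id"])}
print(training_record)  # the raw customer_id never reaches the training set
```

Because the hash is keyed, pseudonyms stay stable across batches, so continuous learning can proceed, while re-identification requires access to the key: the kind of safeguard the paragraph above describes.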

Transparency, information and the right to explanation: Every data processing operation should be preceded by the provision of information to the data subjects, together with a number of additional guarantees for automated decision-making and profiling, such as the right to obtain human intervention on the part of the controller, to express one’s point of view, to challenge the decision and to receive an explanation of the decision taken after the evaluation.

The GDPR does not specify whether the explanation is to refer to the general logic on which the algorithm is constructed or to the specific logical path followed to reach a particular decision, but the accountability principle requires that the subject be given a satisfactory explanation, which may include a list of data variables, the ETL (extract, transform and load) process or the model features.
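As an illustration of the “general logic” style of explanation, the sketch below fits a toy linear model and prints the weight each input variable carries in the decision. The feature names, data and use of scikit-learn are assumptions for the example, not a prescribed explanation format.

```python
# Toy "general logic" explanation: list the model's input variables and the
# direction/strength of their influence. All names and data are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income_k", "account_age_months", "missed_payments"]
X = np.array([[30, 12, 0], [55, 48, 1], [22, 6, 3], [70, 60, 0]])
y = np.array([1, 1, 0, 1])  # 1 = application approved (toy labels)

model = LogisticRegression(max_iter=1000).fit(X, y)

# A human-readable summary of which inputs push the decision and which way.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name}: {coef:+.4f}")
```

For a per-decision (“specific logical path”) explanation, the same idea extends to per-case feature contributions, but even this coarse summary gives the data subject a list of the variables involved.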

A data protection impact assessment, carried out by the DPO, is required before any processing involving algorithms, artificial intelligence or profiling, in order to evaluate and address the risks to the rights and freedoms of data subjects.


Do you require assistance with GDPR and Data Protection Act 2018 compliance? Aphaia provides both GDPR adaptation consultancy services, including data protection impact assessments, and Data Protection Officer outsourcing.