Real-time bidding, programmatic advertising and privacy risks

Our vlog this week covers adtech and real-time bidding (RTB).

Real-Time Bidding is a set of technologies and practices used in programmatic advertising that allow advertisers to compete for available digital advertising space in milliseconds, placing billions of online adverts on webpages and apps by automated means.

In a nutshell, our digital fingerprint generates a lot of data about our activity on the internet. Advertisers collect this data and target us according to it. Website publishers, for their part, auction off the ad space on the page we are viewing in real time, and advertisers bid for that space in order to display ads we may be interested in.
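To make this flow more concrete, here is a heavily simplified sketch of an RTB auction in Python. The `BidRequest`/`Bid` structures, the field names and the demand-side bidders are illustrative assumptions for this post, not any real exchange's API.

```python
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class BidRequest:
    """Simplified bid request a publisher's exchange might broadcast (illustrative only)."""
    page_url: str
    user_segments: list[str]   # interest profile inferred from tracking data
    ad_slot_size: str

@dataclass
class Bid:
    advertiser: str
    price_cpm: float           # price offered per thousand impressions

def run_auction(request: BidRequest, bidders) -> Bid | None:
    """Collect bids from the demand side and pick the highest one, all within milliseconds."""
    bids = [bid for bidder in bidders if (bid := bidder(request)) is not None]
    return max(bids, key=lambda b: b.price_cpm, default=None)

# Hypothetical demand-side platforms bidding on behalf of advertisers
def travel_dsp(request: BidRequest) -> Bid | None:
    if "frequent_flyer" in request.user_segments:
        return Bid("travel-brand", price_cpm=2.40)
    return None

def generic_dsp(request: BidRequest) -> Bid | None:
    return Bid("generic-brand", price_cpm=0.80)

if __name__ == "__main__":
    request = BidRequest("https://example.com/article", ["frequent_flyer"], "300x250")
    print(run_auction(request, [travel_dsp, generic_dsp]))
    # Bid(advertiser='travel-brand', price_cpm=2.4)
```

The point to note is that the bid request, including the inferred interest segments, is broadcast to every bidder, whether or not they win the auction.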

Does this comply with the GDPR? The ICO has recently published a report on the subject, addressing the main challenges that arise from the use of RTB.

Most of these challenges relate to transparency and consent:

  • identifying a lawful basis for the processing of personal data in RTB remains challenging, as the scenarios where legitimate interests could apply are limited, and methods of obtaining consent are often insufficient in respect of data protection law requirements;
  • the privacy notices provided to individuals lack clarity and do not give them full visibility of what happens to their data;
  • the scale of the creation and sharing of personal data profiles in RTB appears disproportionate, intrusive and unfair, particularly when in many cases data subjects are unaware that this processing is taking place;
  • it is unclear whether RTB participants have fully established what data needs to be processed in order to achieve the intended outcome of targeted advertising to individuals; and
  • in many cases there is a reliance on contractual agreements to protect how bid request data is shared, secured and deleted, which does not seem appropriate given the type of personal data sharing and the number of intermediaries involved.

RTB carries a number of risks. These include:

  • profiling and automated decision-making;
  • large-scale processing (including of special categories of data);
  • use of innovative technologies;
  • combining and matching data from multiple sources;
  • tracking of geolocation and/or behaviour; and
  • invisible processing.

Beyond these, many individuals have a limited understanding of how the ecosystem processes their personal data.

These issues make the processing operations involved in RTB of a nature likely to result in a high risk to the rights and freedoms of individuals. Many of the above factors constitute criteria that make data protection impact assessments (DPIAs) mandatory.

In our view, and especially considering the new ICO guidance on cookies, controllers should take certain steps before the processing begins, such as carrying out a DPIA and gathering consent for RTB. RTB should have its own explanation and toggle in the consent pop-up and settings, just as is required for non-essential cookies.
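As an illustration of that separate toggle, a consent-management configuration could keep RTB as its own purpose, off by default, alongside the cookie categories; the category names below are hypothetical.

```python
# Hypothetical consent settings behind a pop-up: each non-essential purpose,
# including RTB, is off by default and toggled independently by the user.
consent_settings = {
    "strictly_necessary":  {"enabled": True,  "user_toggleable": False},
    "analytics_cookies":   {"enabled": False, "user_toggleable": True},
    "advertising_cookies": {"enabled": False, "user_toggleable": True},
    "real_time_bidding":   {"enabled": False, "user_toggleable": True,
                            "description": "Share data about this visit with advertisers "
                                           "who bid in real time to show personalised ads."},
}

def may_run_rtb(settings: dict) -> bool:
    """RTB only runs if the user has switched its dedicated toggle on."""
    return settings["real_time_bidding"]["enabled"]
```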

If you need advice on your AI product, Aphaia offers both AI ethics and Data Protection Impact Assessments.

Fingerprinting and what it means for privacy

This week we discuss device fingerprinting.

First, though, we want to know: do you feel safe from online identifiers? Do you frequently delete cookies?

It’s time to up your game and here’s why…

What is fingerprinting?

Beyond cookies and pixels, there are other techniques for identifying and monitoring users on the Internet, and device fingerprinting is one of them. While fingerprinting can serve a legitimate purpose such as enabling multi-factor authentication mechanisms, it can also be used for tracking and profiling, with the ultimate goal of exploiting the data collected, even though the information is initially gathered for a technical purpose.

Fingerprinting affects privacy, and here is how:

- Given that people usually do not share their devices, singling out a device allows the identification of an individual, which means data protection rules need to be applied.

- An additional concern is the possibility of re-linking the collected information to the user even after cookies have been deleted.

An individual can be identified through fingerprinting because three main elements allow a single device to be singled out (a minimal sketch follows this list):

- Gathering data.

- The global nature of the Internet.

- A unique ID.
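As a rough illustration of how those three elements combine, the sketch below gathers a handful of device and browser attributes and hashes them into a stable identifier. The attribute names are illustrative assumptions; real fingerprinting scripts collect far more signals (canvas rendering, installed fonts, audio stack and so on).

```python
import hashlib
import json

def device_fingerprint(attributes: dict) -> str:
    """Combine device/browser attributes into a single stable ID (illustrative sketch)."""
    canonical = json.dumps(attributes, sort_keys=True)       # gathering data
    return hashlib.sha256(canonical.encode()).hexdigest()    # a unique ID

# Attributes a script could read client-side and send to any server on the
# Internet (its global nature is what makes the resulting ID so powerful):
attributes = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) ...",
    "screen_resolution": "1920x1080",
    "timezone": "Europe/London",
    "language": "en-GB",
    "installed_fonts_hash": "a1b2c3",
}

print(device_fingerprint(attributes))
# The same device produces the same ID, even after cookies are cleared.
```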

Fingerprinting risks are covered by the GDPR under Recital 30, which refers generically to online identifiers, meaning data protection rules apply directly.

Tips for users:

- Set up your preferences in your browser settings.

- Opt in to the Do Not Track (DNT) mechanism, which signals to websites that you do not want your activity on the device to be tracked.

Tips for data controllers using fingerprinting:

- Check users' DNT preferences before processing any data (see the sketch after these tips).

- Gather users' consent even where DNT is disabled.

- Include fingerprinting in the record of processing activities.
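As a minimal sketch of the first two tips, a server-side handler could inspect the DNT header and a stored consent flag before any fingerprinting logic runs. The Flask route and the `consent_given` lookup are hypothetical placeholders, not a complete consent-management solution.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def consent_given(user_id: str) -> bool:
    """Hypothetical lookup of the user's recorded consent to fingerprinting."""
    return False  # placeholder: read from your consent-management records

@app.route("/page")
def page():
    dnt_enabled = request.headers.get("DNT") == "1"      # check DNT preferences first
    user_id = request.cookies.get("session_id", "anonymous")

    if dnt_enabled or not consent_given(user_id):        # consent is needed either way
        return jsonify(fingerprinting=False)             # serve the page without fingerprinting

    # Only here would fingerprinting run, and the activity should appear
    # in the record of processing activities.
    return jsonify(fingerprinting=True)
```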

We advise you to:

- Carry out a risk analysis and a Data Protection Impact Assessment where relevant, considering the impact of the disclosure of the profiling information contained in the database.

- Avoid social, cultural or racial bias leading to automated decisions.

- Put in place access controls governing which employees or third parties can access specific users' data.

- Avoid excessive data collection and retention for excessive periods.

- Consider the impact that the use of profiling information may have on individuals' perception of their freedom.

- Avoid manipulating users' wishes, beliefs and emotional state.

- Lastly, in relation to the above, consider the risk of re-identification.

If you need advice on your AI product, Aphaia offers both AI ethics and Data Protection Impact Assessments.

 

AI and GDPR fines

In this week’s vlog, we went through the fines under the GDPR and the role of Artificial Intelligence in this context.

One of the most talked-about data protection fines so far has been Facebook's £500,000 fine from the Information Commissioner's Office for serious breaches of data protection law. The ICO's investigation found that Facebook processed users' personal information unfairly by allowing application developers access to data subjects' information without clear and informed consent. Facebook also failed to keep the personal information secure because it did not carry out checks on apps and developers using its platform. This meant that the Facebook data of up to 87 million people worldwide was harvested without their knowledge.

A subset of this data was then shared with other organisations, including the parent company of Cambridge Analytica, which was involved in political campaigning in the US. The ICO found that the personal information of at least one million UK users was amongst the harvested data and was consequently put at risk of further misuse.

The Federal Trade Commission also formally announced its $5 billion settlement with Facebook this summer, after a long investigation into the Cambridge Analytica scandal and other privacy breaches. The $5 billion fine is the second-largest ever levied by the FTC.

Crucially, GDPR fines are designed to make non-compliance a costly mistake for businesses regardless of their size. The most serious infringements may result in a penalty of up to £17 million or 4% of the business's annual global turnover of the preceding financial year, whichever is greater.

These include any violations of the articles that govern:

  • The basic principles for processing.
  • The conditions for consent.
  • The data subjects’ rights.
  • The transfer of data to an international organisation or a recipient in a third country.

Misuse of AI systems may trigger most of these, because when it comes to solely automated decision-making and profiling there are only limited valid bases for processing: performance of a contract, the data subject's explicit consent or, in specific cases, authorisation under Union or Member State law.

Automated decision-making and profiling are regulated in Article 22 of the GDPR, which falls within Chapter III: "Rights of the data subject".

Applying the GDPR security measures therefore becomes an essential step in order to avoid data breaches and fines. These include:

  • using pseudonymisation and encryption of personal data (a minimal sketch follows this list);
  • ensuring the ongoing confidentiality, integrity, availability and resilience of processing systems and services;
  • restoring the availability of, and access to, personal data in a timely manner in the event of a physical or technical incident; and
  • lastly, adding a process for regularly testing, assessing and evaluating the effectiveness of technical and organisational measures for ensuring the security of the processing.
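As a minimal sketch of the first of those measures, assuming the widely used `cryptography` package is available: pseudonymisation is shown as salted hashing of a direct identifier, and encryption via Fernet symmetric encryption. Key management, storage and the field names are simplified assumptions.

```python
import hashlib
import os

from cryptography.fernet import Fernet  # pip install cryptography

# Pseudonymisation: replace a direct identifier with a salted hash, keeping
# the salt stored separately so the mapping cannot be trivially reversed.
salt = os.urandom(16)

def pseudonymise(email: str) -> str:
    return hashlib.sha256(salt + email.encode()).hexdigest()

# Encryption: protect the personal data at rest with a symmetric key that is
# stored and rotated separately from the data itself.
key = Fernet.generate_key()
fernet = Fernet(key)

record = {
    "user": pseudonymise("jane.doe@example.com"),
    "profile": fernet.encrypt(b"segment=frequent_flyer"),
}

print(record["user"])                     # pseudonymised identifier
print(fernet.decrypt(record["profile"]))  # b'segment=frequent_flyer'
```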

However, the use of AI requires additional measures to safeguard the data subject's rights and freedoms and legitimate interests, namely the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision.

The ICO intends to impose a fine of over £180 million on British Airways for the theft of customers' personal and financial information, and a fine of around £100 million on Marriott hotels for accidentally exposing 339 million guest records globally. The British Airways and Marriott data breaches have nothing to do with AI. Can you imagine what the fines would have been if AI had been involved? Feel free to share your thoughts with us.

If you need advice on your AI product, Aphaia offers both AI ethics and Data Protection Impact Assessments.

AI auditing and ethical issues

Auditing is one of the main challenges facing the regulation of AI.

It’s important to note that audits can be internal or external.

An internal AI audit helps an organisation evaluate, understand and communicate the degree to which AI will have an effect (negative or positive) on the organisation's ability to create value in the short, medium or long term, while an external audit assesses whether the company is actually complying with rules and standards.

According to the Institute of Internal Auditors, an AI auditing framework should comprise three overarching components (AI Strategy, Governance and the Human Factor) and seven elements: Cyber Resilience; AI Competencies; Data Quality; Data Architecture & Infrastructure; Measuring Performance; Ethics; and the Black Box, i.e. the difficulty of inspecting and explaining the internal logic of opaque AI models.

As for external audits, DPAs (Data Protection Authorities) and other bodies are still working on reaching an agreement on what the standards should be.

The ICO, for its part, aims to build a reference framework and is gathering feedback from organisations in order to develop a solid methodology for auditing AI applications, ensuring they are transparent and fair and that the necessary measures to assess and manage the data protection risks arising from them are in place. Its proposed structure includes:

1.- Governance and accountability.

  • Risk appetite.
  • Leadership engagement and oversight.
  • Management and reporting structures.
  • Compliance and assurance capabilities.
  • Data protection by design and by default.
  • Policies and procedures.
  • Documentation and audit trails.
  • Training and awareness.

2.- AI-specific risk areas.

  • Fairness and transparency in profiling – including issues of bias and discrimination, interpretability of AI applications, and explainability of AI decisions to data subjects.
  • Accuracy – covering both accuracy of data used in AI applications and of data derived from them.
  • Fully automated decision making models – including classification of AI solutions (fully automated vs. non-fully automated decision making models) based on the degree of human intervention, and issues around human review of fully automated decision-making models.
  • Security and cyber – including testing and verification challenges, outsourcing risks, and re-identification risks.
  • Trade-offs – covering challenges of balancing different constraints when optimising AI models (e.g. accuracy vs. privacy).
  • Data minimisation and purpose limitation.
  • Exercising of rights.
  • Impact on broader public interests and rights.

The CNIL (France's data protection authority), for its part, considers that countries should set up a national platform for auditing algorithms. In order to reach that goal, however, there is a prior need to identify what resources the State has available, as well as the different needs, and to pool the available expertise and means within such a platform.

According to the CNIL, in practice, these audits could be performed by a public body of algorithm experts who would monitor and test algorithms. Given the size of the sector to be audited, another solution could involve the public authorities accrediting private audit firms on the basis of a frame of reference. Companies and public authorities would be well advised to adopt certification-type solutions.

If you need advice on your AI product, Aphaia offers both AI ethics and Data Protection Impact Assessments.