AI Auditing and Ethical Issues

Auditing is one of the main challenges facing the regulation of AI.

Audits can be internal or external.

An internal AI audit helps an organisation evaluate, understand and communicate the degree to which AI will affect (negatively or positively) its ability to create value in the short, medium or long term. An external audit, by contrast, assesses whether the company actually complies with applicable rules and standards.

According to the Institute of Internal Auditors, an AI auditing framework should comprise three overarching components (AI Strategy, Governance, and the Human Factor) and seven elements: Cyber Resilience; AI Competencies; Data Quality; Data Architecture & Infrastructure; Measuring Performance; Ethics; and the Black Box, i.e. the difficulty of inspecting and explaining the internal decision logic of complex models.
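
To give a flavour of the Black Box element: because complex models such as ensembles or neural networks expose no human-readable decision logic, auditors often have to probe them from the outside. The sketch below is a minimal, illustrative example (synthetic data, a generic scikit-learn classifier, not a technique mandated by the IIA) of one such probe, permutation feature importance, which measures how much predictive accuracy drops when each input is scrambled.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic data standing in for a real decision model's inputs.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "black box": an ensemble whose internal logic is hard to read directly.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
baseline = accuracy_score(y_test, model.predict(X_test))

# Permutation importance: shuffle one feature at a time and measure how much
# accuracy drops. A large drop means the model leans heavily on that feature.
rng = np.random.default_rng(0)
for j in range(X_test.shape[1]):
    X_perm = X_test.copy()
    rng.shuffle(X_perm[:, j])
    drop = baseline - accuracy_score(y_test, model.predict(X_perm))
    print(f"feature {j}: accuracy drop {drop:.3f}")
```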

As for external audits, Data Protection Authorities (DPAs) and other bodies have yet to agree on what the standards should be.

The ICO, for its part, aims to build a reference framework and is gathering feedback from organisations in order to develop a solid methodology for auditing AI applications, ensuring they are transparent and fair and that the necessary measures to assess and manage the data protection risks arising from them are in place. Their proposed structure includes:

1. Governance and accountability.

  • Risk appetite.
  • Leadership engagement and oversight.
  • Management and reporting structures.
  • Compliance and assurance capabilities.
  • Data protection by design and by default.
  • Policies and procedures.
  • Documentation and audit trails.
  • Training and awareness.

2. AI-specific risk areas.

  • Fairness and transparency in profiling – including issues of bias and discrimination, interpretability of AI applications, and explainability of AI decisions to data subjects (see the bias-check sketch after this list).
  • Accuracy – covering both the accuracy of data used in AI applications and that of data derived from them.
  • Fully automated decision-making models – including the classification of AI solutions as fully automated or not fully automated, based on the degree of human intervention, and issues around human review of fully automated decision-making models.
  • Security and cyber – including testing and verification challenges, outsourcing risks, and re-identification risks.
  • Trade-offs – covering challenges of balancing different constraints when optimising AI models (e.g. accuracy vs. privacy).
  • Data minimisation and purpose limitation.
  • Exercising of rights.
  • Impact on broader public interests and rights.
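
As one concrete illustration of the bias and discrimination point above, here is a minimal sketch of a statistical check an auditor might run: the demographic parity difference, i.e. the gap in positive-decision rates between two groups defined by a protected attribute. The data and the attribute here are hypothetical, and a real audit would combine several such metrics with qualitative review.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Difference in positive-decision rates between two groups (0 and 1).

    A value near 0 suggests the model treats both groups similarly on this
    metric; larger values flag potential bias worth deeper investigation.
    """
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Hypothetical audit sample: model decisions plus a protected attribute.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 0, 1])
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

# Positive rate is 0.6 for group 0 and 0.4 for group 1, so the gap is 0.2.
print(f"Demographic parity difference: "
      f"{demographic_parity_difference(y_pred, group):.2f}")
```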

The CNIL (France’s data protection authority), for its part, considers that countries should set up a national platform for auditing algorithms. To reach this goal, there is a prior need to identify what resources the State has available, as well as the different needs, and to pool the available expertise and means within a national platform.

According to the CNIL, in practice, these audits could be performed by a public body of algorithm experts who would monitor and test algorithms. Given the size of the sector to be audited, another solution could involve the public authorities accrediting private audit firms on the basis of a frame of reference. Companies and public authorities would be well advised to adopt certification-type solutions.

If you need advice on your AI product, Aphaia offers both AI ethics assessments and Data Protection Impact Assessments.
