Is effective AI regulation possible?

On our YouTube channel this week, we are discussing how AI is currently regulated, the challenges it raises, and what the future could look like!

One of the main challenges of AI is avoiding discrimination in its application and results. This requires controls on the datasets used for training, ethical and general principles built into the design from the start, and regular checks once the system is in use.
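As a purely illustrative sketch of what one such regular check might look like in practice, the Python snippet below measures a model's "demographic parity gap": the difference in positive-outcome rates between demographic groups. The group labels, the toy predictions from a hypothetical loan-approval model, and the 0.1 tolerance are all assumptions made up for this example; a real fairness audit would use many more metrics and take the legal context into account.

```python
# Illustrative sketch of a periodic bias check (hypothetical data and threshold).
from collections import defaultdict

def positive_rates(predictions, groups):
    """Return the share of positive predictions per demographic group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rates between any two groups."""
    rates = positive_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Toy predictions from a hypothetical loan-approval model (1 = approved).
preds = [1, 0, 1, 1, 0, 0, 0, 0]
grps = ["A", "A", "A", "B", "B", "B", "B", "A"]

gap = demographic_parity_gap(preds, grps)
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.1:  # illustrative tolerance; real thresholds are context-specific
    print("Warning: disparity exceeds tolerance, review training data and design.")
```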

An additional issue legislators should tackle is liability: who should be held responsible when an AI system breaks the law? This is also linked to international law, as courts will have to discern not only who is responsible but also which jurisdiction and applicable law should govern the case.

For example, one could argue that the applicable jurisdiction should be the country where the AI was programmed, or the country where the device embedding it was manufactured. The severity of these decisions dials up a notch when it comes to life-changing outcomes, such as the choice of who should live or die in a car accident. How do you decide?

What’s more, critics are concerned about AI’s capacity for self-development through ‘deep learning’. What if we cannot control our AI? Should we impose some limits? If so, what limits?

And what about the issue of AI being used as a weapon? Suggestions to limit some very specific applications of AI seem to merit much closer examination and action. A major case in point is the development of autonomous weapons that employ AI to decide when to fire, how much force to apply, and on what targets.

In our video this week we answer the question: is effective AI regulation possible?

We would also love to hear what you think, so share your thoughts with us!

If you need advice on your AI product, Aphaia offers both AI ethics assessments and Data Protection Impact Assessments.
