Google announces an AI advisory board – only to dissolve it
Google creates advisory board to monitor the ethical use of AI
In line with the draft AI Ethics Guidelines produced by the European Commission’s High-Level Expert Group on AI (AI HLEG) last December, Google and other Big Tech companies such as Amazon and Microsoft are taking steps toward the ethical use of AI. For its part, Google has created an external advisory board to monitor AI ethics at the company.
The GDPR states that the data controller shall implement suitable measures to safeguard the data subject’s rights, freedoms and legitimate interests in relation to the use of AI, which makes unbiased algorithms and balanced training datasets necessary. This is an example of privacy by design, which requires a privacy expert to monitor the process from the very first stage of a project.
Google also announced its AI Principles last June, with the aim of assessing AI applications against seven main objectives: be socially beneficial, avoid creating or reinforcing unfair bias, be built and tested for safety, be accountable to people, incorporate privacy design principles, uphold high standards of scientific excellence, and be made available for uses that accord with these principles.
Kent Walker, Senior Vice President of Global Affairs at Google, pointed to facial recognition and fairness in machine learning as some of the most relevant topics the advisory board would address. The board comprises international experts in the fields of technology, ethics, linguistics, philosophy, psychology and politics.
UPDATE: After several of its members drew wide criticism, Google has scrapped the initial board composition and gone back to the drawing board.