The European Commission has published an update outlining the next steps for its efforts to build trust in AI and machine learning by creating what it terms “a European approach to artificial intelligence”. The plans have attracted criticism from the Center for Data Innovation, a non-profit research institute.
In 2018 the EC established a High-Level Expert Group (HLEG), which was tasked with creating policy and investment recommendations to help the Commission deal with the technological, ethical, legal and socio-economic challenges that can arise from the broader use of AI. Upon launch, the EC said it wanted to strengthen the EU’s competitiveness in this area.
Building on the work of the HLEG, the Commission launched a pilot phase to ensure that the ethical guidelines for AI development and use could be implemented in practice. The plans announced this week are a deliverable under the AI strategy of April 2018, which aims at increasing public and private investments to at least €20 billion annually over the next decade, making more data available, fostering talent and ensuring trust.
The EC says it is taking a three-step approach: setting out the key requirements for “trustworthy” AI; launching a large-scale pilot phase to gather feedback from stakeholders; and working on international consensus-building for “human-centric artificial intelligence”.
The report sets out seven “essentials” for achieving “trustworthy” AI, noting that it should respect all applicable laws and regulations. It says that AI systems should enable equitable societies by supporting human agency and fundamental rights, and should not decrease, limit or misguide human autonomy. Algorithms are required to be secure, reliable and robust enough to deal with errors or inconsistencies during all phases of an AI system’s life cycle.
The EC also says citizens should have full control over their own data, that data concerning them should not be used to harm or discriminate against them, and that the traceability of AI systems should be ensured. AI systems should also be required to consider the whole range of human abilities, skills and requirements, and to ensure accessibility, it adds.
AI systems should be used to “enhance positive social change and enhance sustainability and ecological responsibility”, it continues, adding mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.
The EC says it will launch a pilot phase in the summer of 2019 with a wide range of stakeholders; this will include HLEG members helping to present and explain the guidelines to the relevant stakeholders in member states. The pilot phase will also involve companies from other countries and international organisations.
The EC also says it wants to build consensus for “human-centric” AI, which will bring “AI ethics” to the global stage “because technologies, data and algorithms know no borders”. To this end, the Commission says it will strengthen cooperation with like-minded partners such as Japan, Canada and Singapore, and continue to play an active role in international discussions and initiatives, including the G7 and G20.
Whilst welcoming “a number of improvements” from the draft released in December, and the approach of the EC, which she says normally “regulates first and asks questions later”, Eline Chivot, senior policy advisor at the Center for Data Innovation, says in a statement that the document “falls short in a number of areas”.
Chivot highlights that the plan acknowledges the trade-off between enhancing a system’s explainability and increasing its accuracy. She also notes that it “rightly” acknowledges that its principles remain abstract, does away with the poorly defined “principle of beneficence”, and no longer associates “nudging” with “risks to mental integrity”.
She adds, however, that the document incorrectly treats AI as inherently untrustworthy, and that it argues the principle of explicability is necessary to promote public trust in AI systems, a claim she says is unsupported by evidence. These are areas she believes should be changed in the next report.
“Most importantly, the belief that the EU’s path to global AI dominance lies in beating the competition on ethics rather than on value and accuracy is a losing strategy,” Chivot says. “Pessimism about AI will only breed more opposition to using the technology, and a hyper focus on ethics will make the perfect the enemy of the good.
“The HLEG’s report does not reflect the official view of the European Commission, although it does reflect the conventional wisdom of many European policymakers,” she adds. “Therefore, we encourage the Commission to move past this report and take concrete steps to meaningfully support the development and deployment of AI with additional policy and investment decisions.”