After careful preparation, the High-Level Expert Group on Artificial Intelligence, appointed by the European Commission, published its Ethics Guidelines for Trustworthy AI on 8 April 2019.
Throughout the project, I have had the opportunity to exchange ideas with several of the experts in this group, and I think that we can be very proud of the Finnish contribution and of the outcomes of the group’s work.
The report has already been summarised and commented on, mostly in a positive tone, though opposing views have also been expressed, some of them from inside the group itself. I will try not to repeat the previous analyses and will instead focus on a few conclusions that should be acknowledged when discussing the implementation of the report.
I see the following points to be the key accomplishments of the report:
- It outlines an extensive, core value-based framework for conceptualising the ethics of artificial intelligence and identifying trustworthy AI as lawful, ethical and technically robust.
- It gives concrete guidance on how to operationalise Trustworthy Artificial Intelligence through the implementation of seven key requirements. All seven requirements are of equal importance and apply to all stakeholders. Appropriate steps must be taken to ensure the requirements are met throughout an AI system's entire lifecycle.
- The assessment list provides a tangible checklist for organisations to evaluate how well their own applications meet the requirements.
- The high-level expert group invites European AI stakeholders to test the assessment list in practice and to provide feedback on its workability and feasibility.
Whilst reading the report, I identified themes that the guidelines do not address directly but that are crucial for the future. These are the main topics that I hope will stimulate broader discussion:
We must be able to identify those AI applications that, due to their nature and impact, require special attention. It is also essential to understand that not all applications carry the same level of impact or risk. Hence, imposing one-size-fits-all requirements on all AI applications will only result in a regulatory monstrosity.
Canada has adopted a different approach. Applications using automated decision-making are organised into four levels depending on the scope and irreversibility of their impacts. Similarly, in the United States, a proposed bill on impact assessments would limit the applicability of the requirements to applications that affect a significant number of people. Using AI to help a customer service chatbot find information, or to optimise bus routes, is an entirely different matter from assessing the need for child protection or identifying risk groups for serious diseases. It is therefore essential that methods similar to those being deployed in North America are also taken into use in Europe, so that we can better direct our focus towards the right areas.
The implementation of the ethical requirements calls for sector-specific practices. While the report offers solid, cross-cutting guidelines, and sets a foundation for discussion, its implementation calls for sector-specific application practices. As pointed out in the report, general guidelines can never replace context-specific ethical assessments. On the one hand, this has to do with the sector-specific features of AI applications, and with the sector-specific legislation on the other.
It is impossible to draw a clean line between lawfulness and ethics. In fact, the report highlights how strongly the two are intertwined, but it concentrates on ethics rather than on lawfulness.
One way of examining the two is by connecting legislation with ethical requirements. An ethics study, conducted as part of the preliminary report for the national Aurora AI project, showed that ethical requirements in the public sector are already governed by various pieces of legislation, from general acts to the Personal Data Act and the Information Management Act. While the role of ethical guidelines is to outline general principles for issues not yet governed by legislation, their core role is also to reinforce the importance of lawfulness and to ensure consistent interpretation of the law within the context of artificial intelligence. Like most of our legislation, ethical practices should also be sector-specific. Only in this way can horizontal instructions be translated into practices that meaningfully guide the ethical assessment and risk management of the applications characteristic of each sector.
The public sector must lead the way in the EU's pilot programme on AI ethics. The report's foundations lie in international human rights law and in the EU Charter of Fundamental Rights. We must construct, and build on, the age of artificial intelligence via means that respect human dignity, freedom, democracy, justice, equality, solidarity and citizens' rights. This requires that citizens be informed about the ways the public sector makes use of data and about the processes that monitor and ensure its ethical use.
The topic has been widely discussed outside Europe, too. For instance, in New York there has been strong debate over the city's use of artificial intelligence in numerous applications without disclosing sufficient information about the nature and purpose of these applications to citizens and independent assessors.
Finland should tread carefully when acting on the matter. Several examples from different parts of the world have shown that withholding information is not the way to build trust in artificial intelligence. This is an opportunity for Finland to take the lead in Europe: how can we include citizens and create shared best practices for informing them about AI and algorithmic decision-making? The EU ethics pilots provide a great platform for testing and practising our approach: we should implement pilot projects in ways that include citizens and make Finland an internationally recognised example of putting human-centric AI and ethics into practice.
The expert group, consisting of 52 independent experts and led by Pekka Ala-Pietilä, started work on the Ethics Guidelines for Trustworthy AI last summer. The group's other Finnish member is Leo Kärkkäinen from Aalto University.
Chair of the Ethics Working Group of the Artificial Intelligence Programme
IEEE ECPAIS Chair