AI Outlook: Europe initiates AI regulation introducing the principle of trustworthy AI – Lexology
On February 19, 2020, the European Commission presented its White Paper on Artificial Intelligence – A European Approach to Excellence and Trust, a much-anticipated policy document setting out concrete measures and proposed regulation with the objective of promoting the development, uptake and use of AI applications, while also addressing the resulting fundamental rights challenges.
The document has raised concerns among companies about whether new rules on AI will negatively impact businesses developing or deploying AI solutions across the EU. Feedback on the white paper can be provided until May 19, 2020.
Key elements of the White Paper on AI
The white paper proposes a dual approach: it aims to establish an “ecosystem of excellence” on the one hand and an “ecosystem of trust” on the other.
1. Ecosystem of excellence
To promote the development and uptake of AI applications by European citizens, businesses and public institutions, the Commission proposes to:
- encourage synergies among centers for research and innovation to reduce fragmentation (including the creation of testing and experimentation sites for AI applications);
- further enhance cooperation between EU Member States;
- support upskilling initiatives for the workforce;
- promote adoption of AI solutions across sectors and organizations of various sizes (particularly SMEs); and
- expand investment in AI development and deployment.
The European Data Strategy, published alongside the white paper, aims to create a European single market for data, to facilitate access to data and computing infrastructures – an essential requirement for the development and use of AI applications.
2. Ecosystem of trust
To address the challenges development and deployment of AI applications may pose in relation to fundamental rights, safety and other obligations, the white paper suggests concrete regulatory options. Importantly, the document states that the relevant EU legal framework should be principles-based and target so-called “high-risk AI systems.”
The document underlines that existing EU laws and regulations already apply to AI solutions, including rules on data protection (GDPR), consumer protection, safety and liability. The Commission asserts, however, that the AI-related aspects of these existing rules may be difficult to enforce due to the typical characteristics of AI systems, such as opacity, unpredictability, complexity, and autonomous behavior. Against this background, the Commission proposes to evaluate individually the need for legislative amendments or the introduction of new legislation, on the basis of the identification of specific risks.
Envisaged scope of application
With a view to such potential future regulation of AI, the white paper takes a risk-based approach (similar to that of the GDPR), particularly by singling out “high-risk applications,” which could become subject to the most stringent requirements. The Commission states that an AI application should be considered “high-risk” if two cumulative criteria are met:
- The sector involves significant risk: This will be the case if, given the characteristics of the activities typically undertaken in that sector, significant risks can be expected to occur. The white paper states that these sectors should be specifically and exhaustively listed in the potential future legislation, mentioning healthcare, transport, energy and parts of the public sector as examples.
- The intended use involves significant risk: This entails that the AI application in the sector in question should be used in such a manner that significant risks are likely to arise. By including this second criterion, the Commission aims to acknowledge that not every use of AI in the selected sectors necessarily involves significant risks that would in turn justify legislative intervention.
Importantly for companies, the white paper proposes to introduce certain exceptions, irrespective of the sector concerned, where the use of AI applications would be considered “high-risk as such” (e.g. the use of facial recognition technology, or the use of AI for recruitment purposes).
Envisaged obligations for high-risk AI applications
The Commission proposes that the future regulatory framework for high-risk AI applications could impose mandatory legal requirements in the following key areas (which echo the requirements set out by the AI HLEG in its Ethics Guidelines):
- Training data: The Commission considers requirements regarding the quality of training data (representative and comprehensive data-sets) and compliance with privacy and data protection rules.
- Data and record-keeping: The paper considers the introduction of requirements to keep records of the selection process and the characteristics of training and testing data, and the methodologies used for programming and training. These records should enable regulatory review and enforcement by allowing AI decisions to be traced back and verified.
- Information to be provided: The document recommends requirements related to transparency (e.g. information provision on capabilities and limitations of AI systems), and a notice requirement when citizens would interact with an AI system rather than a human being.
- Robustness and accuracy: The Commission considers requirements on robustness and (the level of) accuracy, reproducibility of outcomes, ability to react to errors and inconsistencies, and resilience against attacks and manipulation of data or algorithms.
- Human oversight: The white paper considers the imposition of some degree of human oversight and suggests targeted requirements, depending on the specific circumstances, ranging from requiring human review before a decision is implemented, to the possibility of human intervention in real-time or afterwards.
- Specific requirements for remote biometric identification (i.e. facial recognition): An earlier, leaked draft version of the white paper considered a three- to five-year ban on facial recognition technology, a proposal that featured prominently in the public debate in recent weeks. In the official version of the document, however, the Commission took a step back: it no longer proposes a concrete ban, but instead refers to current EU data protection rules and the Charter of Fundamental Rights, which already permit the use of remote biometric identification only where it is justified, proportionate and subject to adequate safeguards. As a next step, the Commission wishes to launch a public discussion on which exceptions, if any, might justify the use of AI for remote biometric identification.
As such requirements would be imposed only on “high-risk” AI applications, the Commission suggests a prior conformity assessment, possibly including procedures for testing, inspection or certification of algorithms and data sets. For AI applications that would not qualify as “high-risk,” the Commission is considering a voluntary labeling scheme that would certify compliance with (parts of) the requirements and allow companies to market their AI products as “trustworthy.”
Expected business implications
The additional policy and regulatory measures considered in the white paper will increase the cost of compliance and the administrative burden on companies operating in a variety of sectors when they develop or deploy AI systems.
The suggested requirement for firms to possibly submit their AI products and services to a conformity assessment before being allowed entry to the EU market would significantly increase the costs and time required for firms to deploy new AI applications, and might pose IP-related difficulties.
By proposing the two cumulative requirements of (i) belonging to a certain high-risk sector and (ii) being intended for a certain high-risk use, the Commission seeks to avoid designating entire sectors as “high-risk,” recognizing that the level of risk associated with AI applications in a given sector may range from low to high. However, this approach may also create uncertainty, as some AI applications may be considered high-risk in one sector but not in another; further detailed regulatory guidance will likely be needed to make this determination.
Further, a major uncertainty for companies developing or deploying AI applications lies in the Commission’s consideration of “certain exceptions,” where the use of an AI application would be considered to be “high-risk as such.”
The European Commission’s public consultation on the white paper on AI runs until May 19, 2020.
The European Parliament is also working on a number of AI-related policy and legislative dossiers with major implications for organizations, covering various aspects of AI.
The resulting reports may feed into a motion for a resolution, which would allow the European Parliament to direct the Commission’s and Member States’ attention to the matter. A request for a legislative proposal, if taken up by the Commission, would prompt the Commission to initiate legislation.
Organizations should closely monitor these developments, and engage with the relevant decision-makers to ensure that their interests are represented.
What other Commission initiatives will affect the digital sector over the coming months?
The AI white paper was presented as one of the two initial pillars of a far-reaching EU Digital Strategy that sets out the Commission’s key objectives in the field of digital for 2020-2024. The second pillar is a European Strategy for Data, which aims to enhance the use of data by creating an EU single market for data. In parallel, the Commission published its report on the safety and liability implications of AI, the Internet of Things and robotics.
Moreover, the European Commission’s 2020 Work Programme contains several additional relevant initiatives that will considerably affect companies operating across the EU. The most notable initiatives for launch in Q1 2020 include the publication of a new industrial strategy and a dedicated SME strategy.
Previous EU activities related to AI
The political guidelines for the new European Commission (2019-2024) under Commission President Ursula von der Leyen initially announced the introduction of binding legislation on a “coordinated European approach on the human and ethical implications” of AI in a broad range of sectors. The AI white paper now indicates that the process of regulating AI has been slowed down. Instead of binding legislation, the Commission has put forward a non-binding policy document, which, however, does contain concrete proposals for AI regulation.
The 2020 Commission Work Programme, a document published annually and presenting the Commission’s key policy and legislative initiatives for the upcoming year, reiterates that binding legislation on AI will be proposed in Q4 2020, in particular regarding safety, liability, fundamental rights and data aspects.
These developments follow the AI-related activities of the previous European Commission, undertaken in cooperation with the EU Member States. Since April 2018, all Member States have committed to a Declaration of cooperation on AI, and a European AI Strategy has been adopted.
EU Member States have agreed a Coordinated Action Plan on AI, which will be updated in 2020. Apart from the Ethics Guidelines, the AI HLEG has also put forward policy and investment recommendations for Trustworthy AI. Another expert group compiled a report on liability for AI and other emerging technologies, containing suggestions for updates to EU and national liability regimes.