Raluca Leonte – The Quiet Exit of the AI Liability Directive

The recent withdrawal of the proposal for an AI Liability Directive (“AILD”), put forward by the European Commission on September 28, 2022, seems to reopen a significant gap in EU AI governance. While the AI Act establishes rules for AI development and administrative sanctions, it lacks complementary civil liability mechanisms, raising concerns about consumer protection and even the legitimate interests of undertakings.

As things stand, European citizens face significant obstacles when seeking compensation in court for damage caused by AI, not least because most of us still cannot grasp the intricacies of AI technology. This technical opacity makes it particularly difficult for claimants to prove causation and attribute responsibility.

The AILD sought to adapt non-contractual civil liability rules to address the unique challenges posed by AI through the following key provisions:

First, the directive introduced a mechanism for the disclosure of evidence relating to high-risk AI systems (e.g., AI systems used in employment or systems that decide on people’s access to essential services). It empowered national courts to order the disclosure of relevant evidence where a claimant had first sought that evidence from the responsible parties without success, balanced by safeguards for trade secrets. Moreover, the AILD established a presumption of non-compliance with the relevant duty of care where a defendant (a provider, deployer or even user of an AI system) failed to disclose evidence as ordered by the court.

Secondly, it introduced a rebuttable presumption of a causal link between the defendant’s fault and the output produced by an AI system, or its failure to produce one. The presumption applied where the claimant demonstrated the defendant’s fault, consisting in a breach of national or Union rules intended to prevent the specific type of damage incurred.

As a novelty compared to the AI Act, the AILD also addressed the liability of non-professional users of AI systems, where they materially interfered with the operation of the AI system or where they were able to determine its conditions of operation and failed to do so.

There seemed to be a consensus on the advantages of the AILD, from the European Commission, the EU committees involved and the Parliament’s research services to public opinion. It therefore came as something of a surprise when the European Commission announced its intention to withdraw the proposal in its 2025 Work Programme, citing “no foreseeable agreement” as the reason.

At the time of the announcement, the AILD was, on its regular path to adoption, under consideration by the Internal Market and Consumer Protection Committee (“IMCO”), where the news divided members.

The Committee’s rapporteur supported the withdrawal, arguing that adopting the directive at this stage would be premature and redundant, given the recent enactment of the AI Act and the revised Product Liability Directive, which together already impose comprehensive rules on AI and software liability. The rapporteur’s opinion also expressed concern that another layer of regulation could hinder innovation, particularly for SMEs.

It is important to note that, in September 2024, a European Parliamentary Research Service study on the AI Liability Directive challenged this view, finding that the Product Liability Directive does not adequately cover certain types of damage caused by AI and that some AI systems, such as ChatGPT, might fall outside its scope, a view the Commission itself appeared to share in its initial justification for proposing the AILD.

In the most recent IMCO sessions, held on February 18 and March 18, 2025, a majority of members supported the withdrawal of the AILD. Still, representatives of the left and center-left groups argued for continuing the legislative process and for even more consumer-oriented amendments, such as:

  • Explicit inclusion of non-material damages such as psychological harm, discrimination, privacy violations and reputational damage in the definition of damages;
  • An obligation on AI providers or deployers to disclose relevant information before a potential claimant brings a claim for compensation in court;
  • Expansion of the directive’s scope to include all AI systems, not just high-risk ones;
  • Allowing, but not requiring, claimants to provide initial evidence supporting the plausibility of their claims;
  • Eliminating the protection of trade secrets during the evidence disclosure process.

While the European Commission’s withdrawal of the AILD has not yet materialized, the chances of the directive being enacted appear slim to none. Whether the AI Act and the revised Product Liability Directive will suffice to protect individuals from the black box that is AI technology remains to be seen, as AI becomes increasingly embedded in everyday life and, with it, new challenges constantly arise.


