News and developments

New liability rules for artificial intelligence in the European Union

There’s a lot going on in the EU with regard to Artificial Intelligence. AI is at the heart of the EU’s strategy for creating a digital single market, and to this end a number of EU legal documents have seen the light of day over the last several years. In April 2021, the European Commission presented a revolutionary proposal for a regulation on AI (the Artificial Intelligence Act), laying the foundations for a legal framework for the use of AI within the European Union. Legislative work on the AI Act is already at an advanced stage, and the document is expected to come into force at any moment.

AI system output and civil liability

For some time, the Union has also been working on regulating civil liability in the context of AI, but earlier proposals did not ‘take hold’. These include the White Paper on Artificial Intelligence of February 2020 and the European Parliament’s resolutions of October 2020 on an ethical framework, civil liability and intellectual property rights for AI.

Regulations apply directly in every Member State, in the exact same way, whereas civil liability regimes vary greatly from one EU country to another. The proposed wording of the regulation was unfortunately completely incompatible with several of these regimes (including Poland’s). A much better method for the EU to regulate the issue of liability is via a directive, setting out certain standards and mechanisms, to be implemented by each Member State in a manner appropriate to its own law. This is the mechanism selected on this occasion.

On 28 September 2022, the European Commission adopted two proposals leading to the regulation of AI liability. One concerns the modernisation of existing rules on the strict liability of manufacturers for defective products, whereas the other proposes a new, separate directive on AI liability.

Artificial Intelligence Liability Directive

By its very title, the Artificial Intelligence Liability Directive (AILD) indicates that it concerns non-contractual liability.

In legal terms, the AILD primarily regulates tort liability or, to put it simply, liability for damage arising from random events or incidents between entities not bound by a contract. Such rules are necessary in a world where we are increasingly surrounded by AI.

So what torts can AI-powered devices commit against us? There are numerous examples, such as:

  • an autonomously driven car hits a pedestrian on a zebra crossing
  • an AI-controlled drone destroys a parcel in transit by dropping it from too great a height
  • an AI system handling company debt misidentifies a debtor and denies them access to services
  • an AI system for generating personalised medicines advises us to take a medicine that then causes harm

There will be many similar examples, and the AILD regulates liability in these types of situations.

However, the Directive does not regulate contractual liability. This means, for example, that if an organisation buys an AI system from an IT vendor and that system fails, then (as a general rule) the organisation cannot rely on the AILD and must instead seek redress via a well-written agreement, prepared by a lawyer who understands AI matters.

Presumption of causality at the core of AILD

Fundamental to the AILD is its Article 4, under which, if an injured person brings a compensation claim to court for harm caused by AI, the court should presume a causal link between the fault of the defendant using the AI and the AI system’s output (or failure to produce an output) which gave rise to the damage. In simple terms, it is the duty of the entity using AI to show that it should not be held liable for the harm caused by its AI, and not the other way around (because proving this would be too challenging or too expensive for the injured person).

AILD alleviates the burden of proof for victims

Courts hearing cases for compensation for damage caused by AI will be allowed to instruct defendants to disclose relevant evidence even if the injured person (the claimant) did not request disclosure or was not aware of its existence.

The AILD’s overarching goal is to make it as easy as possible for ‘ordinary people’ affected by malfunctioning AI used by businesses, including large corporations, to seek compensation, with the burden of proof shifted squarely onto the shoulders of the beneficiaries of AI.

Notably, the AILD directly references the AI Act and is built on the same concept, differentiating liability according to the type of risk posed by the system in question (high-risk vs. non-high-risk).

Thus, in relation to non-high-risk AI systems, the presumption of causality applies only if the court considers it excessively difficult for the claimant to prove a causal link. For high-risk AI systems, on the other hand, five requirements are laid down, and the presumption of causality applies only where at least one of them has not been complied with.

What next?

The Commission’s proposals now need to be adopted by the European Parliament and the Council. The publication of the Commission’s draft legislation will open discussions at EU and national levels, and this time around, should lead to a close alignment of legislative solutions with ‘real life’.

If you have any questions, visit www.kochanski.pl and contact our Technology Practice members.

The article was authored by Piotr Kaniewski ([email protected]) and Paulina Perkowska ([email protected]).