Trade marks and AI: liability and damages – update, part 2
In her 2020 article “Trade marks and AI: Liability and damages”, MARQUES Cyberspace Team Chair Gabriele Engels discussed who could or should be liable when an AI gets something wrong and outlined the main liability issues and possible approaches. In view of the EU Commission’s proposed “strategy package” of September 2022 (the revised Product Liability Directive and the new AI Liability Directive), as well as its 2021 proposal for an AI Act, which is now in the so-called trilogue procedure, it is time to revisit the subject.
In the first part of her update on this topic (available here), Gabriele focused on the AI Act. This is the second part of the article and covers the proposed AI Liability Directive and proposed revised Product Liability Directive, as well as providing some general conclusions.
Proposal for AI Liability Directive
The rise of AI has raised difficult questions of causation and proof, and with them the realization that existing liability regimes may not be equipped to deal with such uncertainties. To tackle these challenges, the Commission published a proposal for a new AI Liability Directive in September 2022.
Its objective is to establish clarity regarding liability for damages resulting from AI-enabled products and services. It seeks to enable users to receive compensation from technology providers for harm suffered while using AI systems. Such harm includes damage to life, property, health, or privacy due to the fault or negligence of AI software developers, providers, users, or manufacturers.
For the sake of consistency, the draft Directive incorporates several essential concepts outlined in the draft AI Act, including terms such as “AI system”, “high-risk AI system”, “provider” and “user”.
The proposal encompasses two key measures:
- a rebuttable presumption of causality that establishes a link between the failure of an AI system and the resulting damage, and
- access to information regarding specific high-risk AI systems.
Most current liability schemes require the claimant to prove a causal link between an act or omission of the other party and the damage suffered. However, the opacity of autonomous systems and artificial neural networks usually leaves the individual AI user unable to prove such causality.
Introducing a presumption of causality would help resolve this “black box” issue. Where a claimant can demonstrate that the AI system did not comply with the AI Act or other regulatory requirements, or where a defendant fails to disclose required evidence, a presumption will arise that the defendant breached its duty and that the damage suffered was caused by this breach.
The defendant will then have the opportunity to rebut the presumption, e.g. by proving that the fault could not have led to the specific damage.
It should be noted that this presumption of causality does not amount to a reversal of the burden of proof. The affected user must still prove the AI system’s non-compliance, that actual damage was suffered as a result of the system’s output, and that it is reasonably likely that the defendant’s negligent conduct influenced this output.
The second measure places a new obligation on the companies behind high-risk AI systems (those with an impact on safety or fundamental rights) to disclose technical documentation, testing procedures, and compliance information. This obligation is intended to make it easier to identify the party accountable for specific damage.
The Council of the EU and the European Parliament must now consider and adopt the draft text. Should the proposal be adopted, tech companies should brace for a rise in claims being brought against them: the introduction of rebuttable presumptions and disclosure obligations will make it considerably easier for injured parties to obtain compensation.
Proposal for revised Product Liability Directive
A Proposal for a revised Product Liability Directive was also introduced by the Commission in September 2022. This proposal complements the EU’s AI strategy by updating the Product Liability Directive of 1985 and making it fit for the digital age.
Whereas the current version of the Directive applies only to tangible products, the draft would expand its scope to cover intangible products, including software and AI systems. By accounting for cyber vulnerabilities, digital services necessary for the functionality of products, and updates to software and AI systems, the proposal adapts established liability rules to new technologies.
As under the AI Liability Directive, consumers will be granted access to information that defendants would not previously have had to disclose. This will facilitate the enforcement of their claims, making it easier to meet the burden of proof and increasing the chances of a successful compensation claim in complex cases.
The next step is for the Council of the EU and the European Parliament to consider and adopt their positions on the draft legislation.
Conclusion and outlook
The coming months and years will see significant changes to the legislative landscape applicable to artificial intelligence. As the EU strives to become a front-runner in regulating AI and to set an example for the rest of the world, companies must watch developments closely and implement changes accordingly.
This already includes putting in place contractual arrangements that address risk allocation and the distribution of liability between all parties involved in or influencing the AI system.
In the absence of specific regulations, the nature of the AI and its exact functioning should be specified in the agreement, i.e. whether it is fully independent or semi-independent. For semi-independent systems, the agreement should also indicate whether there is a substantial degree of human control “behind” the machine and how significant that control is.
In view of the EU’s risk-based approach, it is good to see that the EU is also seeking to implement substantive measures and processes (presumption rules, access to information and special rules for certain types of high-risk AI) that help to protect users and, in particular, injured parties. Such an approach is necessary to tackle liability issues and contributes to closing liability gaps for possible trade mark infringements (and not only those) committed through AI systems.
Gabriele Engels is Partner at D Young & Co and Chair of the MARQUES Cyberspace Team.
Posted by: Blog Administrator @ 09.13
Tags: AI, AI Act, Product Liability,
Perm-A-Link: https://www.marques.org/blogs/class46?XID=BHA5199