Trade marks and AI: liability and damages – an update
In her 2020 article “Trade marks and AI: Liability and damages”, MARQUES Cyberspace Team Chair Gabriele Engels discussed who could or should be liable when an AI gets something wrong and outlined the main liability issues and possible approaches. In view of the EU Commission’s proposed “strategy package” of September 2022 (the Product Liability Directive and the AI Liability Directive) as well as its 2021 AI Act, which is now in the so-called trilogue procedure, it is time to revisit the subject.
In the first part of this article, Gabriele focuses on the AI Act. The second part, to be published tomorrow, will cover the other relevant developments.
Background
For the past two years, the EU has been working hard on establishing a new, unified approach to regulating artificial intelligence.
As well as bringing existing legal regimes up to date with the 21st century, this ambitious project includes introducing brand-new legislation aimed specifically at artificial intelligence. Whilst the former goal is to be achieved by reforming the Product Liability Directive (COM(2022) 495 final), the latter centres on the creation of an AI Act (COM(2021) 206 final) and an AI Liability Directive (COM(2022) 496 final).
The AI Act will impose requirements on manufacturers and operators of AI aimed at preventing the violation of rights. Its approach is universally applicable, based solely on risk, and separate from any notion of fault.
Liability in cases in which AI is involved and gets something wrong is not the subject matter of the AI Act; those issues are addressed in the AI Liability Directive. Additionally, a revision of the Product Liability Directive would expand existing liability regimes to cover intangible products, such as AI systems.
AI Act
Following the unveiling of the Commission’s proposal in April 2021, the draft was extensively discussed in both the Council and the Parliament. After the Council adopted its position in December 2022, the European Parliament adopted its negotiating position on 14 June 2023. With trilogue negotiations between the three institutions as the next stage, in which a joint final text must be agreed, the EU moves another step closer to passing an EU Regulation on AI.
The initial Commission proposal encompasses a broad definition of AI, characterising it as a software-based technology that generates outputs through interactions with its surroundings. The draft Regulation establishes four distinct risk categories of AI: unacceptable, high, low, and minimal risk.
AIs categorised as low or minimal risk, such as chatbots and spam filters, are not obligated to fulfil any specific requirements other than transparency obligations.
Conversely, systems posing an unacceptable risk are outright prohibited. Systems are often categorised as posing an unacceptable risk where fundamental rights are significantly impacted (e.g., facial recognition programs deployed in the context of law enforcement which utilise real-time biometric data).
The focus of the Regulation is on high-risk AI systems, which are subjected to stringent requirements. Examples include security components embedded within other products, such as drones.
In its common position, the Council expanded on several key points of the draft Regulation. For instance, it narrowed the definition of AI to only include systems developed through machine learning, as well as logic- and knowledge-based approaches.
Additionally, requirements for high-risk AIs were clarified. Further, an additional layer was added to the classification to ensure that systems which nominally fall under the high-risk classification, but which in effect pose only a minimal risk, are not subject to the same arduous requirements. New provisions to enhance transparency and facilitate user complaints were also introduced.
The text ultimately adopted by the European Parliament on 14 June envisages further amendments. A key addition is a set of specific rules for generative AI, such as ChatGPT. Such systems would have to comply with additional transparency obligations, including the requirement to disclose that content was generated by AI, the prevention of illegal content generation through design precautions, and the duty to publish summaries of copyrighted data used for training purposes.
The list of prohibited unacceptable AI systems is also expanded to include inter alia predictive policing systems, emotion recognition systems in a variety of scenarios (including law enforcement and in the workplace), and the indiscriminate scraping of biometric data from social media or CCTV footage to create facial recognition databases.
Additionally, the high-risk category is expanded to encompass systems which harm people’s health, safety, fundamental rights or the environment, as well as AI which influences political campaigns and AI used in recommender systems by very large social media platforms within the meaning of the Digital Services Act, i.e. platforms with more than 45 million users in the EU per month.
A compromise must now be reached between the three drafts to produce a final text which can then be voted into law.
UK approach
The EU approach stands in stark contrast to the one adopted by the UK. In a post-Brexit world, the UK has opted for a light-touch approach which aims to avoid stifling innovation under the burden of an entirely new regulatory regime.
For this reason, the Policy Paper presented to Parliament by the Secretary of State for Science, Innovation and Technology in March 2023 does not propose creating new laws or empowering a new regulator. Instead, existing regulators are to be entrusted with responsibility for establishing approaches tailored to the way AI impacts their individual sectors.
The danger of multiple, highly divergent or even unintentionally overlapping regimes will be mitigated by the implementation of overarching core principles related inter alia to transparency, security, safety and fairness.
In essence, this revised Policy Paper does not differ significantly from the Policy Paper released in 2022 which was met with support from industry players. The revised Policy Paper additionally identifies central support functions to ensure a level of regulatory coherence between sectors.
Gabriele Engels is Partner at D Young & Co and Chair of the MARQUES Cyberspace Team. Read the second part of this article on the Class 46 blog tomorrow.
Posted by: Blog Administrator @ 13.14
Tags: AI, AI Act, Product Liability
Perm-A-Link: https://www.marques.org/blogs/class46?XID=BHA5198