
The Brussels Sphinx’s Riddle: What Is a High-Risk AI System?

Andrea Bertolini; Federica Fedorczyk; Marta Mariolina Mollicone; Guilherme Migliora

2025-01-01

Abstract

The article examines the conceptual and normative “riddle” posed by Art. 6 of the EU Artificial Intelligence Act (AIA) in defining “high-risk” AI systems (h-AISs). It argues that the combination of a horizontal, technology-neutral framework with a risk-based classification generates significant interpretative uncertainty and undermines legal certainty. After situating the AIA within the broader EU product-safety regime and the New Legislative Framework, the contribution examines in detail the critical issues arising under Art. 6 AIA. These range from para. 2, which recalls the Annex III list of high-risk AI systems without resting on an objective assessment of risk, to the exceptions in paras. 3 and 4, and the cross-reference to Union harmonisation legislation in Annex I. Particular attention is paid to contested notions such as “safety component” and “third-party conformity assessment required”, illustrated through case studies (e.g. security mobile robots, humanoid robots, drone docking stations). The article concludes that this unstable definitional architecture undermines consistent application, equal treatment across sectors, and effective incentives for innovation.
File: 3-25-Bertolini-et-al.pdf (open access)
Type: Pre-print/Submitted manuscript
Licence: Public domain


Use this identifier to cite or link to this document: https://hdl.handle.net/11382/584432