As AI regulation moves towards a product safety approach, including conformity assessment under the AI Act, everyone in AI governance should understand quality infrastructure. The WP.6 publication Basics of Quality Infrastructure for Trade is a fantastic starting point, available here.
It’s not often I say this, but anyone working in AI governance should read this…
AI regulation has moved from addressing fairness to ethics, and now to AI safety. With this shift has come a move towards a product safety regulatory approach, particularly in the EU’s AI Act.
The AI Act draws on the EU’s New Legislative Framework, which aims to achieve product safety through EU-wide quality infrastructure. References to conformity assessment, accreditation bodies and market surveillance (particularly for high-risk AI systems) are all terms from product safety regulation: a different regulatory approach from one grounded purely in fundamental rights.
Anyone advising organisations on compliance with the AI Act, or on AI safety more generally, will need to become familiar with this area of regulation.
Quality infrastructure is the term used to describe the whole regulatory ecosystem for product safety, including metrology, market surveillance, conformity assessment and trade border control. Understanding its core concepts will be vital for any organisation working with AI systems, or with products with integrated AI, as more technical safety regulations are drafted to govern AI.
The UNECE’s Working Party 6 (WP.6) has produced a brilliant guide to the basics of quality infrastructure for trade. Written by world experts, it sets out the core elements of quality infrastructure in accessible terms, with real-world examples.
As harmonised standards are drafted by the CEN-CENELEC Joint Technical Committee 21 (JTC 21) in support of the AI Act, and global regulators begin to assess the need for safety controls around AI, this document is essential reading for anyone advising on AI governance, compliance and safety.