Caro Robson

Beyond GDPR: product safety and AI governance

25 November 2024



I was at a summit last week to speak about AI ethics when I was blindsided by the speaker before me, who insisted that ‘ethics’ is just vague moralising, not solid regulation, and that only technical standards are concrete.


I responded by explaining that ethics are primarily regulated by fundamental rights legislation, such as the EU’s GDPR, whereas safety is regulated by product safety legislation, such as the EU’s new legislative framework. AI regulation – and the AI Act explicitly – requires both.


I share this humble brag because it confirmed my sense that anyone involved in AI governance must get to grips with both fundamental rights and product safety approaches to regulating technology.


Today’s blog sets out some of the key elements of product safety regulation, drawn from Basics of Quality Infrastructure in Trade, the brilliant guide from the UNECE Working Party on Regulatory Cooperation and Standardization (WP.6). (Full disclosure: I’m a member of one of WP.6’s groups of experts.)


Product safety and the EU’s AI Act


The EU’s AI Act explicitly combines product safety and fundamental rights regulation, referring to the group of EU instruments known as the ‘New Legislative Framework’ (see, for example, Recital 9 of the AI Act).


According to the EU Commission:

Adopted in 2008, the new legislative framework […] is a package of measures that aim to improve market surveillance and boost the quality of conformity assessments. It also clarifies the use of CE marking and creates a toolbox of measures for use in product legislation.


The AI Act references the new legislative framework throughout the text: any time the words ‘conformity assessment’, ‘placed on the market’ or ‘market surveillance’ appear, the new legislative framework is behind them.


But what do these ‘product safety’ terms mean?


The whole regulatory ecosystem for product safety is known as ‘quality infrastructure,’ described as “the unsung hero of trade” by the UNECE.  


According to the UNECE: “Quality infrastructure is comprised of regulations, structures and bodies (such as accreditation, metrology, standards development bodies) that exist in a country/economy for supporting trade on a fair market to promote safe products and services in a sustainable society.”


Regulations may take the form of specific legislative instruments, like the EU’s new legislative framework, but often involve setting technical standards for manufacturers or product providers to meet when designing and building their products.


Standardisation is often a joint process between technical authorities, such as national standards bodies like the BSI (the British Standards Institution) or regional bodies like the European Committee for Standardization (CEN). In the case of AI, the EU’s standards are being drafted by Joint Technical Committee (JTC) 21 of CEN and CENELEC (the European Committee for Electrotechnical Standardization).


Technical regulations are defined by the World Trade Organization’s Technical Barriers to Trade Agreement as a “document which lays down product characteristics or their related processes and production methods, including the applicable administrative provisions, with which compliance is mandatory.”


Technical or safety standards (such as ISO/IEC 15445:2000, which defines HTML) are often distinguished from risk management standards (such as ISO/IEC 42001, ISO’s AI management system standard). The latter sets out a governance or risk management framework, whilst the former requires that products meet a particular measurable benchmark before being placed on the market.


But how do you measure against standards?


Metrology is the science of measurement and its application. Metrologists specialise in measuring and checking products, food supplies, software… anything requiring measurement against verifiable standards.


Conformity assessment is the act of checking an item against a verifiable standard. It can take place before an item is placed on the market, for example to demonstrate compliance and obtain a conformity marking such as the CE mark, or after a product is placed on the market, to ensure that nothing unsafe is being used.


Conformity assessment can include testing, inspection, certification and verification. Bodies carrying out conformity assessments must themselves be accredited by their national accreditation bodies, to ensure that their assessors meet national conformity assessment standards. An example at the EU level is EA, the European Cooperation for Accreditation, which oversees national accreditation bodies.
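
To make this concrete, here is a toy sketch in Python of what a pre-market conformity check boils down to: measuring a product’s attributes and comparing them against the verifiable limits a technical standard sets. The attributes, limits and names below are invented for illustration; real conformity assessment is, of course, far more involved.

# A toy sketch of pre-market conformity assessment: comparing measured
# product attributes against the verifiable limits a technical standard
# sets. All attribute names and limits are invented for illustration.

from dataclasses import dataclass

@dataclass
class Requirement:
    attribute: str   # what the assessor measures
    minimum: float   # lower limit set by the (hypothetical) standard
    maximum: float   # upper limit set by the (hypothetical) standard

# Hypothetical limits standing in for a published technical standard.
STANDARD = [
    Requirement("operating_temperature_c", -10.0, 40.0),
    Requirement("leakage_current_ma", 0.0, 0.5),
]

def assess_conformity(measurements: dict) -> bool:
    """Return True only if every measured attribute is within limits."""
    for req in STANDARD:
        value = measurements.get(req.attribute)
        if value is None or not (req.minimum <= value <= req.maximum):
            print(f"FAIL: {req.attribute} = {value}")
            return False
        print(f"PASS: {req.attribute} = {value}")
    return True

# A product passing every check could then carry a conformity marking.
sample = {"operating_temperature_c": 21.5, "leakage_current_ma": 0.3}
print("Conformant" if assess_conformity(sample) else "Not conformant")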


How do metrologists decide which products to check after they are placed on the market?


Market surveillance is the process by which metrologists, standards bodies and any government agencies involved in product safety determine which items to check after they are placed on the market. It involves a high degree of cooperation between authorities, and risk assessment of the likelihood and impact of harm being caused to individuals by unsafe products.
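
As a rough illustration, the triage behind market surveillance can be sketched as a classic risk matrix: score each product by the likelihood and impact of harm, then inspect the highest-scoring first. The products and scores below are invented; real surveillance programmes weigh many more factors.

# A minimal sketch of risk-based market surveillance triage: score each
# product by likelihood and impact of harm, then inspect the riskiest
# first. The products and 1-5 scores are invented for illustration.

products = [
    # (name, likelihood of harm 1-5, impact of harm 1-5)
    ("toy drone",     4, 3),
    ("kitchen scale", 2, 1),
    ("space heater",  3, 5),
]

def risk_score(likelihood: int, impact: int) -> int:
    """Classic risk-matrix score: likelihood multiplied by impact."""
    return likelihood * impact

# Sort descending so surveillance effort goes to the highest risk first.
for name, likelihood, impact in sorted(
        products, key=lambda p: risk_score(p[1], p[2]), reverse=True):
    print(f"{name}: risk {risk_score(likelihood, impact)}")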


The AI Act


The EU’s AI Act is not just about fundamental rights. Whilst a fundamental rights approach like that of the GDPR is central to the Act (see, for example, Art 27 on fundamental rights impact assessments for high-risk systems), the AI Act goes further. It includes a number of product safety requirements from the new legislative framework, particularly around high-risk AI systems (see, for example, Arts 16-17 on the obligations of providers of high-risk systems, and the quality management system they must maintain).


Safety and risk management standards for AI systems are currently being developed by JTC 21 (CEN and CENELEC’s joint technical committee) and are due in April 2025. Complying with those standards will provide a presumption of conformity with the Act. However, anyone involved in AI governance should not wait until April to begin creating quality management processes as part of their AI governance programme.
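
For anyone wondering where to begin, here is a minimal, hypothetical sketch of tracking quality management tasks within an AI governance programme. The checklist items loosely paraphrase themes from Art 17 of the AI Act; they are illustrative, not the Act’s exact requirements.

# A hypothetical sketch of tracking quality management tasks in an AI
# governance programme. The items loosely paraphrase themes from Art 17
# of the AI Act; they are illustrative, not the Act's exact text.

from datetime import date

qms_tasks = {
    "risk management process documented": True,
    "data governance procedures in place": False,
    "technical documentation maintained": True,
    "post-market monitoring plan drafted": False,
}

def report(tasks: dict) -> None:
    """Print a simple readiness summary for the quality management system."""
    done = sum(tasks.values())
    print(f"QMS readiness on {date.today()}: {done}/{len(tasks)} tasks complete")
    for task, complete in tasks.items():
        print(f"  [{'x' if complete else ' '}] {task}")

report(qms_tasks)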


The EU is not alone in taking a product safety approach to regulating AI. Proposed legislation in Canada, Brazil, China, Singapore and Japan, as well as aspects of US policy, reflects product safety approaches to AI. Following the Bletchley Declaration, the UK is also considering legislating to promote safety in high-risk systems.


AI governance, particularly for high-risk systems, requires understanding both fundamental rights and product safety approaches to regulating technology.


The UNECE’s Basics of Quality Infrastructure in Trade is a great starting point to find out more about product safety regulation, compliance and enforcement. I highly recommend it to anyone working in AI governance, especially where high-risk systems are involved.


Now is the time to start getting to grips with product safety as a core part of AI governance…

 
