I have been working on behalf of Equinet, which holds liaison status in Joint Technical Committee 21 on Artificial Intelligence (JTC21) of the European Committee for Standardization (CEN) and the European Committee for Electrotechnical Standardization (CENELEC). JTC21 is tasked with producing harmonised standards to operationalise the ‘essential requirements’ set out in the EU AI Act. Here are a few of my initial reflections and recommendations:
You’ve got to be in it to win it
The New Legislative Framework (NLF) approach, which limits legislation to broad ‘principles’ that are then elaborated through technical standards, has historically focused on developing standards for the protection of health and safety. The AI Act takes this approach in a novel direction. By seeking also to protect fundamental rights (including the rights to non-discrimination and equality), it turns standards into a novel form of “soft law”, requiring input from experts who understand the rule of law and human rights law, and who appreciate the impacts of AI on people’s lived experience in real-world contexts. New fields of knowledge and understanding are therefore needed.
Unlike law-making, standards development remains opaque to outsiders. Even for those involved, there are hurdles to overcome: informational barriers, document overload, registration requirements, and the sheer difficulty of navigating meetings. All of this makes participation difficult, especially for not-for-profit newcomers.
The opacity of the process and the obstacles to participation in the development of European standards raise a serious question of legitimacy. Liaison status merely allows an organisation to participate; only national standards bodies can vote to approve new European standards. This tiered voting structure risks excluding genuine engagement from representative stakeholders who can provide an all-important (and previously unheard) voice on the impacts of AI on real people in the real world. Only by enabling wider civil society engagement (not mere participation) can these standards acquire greater legitimacy and better meet their objective of protecting against and preventing harms to health, safety and fundamental rights.
High Stakes
For Big Tech firms, shaping international standards is a high-stakes endeavour with profound implications for their market dominance, business models, and future profitability. Unsurprisingly, their strategic engagement in standards-making to protect commercial interests and the bottom line is intense.
This means that civil society actors who are able to engage in the standards-making process need to contribute with commensurate strategic focus. They must be armed with a deep understanding not only of the law and the regulatory obligations that tech firms will face, but also of the rules and procedures of the European standardisation bodies. And they must be prepared to work with Big Tech to find consensus.
Home grown
Given that many solid, working international standards already exist that can be applied to AI technologies, we might question the need for European harmonised standards. The key reason is the novel inclusion of fundamental rights. International standards were created without specific regard to the EU AI Act’s requirements, and thus without consideration of the Charter of Fundamental Rights of the European Union.
Pushing for the adoption of international standards developed without adequate representation of European democratic and societal interests is not politically palatable. It risks enshrining standards misaligned with European values and with the requirements of the EU AI Act.
Time is of the essence: writing homegrown standards from scratch, independent of already established international standards, is unrealistic. A compromise achieved through rigorous modification and European contextualisation of internationally accepted standards (inclusive of representative voices to uphold ethical, AI Act and fundamental rights-based considerations) is both crucial and challenging, especially given the stakes.
What lies ahead
Following the drafting and consensus-building work, the draft European standards will be at the mercy of the National Standards Bodies (and their representatives), who will vote on the first working drafts.
Adoption of the JTC21 standards will be essential to ensure that the risks embedded in High-Risk AI systems, including the risks of fundamental rights violations, are mapped, managed, monitored and mitigated by providers.
Unless the standards provide meaningful and effective protection against, and prevention of, risks to health, safety and fundamental rights through ex-ante governance measures by design, the position for Europeans will not have improved. European standards-making and the New Legislative Framework approach will have failed to secure European values in homegrown standards.
Beyond 2024, key challenges will include effective oversight by the European Commission of the harmonisation of European standards to meet the AI Act’s requirements, the ability of Notified Bodies and Market Surveillance Authorities to carry out adequate conformity assessments against those standards, and the holding of operators of AI systems to account.
For more on this topic see also:
How can we guard against AI-generated discrimination? | OHRH