Report cautions that major tech firms have excessive influence over AI standards

The analysis suggests that the AI Act takes a business-friendly stance, leaving complex fundamental-rights questions to be resolved by standard-setting bodies.

A study by the advocacy group Corporate Europe Observatory (CEO) alleges that big tech firms have undue influence over the development of EU-recognized standards for artificial intelligence tools. The study found that more than half of the 143 participants in the Joint Technical Committee on AI (JTC21), set up by the European standardization bodies CEN and CENELEC, represent businesses or consulting firms.

Of these participants, nearly a quarter are representatives from US corporations, including Microsoft, IBM, Amazon, and Google. However, civil society representation is minimal, comprising only 9% of JTC21 members. CEO expresses concerns over the lack of inclusivity in the standard-setting process.

The AI Act, a groundbreaking attempt to regulate AI through a risk-based approach, entered into force last August, with its provisions taking effect gradually. In May 2023, the European Commission asked CEN-CENELEC and ETSI to draft the underlying industry standards, which will apply to a wide range of products, from medical devices to toys. Adherence to these harmonized standards ensures a product’s compliance with the essential safety requirements set out in EU rules.

Bram Vranken, a researcher and campaigner at Corporate Europe Observatory, calls the European Commission’s decision to delegate public policymaking on AI to a private body deeply problematic. He says this is the first time standard setting has been used to implement requirements relating to fundamental rights, fairness, trustworthiness, and bias.

National standard-setting bodies tend to prioritize process over particular outcomes, an approach that JTC21 chair Sebastian Hallensleben said could make enforcing an outcome more difficult. An AI system may comply with harmonized standards and receive a CE mark, but that is no guarantee it will be free of bias or discrimination.

CEO also examined the membership of national standard-setting bodies working on AI in France, the UK, and the Netherlands. In these countries, the proportion of experts representing corporate interests is approximately 56%, 50%, and 58%, respectively.

In response to CEO’s concerns, the Commission said it will assess the standards delivered by CEN-CENELEC and reference them in the Official Journal only if they properly address the objectives of the AI Act and adequately reflect the requirements for high-risk AI systems.

A senior official at the Dutch privacy watchdog Autoriteit Persoonsgegevens (AP) cautioned that the standard-setting process will need to be expedited because time is running out. Standardization processes typically take several years, he said, and that pace must be accelerated.

Jan Ellsberger, the chair of ETSI, noted that standardization is a voluntary, industry-driven process, and that the level of commitment from industry determines how quickly it moves.
