EU AI Act: 3 Questions for the European Commission’s AI Office

After lengthy negotiations, there is finally an agreement on General Purpose AI (GPAI) models, and the EU AI Office (EU AIO), established in January 2024, now faces a series of challenging decisions. These have already raised questions from the media and research organizations, and our experts have three questions of their own.


In this article

  1. How will the office be led and staffed?
  2. What is the process for drafting the GPAI codes of practice (CoP)?
  3. How will the EU AIO balance fragmented international and EU-level processes?
  4. These issues aren’t insurmountable

The EU AI Office (EU AIO), which was established in January 2024, is tasked with - among other responsibilities - supervising and enforcing the provisions of the AI Act (AIA) that relate to General Purpose AI (GPAI) models.

This will likely not be an easy task. Negotiations on GPAI models were highly polarized, with a myriad of stakeholders hotly debating definitions, processes and institutional frameworks.   

Now that we have a final agreement - with GPAI models being categorized into those with and without “high impact capabilities” based on a compute threshold - the real work begins for the AI Office. 
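
To make the compute-based categorization concrete, here is a minimal sketch in Python. It reflects the final agreement's presumption that a GPAI model trained with more than 10^25 FLOP of cumulative compute has "high impact capabilities"; the provider names and compute figures in the example are purely illustrative, not actual disclosures.

```python
# Minimal illustrative sketch (not the AI Act's own text or tooling): the Act
# presumes a GPAI model has "high impact capabilities" when the cumulative
# compute used for its training exceeds 10^25 FLOP. Provider names and figures
# below are hypothetical.

HIGH_IMPACT_THRESHOLD_FLOP = 1e25  # training-compute threshold in the final agreement

def has_high_impact_capabilities(training_compute_flop: float) -> bool:
    """Presume 'high impact capabilities' when training compute exceeds the threshold."""
    return training_compute_flop > HIGH_IMPACT_THRESHOLD_FLOP

# Hypothetical providers and training-compute estimates (illustrative only)
models = {
    "provider-a/base-model": 4.0e24,
    "provider-b/frontier-model": 2.1e25,
}

for name, flop in models.items():
    category = "high impact capabilities" if has_high_impact_capabilities(flop) else "standard GPAI"
    print(f"{name}: {flop:.1e} FLOP -> {category}")
```

In the Act's scheme, models above this threshold carry the additional systemic-risk obligations, while those below remain subject to the baseline GPAI transparency duties.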

Based on appliedAI’s exchanges within the ecosystem, and in line with media reports and research, there are at least three interrelated questions that the EU AIO will have to answer over the coming months.

Learn more about appliedAI’s commitment to Trustworthy AI

1. How will the office be led and staffed?

On April 10, Euractiv reported that MEPs from three parties have written to the Commission calling for transparency about who will lead the new office and how it plans to attract talent.

These questions assume significance because competition for AI talent is heating up around the world. The Financial Times and the Wall Street Journal reported in March that salaries for skilled AI researchers routinely pass the seven-figure mark. These reports, along with research from the MacroPolo institute, also show that the US remains the most popular destination for these researchers.

The AIO will have to contend with these pressures as it attempts - with limited resources - to attract staff capable of engaging with some of the most complex scientific, technical and policy challenges. Who staffs the EU AIO will also play a crucial role in providing answers to the next question.

2. What is the process for drafting the GPAI codes of practice (CoP)?

GPAI model providers will rely on the CoP to demonstrate compliance with their obligations under the AI Act. How these codes will be drafted in practice remains unclear.

According to the AIA, the EU AIO is merely tasked with “encouraging and facilitating” the drafting of the CoP. In practice, therefore, it will be the GPAI model providers themselves who will be responsible for drafting them. The role of civil society, academia and other stakeholders also remains unclear, with the AIA stating that they “may support the process.”   

A recent report by Politico revealed that the UK AI Safety Institute (AISI) - which was tasked with testing and evaluating frontier AI models - was unable to access the models due to a breakdown in process. 

To avoid similar failures in the short nine-month span available to the AIO to steer this process, it will have to create a framework that is transparent, equitable, and impactful.


Credits: European AI Office

3. How will the EU AIO balance fragmented international and EU-level processes?

In his reflections on the AI Act, Kai Zenner noted that one of the challenges confronting the Act is the complex web of authorities responsible for its oversight and enforcement, along with its interaction with other legislation, such as the GDPR.

These challenges have already manifested, with TechCrunch documenting in April how some GPAI model providers are being investigated by the Data Protection Authorities of several EU member states. In effect, the decisions of these and other authorities will have as much of an impact on GPAI model providers as the CoP.

These challenges for the EU AIO will compound as it attempts to harmonize its approach with international peer institutions like the US and UK AI Safety Institutes - both of which have different incentives and operating conditions - while simultaneously attempting to create a coherent EU single-market regime for GPAI providers.

These issues aren’t insurmountable

 At appliedAI, we are committed to supporting the AI Office and other European and national institutions. Over the past few months, we have:

  1. Published a report on the state of generative AI in Europe
  2. Gathered experts from European GPAI providers to engage policy makers in Europe 
  3. Put out a call to hire GenAI Regulatory Engineers who are capable of bridging the gap between technical and policy expertise
  4. Supported European enterprises in making decisions about Trustworthy LLM applications

We look forward to supporting and engaging with the AI Office, industry, and civil society - via technical solutions, policy options and partnerships - over the course of the next year as the Office prepares to undertake the task of supervising GPAI in Europe.