The EU AI Office (EU AIO), which was established in January 2024, is tasked with - among other responsibilities - the supervision and enforcement of the AI Act’s (AIA) provisions relating to General Purpose AI (GPAI) models.
This will likely not be an easy task. Negotiations on GPAI models were highly polarized, with a myriad of stakeholders hotly debating definitions, processes and institutional frameworks.
Now that we have a final agreement - with GPAI models being categorized into those with and without “high impact capabilities” based on a compute threshold - the real work begins for the AI Office.
Based on appliedAI’s exchanges within the ecosystem, and in line with media reports and research, there are at least three interrelated questions that the EU AIO will have to answer in the coming months.
1. How will the office be led and staffed?
On April 10, Euractiv reported that MEPs from three parties have written to the Commission calling for transparency about who will lead the new office and how it plans to attract talent.
These questions matter because competition for AI talent is heating up around the world. The Financial Times and the Wall Street Journal reported in March that salaries for skilled AI researchers routinely pass the seven-figure mark. These reports, along with research from MacroPolo, also show that the US remains the most popular destination for these researchers.
The AIO will have to contend with these pressures as it attempts - with limited resources - to attract staff capable of engaging with some of the most complex scientific, technical and policy challenges. Who staffs the EU AIO will also play a crucial role in answering the next question.
2. What is the process for drafting the GPAI codes of practice (CoP)?
GPAI model providers will rely on the CoP to demonstrate compliance with their obligations under the AI Act. How these codes will be drafted in practice remains unclear.
According to the AIA, the EU AIO is merely tasked with “encouraging and facilitating” the drafting of the CoP. In practice, therefore, it will be the GPAI model providers themselves who will be responsible for drafting them. The role of civil society, academia and other stakeholders also remains unclear, with the AIA stating that they “may support the process.”
A recent report by Politico revealed that the UK AI Safety Institute (AISI) - which was tasked with testing and evaluating frontier AI models - was unable to access the models due to a breakdown in process.
To avoid similar failures within the short nine-month span available to the AIO to steer this process, it will have to create a framework that is transparent, equitable, and impactful.
3. How will the EU AIO balance fragmented international and EU level processes?
In his reflections on the AI Act, Kai Zenner noted that one of the challenges confronting the AI Act is the complex web of authorities responsible for oversight and enforcement, along with the AI Act’s interaction with other legislation, such as the GDPR.
These challenges have already manifested: TechCrunch documented in April how some GPAI model providers are being investigated by the Data Protection Authorities of several EU member states. In effect, the decisions of these and other authorities will have as much impact on GPAI model providers as the CoP.
These challenges for the EU AIO will compound as it attempts to harmonize its approach with international peer institutions like the US and UK AI Safety Institutes - both of which have different incentives and operating conditions - while simultaneously attempting to create a coherent EU single-market regime for GPAI providers.
These issues aren’t insurmountable
At appliedAI, we are committed to supporting the AI Office and other European and national institutions. Over the past few months, we have:
- Published a report on the state of generative AI in Europe
- Gathered experts from European GPAI providers to engage policy makers in Europe
- Put out a call to hire GenAI Regulatory Engineers who are capable of bridging the gap between technical and policy expertise
- Supported European enterprises in making decisions about Trustworthy LLM applications
We look forward to supporting and engaging with the AI Office, industry, and civil society - via technical solutions, policy options and partnerships - over the next year as the Office prepares to undertake the task of supervising GPAI models in Europe.