Trustworthy AI - Tackling the AI Act with appliedAI

The AI Act poses a challenge for companies

The AI Act demands that companies address AI governance to comply with its regulations at both the organizational and use-case levels.

As a result, companies need to find answers to a variety of questions:

  • Which of our use cases fall into the high-risk category, and how do we make our AI use cases compliant?
  • How do we achieve compliance at the organizational level as quickly and cost-efficiently as possible, and how do we maintain it?
  • How do we upskill our AI experts, users, and employees on the new requirements?
  • How do we translate AI Act requirements to other AI regulations across the world?

appliedAI works on AI Act compliance in all relevant dimensions


First Steps Checklist: How to deal with the AI Act

  1. Establish a basic understanding of the AI Act
  2. Assess your AI use case portfolio in terms of risk categories and create a plan for cases falling under the “high risk” or “prohibited” categories
  3. Determine your organizational and MLOps AI Act readiness through the MLOps Assessment
  4. Decide on the maximum risk level per use case cluster that you allow in your organization
  5. Initiate a basic governance program for AI Act compliance
  6. Set up an MLOps process that fulfills the most basic AI Act requirements
Our participation in global AI expert networks

To ensure expert support for our customers, appliedAI experts work in and with institutions such as:

Dr. Andreas Liebl
Global Partnership on AI (GPAI)

Dr. Andreas Liebl, Managing Director and Founder of appliedAI, is part of the expert group for innovation and commercialization within the Global Partnership on AI (GPAI), a global initiative to promote the responsible and human-centric development and use of artificial intelligence.

Dr. Till Klein
OECD Working Party on Artificial Intelligence Governance (AIGO)

Dr. Till Klein, Head of Trustworthy AI at the appliedAI Institute for Europe gGmbH, is a member of the OECD Network of Experts on AI and supports trustworthy AI within the OECD Working Party on Artificial Intelligence Governance (AIGO). The working party oversees and gives direction to the Digital Policy Committee's (DPC) work programme on AI policy and governance.

Talk with Germany's top AI Act operationalization experts

Are you looking for more information about how your company can become compliant with the AI Act? Are you interested in learning more about how appliedAI can support you? Contact us now!

Or write us an email: info@appliedai.de.