EU AI Act – The Law on Artificial Intelligence
Recognize. Understand. Implement.
The EU has introduced the AI Act – a comprehensive law for artificial intelligence. Its goal is to combine innovation with safety by setting clear obligations, risk-based classifications, and harmonized rules for AI use across Europe.
The focus is not only on technology, but also on trust, responsibility, and transparency. Companies developing or using AI systems now need to act.
What does the AI Act regulate?
The AI Act defines four levels of risk:
- Prohibited AI – e.g., social scoring or real-time biometric identification in publicly accessible spaces (the latter permitted only in narrowly defined exceptions)
- High-risk AI – in areas like HR, education, healthcare, justice, or critical infrastructure
- Limited risk – e.g., chatbots, which are subject to transparency obligations and must be clearly identified as AI
- Minimal risk – e.g., AI in spam filters or video games, with no specific obligations
The higher the risk, the more extensive the requirements – especially for companies that develop, operate, or use high-risk AI systems.
Who is affected?
The AI Act applies to:
- AI system providers (e.g., manufacturers, developers)
- Operators (deployers), meaning companies that use AI in a professional context
- Importers and distributors of AI systems
- Non-EU companies, if their AI is used within the EU
The regulation thus has extraterritorial reach, effectively making it a de facto international standard.
Training obligation under Article 29 – and training recommendations
A key provision for providers of high-risk AI systems: Employees working with such systems must be appropriately qualified.
This requirement stems from Article 29(4) of the EU AI Act.
Providers must ensure that the persons concerned have sufficient knowledge, training, and experience, whether they work in development, application, or operations.
This includes roles such as:
- Data scientists, developers, and MLOps engineers
- Business departments using AI in processes (e.g., HR)
- IT administrators responsible for operations
- Project managers overseeing implementation
In addition to this mandatory training, further training is recommended for companies and employees who apply AI or integrate it into business processes. It helps them understand the regulation and develop the skills needed for responsible, informed use of AI.
Our Training Offer
We provide practical support: our training is designed for anyone preparing to use AI responsibly in the workplace.
Participants receive:
- Legal foundations
- Ethical guidance
- Organizational knowledge
enabling them to use AI applications responsibly, sustainably, and in compliance with the regulation.
This training provides essential orientation knowledge but does not replace the mandatory training for providers of high-risk AI systems.