AI Assessment Framework for High-Risk AI Systems
On August 1, 2024, the AI Act officially came into force, introducing specific obligations depending on the type of AI system. The AI Act categorizes AI systems into three distinct types:
- General AI systems
- Specific AI systems
- High-Risk AI systems
Whether an AI system is classified as high-risk is determined by Article 6 of the AI Act. Specifically, Article 6, Section 2, outlines the criteria for classifying systems as high-risk, with further guidance provided in Annex III of the regulation. Each category of AI system is subject to unique requirements based on its risk level.
To assist you in determining if your AI system qualifies as a high-risk AI system, we have developed a decision tree that guides you through the classification process.
If you have questions or need further guidance, please don’t hesitate to contact us.
1. Risk Assessment (Article 6, Section 3 AI Act)
Does the AI system:
- Not pose a significant risk of impacting the health, safety, or fundamental rights of individuals?
- Not substantially influence the outcome of decision-making?
- Aim to perform a narrowly defined procedural task?
- Aim to improve the outcome of a previously completed human task?
- Aim to detect decision-making patterns or deviations from prior decision-making patterns, without being intended to replace or influence a previously completed human assessment without proper human review?
- Aim to conduct a preparatory task for an assessment?
If you can answer "Yes" to any of these questions, proceed to Section 3.
2. High-Risk Areas (Article 6, Section 2 AI Act, Annex III)
Is the system used in any of the following areas listed in Annex III? If you can answer "Yes" to any of them, your system is likely classified as high-risk.
- Biometrics, including remote biometric identification
- Critical infrastructure
- Education and vocational training
- Employment, workers' management, and access to self-employment
- Access to essential private and public services and benefits
- Law enforcement
- Migration, asylum, and border control management
- Administration of justice and democratic processes
3. Safety Component or Product in Critical Areas (Article 6, Section 1 AI Act, Annex I)
Is the AI system used as a safety component in any of the following products, or is the AI system itself one of these critical products?
- Machinery, interchangeable equipment, safety components, load-handling attachments, chains, ropes, belts, and detachable drive shafts
- Toys
- Recreational craft and personal watercraft
- Lifts
- Equipment and protective systems for explosive atmospheres
- Radio equipment
- Pressure equipment and boilers
- Cableway installations
- Personal protective equipment, including those for self-defense and weather protection
- Appliances burning gaseous fuels
- Medical devices
- In-vitro diagnostic medical devices
- Civil aviation systems
- Two-, three-, or four-wheeled vehicles
- Agricultural and forestry vehicles
- Marine equipment
- Railway systems
- Motor vehicles and trailers
If you answer "Yes" to any of these questions, your system may qualify as a high-risk AI system, and further assessment is required.
For complete certainty, feel free to contact us for expert guidance.
Contact Us for Customized AI Consulting!
Stay ahead of regulatory changes and unlock AI’s full potential while remaining compliant. Get in touch with our AI experts today for tailored guidance on implementing AI responsibly and in accordance with the AI Act.

Peter Suhren, Lawyer
Managing Director
Email: psuhren@re-move-this.first-privacy.com
Phone: +49 421 69 66 32-822
FIRST PRIVACY GmbH, Bremen

Cihan Parlar, LL.M. (Tilburg), Lawyer
Managing Director
Email: cparlar@re-move-this.first-privacy.com
Phone: +31 20 211 7116
FIRST PRIVACY B.V., Amsterdam
If your inquiry concerns an organization based in Germany, the following contacts will help you:

Sven Venzke-Caprarese, Lawyer
Managing Director
Email: svenzke-caprarese@re-move-this.datenschutz-nord.de
Phone: +49 421 69 66 32-318
datenschutz nord GmbH, Bremen

Dr. iur. Christian Borchers, Lawyer
Managing Director
Email: office@re-move-this.datenschutz-sued.de
Phone: +49 931 30 49 76-0
datenschutz süd GmbH, Würzburg