EU AI Act: How to Prepare in 5 Steps
The EU AI Act is already in force, and some of its rules are immediately applicable. If you've been putting off preparation, don't worry -- you haven't missed the boat. But it's time to get started.
In this article, we present 5 concrete, actionable steps you can take as an SMB to prepare for the EU AI Act. You don't need to be a lawyer, and you don't have to do everything at once.
Why Act Now?
The regulation is being phased in, but two important things are already in effect:
- The ban on prohibited AI practices (from February 2, 2025)
- The AI literacy obligation (from February 2, 2025)
From August 2026, the full set of rules for high-risk systems will also become applicable. Those who start preparing now have plenty of time to proceed systematically.
The August 2, 2026 deadline will arrive faster than you think. Creating an AI inventory, organizing internal training, and developing documentation can take months. Start now.
The 5 Steps at a Glance
Create an AI inventory for your company
The first and most important step: know what AI tools you're using. In many companies, AI adoption spreads "from the bottom up" -- employees start using ChatGPT, Copilot, or other tools individually, without management being aware.
What to do specifically:
- Ask each team what AI tools they use (even informally)
- List all subscriptions, integrations, and internal tools that contain AI components
- Don't forget about embedded AI: CRMs, email systems, and accounting software increasingly include AI features
What to record for each tool:
- The tool's name and provider
- What you use it for
- Who on the team uses it
- What data it processes
- How frequently it's used
Create a simple spreadsheet (even in Google Sheets). No complex system needed. The point is to have a complete, up-to-date list of where and how you're using AI.
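If you prefer to keep the list in code rather than a spreadsheet, the same fields can live in a small script and be exported to CSV, which any spreadsheet tool can open. A minimal sketch; the column names and sample rows below are illustrative, not prescribed by the Act:

```python
import csv
from io import StringIO

# Columns mirror the fields listed above; adjust to your own needs.
FIELDS = ["tool", "provider", "purpose", "users", "data_processed", "frequency"]

inventory = [
    {"tool": "ChatGPT", "provider": "OpenAI",
     "purpose": "drafting and brainstorming", "users": "marketing team",
     "data_processed": "public marketing copy", "frequency": "daily"},
    {"tool": "Support chatbot", "provider": "(vendor name)",
     "purpose": "first-line customer support", "users": "support team",
     "data_processed": "customer questions", "frequency": "continuous"},
]

def write_inventory(rows, fileobj):
    """Write the inventory rows as CSV so they open in any spreadsheet."""
    writer = csv.DictWriter(fileobj, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)

buffer = StringIO()
write_inventory(inventory, buffer)
print(buffer.getvalue())
```

Swap `StringIO` for a real file (`open("ai_inventory.csv", "w", newline="")`) once you want a shareable copy.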
Classify into risk categories
Once you have the AI inventory, the next step is risk classification of each tool. The EU AI Act distinguishes four categories: unacceptable (prohibited), high risk, limited risk, and minimal risk.
How to do it:
- Review the AI inventory and for each tool, think through: what decisions does this AI make or support?
- If the AI makes decisions related to human resources, credit assessment, education, or law enforcement, it's likely high risk
- If the AI directly communicates with customers (e.g., chatbot), it's limited risk and has a transparency obligation
- If the AI is used for internal writing, summarizing, or data analysis, it's typically minimal risk
Typical AI use at most SMBs:
- ChatGPT or Claude for writing and brainstorming: minimal risk
- AI-powered customer service chatbot: limited risk (must inform the customer they're talking to AI)
- AI-based recruitment screening, CV evaluation: high risk (strict requirements)
- Manipulative AI, social scoring: prohibited (you cannot use it)
If you're unsure about a tool's classification, err on the side of a higher risk level. It's better to be over-prepared than to face shortcomings later.
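The rules of thumb above can be sketched as a simple lookup table. The use-case labels and the default-to-high fallback below are our own simplifying assumptions for illustration, not an official taxonomy or legal advice:

```python
# Illustrative mapping from broad use cases to EU AI Act risk levels.
RISK_BY_USE_CASE = {
    "social_scoring": "prohibited",
    "manipulation": "prohibited",
    "recruitment_screening": "high",
    "credit_assessment": "high",
    "education_assessment": "high",
    "customer_chatbot": "limited",   # transparency obligation applies
    "internal_writing": "minimal",
    "data_analysis": "minimal",
}

def classify(use_case: str) -> str:
    """Return the risk level for a known use case.

    Unknown use cases default to "high", following the
    err-on-the-side-of-caution tip above.
    """
    return RISK_BY_USE_CASE.get(use_case, "high")
```

For example, `classify("customer_chatbot")` returns `"limited"`, while an unrecognized use case falls back to `"high"` until someone reviews it properly.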
Launch an AI literacy program
Article 4 of the EU AI Act requires, from February 2, 2025, that every organization developing or using AI systems ensure adequate AI literacy among its staff. This isn't a recommendation -- it's a legal obligation.
What this means in practice:
- Employees must understand how the AI tools they use work (not at a technical level, but functionally)
- They must know the limitations and risks of AI
- They must be familiar with the company's AI usage policies
How to implement it:
- Organize an internal workshop (2-4 hours is enough for the basics)
- Create a simple internal guide for AI usage
- Ensure that new hires also receive the training
- Document who received AI training and when
AI literacy doesn't mean everyone has to become a programmer. A well-structured 2-3 hour workshop that introduces the tools, their limitations, and internal policies is already a sufficient foundation.
We covered this topic in detail in a separate article.
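Keeping the training records machine-readable also makes it easy to spot new hires who haven't been trained yet. A minimal in-memory sketch; the class and field names are illustrative assumptions, and in practice a spreadsheet works just as well:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class TrainingRecord:
    employee: str
    topic: str
    held_on: date

class LiteracyLog:
    """Minimal register of who received AI training, when, and on what."""

    def __init__(self):
        self.records: list[TrainingRecord] = []

    def add(self, employee: str, topic: str, held_on: date) -> None:
        self.records.append(TrainingRecord(employee, topic, held_on))

    def untrained(self, all_employees: list[str]) -> list[str]:
        """Employees (e.g. new hires) with no training record yet."""
        trained = {r.employee for r in self.records}
        return [e for e in all_employees if e not in trained]
```

Running `untrained()` against your current staff list before each onboarding round keeps the "new hires also receive the training" point from slipping.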
Create internal AI documentation
Documentation isn't a glamorous topic, but it's crucial for EU AI Act compliance. If an inspection ever comes, or if a customer or partner asks, you need to be able to demonstrate how you use AI.
What to document:
- AI usage policy: which AI tools you may use, for what purposes, and which uses are off-limits
- Data handling rules: what data you feed into AI systems, what data you don't (e.g., personal data, trade secrets)
- Responsibilities: who is responsible for AI system usage, who approves the adoption of new tools
- AI literacy records: who received training, when, on what topics
Characteristics of good documentation:
- Written in simple, understandable language
- Regularly updated (at least every six months)
- Accessible to everyone on the team
- Includes dates and responsible parties
You don't need a 50-page policy. A 2-3 page internal document covering the basics is better than nothing. You can expand it later as the company's AI usage grows.
Build in ongoing monitoring
AI compliance is not a one-time project but an ongoing task. AI tools change, new ones appear, employees adopt them on their own, and the regulation continues to evolve.
What to make routine:
- Quarterly AI inventory refresh: review whether the list of AI tools has changed
- Biannual AI literacy update: a short workshop on new developments and new tools
- New tool adoption process: before anyone starts using a new AI tool, have a simple approval process
- Incident tracking: if something doesn't work as expected (e.g., the AI gave incorrect information to a customer), log it and analyze it
Assign a responsible person:
You don't need to create a separate "AI officer" position, but someone on the team should be responsible for AI compliance. This could be the CEO, the IT lead, or anyone who's interested in the topic and willing to stay up to date.
The EU AI Act is a framework regulation that will be supplemented by implementing regulations, standards, and guidelines over the coming years. It's worth following the changes, as the detailed rules are still being shaped.
Summary: Preparation Isn't Complicated, but It Can't Be Postponed
The 5 steps summarized:
- AI inventory - know what you're using
- Risk classification - know what level of regulation applies
- AI literacy - train your team
- Documentation - write down the rules and processes
- Monitoring - keep it up to date, review regularly
These steps don't require a massive investment. A 10-50 person company can complete the basics in a few days of work. The key is to get started.