The EU AI Act Is Now Law -- Is Your Company Compliant?
The world's first comprehensive AI regulation already has active deadlines. AI literacy training has been mandatory since February 2025 (Article 4). High-risk AI systems must comply by August 2026. Fines reach up to EUR 35M or 7% of global revenue. Only 25% of organizations have an AI governance program in place -- yet 88% already use AI.
What Our Service Includes
From inventory to documented compliance.
EU AI Act Timeline: Deadlines Already Active
Know what applies now and what's next.
- February 2025 (ALREADY ACTIVE): Mandatory AI literacy for all staff (Art. 4) and prohibition of unacceptable AI practices (subliminal manipulation, social scoring, mass biometric surveillance).
- August 2025 (ALREADY ACTIVE): Obligations for general-purpose AI models (GPAI) -- transparency, technical documentation, copyright compliance.
- August 2026: The main deadline -- full requirements for high-risk AI systems: risk management, data quality, transparency, human oversight, robustness.
- August 2027: Obligations for regulated products integrating AI (machinery, medical devices, toys).
Executive Summary
What you need to know to take action.
The EU AI Act already has two active deadlines: since February 2025, AI literacy training (Art. 4) is mandatory and unacceptable AI practices are prohibited. Since August 2025, GPAI models have transparency obligations. August 2026 is the main deadline: high-risk AI systems must meet full requirements for risk management, data quality, transparency, and human oversight.
The fines are the most severe in European regulatory history: up to EUR 35M or 7% of global revenue (more than the GDPR). Only 25% of organizations have an AI governance program in place, despite 88% already using AI. The AI regulatory compliance market is estimated at EUR 17 billion by 2030. Failing to act now means exposure to sanctions and losing competitive advantage to organizations that are already preparing.
Summary for CTO / Technical Team
Technical requirements and compliance framework.
The EU AI Act mandates a continuous risk management system for high-risk AI: training dataset documentation (provenance, quality, biases), performance and fairness metrics, automated decision logging, human-in-the-loop oversight mechanisms, and robustness testing against adversarial attacks.
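To make the logging requirement concrete, here is a minimal sketch of an append-only decision log, assuming a JSONL file as the store -- the field names and the `log_decision` helper are illustrative, not prescribed by the Act:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class DecisionRecord:
    """One logged automated decision; fields are illustrative, not mandated verbatim."""
    model_id: str
    model_version: str
    input_summary: str     # what went in (avoid storing raw personal data here)
    output: str            # the decision or score produced
    confidence: float
    human_reviewed: bool   # was a human in the loop for this decision?
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def log_decision(record: DecisionRecord, path: str = "decision_log.jsonl") -> None:
    """Append the decision to an append-only log (JSONL chosen for simplicity)."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_decision(DecisionRecord(
    model_id="cv-screening", model_version="2.3.1",
    input_summary="candidate profile hash ab12...", output="shortlisted",
    confidence=0.87, human_reviewed=True,
))
```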
For chatbots and user-facing AI features, the law requires clear transparency: users must know they are interacting with AI, AI-generated content must be labeled as such (including deepfakes), and there must be a mechanism to request human intervention. GPAI models require complete technical documentation including model cards.
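As an illustration, the three transparency duties for a chatbot -- disclosure, labeling, and human handoff -- can be sketched roughly like this (the `BotReply` shape and `generate_answer` stub are our assumptions, not an API defined by the regulation):

```python
from dataclasses import dataclass

AI_DISCLOSURE = "You are chatting with an AI assistant."

def generate_answer(prompt: str) -> str:
    return "..."  # placeholder for your actual model call (e.g. an LLM API)

@dataclass
class BotReply:
    text: str
    ai_generated: bool = True        # label machine-generated content as such
    disclosure: str = AI_DISCLOSURE  # make the AI interaction explicit to the user
    escalation_hint: str = "Type 'agent' at any time to talk to a human."

def handle_message(user_text: str) -> BotReply | str:
    """Route to a human on request; otherwise return a labeled AI reply."""
    if user_text.strip().lower() in {"agent", "human"}:
        return "Connecting you to a human agent..."  # hand off out of the AI loop
    return BotReply(text=generate_answer(user_text))
```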
Is It Right for You?
The EU AI Act affects every organization that uses or develops AI systems in the European market.
Who it's for
- Companies that use chatbots, virtual assistants, or generative AI in customer service.
- Organizations that use AI for HR decisions (recruiting, performance evaluation).
- Financial companies that use AI for credit scoring or fraud detection.
- Any organization that uses AI at all -- the AI literacy obligation (Art. 4) applies across the board.
- Software providers that integrate AI models into their products.
- Healthcare companies that use AI for diagnosis, triage, or treatment.
Who it's not for
- AI systems used exclusively for scientific research (exempt).
- Open-source AI with minimal risk and no direct commercial use.
- Purely military or national defense AI systems (outside the EU AI Act scope).
EU AI Act Compliance Services
Packages tailored to each phase of the regulation.
AI Literacy Training
Already-active obligation (Art. 4). Training program tailored to each employee's role: what AI is, how it works, risks, ethical use, and legal obligations. Available as in-person workshop or e-learning. Participation certification for compliance documentation.
Inventory and Risk Classification
Complete mapping of all AI systems in use (in-house and third-party). Classification under the Act's four risk tiers -- unacceptable, high, limited, or minimal -- checking high-risk categories against Annex III. Compliance gap identification and prioritized roadmap.
Impact Assessment and Documentation
For high-risk systems: fundamental rights impact assessment, technical documentation per Annex IV (training data, metrics, biases, mitigation measures), and conformity dossier preparation.
AI Governance and Monitoring
Design and implementation of an AI governance program: internal policies, AI Risk Registry, AI ethics committee, approval workflows for new systems, and continuous performance and fairness monitoring.
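A minimal sketch of what an AI Risk Registry entry and an approval gate could look like -- the field names and policy rules below are our assumptions, not a prescribed format:

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class RegistryEntry:
    """One row of an AI Risk Registry; the fields are illustrative."""
    system: str
    owner: str                                   # accountable team or role
    tier: RiskTier
    open_gaps: list[str] = field(default_factory=list)

def approve(entry: RegistryEntry) -> bool:
    """Approval gate: nothing prohibited ships, high-risk only with no open gaps."""
    if entry.tier is RiskTier.UNACCEPTABLE:
        return False
    if entry.tier is RiskTier.HIGH and entry.open_gaps:
        return False
    return True
```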
Chatbot and AI Feature Compliance
Specific audit of chatbots, virtual assistants, and generative AI features: transparency, AI-generated content labeling, human intervention request mechanism, and compliance with limited-risk system obligations.
Compliance Process
From inventory to documented conformity.
Inventory
We map every AI system in your organization: chatbots, predictive models, HR tools, scoring, automations. We include third-party AI (OpenAI, Google, Azure AI APIs) that you use.
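For illustration, an inventory entry can be as simple as this -- the `AISystemRecord` fields are our assumptions about what is worth capturing:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One inventory entry; covers in-house models and third-party APIs alike."""
    name: str
    purpose: str                    # business function, e.g. "CV pre-screening"
    provider: str                   # "in-house" or the vendor, e.g. "OpenAI"
    deployment: str                 # SaaS API, on-prem, embedded in a product
    processes_personal_data: bool   # flags where the GDPR stacks on top

inventory = [
    AISystemRecord("support-bot", "customer service chat", "OpenAI", "SaaS API", True),
    AISystemRecord("churn-model", "customer churn prediction", "in-house", "on-prem", True),
]
```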
Classification
We classify each system under the Act's four risk tiers, checking high-risk categories against Annex III. We identify the specific obligations that follow -- transparency for limited risk, full requirements for high risk -- and verify that no system falls under a prohibited practice.
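A deliberately simplified sketch of that classification logic -- the lookup sets are illustrative only, and real classification requires legal review of Article 5 and Annex III:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Illustrative lookups only -- not a substitute for reading Article 5
# (prohibited practices) and Annex III (high-risk categories).
PROHIBITED_USES = {"social scoring", "subliminal manipulation"}
HIGH_RISK_USES = {"recruitment", "credit scoring", "medical triage", "exam grading"}

def classify(use_case: str, user_facing_ai: bool) -> RiskTier:
    if use_case in PROHIBITED_USES:
        return RiskTier.UNACCEPTABLE
    if use_case in HIGH_RISK_USES:
        return RiskTier.HIGH
    if user_facing_ai:          # e.g. chatbots: transparency obligations apply
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```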
Remediation
We implement the required technical and organizational measures: technical documentation, impact assessment, governance policies, team training, human oversight mechanisms, and decision logging.
Documentation and Monitoring
We prepare the complete conformity dossier: technical documentation (Annex IV), impact assessment, EU database registration (for high-risk), and a continuous post-deployment monitoring system.
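A minimal sketch of a post-deployment monitoring check, assuming accuracy as the tracked metric and a drift tolerance we chose for illustration:

```python
def performance_drifted(baseline: float, current: float, tolerance: float = 0.05) -> bool:
    """Flag when live performance drops below the documented baseline by more
    than a tolerance chosen here purely for illustration."""
    return (baseline - current) > tolerance

# Example: the Annex IV dossier documents 0.91 accuracy; live monitoring measures 0.84.
if performance_drifted(0.91, 0.84):
    print("ALERT: below documented baseline -- trigger review and update the dossier")
```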
Risks We Mitigate
From regulatory exposure to verified conformity.
Non-compliance fines (up to EUR 35M / 7% revenue)
Full compliance program: inventory, classification, documentation, and monitoring. Conformity dossier ready for regulatory inspection.
AI literacy non-compliance (already-active obligation)
Role-adapted training program with participation certification. Content updated per the European AI Office guidelines.
Undocumented high-risk AI systems
Exhaustive inventory + Annex IV technical documentation: training data, performance metrics, bias analysis, and mitigation measures.
Chatbots and generative AI lacking transparency
Audit of all AI interfaces: clear AI interaction identification, generated content labeling, and human intervention request mechanism.
Real-World Experience in Applied AI and Compliance
At Kiwop, we've spent years building and integrating AI solutions for businesses: intelligent chatbots, enterprise RAG, LLM integration, and AI-powered automation. This hands-on technical experience lets us understand the regulation from a developer's perspective -- not just a legal consultant's. We know what compliance means because we build the systems that must comply.
EU AI Act vs GDPR: What Changes and What Stacks
The EU AI Act doesn't replace the GDPR -- it builds on top of it.
If your organization already complies with the GDPR, you have a foundation, but the EU AI Act adds AI-specific requirements the GDPR doesn't cover: risk classification, model technical documentation, algorithmic bias assessment, mandatory human oversight, and AI-specific transparency. EU AI Act fines exceed the GDPR: up to 7% of global revenue vs 4% under the GDPR. Both regulations apply simultaneously -- a chatbot that processes personal data must comply with both.
Frequently Asked Questions
What companies ask about the EU AI Act.
What is the EU AI Act and when does it apply?
The EU AI Act (Regulation 2024/1689) is the world's first comprehensive AI regulation. It takes effect in stages: February 2025 (AI literacy + prohibited practices), August 2025 (GPAI), August 2026 (high-risk AI), and August 2027 (regulated products). It applies to any organization that provides or uses AI in the EU market.
Does my company already have active obligations?
Yes, since February 2025. Article 4 requires all organizations using AI systems to ensure AI literacy among their staff. In addition, prohibited AI practices (social scoring, subliminal manipulation, mass biometric surveillance) can no longer be used. If you use ChatGPT, Copilot, or any other AI tool, the training is mandatory.
Which AI systems are considered high-risk?
Annex III defines the categories: AI in HR recruitment and management (CV filtering, performance evaluation), credit and insurance scoring, administration of justice, border control, critical infrastructure, education (admissions, assessments), and healthcare (diagnosis, triage). If your AI influences decisions that affect people's fundamental rights, it's likely high-risk.
What about chatbots and generative AI?
Chatbots are classified as limited risk with transparency obligations: users must know they are interacting with AI, AI-generated content must be labeled (including deepfakes), and there must be a mechanism to request human intervention. If the chatbot processes personal data, the GDPR also applies.
How much are the fines?
The fines are the most severe in European regulatory history: up to EUR 35M or 7% of global revenue for prohibited practices, whichever is higher. For non-compliant high-risk systems: up to EUR 15M or 3% of revenue. For supplying incorrect information to authorities: up to EUR 7.5M or 1%. For SMEs and start-ups, the lower of the two amounts in each bracket applies.
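A quick calculation makes the scale concrete (the revenue figure is hypothetical):

```python
def max_fine_prohibited_practice(global_revenue_eur: float) -> float:
    """EUR 35M or 7% of worldwide annual turnover, whichever is higher
    (for SMEs and start-ups, the lower of the two applies instead)."""
    return max(35_000_000, 0.07 * global_revenue_eur)

# Hypothetical company with EUR 1 billion in global revenue:
print(f"EUR {max_fine_prohibited_practice(1_000_000_000):,.0f}")  # EUR 70,000,000
```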
How much does it cost to prepare for compliance?
AI literacy training: from EUR 3,000 (program tailored to the organization). Inventory + classification: from EUR 5,000. Full compliance program (inventory + documentation + governance): from EUR 15,000. Cost depends on the number of AI systems, complexity, and organization size. Significantly less than a 7% revenue fine.
Do I need ISO 42001 certification?
Not mandatory, but highly recommended. ISO 42001 is the international standard for AI management systems and aligns with EU AI Act requirements. Having it demonstrates due diligence to regulators and clients. We prepare your organization for certification as part of our AI governance program.
How does it affect companies outside the EU?
Just like the GDPR: the EU AI Act applies to any organization that provides or uses AI systems in the EU market, regardless of headquarters location. If your company is in the US or elsewhere but has customers or users in the EU, you must comply. Importers and distributors also have specific obligations.
What is the difference between a provider and a deployer?
The provider develops or commissions the AI system (more obligations: technical documentation, conformity, monitoring). The deployer uses it in their business (obligations around human oversight, transparency, impact assessment). Many companies are deployers of third-party AI (OpenAI, Google, Microsoft) and have their own obligations that cannot be delegated to the provider.
Investment
Proposal tailored to the scope of your AI systems.
August 2026 Is 6 Months Away -- AI Training Is Already Mandatory
75% of organizations using AI have no governance program. Fines reach 7% of global revenue. Don't wait for the inspection: assess your compliance now.
Assess EU AI Act compliance
Technical Initial Audit.
AI, security, and performance. Diagnosis with a phased proposal.
Your first meeting is with a Solutions Architect, not a salesperson.
Request diagnosis