
EU AI Act 2026: Complete Guide for Spanish Businesses

Featured image: EU AI Act 2026 guide for businesses

EU AI Act 2026: What Your Company Needs to Know to Comply

The European Artificial Intelligence Regulation is now in effect. The first obligations have been enforceable since February 2025, and in August 2026 full application for high-risk systems begins. If your company develops, deploys, or uses AI systems, this article is your essential roadmap to avoid penalties of up to 35 million euros or 7% of your global turnover.

We are not talking about future regulation: the EU AI Act is current law. Companies that do not act now face exemplary fines and, worse, the prohibition of operating their AI systems in the European market.

In this comprehensive guide, we break down everything you need to know: from critical dates to specific obligations according to your role and sector, including a practical compliance checklist that you can implement today.

What Is the EU AI Act and Why Should It Matter to You in 2026?

The Regulation (EU) 2024/1689, known as the EU AI Act or Artificial Intelligence Law, is the world's first comprehensive regulatory framework for artificial intelligence. Approved on June 13, 2024, and published in the Official Journal of the EU on July 12, 2024, it came into force on August 1, 2024.

But here's the crucial part: its application is staggered, and the most important deadlines are happening right now or will arrive in the coming months.

Why 2026 Is the Decisive Year

The EU AI Act is not like the GDPR, which gave companies a single two-year grace period before full application. Its phased implementation means that:

  • February 2025: Certain AI systems considered "unacceptable risk" are already banned
  • August 2025: Obligations for general-purpose AI models (GPAI) are enforceable
  • August 2026: Full regulation for high-risk systems comes into effect
  • August 2027: Total application, including systems integrated into regulated products

If your company uses ChatGPT, predictive analytics tools, facial recognition systems, customer service chatbots, or any other AI-based technology, you fall within its scope.

Territorial Scope: Who Is Affected?

The EU AI Act has extraterritorial application, similar to the GDPR:

  • Providers of AI systems that market or put into service systems in the EU, regardless of where they are established
  • Deployers (business users) of AI systems located in the EU
  • Providers and deployers outside the EU whose AI output is used within the European territory
  • Importers and distributors of AI systems in the European market
  • Authorized representatives of providers not established in the EU

In summary: if your AI touches Europe in any way, you need to comply.

What Are the Key Dates of the EU AI Act You Cannot Ignore?

The implementation schedule of the EU AI Act is complex but critical. Here is the complete timeline with what each date implies for your business:

Past Dates (Mandatory Compliance)

August 1, 2024 – Entry into Force

The regulation became law and the transition period began.

February 2, 2025 – First Wave of Obligations

  • Ban on Unacceptable Risk AI Systems (social scoring, subliminal manipulation, exploitation of vulnerable groups, mass biometric surveillance in public spaces)
  • AI Literacy Obligation (Art. 4): Companies must ensure their staff has sufficient competencies to operate and supervise AI systems
  • Notification to Authorities of banned AI systems that were in use

August 2, 2025 – General-Purpose Models (GPAI)

  • Transparency obligations for foundation model providers
  • Mandatory technical documentation
  • Compliance with copyright standards
  • Additional requirements for models with "systemic risk" (those trained with more than 10²⁵ FLOPs)
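To make the systemic-risk threshold concrete, here is a minimal sketch in Python. The 6 × parameters × tokens compute estimate is a common rule of thumb, not something the regulation prescribes, and the model figures below are hypothetical:

```python
# Sketch: checking a model's training compute against the EU AI Act's
# systemic-risk threshold of 1e25 FLOPs. The example figures are
# illustrative assumptions, not real measurements.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # cumulative training compute, per the Act

def is_systemic_risk(training_flops: float) -> bool:
    """Return True if a GPAI model's training compute exceeds the threshold."""
    return training_flops > SYSTEMIC_RISK_THRESHOLD_FLOPS

def estimate_training_flops(parameters: float, tokens: float) -> float:
    """Rough rule of thumb: ~6 FLOPs per parameter per training token."""
    return 6 * parameters * tokens

# Hypothetical 70B-parameter model trained on 15T tokens:
flops = estimate_training_flops(70e9, 15e12)  # ≈ 6.3e24 FLOPs
print(is_systemic_risk(flops))                # → False (below 1e25)
```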

Upcoming Dates (Urgent Preparation)

August 2, 2026 – General Application

This is the most important date for most companies:

  • 🔴 High-Risk AI Systems: Full compliance obligations
  • 🔴 Transparency Obligations of Article 50: disclosure of AI interactions, synthetic content labeling, deepfake identification
  • 🔴 Mandatory Registration in the EU database for high-risk systems
  • 🔴 Full Activation of the Sanctions Regime

Final Date

August 2, 2027 – Total Application

  • High-risk AI systems integrated into products already regulated by sectoral legislation (medical devices, machinery, toys, etc.)
  • Closure of all transitional periods

How Does the EU AI Act Classify AI Systems by Risk?

The EU AI Act's approach is based on a risk pyramid with four levels. Each level entails different obligations, from total prohibition to the absence of specific requirements.

Level 1: Unacceptable Risk (PROHIBITED)

These systems are completely prohibited since February 2025:

Cognitive Manipulation

  • Systems using subliminal or manipulative techniques to distort people's behavior
  • AI exploiting vulnerabilities of specific groups (age, disability, economic situation)

Social Scoring

  • Social scoring systems by public authorities
  • Classification of citizens based on social behavior or personal characteristics with detrimental consequences

Mass Biometric Surveillance

  • Real-time remote biometric identification in public spaces (with very limited exceptions for security)
  • Biometric categorization systems inferring sensitive data (race, sexual orientation, political affiliation, religious beliefs)

Other Prohibitions

  • Non-selective scraping of facial images from the internet or CCTV for facial recognition databases
  • Emotion recognition in workplaces or educational institutions (except for medical or security reasons)
  • Predictive criminal behavior AI based solely on profiles or personality traits

Level 2: High Risk (STRICT REGULATION)

High-risk systems are subject to extensive obligations. They are divided into two categories:

AI Systems as Safety Components in Regulated Products:

  • Medical devices
  • Vehicles and machinery
  • Toys
  • Personal protective equipment
  • Lifts and pressure equipment
  • Aviation and rail

Independent AI Systems in Critical Sectors (Annex III):

  • Biometric identification and categorization
  • Management of critical infrastructure (water, energy, transport)
  • Education and vocational training
  • Employment and worker management
  • Access to essential private and public services (including credit scoring)
  • Law enforcement
  • Migration, asylum, and border control
  • Administration of justice and democratic processes

Level 3: Limited Risk (TRANSPARENCY)

Systems with specific transparency obligations:

  • Chatbots and Virtual Assistants: Must clearly inform that the user is interacting with AI
  • Synthetic Content Generation: Texts, images, audio, and video generated by AI must be labeled as such
  • Deepfakes: Obligation to disclose that the content has been artificially generated or manipulated
  • Emotion Recognition Systems: Inform people when they are being analyzed
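The chatbot disclosure duty above can be sketched as a thin wrapper around whatever generation function you use. The notice wording and the `generate` stub below are illustrative assumptions, not language mandated by the regulation:

```python
# Minimal sketch of the chatbot transparency obligation: prepend a clear
# notice the first time a user interacts with the assistant.

AI_NOTICE = "You are chatting with an AI assistant, not a human."

def reply(user_message: str, first_turn: bool,
          generate=lambda m: f"Echo: {m}") -> str:
    """Wrap the model's answer with an AI-interaction notice on the first turn."""
    answer = generate(user_message)
    return f"{AI_NOTICE}\n{answer}" if first_turn else answer

print(reply("Hello", first_turn=True))
```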

Level 4: Minimal or No Risk

Most AI systems fall into this category and have no specific obligations under the EU AI Act:

  • Spam filters
  • Video games with AI
  • Content recommendation systems
  • Personal productivity tools

However, developers are encouraged to adopt voluntary codes of conduct and responsible AI principles.

What Obligations Does Your Company Have According to Its Role in the Value Chain?

The EU AI Act defines different actors with specific responsibilities. Identify your role to know your exact obligations.

If You Are a PROVIDER of High-Risk AI Systems

Providers are those who develop or have developed AI systems and market them under their name or brand. Your obligations include:

Before Marketing:

  1. Risk Management System (Art. 9): Iterative process throughout the lifecycle to identify, analyze, evaluate, and mitigate risks
  2. Data Governance (Art. 10): Ensure quality, representativeness, and absence of biases in training datasets
  3. Technical Documentation (Art. 11): Complete description of the system, its purpose, operation, capabilities, and limitations
  4. Event Logging (Art. 12): Automatic traceability capability during operation
  5. Transparency (Art. 13): Clear instructions for deployers
  6. Human Oversight (Art. 14): Design allowing effective human supervision
  7. Accuracy, Robustness, and Cybersecurity (Art. 15): Appropriate levels throughout the lifecycle

Conformity Procedures:

  • Conformity assessment (internal or by third parties depending on the system type)
  • CE marking before marketing
  • EU declaration of conformity
  • Registration in the EU database

Post-Marketing Obligations:

  • Continuous market surveillance
  • Notification of serious incidents within 15 days
  • Withdrawal or recall if non-conformities are detected
  • Cooperation with surveillance authorities

If You Are a DEPLOYER (Business User) of High-Risk AI Systems

Deployers are companies that use high-risk AI systems under their authority. Even if the provider complies, you also have obligations:

Main Obligations:

  1. Use According to Instructions: Use the system according to the provider's instructions
  2. Human Oversight: Assign competent individuals to supervise the system
  3. Input Data: Ensure that input data is relevant and sufficiently representative
  4. Monitoring: Monitor operation and notify the provider of any risks or incidents
  5. Record Keeping: Maintain logs generated for at least 6 months
  6. Information to Workers: Inform employees and their representatives before using systems that affect them
  7. Impact Assessment: For public bodies and private entities in essential public services
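The six-month log-retention floor in point 5 above can be encoded as a simple deletion guard. The 183-day figure is our approximation of "six months"; retaining logs longer is allowed and often advisable:

```python
from datetime import datetime, timedelta, timezone

# Sketch of a minimum log-retention check for deployers.
# 183 days approximates "at least six months"; longer retention is fine.
MIN_RETENTION = timedelta(days=183)

def may_delete(log_created: datetime, now: datetime) -> bool:
    """A log entry may only be deleted once the minimum retention has elapsed."""
    return now - log_created >= MIN_RETENTION

now = datetime(2026, 8, 2, tzinfo=timezone.utc)
print(may_delete(datetime(2026, 1, 1, tzinfo=timezone.utc), now))  # → True
```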

Specific Transparency Obligations:

  • Inform individuals that they are subject to emotion recognition or biometric categorization systems
  • In decisions affecting rights (credit, insurance, employment): inform about AI use and provide meaningful explanations

If You Are an IMPORTER or DISTRIBUTOR

Importers (introduce products from third countries into the EU market):

  • Verify that the provider has conducted the conformity assessment
  • Ensure CE marking and documentation
  • Do not introduce non-compliant products

Distributors (make AI systems available already on the market):

  • Verify CE marking and accompanying documentation
  • Do not make non-compliant products available
  • Proper storage and transportation

Cross-Cutting Obligations: AI Literacy

All companies, regardless of their role, must comply with Article 4 on AI literacy since February 2025:

"Providers and deployers of AI systems shall take measures to ensure that their staff and other persons dealing with the operation and use of AI systems on their behalf have a sufficient level of AI literacy."

This implies:

  • Training adapted to technical knowledge, experience, and context of use
  • Continuous updating as technology evolves
  • Documentation of training actions carried out

What Is the Sanctions Regime and Fines of the EU AI Act?

The EU AI Act establishes a severe and deterrent sanctions regime, with fines that even exceed those of the GDPR in proportional terms.

Penalty Structure by Type of Infringement

Very Serious Infringements (Prohibited Systems):

  • Up to 35 million euros
  • Or up to 7% of the previous year's global annual turnover
  • The higher amount applies

Serious Infringements (Non-Compliance with High-Risk Obligations):

  • Up to 15 million euros
  • Or up to 3% of the global annual turnover
  • The higher amount applies

Minor Infringements (Incorrect Information to Authorities):

  • Up to 7.5 million euros
  • Or up to 1.5% of the global annual turnover
  • The higher amount applies
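The "higher amount applies" rule is straightforward arithmetic. This sketch encodes the three tiers listed above; the tier names are our own labels, not terms from the regulation:

```python
# Sketch of the "whichever is higher" penalty rule: for each tier,
# the maximum fine is the greater of a fixed cap and a percentage
# of the previous year's global annual turnover.

TIERS = {
    "prohibited_practices": (35_000_000, 0.07),   # up to €35M or 7%
    "high_risk_obligations": (15_000_000, 0.03),  # up to €15M or 3%
    "incorrect_information": (7_500_000, 0.015),  # up to €7.5M or 1.5%
}

def max_fine(tier: str, global_turnover_eur: float) -> float:
    """Maximum possible fine: the higher of the fixed cap and the percentage."""
    fixed_cap, pct = TIERS[tier]
    return max(fixed_cap, pct * global_turnover_eur)

# A company with €1bn turnover: 7% = €70M exceeds the €35M fixed cap.
print(max_fine("prohibited_practices", 1_000_000_000))  # → 70000000.0
```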

Reduced Penalties for SMEs and Startups

The EU AI Act recognizes the need to protect innovation and establishes proportional limits for small businesses:

  • Fines are calculated by applying lower percentages or smaller fixed amounts
  • Resources and economic viability are considered
  • The approach for first minor infringements is more corrective than punitive

Sanctions Regime Application Schedule

Prohibitions have carried consequences since February 2025, the general penalty provisions apply from August 2, 2025, and the regime is fully active from August 2, 2026, in line with the timeline above.

Aggravating and Mitigating Factors

Authorities will consider:

Aggravating Factors:

  • Recurrence
  • Intentional non-compliance
  • Lack of cooperation with authorities
  • Duration of the infringement
  • Harm caused to affected individuals

Mitigating Factors:

  • First infringements
  • Immediate adoption of corrective measures
  • Active cooperation with authorities
  • Implementation of codes of conduct
  • Small company size

How to Prepare Your Company to Comply with the EU AI Act? Complete Checklist

This checklist will guide you step-by-step towards compliance. Prioritize according to your most immediate deadlines.

Phase 1: Diagnosis (Complete Urgently)

  • Inventory of AI Systems: Document ALL AI systems you use, develop, or market
  • Risk Classification: Assign each system to a category (prohibited, high, limited, minimal)
  • Role Mapping: Identify if you act as a provider, deployer, importer, or distributor for each system
  • Gap Analysis: Compare your current situation with applicable obligations
  • Identification of Prohibited Systems: Verify that you do not use unacceptable risk AI
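The Phase 1 inventory can start as a structured record per system. The fields and categories below are our own suggestion for such a register, not a format prescribed by the Act, and the example entry is hypothetical:

```python
# Illustrative sketch of a minimal AI-system inventory for Phase 1:
# one record per system, with risk level, roles held, and open gaps.
from dataclasses import dataclass, field
from enum import Enum

class RiskLevel(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

class Role(Enum):
    PROVIDER = "provider"
    DEPLOYER = "deployer"
    IMPORTER = "importer"
    DISTRIBUTOR = "distributor"

@dataclass
class AISystemRecord:
    name: str
    purpose: str
    risk_level: RiskLevel
    roles: list[Role]                              # a company can hold several roles
    gaps: list[str] = field(default_factory=list)  # open compliance gaps

inventory = [
    AISystemRecord("CV screening tool", "Pre-filter job applications",
                   RiskLevel.HIGH, [Role.DEPLOYER],
                   gaps=["no documented human oversight"]),
]

# Prohibited systems must surface immediately:
banned = [s.name for s in inventory if s.risk_level is RiskLevel.PROHIBITED]
print(banned)  # → []
```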

Phase 2: Governance (Q1 2026)

  • Designate AI Officer: Appoint an AI Officer or assign clear responsibilities
  • Establish AI Committee: Multidisciplinary group (legal, technical, business, HR)
  • Define AI Policy: Principles, procedures, and internal controls
  • Integrate with Existing Compliance: Coordination with DPO, compliance officer, security
  • Compliance Budget: Allocate resources for implementation

Phase 3: Literacy (Already Mandatory)

  • Evaluate Current Competencies: Level of AI knowledge of staff
  • Training Plan: Programs adapted by role and level of interaction with AI
  • Executive Training: Specific awareness for decision-making
  • Technical Training: For developers and integrators
  • Document Training: Attendance records and content delivered

Phase 4: High-Risk Systems (Before August 2026)

For Providers:

  • Risk Management System: Implement documented process
  • Training Data Audit: Verify quality, representativeness, absence of biases
  • Complete Technical Documentation: According to Annex IV requirements
  • Implement Logging: Automatic event logging capability
  • Design Human Oversight: Intervention and override mechanisms
  • Robustness Testing: Verify accuracy and resistance to attacks
  • Conformity Assessment: Prepare for certification
  • Register in EU Database: Complete before marketing

For Deployers:

  • Review Provider Instructions: Ensure compliant use
  • Assign Human Supervisors: Competent personnel with real authority
  • Monitoring Process: System to detect anomalies
  • Notification Protocol: Procedure to report incidents
  • Impact Assessment on Fundamental Rights: If applicable
  • Employee Communication: Prior information on AI use in HR

Phase 5: Transparency (Before August 2026)

  • AI Interaction Notices: Implement in chatbots and assistants
  • Synthetic Content Labeling: Visible marks on AI-generated content
  • Deepfake Policy: Procedure for synthetic audiovisual content
  • Information to Affected Individuals: Communications when AI affects decisions about people

Phase 6: Documentation and Records

  • Documented Policies: All AI policies in writing
  • Risk Assessment Records: Retain according to legal deadlines
  • High-Risk System Logs: Minimum 6 months, preferably longer
  • Conformity Evidence: Declarations, certificates, audits
  • Incident Records: AI incident management system

Phase 7: Audit Preparation

  • Simulate Inspection: Internal preparation exercise
  • Designate Contact Person: Authority contact person
  • Documentation Access: Organized and accessible system
  • Response Plan: Procedure for authority requests

Which Sectors Will Be Most Affected by the EU AI Act?

Although the EU AI Act affects all companies using AI, some sectors face especially demanding obligations:

Financial and Insurance Sector

  • Credit scoring classified as high risk
  • Solvency assessment for loans
  • Life and health insurance pricing based on AI
  • Fraud detection with implications for customers

Recommended Action: Audit all automated decision models, document scoring criteria, implement explainability.

Human Resources and Personnel Selection

  • Automatic CV filtering = high risk
  • Candidate evaluation systems
  • Employee performance monitoring
  • Talent management tools with AI

Recommended Action: Review all HR software with AI components, inform candidates and employees, ensure human oversight in hiring decisions.

Health Sector

  • Medical devices with AI (dual regulation)
  • Assisted diagnostic systems
  • Automated triage
  • Medical image analysis

Recommended Action: Verify compliance with medical product regulation and the EU AI Act simultaneously.

Education

  • Automated admission systems
  • Automatic student evaluation
  • Plagiarism detection with AI
  • Online exam proctoring

Recommended Action: Review educational software, inform students and families, maintain teacher oversight.

Public Administration

  • Benefit application assessment
  • Service prioritization systems
  • AI in law enforcement
  • Border and asylum management

Recommended Action: Mandatory impact assessments on fundamental rights, maximum transparency towards citizens.

Does Your Company Need Specialized Advice on the EU AI Act?

The complexity of the EU AI Act, with its multiple deadlines, risk categories, and differentiated obligations, makes specialized advice not a luxury but a necessity for most companies.

Signs You Need Professional Help

✅ You use AI in processes affecting people (HR, credit, services)
✅ You develop or market products with AI components
✅ You are unclear about the risk category of your systems
✅ You operate in especially regulated sectors
✅ You have no previous experience in technological regulatory compliance

How Kiwop Can Help You

At Kiwop we have been accompanying companies in their digital transformation and responsible adoption of artificial intelligence for years. Our AI consulting services include:

  • EU AI Act Compliance Diagnosis: Complete audit of your AI systems
  • Risk Classification: Expert analysis to determine your exact obligations
  • Implementation Roadmap: Personalized plan with milestones and resources
  • Technical Documentation: Preparation of documentation required by the regulation
  • AI Literacy Training: Programs adapted to comply with Art. 4
  • Continuous Support: Support throughout the adaptation process

The time to act is now. With August 2026 around the corner, companies that start their adaptation today will have a competitive advantage over those who wait until the last moment.

Contact our team for an initial assessment of your situation regarding the EU AI Act without obligation.

Conclusion: The EU AI Act as an Opportunity, Not Just an Obligation

The EU AI Act represents the world's most ambitious regulatory framework for artificial intelligence. Although it involves significant obligations, it also offers opportunities:

  • Competitive Differentiation: Companies certifying their compliance will generate additional trust
  • Access to the European Market: Compliance allows unrestricted operation in a market of 450 million consumers
  • Process Improvement: Documentation and risk management obligations improve system quality
  • Anticipation of Global Regulation: Other markets will follow the European model

The key dates are immovable: February 2025 and August 2025 have already passed, and August 2026 will arrive sooner than it seems. Preparation starts today.

Frequently Asked Questions About the EU AI Act

Does the EU AI Act Affect Companies Outside the EU?

Yes, the EU AI Act has extraterritorial application. It affects any company that markets AI systems in the EU or whose AI outputs are used within the European territory, regardless of where the company is established.

When Does the EU AI Act Fully Come into Effect?

The EU AI Act came into force on August 1, 2024, but its application is staggered. Prohibitions apply from February 2025, GPAI obligations from August 2025, and full regulation for high-risk systems from August 2026. Total application, including regulated products, will be in August 2027.

What Happens If My Company Uses ChatGPT or Other Generative AI Models?

If you use general-purpose models (GPAI) like ChatGPT, the main obligations fall on the model provider (OpenAI). However, as a deployer, you have obligations of transparency (informing users they are interacting with AI) and responsible use. If you use these models for high-risk decisions (HR, credit), you fall into the high-risk category.

How Much Are the Fines for Non-Compliance with the EU AI Act?

Fines can reach 35 million euros or 7% of global turnover for very serious infringements (use of prohibited systems). For non-compliance with high-risk obligations, up to 15 million or 3%. For SMEs and startups, reduced proportional limits apply.

What Is "AI Literacy" and Why Is It Mandatory?

Article 4 of the EU AI Act requires all companies using AI to ensure their staff has sufficient competencies to operate and supervise these systems. This implies adapted training, continuous updating, and documentation of training actions. This obligation has been in effect since February 2025.

How Do I Know If My AI Systems Are "High Risk"?

A system is high risk if: (1) it is a safety component in already regulated products (medical devices, machinery, etc.), or (2) it is included in Annex III of the Regulation, listing sectors such as biometrics, critical infrastructures, education, employment, essential services, law enforcement, migration, and justice.

Can I Continue Using AI to Filter CVs in Recruitment Processes?

Yes, but with strict obligations. AI systems for personnel selection are classified as high risk. You must use systems compliant with the EU AI Act, ensure effective human oversight, inform candidates of AI use, and maintain the ability to explain decisions made.

What Is the Difference Between "Provider" and "Deployer" in the EU AI Act?

The provider is the one who develops or has developed the AI system and markets it under their name. The deployer is the one who uses an AI system under their professional authority (business user). Both have obligations, but the provider's are more extensive (design, documentation, conformity). The deployer must use the system correctly, supervise it, and report problems.

Article updated to January 2026. AI regulation is constantly evolving. Always consult official sources and professional advice for specific decisions for your company.

