
Artificial Intelligence Act
The purpose of the Artificial Intelligence Act (“AI Act”) is to regulate the development, placing on the market, and use of AI technologies. The AI Act introduces certain technology-neutral requirements applicable to the design and development of AI systems before they can be placed on the market.
What does this mean?
- Certain AI practices are prohibited to ensure that all products placed on the EU market are safe.
- Both providers and deployers of AI systems in the EU will need to comply with the obligations laid down in the rules, regardless of where the provider is established. Further, importers and distributors will be subject to obligations relating to product safety.
- The AI Act establishes a risk-based framework that classifies AI systems as posing unacceptable risk, high risk or limited risk. In addition, the AI Act lays down various obligations for general-purpose AI (“GPAI”) models.
- AI practices with an unacceptable risk level are prohibited from being introduced to or used on the EU market. These are regarded as harmful uses of AI that contravene EU values, e.g., AI systems which manipulate individuals through subliminal techniques, exploit the vulnerabilities of a specific group of individuals, or systems used for social scoring.
- High-risk AI systems, i.e. those that may adversely impact safety or fundamental rights, will be required to undergo a conformity assessment and meet certain requirements throughout their life cycle, such as record keeping and mandatory reporting of serious incidents to the market surveillance authorities. High-risk AI systems that meet these requirements bear the CE marking, indicating conformity. The marking should be affixed visibly, legibly and indelibly; where that is not possible due to the nature of the high-risk AI system, it should be affixed to the packaging or to the accompanying documentation, as appropriate.
- Providers and deployers of certain other AI systems posing limited risks will be made subject to transparency obligations (e.g., ‘deep fakes’ require disclosure that the content has been manipulated).
- GPAI models are subject to certain transparency requirements, EU copyright-compliance measures, and documentation obligations, such as publishing a detailed summary of the training data. In addition, the most powerful GPAI models, which pose systemic risks, for example due to their reach and significant impact on the EU internal market, are subject to additional obligations such as model evaluations, risk assessments, and serious-incident reporting.
- The rules apply to AI systems placed on the EU market regardless of whether the provider is established in the EU or in a third country, as well as to systems whose output affects users within the EU.
Consequences
- Non-compliance with the prohibitions on certain AI practices may lead to fines of up to EUR 35 million or, if the offender is an undertaking, up to 7% of its total worldwide annual turnover for the preceding financial year, whichever is higher. The equivalent thresholds for non-compliance with other obligations under the AI Act (e.g., those concerning high-risk AI systems) are EUR 15 million and 3%.
- Monitoring and enforcement are the responsibility of the national competent authorities designated in each Member State.
- Additionally, the Commission has established the AI Office, which is tasked with ensuring harmonized implementation of the Act through collaboration with the Member States and by issuing guidance. The AI Office also directly enforces the rules on GPAI models.
Timeline
- The AI Act was published in the Official Journal on 12 July 2024 and entered into force on 1 August 2024. Most key provisions of the Act will become applicable twenty-four (24) months after its entry into force, i.e. on 2 August 2026, with some exceptions for high-risk AI systems under Annex I, AI systems used by public authorities, and AI systems that are components of large-scale IT systems. However, certain provisions have already become applicable: the prohibitions on AI practices posing unacceptable risks began to apply on 2 February 2025, and the governance requirements began to apply on 2 August 2025.
- National legislation supplementing the AI Act entered into force in Finland on 1 January 2026, and a second act implementing the rules that become applicable on 2 August 2026 is currently being prepared by the Finnish Government. The legislative process to adopt supplementary legislation in Sweden is ongoing.
Last updated 14 January 2026.