
The EU Artificial Intelligence Act 2024


What is The Artificial Intelligence Act?

On 12th July 2024, Regulation (EU) 2024/1689 of the European Parliament and of the Council, known as the Artificial Intelligence Act, was published in the Official Journal of the European Union.

The Act, which took effect on 1st August 2024, sets out harmonised rules for AI development, market placement, and use within the EU. It adopts a proportionate, risk-based approach, emphasising transparency, accountability, and data protection to ensure that AI systems are auditable and trustworthy.

High-risk AI applications are subject to the most stringent controls, with the Act imposing rigorous obligations on AI providers to ensure safety and adherence to existing legislation protecting fundamental rights throughout the AI lifecycle.

Why was the Regulation Introduced?

The rapid advancement of Artificial Intelligence has brought substantial benefits alongside significant risks. As AI technologies increasingly affect many aspects of life, the need for a comprehensive regulatory framework became clear. The AI Act addresses these concerns by establishing a clear set of rules to ensure the ethical and safe deployment of AI systems.

What is the Impact on Businesses?

The Act’s impact varies by sector. It primarily affects businesses working with AI systems in areas such as finance, health, transport, and security. These sectors must adhere to the highest standards of the law due to the significant potential impact their technologies could have on security and individual rights.

Furthermore, the regulation fosters innovation by allowing businesses, including SMEs and startups, to develop, train, validate, and test AI systems within AI regulatory sandboxes. By 2nd August 2026, each EU Member State must establish at least one sandbox, which can be set up jointly with other Member States. These sandboxes provide a controlled environment for testing AI systems under regulatory supervision, facilitating innovation while ensuring adherence to regulatory standards.

Responding to Legal Requirements with ISO 42001:2023

ISO 42001:2023 offers a structured approach to managing AI systems and supports compliance with the AI Act. Here’s how:

Data Protection: The standard emphasises data security and privacy, aiding compliance with data protection laws such as the GDPR. Adherence to ISO 27001 ensures a robust information security management system (ISMS), while ISO 27701 provides a framework for managing personally identifiable information (PII), enhancing your organisation's ability to meet GDPR requirements and other privacy regulations.

Governance and Structure: ISO 42001:2023 helps establish a clear governance structure for managing AI, facilitating adherence to oversight and transparency requirements.

Risk Management: It provides a systematic approach to identifying and mitigating risks, essential for meeting the law’s risk assessment requirements for high-risk AI applications.

Continuous Improvement: ISO 42001:2023 promotes ongoing review and enhancement of AI practices, crucial for adapting to legal and technological changes.

By implementing a management system based on ISO 42001:2023, organisations not only support compliance with current AI legislation but also prepare for future regulations and technological challenges.

A Quick Overview of ISO/IEC 42001:2023 | Artificial Intelligence Management System

This standard offers a certifiable framework tailored for the growing landscape of AI adoption. It’s designed to support the development of AI products within a responsible ecosystem, ensuring both businesses and society reap the full benefits of AI while maintaining stakeholder confidence through transparency and trust.

ISO/IEC 42001 is the world’s first AI management system standard, providing valuable guidance for this rapidly changing field of technology.

It is a valuable resource for governments, academia, and businesses worldwide involved in AI development and deployment. From IT and telecommunications to retail, healthcare, manufacturing, and the automotive industry, the standard addresses the diverse needs of AI stakeholders across sectors.

To understand the organisation and its context, it can be helpful for the organisation to determine its role relative to the AI system. These roles can include, but are not limited to, one or more of the following:

AI providers, including AI platform providers, AI product or service providers

AI producers, including AI developers, AI designers, AI operators, AI testers and evaluators, AI deployers, AI human factor professionals, domain experts, AI impact assessors, procurers, AI governance and oversight professionals

AI customers, including AI users

AI partners, including AI system integrators and data providers

AI subjects, including data subjects and other subjects

Relevant authorities, including policymakers and regulators



Eng. Karam Malkawi

Global Standards | CEO & Food Safety Expert

