Navigating AI risks and regulations - Staying compliant with the EU AI Act

Ethical AI development paves the way forward within the European Union

Artificial Intelligence is transforming industries and daily life, but its rapid adoption raises ethical and security concerns. The EU's AI Act establishes the first comprehensive framework to regulate AI, requiring organisations to improve the security of their AI models and applications.

Artificial Intelligence (AI) is today's buzzword, discussed everywhere from startups and tech giants to grandma's dinner table. Lauded as the gateway to the future, AI is transforming both industries and everyday life by automating tasks and solving a wide range of complex problems with human-like intelligence and decision-making.

By harnessing the power of AI, organisations can develop leading-edge solutions, optimise processes, and reach new heights of innovation in an efficient and cost-saving manner. Reflecting the past decade's trend in AI development, Statista puts the current global market value of artificial intelligence technology at over 185 billion US dollars, an increase of almost 50 billion compared to 2023 (Statista, 2024).

But the world’s embrace of AI is not without controversy. Debates often pit the need for ethical usage and development against the need to maintain an unhindered spirit of innovation.

The EU's AI Act is pioneering ethical innovation and regulation

The European Union has responded to this controversy by introducing regulation, the first of its kind, to address the inherent risks of AI technology and establish a framework for ethical AI development, while encouraging innovation and protecting against the harmful effects of AI systems in the Union.

The EU Artificial Intelligence Act (AI Act) was published in the Official Journal of the European Union on July 12, 2024, and entered into force on August 1, 2024. Its aim is to set out harmonised rules for placing AI systems on the market, putting them into service, and using them (European Commission, 2024). It also prohibits certain AI practices and specifies requirements for high-risk AI systems, along with obligations for their operators.

Additionally, the regulation harmonises transparency rules for specific AI systems and rules for placing general-purpose AI models on the EU market. The Act further establishes rules for market monitoring, surveillance, governance, and enforcement, while also setting out measures to support innovation, particularly amongst small and medium-sized enterprises and start-ups.

The AI Act is expected to be fully implemented and operational by December 31, 2030. Further provisions of the Act, such as Codes of Practice for general-purpose AI (GPAI) and European Commission guidelines on high-risk AI compliance and its obligations, are to be defined by 2026.

Navigating AI risks: Ethical, societal, and cyber security challenges

While the EU Commission further defines and establishes guidelines, it is crucial that we examine and understand the risks and concerns surrounding today's AI technologies. Some risks are ethical, such as bias in AI algorithms, lack of transparency, and potential misuse of AI technology. Others are societal, including job displacement, increased social disparity due to inequitable access to AI, emotional manipulation by AI, and reinforcement of social bias and discrimination.

The security risks, namely data breaches, unauthorised access, and exploitable weaknesses that can lead to cyberattacks, are undoubtedly paramount, since AI systems increasingly underpin society's infrastructure.

In response to these challenges, organisations can implement cyber security strategies that combine knowledge from AI research, adversarial machine learning, and general cyber security.
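
To make the adversarial machine learning angle concrete, below is a minimal robustness smoke test in Python. It assumes a toy logistic regression standing in for a deployed model; the weights, input, and perturbation budget are all invented for illustration, not taken from any real system.

    # Toy adversarial robustness check (FGSM-style) on a stand-in model.
    # All numbers are illustrative; a real test would target the production model.
    import numpy as np

    rng = np.random.default_rng(0)
    w, b = rng.normal(size=4), 0.1           # stand-in for trained weights

    def predict(x):
        """Sigmoid probability of the positive class."""
        return 1.0 / (1.0 + np.exp(-(x @ w + b)))

    x = rng.normal(size=4)                   # a legitimate input sample
    # Step in the sign of the score's gradient w.r.t. the input; for this
    # linear model that gradient is proportional to w, so sign(w) suffices.
    eps = 0.25                               # perturbation budget
    x_adv = x + eps * np.sign(w)

    flipped = (predict(x) >= 0.5) != (predict(x_adv) >= 0.5)
    print(f"clean={predict(x):.3f} adversarial={predict(x_adv):.3f} flipped={flipped}")

A test like this, run against the actual model, gives an early signal of how easily small, crafted input changes can alter predictions.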

Strengthen your AI security with best practices for safe and ethical AI implementation

By implementing the following security best practices as a baseline, organisations can harden and improve the security of their AI models and applications.

  • Incorporate zero trust for AI by granting access to models and data only to specific users for a limited time, thereby applying the principles of least-privilege access, rigorous authentication, and continuous monitoring.
  • Create an AI Bill of Materials (AIBOM) covering training data sources, pipelines, model development, training procedures, and performance to improve transparency, reproducibility, accountability, and ethical AI considerations (a minimal sketch of such a record follows this list).
  • Implement a comprehensive data supply chain using enterprise AI pipelines and MLOps solutions to automate and simplify machine learning workflows and deployments, since clean, comprehensive data is vital to AI models.
  • Provide regular cyber security training to the people building and supporting AI systems to ensure continuous improvement of AI processes and models.
  • Always align the cyber security strategy with business priorities.
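
An AIBOM has no single mandated format; the Python sketch below shows one way such a record could be structured. Every field name, value, and the JSON output format is an illustrative assumption, not a normative schema from the AI Act or any standard.

    # A minimal AIBOM record; every field name and value here is illustrative.
    import json
    from dataclasses import dataclass, field, asdict

    @dataclass
    class DataSource:
        name: str        # human-readable dataset name
        origin: str      # URL or internal dataset identifier
        licence: str     # licence or usage terms
        collected: str   # collection date, ISO 8601

    @dataclass
    class AIBOM:
        model_name: str
        model_version: str
        training_procedure: str                          # free-text summary
        pipeline: list = field(default_factory=list)     # ordered pipeline stages
        data_sources: list = field(default_factory=list)
        performance: dict = field(default_factory=dict)  # metric -> score

        def to_json(self) -> str:
            return json.dumps(asdict(self), indent=2)

    bom = AIBOM(
        model_name="demand-forecaster",
        model_version="2.1.0",
        training_procedure="gradient boosting with 5-fold cross-validation",
        pipeline=["ingest", "clean", "feature-engineering", "train", "evaluate"],
        data_sources=[DataSource("sales-history", "internal:dwh/sales",
                                 "proprietary", "2024-05-01")],
        performance={"mae": 12.4, "r2": 0.87},
    )
    print(bom.to_json())

Keeping the serialised record under version control alongside the model makes training provenance auditable over time.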

Moreover, organisations can draw on frameworks such as the NIST Artificial Intelligence Risk Management Framework (AI RMF) and ISO/IEC 42001:2023 (Information technology - Artificial intelligence - Management system) for guidance on EU AI Act compliance.

Staying ahead of AI risks with compliant cyber security solutions

Though the timeline stipulated in the Act spans several years, the risks of AI system usage are already expanding. To proactively adopt and maintain the cyber security requirements of the AI Act, we therefore recommend that organisations start early by:

  • Implementing or updating policies and procedures to strengthen AI security governance.
  • Conducting gap assessments and audits to understand the organisation's current maturity in AI usage (a simple scoring sketch follows this list).
  • Institutionalising AI-related responsibilities through risk assessments, incident reporting, an AIBOM, and training and awareness.
  • Training internal stakeholders on documentation requirements.
  • Testing technical requirements for secure AI system usage.
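
As a simple illustration of the gap-assessment step, the sketch below scores a handful of hypothetical AI security controls against target maturity levels. The control names and levels are invented for the example; they are not taken from the AI Act or any framework.

    # Toy gap assessment: compare current vs. target control maturity (0-3).
    # Control names and maturity levels are hypothetical examples.
    CONTROLS = {
        "ai-security-policy": {"target": 3, "current": 1},
        "risk-assessment":    {"target": 3, "current": 2},
        "incident-reporting": {"target": 2, "current": 0},
        "aibom-maintained":   {"target": 2, "current": 1},
        "staff-training":     {"target": 2, "current": 2},
    }

    # Keep only controls below target, largest gap first.
    gaps = {name: c["target"] - c["current"]
            for name, c in CONTROLS.items() if c["current"] < c["target"]}

    for name, gap in sorted(gaps.items(), key=lambda kv: -kv[1]):
        print(f"{name}: {gap} maturity level(s) below target")

Even a lightweight scored inventory like this makes it easier to prioritise remediation work before formal audits.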

AFRY can help businesses navigate AI regulations to stay compliant

AFRY consultants are ready to help organisations navigate the complex landscape of AI regulations, ensure compliance, and foster trust in AI usage. We understand the challenges businesses face in navigating regulatory complexity while driving growth and efficiency through innovation with AI technology.

From the planning and requirements stage through design and implementation, our team of experts has helped clients integrate the requirements of the EU AI Act into each phase of their latest AI projects.

Our competencies include:

  • Regulatory compliance: Ensuring that AI systems align with the Act’s legal and ethical standards.
  • Risk management: Supporting organisations in identifying and mitigating risks associated with AI system deployment, including bias, lack of transparency, and data privacy concerns.
  • Technical implementation: Developing AI systems for organisations by applying best practices in AI development, including data collection, algorithm design, modelling, and testing/validation.
  • Training and support: Guiding and coaching organisations and their personnel in understanding the nuances of the Act and implementing AI technologies in compliance with the new regulations.

Contact us for more information

Nicklas Täck - Section Manager, Product & Info security
