The EU’s AI Act: essential compliance strategies


The EU AI Act will come into full effect in August 2026, but certain provisions are set to apply earlier, such as the ban on AI systems that perform a range of prohibited functions, which applies from February 2025.

Business leaders and decision-makers should make sure their organisations allow plenty of time and resources to meet all aspects of the AI Act’s implementation deadlines.

Having penned a series of blog posts exploring the Act in more detail, our team of experts specialising in AI compliance and DPO as a service closes that journey with an exploration of some of the key strategies that can keep your business ahead of the curve and compliant with the EU’s AI Act.

What Is The AI Act?

The AI Act establishes a regulatory and legal framework for the deployment, development and use of AI systems within the EU. Taking a risk-based approach, the legislation categorises AI systems according to their potential impact on safety, human rights and societal well-being. Some systems are banned entirely, while systems deemed 'high-risk' are subject to stricter requirements and assessments before deployment.

AI systems are categorised into different risk levels based on their potential impact, with the burden of compliance increasing in proportion to the risk. The three main categories are prohibited, high-risk and low-risk.

AI applications falling into the prohibited category are banned entirely, due to their unacceptable potential for negative consequences. High-risk systems, those deemed to have a significant impact on people’s safety, wellbeing and rights, are allowed but subject to stricter requirements. Low-risk systems pose minimal dangers and therefore have fewer compliance obligations.

Who Must Comply With The AI Act?

Similar to the General Data Protection Regulation (GDPR), the AI Act has extra-territorial reach. This makes it a significant law with global implications: it can apply to any organisation marketing, deploying or using an AI system in the EU, even if the system is developed or operated outside the EU.

There are different classifications of use, which dictate the responsibilities and expectations attached to different use cases. The two most common classifications are likely to label businesses as either Providers or Deployers, but there are also classifications for Distributors, Importers, Product Manufacturers and Authorised Representatives.

How Can My Business Prepare?

For organisations developing or deploying AI systems, preparing for compliance is likely to be a complex and demanding task, especially for those managing high-risk systems. But it is also an opportunity to lay the foundations of responsible AI innovation, so businesses would do well to approach it as more than a simple box-ticking exercise: it is a chance to lead in building trust with users and regulators alike.

By embracing compliance as a catalyst for more transparent AI usage, businesses can turn regulatory demands into a competitive advantage. To harness this “edge”, there are some essential strategies you can implement to support your AI Act compliance journey.  

Staff awareness and training

Organisations intending to use AI systems in any capacity should take the time to consider the potential impact of those systems, and engage in an appropriate level of staff awareness training and upskilling. This is an essential element of ensuring team members recognise their roles in compliance and are equipped to implement the AI Act’s requirements. 

A complete, detailed training programme should address the key requirements of the AI Act, including any role-specific details. For example, AI developers may need more in-depth technical training, whilst compliance officers will focus on documentation and regulatory obligations. 

Training programmes should aim to address the specific risks associated with the type of data being processed, as well as the system’s intended use. For example, employees working with systems that have a greater impact on individuals, such as those making credit decisions, may require more extensive training than those handling non-sensitive functions. 

Establish strong corporate governance

For organisations that provide or deploy systems classified as high-risk or General Purpose AI (GPAI), a foundation of strong corporate governance is essential to demonstrate and maintain compliance. Without certain elements in place, organisations may struggle to meet specific requirements of the Act and to maintain the necessary compliance documentation.

To build and maintain this foundation of strong corporate governance, organisations should focus on a few key areas:

  • Implement effective risk and quality management systems, which are critical for overseeing and mitigating risks and help to ensure issues are identified early and addressed

  • Ensure robust cybersecurity and data protection practices are in place to safeguard sensitive personal data and protect against data breaches 

  • Develop accountability structures with clear lines of responsibility to ensure compliance efforts are coordinated and effective 

  • Monitor AI systems on a regular and ongoing basis, reporting on their performance and compliance status 

Ensure robust cybersecurity and data protection practices

When establishing strong corporate governance, it’s vital to recognise that robust cybersecurity and data protection are crucial for meeting the requirements of the AI Act.

On the cybersecurity side, practices should include implementing robust infrastructure security with strict access controls, maintaining a detailed incident response plan, and carrying out regular security audits to identify vulnerabilities.

The data protection requirements of the AI Act overlap with the GDPR in several areas, particularly around transparency and accountability.

Whilst the GDPR focuses on the protection of personal data, the AI Act covers the broader development and regulation of AI systems. This includes not only safeguarding personal data but also managing overall AI risks to ensure fairness, prevent harm, and promote transparency.  

You can use the principles of the GDPR and your current data protection practices to support compliance with the AI Act by integrating ‘Privacy by Design’ into your AI systems, conducting Impact Assessments for high-risk AI applications and maintaining clear documentation of data protection activities.

Prepare for upcoming guidelines and templates

The EU is developing specific codes of practice and templated documentation to help organisations meet their compliance obligations. Any business dealing with AI systems should keep an eye out for updates and further information on these documents, as they will undoubtedly prove useful for compliance efforts.

Adhere to ethical AI principles and practices

Although guidelines and practical applications of the AI Act are still being defined, its core principles are well understood and are reflected in a number of responsible and ethical AI frameworks.

Organisations considering significant AI use – especially in ways that involve personal data or affect individuals – need to understand how an AI system works, its intended use, and its limitations. Aside from aligning with best practice, documenting these aspects also fosters accountability. 

Organisations must also ensure compliance with the transparency requirements of existing data protection laws in addition to the specifics of the AI Act.

Finally, it is essential to conduct a risk assessment of how the AI system may impact both the individuals who interact with it and the organisation’s liability and reputation should anything go wrong. This proactive approach to AI governance is highly beneficial and can largely be implemented without tailoring it to any specific regulation.

Seek expert guidance

The AI Act is complex in nature, and with good reason. Any organisation unsure of the extent of its obligations should seek professional advice now, in the early stages of the Act’s lifespan, to support its compliance journey. One tool that can be used today is the EU AI Act Compliance Checker, designed to help organisations verify that an AI system aligns with regulatory requirements.

Conclusion

The AI Act will change the way businesses interact with and make use of artificial intelligence systems, but demonstrating compliance with the Act needn’t feel like a chore or simply another box-ticking exercise. As mentioned, this is an opportunity for businesses to put themselves ahead of the curve, demonstrate a “future-proofing” attitude and put the individual at the centre of concern, which, in turn, may be rewarded with greater trust and confidence from customers.


