The EU AI Act and the public sector

The EU AI Act (Regulation (EU) 2024/1689) is European legislation regulating the development, deployment and oversight of artificial intelligence. For governments and regulators, the Act places AI explicitly within the scope of public accountability.

The regulation introduces a risk-based framework and sets binding obligations for both providers and deployers of AI systems. Public organisations must be able to explain, justify and govern their use of AI.

What changes under the EU AI Act?

  • AI systems must be classified by risk level (unacceptable, high, limited or minimal)
  • High-risk AI systems are subject to strict obligations, including risk management and human oversight
  • Transparency and documentation become mandatory
  • Senior management becomes accountable for AI use

The EU AI Act does not prescribe which AI systems may or may not be used. Instead, it requires public organisations to make explicit and defensible governance choices.

Why the EU AI Act is primarily a governance challenge

While the regulation contains legal and technical provisions, it deliberately leaves room for organisational design. This means public organisations must decide for themselves:

  • who is responsible for AI systems
  • how decisions on AI use are taken and documented
  • how oversight and accountability are organised

Without clear AI governance, compliance risks becoming fragmented and reactive rather than deliberate.

From regulation to organisational practice

Implementing the EU AI Act requires more than legal interpretation. It requires alignment with existing governance structures, risk management and accountability mechanisms. In practice, this means:

  • integration with internal control frameworks
  • alignment with transparency obligations
  • preparation for external supervision

Support with AI governance and the EU AI Act

Dutch Governance Advisory supports public organisations in translating the EU AI Act into workable governance, clear responsibilities and accountable decision-making.

View our services or schedule an exploratory conversation.
