ETSI Releases New European Standard to Strengthen AI Cybersecurity

AI News Hub Editorial
Senior AI Reporter
January 15th, 2026
ETSI Releases New European Standard to Strengthen AI Cybersecurity

Europe just took a big step toward getting serious about AI security.

In December, the European Telecommunications Standards Institute (ETSI) published an updated standard called “Securing Artificial Intelligence (SAI); Baseline Cyber Security Requirements for AI Models and Systems,” officially designated EN 304 223. The goal is pretty straightforward: create a shared set of baseline cybersecurity expectations for AI models and AI-powered systems, so organizations aren’t left guessing what “secure” is supposed to mean.

This isn’t happening in a vacuum, either. EN 304 223 builds on two major pieces of global work that have already been shaping conversations about secure AI development: the NCSC/CISA Guidelines for Secure AI System Development and the UK Government’s Code of Practice on AI Cyber Security. That UK Code went through extensive international consultation in 2024, and the UK is expected to update it again soon so it lines up with ETSI’s standard. The message is clear: the push here isn’t just “more guidance,” it’s alignment across borders so the rules don’t splinter depending on where you operate.

What makes this standard worth paying attention to is the kind of threats it focuses on. ETSI isn’t treating AI like ordinary software. EN 304 223 addresses risks that are especially relevant to AI systems, including things like data poisoning, model manipulation, and indirect prompt injection. These aren’t theoretical issues—these are real ways attackers can compromise AI behavior while the system still appears to function normally. And because AI systems are increasingly used in decisions people rely on, a security failure doesn’t always look like a crash or outage. Sometimes it looks like a confident answer that’s wrong, biased, manipulated, or unsafe.

One of the most practical parts of the new standard is how it’s structured. Rather than looking at security as a single moment in time, EN 304 223 lays out minimum security measures across the full AI lifecycle. It maps its requirements to the four lifecycle stages described in ISO/IEC 22989: design and development, deployment, operations and monitoring, and retirement. In other words, this isn’t just about building something securely—it’s about deploying it securely, maintaining it securely, and even handling end-of-life responsibly.

ETSI is also building this standard into a larger framework. EN 304 223 builds on its earlier technical specification TS 104 223 and guidance document TR 104 128, and ETSI is currently developing a related conformity assessment specification (TS 104 216). That matters because standards only become truly powerful once organizations can measure and prove whether they meet them.

Bigger picture, this is part of the broader shift happening across the AI industry right now. For years, speed and capability have driven everything. But as AI becomes more widely adopted—and more deeply embedded into business workflows—the market can’t afford to treat security as an afterthought. Standards like EN 304 223 are a sign that AI security is moving into a new phase: less improvisation, more consistency, and more pressure to show real resilience against high-risk threats.

The UK’s Department for Science, Innovation and Technology is now working to raise awareness of the standard and encourage adoption across industry, alongside wider efforts to promote good cyber practices and explore international collaboration. If that adoption spreads the way ETSI intends, EN 304 223 could become one of the key reference points companies lean on when they’re trying to build AI systems people can actually trust.

This analysis is based on reporting from techuk.

This article was generated with AI assistance and reviewed for accuracy and quality.

Last updated: January 15th, 2026

About this article: This article was generated with AI assistance and reviewed by our editorial team to ensure it follows our editorial standards for accuracy and independence. We maintain strict fact-checking protocols and cite all sources.

Word count: 580Reading time: 0 minutesLast fact-check: January 15th, 2026

AI Tools for this Article

Trending Now

📧 Stay Updated

Get the latest AI news delivered to your inbox every morning.

Browse All Articles
Share this article:
Next Article