
ISO/IEC 27090

ISO/IEC 27090 — Cybersecurity — Artificial Intelligence — Guidance for addressing security threats and compromises to artificial intelligence systems

[DRAFT]

Abstract

ISO/IEC 27090 “addresses security threats and compromises specific to artificial intelligence (AI) systems. [ISO/IEC 27090] aims to provide information to organizations to help them better understand the consequences of security threats specific to AI systems, throughout their life cycle, and descriptions of how to detect and mitigate such threats. [ISO/IEC 27090] is applicable to all types and sizes of organizations, including public and private companies, government entities, and not-for-profit organizations, that develop or use AI systems.”


[Source: ISO/IEC 27090 Final Draft International Standard]

Introduction

The rampant proliferation of ‘smart systems’ means ever greater reliance on automation: computers are making decisions and responding to situations that would previously have required human beings. Today’s technology has limited intelligence, however, so systems utilising Artificial Intelligence do not always behave as they should, or as expected. Furthermore, their operating environments present numerous potential threats, and hence numerous risks.


Since smart systems provide their AI capabilities using conventional computer systems and networks, the AI-related risks add to those already present - the usual gamut of information confidentiality, integrity and availability concerns, plus risks relating to the way the 'systems' (as a whole) are designed, developed, tested, implemented (integrated into existing infrastructures and processes), used, monitored, managed, maintained and eventually decommissioned. There are governance, management and procedural aspects to this with strategic, tactical and operational implications, aside from the CIA/technical ones.


Bottom line: AI security is complex and difficult!

Scope

ISO/IEC 27090 will guide organisations on addressing [some] security threats to Artificial Intelligence systems. It will:

  • Discuss the potential organisational consequences of security threats that might compromise AI systems at various points in their lifecycles, drawing on ISO/IEC 22989 and ISO/IEC 5338*; and

  • Explain how to detect and mitigate such threats (risks), drawing on ISO/IEC 42001 and ISO/IEC 38507*.


* Several other references are noted in the text, reflecting the huge amount of interest in this area and the proliferation of guidance. 

Structure

The main clauses are likely to be:

  • 5: Application of information security

  • 6: Threats to AI systems

  • 7: Mitigations and their interactions with threats and other mitigations

  • Annex A: Mapping attacks to the AI system life cycle and to assets

  • Annex B: AI-specific versions of conventional attacks


The standard will cover at least a dozen AI 'threats' (scenarios or types of incident involving deliberate attacks) such as:

  • Poisoning - data and model poisoning e.g. deliberately injecting false information to mislead and hence harm a competitor’s AI system;

  • Evasion - deliberately misleading a trained AI system using carefully-crafted inputs (adversarial examples) presented at inference time;

  • Membership inference and model inversion - methods to infer which data points were used to train the system [and potentially to reconstruct or manipulate them];

  • Model stealing - theft of the valuable intellectual property in a trained AI system/model, such as the model itself plus its training data and inputs/prompts; 

  • Prompt injection and output injection - downstream attacks exploiting vulnerabilities in operational AI systems.
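To make the poisoning threat concrete, here is a toy sketch (our own illustration, not taken from the standard) of label-flipping data poisoning against a deliberately simple nearest-centroid classifier. The data, labels and poisoning ratio are all invented for demonstration; the point is merely that a handful of mislabelled training records can silently shift the decision boundary so that malicious inputs are classified as benign.

```python
def centroid(points):
    """Mean of a list of equal-length numeric tuples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def predict(x, centroids):
    """Return the label whose centroid is nearest to x (squared Euclidean distance)."""
    return min(centroids,
               key=lambda lbl: sum((a - b) ** 2 for a, b in zip(x, centroids[lbl])))

# Clean training data: two well-separated classes.
clean = {
    "benign":    [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)],
    "malicious": [(9.0, 9.0), (10.0, 9.0), (9.0, 10.0)],
}
cents = {lbl: centroid(pts) for lbl, pts in clean.items()}
print(predict((7.5, 7.5), cents))    # -> malicious (correctly flagged)

# Poisoning: an attacker injects mislabelled 'benign' records near the
# malicious region, dragging the benign centroid towards it.
poisoned = {
    "benign":    clean["benign"] + [(9.0, 9.0)] * 6,   # flipped labels
    "malicious": clean["malicious"],
}
cents_p = {lbl: centroid(pts) for lbl, pts in poisoned.items()}
print(predict((7.5, 7.5), cents_p))  # -> benign (the attack slips through)
```

Real models and real poisoning attacks are far subtler, but the mechanism - corrupt the training data, corrupt the learned behaviour - is the same, which is why the standard treats training-data integrity controls as a mitigation in their own right.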


For each 'threat', the standard will offer about a page of advice:

  • Describing/characterising the threat;

  • Discussing the potential consequences of an attack;

  • Explaining how to detect and mitigate attacks.

An extensive list of references will direct readers to further information including relevant academic research and more pragmatic advice, including other ISO and non-ISO standards.

Status

ISO/IEC JTC 1/SC 27/WG 4 started developing this standard in 2022.


The standard is now at Final Draft International Standard stage, likely to be published later in 2026.

Commentary

Unfortunately it appears that the published standard will make imprecise, unclear and sometimes inappropriate use of terminology relating to information risk and security. For example, are ‘security failures’ vulnerabilities, control failures, events, incidents or perhaps compromises? Are ‘threats’ attacks, information risks, threat agents, incidents, scenarios, some blend of those, or something else entirely?


Detecting ‘threats’ (which generally refers to impending or in-progress attacks) is a focal point for the standard, perhaps implying that security controls cannot respond to undetected attacks ... which may be generally true for active responses but not for passive, general purpose controls.


As so often with ‘cybersecurity’, the standard is primarily concerned with active, deliberate, malicious, focused attacks on AI systems by motivated and capable adversaries, largely disregarding natural and accidental threats such as design flaws, bugs and power issues, plus insider threats within the organisations developing and using AI systems.


The standard addresses ‘threats’ (attacks) to AI that are of concern to the AI system owner, rather than threats involving AI that are of concern to its users or to third parties e.g. hackers and spammers misusing AI systems to learn new malevolent techniques. The rapid proliferation of publicly-accessible generative AI systems such as ChatGPT during 2023 put a rather different spin on this area.


The scope excludes ‘robot wars’ where AI systems are used to attack and exploit other AI systems. Scary stuff, if decades of science fiction and cinema blockbusters are anything to go by.


The potentially significant value of AI systems in identifying, evaluating and responding to information risks and security incidents is not considered in this standard: the whole thing is quite pessimistic, focusing on the negatives, the problems associated with AI.


However, the hectic pace of progress in the AI field is clearly a factor. This standard contributes to the field, complementing other AI security standards, and we can expect updates as AI matures. Experience of actual AI-related incidents is already starting to accumulate, and our knowledge of the risks is improving all the time.

This page last updated:

11 March 2026

© 2026 IsecT Limited 

 
