
ISO/IEC 27090 — Cybersecurity — Artificial Intelligence — Guidance for addressing security threats to artificial intelligence systems [DRAFT]

 

Abstract

“This document provides guidance for organizations to address security threats and failures in artificial intelligence (AI) systems. The guidance in this document aims to provide information to organizations to help them better understand the consequences of security threats to AI systems, throughout their lifecycle, and descriptions of how to detect and mitigate such threats.”
[Source: ISO/IEC JTC 1/SC 27 SD11]
 

Introduction

The proliferation of ‘smart systems’ means ever greater reliance on automation: computers are making decisions and responding to situations that would previously have required human beings. For now, though, the smarts are limited, so the systems don’t always react as they should.

 

Scope of the standard

The standard will guide organisations on addressing security threats to artificial intelligence (AI) systems. It will:

  • Help organisations better understand the consequences of security threats to AI systems, throughout their lifecycle; and
  • Explain how to detect and mitigate such threats.

 

Content of the standard

The Working Draft outlines threats such as the following (the first two are illustrated in the code sketch after this list):

  • Poisoning - attacks on the integrity of the system and/or its training data, e.g. feeding false information to mislead and hence harm a competitor’s AI system;
  • Evasion - deliberately misleading a trained AI model using carefully-crafted inputs presented at inference time;
  • Membership inference and model inversion - methods to determine [and potentially reconstruct] the data points used in training the system; and
  • Model stealing - theft of the valuable intellectual property in a trained AI system/model.
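
To make the first two of those threats concrete, here is a minimal, self-contained sketch (my own illustration, not taken from the draft standard) of label-flipping poisoning and a one-step gradient-sign evasion attack against an ordinary scikit-learn classifier. The dataset, model, poisoning rate and perturbation size are all illustrative assumptions:

    # Sketch of poisoning and evasion; every parameter is an illustrative
    # assumption, not something specified by ISO/IEC 27090.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # A synthetic binary classification task stands in for a real AI system.
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.5, random_state=0)

    clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("clean accuracy:   ", clean.score(X_test, y_test))

    # Poisoning: an attacker who can tamper with the training pipeline
    # flips the labels of 30% of the training records before training.
    y_poisoned = y_train.copy()
    idx = rng.choice(len(y_poisoned), size=int(0.3 * len(y_poisoned)),
                     replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]
    poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    print("poisoned accuracy:", poisoned.score(X_test, y_test))

    # Evasion: a carefully-crafted perturbation of a legitimate input at
    # inference time (one gradient-sign step; epsilon = 1.0 is an
    # assumption) often flips the clean model's prediction.
    w, b = clean.coef_[0], clean.intercept_[0]
    x = X_test[0]
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))  # predicted probability of class 1
    grad = (p - y_test[0]) * w              # gradient of log-loss w.r.t. x
    x_adv = x + 1.0 * np.sign(grad)
    print("prediction before/after evasion:",
          clean.predict([x])[0], clean.predict([x_adv])[0])

The pattern matters more than the numbers: poisoning corrupts the model before deployment, whereas evasion leaves the model intact and manipulates what it sees at run time.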

 

Status

The project started in 2022.

January status update: the standard is at Committee Draft stage, progressing nicely and on track for publication during 2025.

 

Personal notes

Imprecise/unclear use of terminology in the drafts will be disappointing if it persists in the published standard. Are ‘security failures’ vulnerabilities, control failures, events or perhaps incidents? Are ‘threats’ information risks, threat agents, incidents, or something else?

Detecting ‘threats’ (which I think means impending or in-progress attacks) is seen as a focal point for the standard, hinting that security controls cannot respond to undetected attacks ... which may be generally true for active responses, but not for passive, general-purpose controls.
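
As a sketch of that distinction (again my own illustration, not from the draft), consider a model-serving wrapper in which input clamping is a passive control that works against out-of-range inputs whether or not anything is ever detected, while rejection is an active response that can only fire once the detector does; the threshold and valid range are assumptions:

    # Sketch: passive vs detection-triggered controls around a deployed model.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=500, n_features=5, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X, y)
    mean, std = X.mean(axis=0), X.std(axis=0)

    def sanitise(x):
        """Passive control: clamp inputs to an assumed valid range.
        The clamp itself does not depend on any attack being detected."""
        return np.clip(x, -10.0, 10.0)

    def anomalous(x, z_threshold=4.0):
        """Detection: flag inputs far outside the training distribution."""
        return bool(np.any(np.abs((x - mean) / std) > z_threshold))

    def handle_request(x):
        if anomalous(x):   # active response: impossible without detection
            return "rejected: anomalous input"
        return int(model.predict([sanitise(x)])[0])

    print(handle_request(X[0]))           # typical input: class prediction
    print(handle_request(X[0] + 100.0))   # blatant outlier: rejected

Even if the detector never fired, the clamping would still blunt grossly out-of-range inputs, which is the point about passive, general-purpose controls.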

As usual with ‘cybersecurity’, the proposal and drafts focus on active, deliberate, malicious, targeted attacks on AI systems by motivated and capable adversaries, disregarding the possibility of natural and accidental threats such as design flaws and bugs, plus threats from within (i.e. insider threats).

The standard addresses ‘threats’ (risks) to AI that are of concern to the AI system owner, rather than threats involving AI that are of concern to its users or to third parties, e.g. hackers and spammers misusing AI systems to learn new malevolent techniques. The rapid proliferation (explosion?) of publicly-accessible AI systems during 2023 put a rather different spin on this area.

Even within the stated scope, I see no mention of ‘robot wars’ where AI systems are used to attack other AI systems. Scary stuff, if decades of science fiction and cinema blockbusters are anything to go by.

The potentially significant value of AI systems in identifying, evaluating and responding to information risks and security incidents is evidently out of scope for this standard: the whole thing is quite pessimistic, focusing on the negatives.

However, the hectic pace of progress in the AI field is a factor: this standard will provide a starting point, a foundation for further AI security standards and updates as the field matures.

 

