ISO/IEC 27090



ISO/IEC 27090 — Cybersecurity — Artificial Intelligence — Guidance for addressing security threats to artificial intelligence systems [DRAFT]

 

Abstract

“This document provides guidance for organizations to address security threats and failures specific to artificial intelligence (AI) systems. The guidance in this document aims to provide information to organizations to help them better understand the consequences of security threats specific to AI systems, throughout their lifecycle, and descriptions of how to detect and mitigate such threats.”
[Source: notes on a working draft - likely to change]
 

Introduction

The proliferation of ‘smart systems’ means ever greater reliance on automation: computers are making decisions and reacting/responding to situations that would previously have required human beings. Currently, however, the smarts are limited, so the systems don’t always react as they should.

 

Scope of the standard

The standard will guide organisations on addressing security threats to artificial intelligence (AI) systems. It will:

  • Help organisations better understand the consequences of security threats to AI systems, throughout their lifecycle; and
  • Explain how to detect and mitigate such threats.

 

Content of the standard

The 2nd Working Draft outlines the following threats (a toy sketch of the first two follows the list):

  • Poisoning (attacks on the system and/or data integrity);
  • Evasion (deliberately misleading the AI algorithms with carefully crafted inputs);
  • Membership inference and model inversion (methods to distinguish [and potentially manipulate] the data points used in training the system);
  • Model stealing (theft of the valuable intellectual property in an AI system/model);
  • Model misuse (using the AI model for unintended purposes e.g. through insecure APIs);
  • Sensor spoofing (feeding false input data, perhaps?);
  • Scaling (unclear from the draft);
  • Adversarial (other attacks, perhaps?).
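
Purely as a personal illustration of the first two threats, here is a minimal numpy sketch; everything in it (the data, the model, the numbers) is invented for the demo and comes from me, not from the draft standard. Mislabelled points are injected to poison the training data, and a carefully crafted input is then nudged along the clean model’s loss gradient to evade it.

import numpy as np

rng = np.random.default_rng(0)

def make_data(n=400):
    # Two Gaussian blobs: class 0 around (-2,-2), class 1 around (+2,+2)
    X = np.vstack([rng.normal(-2, 1, (n // 2, 2)), rng.normal(+2, 1, (n // 2, 2))])
    y = np.r_[np.zeros(n // 2), np.ones(n // 2)]
    return X, y

def train_logreg(X, y, lr=0.1, epochs=200):
    # Plain logistic regression fitted by batch gradient descent
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def accuracy(w, b, X, y):
    return np.mean(((X @ w + b) > 0) == y)

X_train, y_train = make_data()
X_test, y_test = make_data()

# Poisoning (an attack on data integrity): inject points that sit deep in
# class-1 territory but carry class-0 labels, dragging the learned boundary.
X_dirty = np.vstack([X_train, rng.normal(4, 0.5, (80, 2))])
y_dirty = np.r_[y_train, np.zeros(80)]

w_clean, b_clean = train_logreg(X_train, y_train)
w_dirty, b_dirty = train_logreg(X_dirty, y_dirty)
print("clean-model accuracy:   ", accuracy(w_clean, b_clean, X_test, y_test))
print("poisoned-model accuracy:", accuracy(w_dirty, b_dirty, X_test, y_test))

# Evasion: craft an input for the *clean* model by stepping a genuine
# class-0 point along the sign of its loss gradient (FGSM-style).
x, y0 = X_test[0], y_test[0]
grad = (1 / (1 + np.exp(-(x @ w_clean + b_clean))) - y0) * w_clean
x_adv = x + 4.0 * np.sign(grad)
print("original input classified as:", int(x @ w_clean + b_clean > 0))
print("evasive input classified as: ", int(x_adv @ w_clean + b_clean > 0))

On this toy data the poisoned model’s test accuracy should fall visibly below the clean model’s, and the perturbed input should flip class, illustrating why the draft treats both as integrity threats rather than conventional hacks.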

 

Status

The project started in 2022.

It is at 2nd Working Draft stage. Status update, January: the title seems to have changed a little; it now omits ‘failures’.

 

Personal notes

As usual with ‘cybersecurity’, the project proposal focused on active, deliberate, malicious, focused attacks on AI systems by motivated and capable adversaries, disregarding the possibility of accidental and natural threats, and of threats from within, i.e. internal/insider threats. Even within the area of concern, I see no overt mention of ‘robot wars’, i.e. AI systems attacking other AI systems. Scary stuff, if decades of science fiction are anything to go by.

Detecting ‘threats’ (which I think means impending or in-progress attacks) is seen as an important area for the standard, implying that security controls cannot respond to undetected attacks ... which may be true for active responses but not for passive, general-purpose controls (a trivial sketch follows).
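
By way of illustration of such a passive control (again my invention, nothing from the draft; the valid input range is an assumed parameter): plain input validation works whether or not an attack is ever noticed.

import numpy as np

FEATURE_MIN, FEATURE_MAX = -5.0, 5.0   # assumed plausible sensor range (invented)

def sanitise(x):
    # Passive, general-purpose control: reject malformed inputs and clamp
    # the rest to the plausible range before they reach the model. No attack
    # detection is involved, yet out-of-range spoofed or evasive inputs are
    # blunted just the same.
    x = np.asarray(x, dtype=float)
    if not np.all(np.isfinite(x)):
        raise ValueError("malformed model input")
    return np.clip(x, FEATURE_MIN, FEATURE_MAX)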

I’m curious about the imprecise use of terminology too. Are ‘security failures’ vulnerabilities, control failures, events, or perhaps incidents? Are ‘threats’ information risks, or threat agents, or incidents, or something else?

However, the rapid pace of change in this field is acknowledged, with the implication that this standard will provide a basic starting point, a foundation for other standards and updates as the field matures.

 


Copyright © 2023 IsecT Ltd.