
ISO/IEC 27091

ISO/IEC 27091 — Cybersecurity and privacy — Artificial intelligence — Privacy protection

[DRAFT]

Abstract

[ISO/IEC 27091] "provides guidance for organizations to address privacy risks in artificial intelligence (AI) systems, including machine learning (ML) models. [ISO/IEC 27091] helps organizations identify privacy risks throughout the AI system lifecycle, and establishes mechanisms to evaluate the consequences and treatment of such risks. ..."


[Source: ISO/IEC 27091 Draft International Standard]

Introduction

By gathering and processing substantial quantities of information (maybe even 'big data'), AI/ML systems may erode privacy - for example by linking personal information from disparate sources back to individual people, or inferring sensitive details - unless appropriate privacy arrangements are made.

Scope

The standard applies to all manner of organisations that develop or use AI systems.


The focus is on mitigating privacy risks by integrating suitable privacy controls into the design of AI/ML systems.


Business decisions about whether it is even appropriate to design, build, use and connect AI systems and services at all, plus general considerations for information risk and security management (e.g. ensuring data accuracy and system/service resilience, and dealing with incidents), are largely or completely out of scope.

Structure

Main sections:

  • 5: Framework for privacy analysis of AI systems - gives an overview of the classical information risk management process, i.e. identify, analyse, evaluate and treat privacy risks. 

  • 6: Privacy of AI models - discusses a few well-known AI system 'privacy threats' (modes of attack that are relevant to privacy e.g. membership inference, training data extraction, poisoning, model inversion, insider risk ...) with generic advice on mitigating controls (e.g. limiting access, anonymisation and pseudonymisation, input and output filtering - see the pseudonymisation sketch after this list). 

  • 7: Privacy in AI system lifecycle - privacy engineering.

  • Annex A: Additional information for privacy analysis of AI systems.

  • Annex B: Use case template.
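
For illustration only - this sketch is mine, not from the standard. One of the mitigating controls noted above, pseudonymisation, can be as simple as replacing direct identifiers with keyed hashes (HMAC-SHA256 here) so that records remain linkable for analysis or model training while the raw identifiers are withheld. The key handling and function names below are assumptions made for the sketch:

    import hmac
    import hashlib

    # Hypothetical key - in practice, generate it securely and store it
    # separately from the pseudonymised data (e.g. in a key vault).
    PSEUDONYMISATION_KEY = b"replace-with-a-securely-stored-secret"

    def pseudonymise(identifier: str) -> str:
        """Replace a direct identifier (name, email address etc.) with a
        keyed hash: the same input always yields the same token, so records
        remain linkable, but the mapping cannot be reversed without the key."""
        digest = hmac.new(PSEUDONYMISATION_KEY, identifier.encode("utf-8"),
                          hashlib.sha256)
        return digest.hexdigest()

    # Example: pseudonymise the direct identifier in a training record
    record = {"email": "alice@example.com", "age": 34, "postcode": "AB1 2CD"}
    record["email"] = pseudonymise(record["email"])
    print(record)

Unlike plain (unkeyed) hashing, the keyed approach resists dictionary attacks on guessable identifiers such as email addresses, provided the key is kept secret - which is also why this is pseudonymisation rather than anonymisation: whoever holds the key can re-identify the records.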

Status

The standard development project started in 2023.


The standard is essentially complete, presently at Draft International Standard (DIS) stage, with national standards bodies due to vote before the end of February 2026.


It looks likely to be published during 2026.

Commentary

The standard's risk-based approach makes sense, but (as with so much AI security-related work at the moment) the scope, focus or perspective feels rather academic and constrained to me. The standard does not, in my admittedly jaundiced opinion, adequately address or acknowledge the bigger picture here, e.g.:

  • Broader aspects of information risk and security management such as strategies, policies, architectures, compliance, change and incident management, including the extent to which those activities specifically address privacy [the standard refers to ISO/IEC 27090 for this - currently also in draft];

  • 'Classical' information risks, threats, attacks, vulnerabilities, impacts and consequences that just happen to involve AI, such as smart phishing, smart malware, smart fraud, smart piracy etc. using AI systems, services and tools for nefarious purposes including coercion, misinformation and disinformation - with incidental and indirect rather than central and direct privacy implications;

  • Societal aspects such as the continued erosion of trust and control over our personal information as it is increasingly being demanded, requested, gathered, shared and exploited, including by various authorities, both openly and covertly, systematically, at scale;

  • The longstanding disparity of privacy approaches between most of the world (with GDPR and OECD guidance essentially giving individuals rights to retain ownership and control of their own personal information in perpetuity), and the USA in particular (where it seems personal information can be gathered, shared and exploited commercially by whoever holds it, similarly to other types of information, with little reference to the individuals concerned);

  • Compliance, commercial, technological and practical implications if, say, the individuals whose personal information has been used for model training decide to withdraw their consent and (under GDPR) insist that their information is deleted and no longer used, or insist on corrections being made; 

  • The innovation and novelty of all this, meaning that collectively we have quite a journey ahead towards maturity, with both anticipated and surprising incidents ('learning points') likely along the way - such as people naively building and using advanced AI systems without reference to applicable laws, regulations, policies and practices ('shadow AI'), and the race towards Artificial General Intelligence; 

  • Commercial aspects such as the intense competition within the AI industry, and what will happen with potentially valuable AI models, big data and metadata if AI companies implode or are taken over, possibly but not necessarily just when the AI bubble bursts.


However, the standard does usefully discuss the use of AI to support: 

  • Privacy consent management and control;

  • Privacy-Enhancing Technologies such as cryptographic authentication and encryption, anonymisation and pseudonymisation, and data minimisation (a nod towards risk avoidance);

  • Privacy assurance such as auditing, monitoring, detecting and responding to privacy violations (a minimal detection/redaction sketch follows this list); 

  • Security for AI models and federated learning, including access control and identity management;

  • Natural Language Processing for data privacy policies.
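
Again purely for illustration (mine, not the standard's): detection and redaction of obvious personal data in AI inputs or outputs can start with simple pattern matching. The patterns below are deliberately minimal assumptions - a real deployment would use a far more comprehensive PII detection library or service:

    import re

    # Hypothetical, deliberately simple patterns for two common PII types
    PII_PATTERNS = {
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
        "uk_phone": re.compile(r"\b(?:\+44\s?|0)\d{4}\s?\d{6}\b"),
    }

    def redact(text: str) -> tuple[str, list[str]]:
        """Redact matching personal data and report which PII types were
        detected, supporting both output filtering and privacy monitoring."""
        detected = []
        for label, pattern in PII_PATTERNS.items():
            if pattern.search(text):
                detected.append(label)
                text = pattern.sub(f"[REDACTED {label}]", text)
        return text, detected

    clean, hits = redact("Contact Alice on alice@example.com or 01234 567890.")
    print(clean)  # Contact Alice on [REDACTED email] or [REDACTED uk_phone].
    print(hits)   # ['email', 'uk_phone']

The detection list doubles as an audit signal: logging which PII types were caught, and how often, provides the kind of monitoring evidence that privacy assurance activities need.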

This page last updated:

6 December 2025

© 2025 IsecT Limited 

 
