Information risks
What are 'information risks'?
Simply put, information risk is 'risk pertaining to information'.
Breaking it down:
Information is the valuable meaning or knowledge that we derive from data, the content of computer files, paperwork, conversations, expertise, intellectual property and so forth.
Risks, in this context, are the uncertain prospect of harmful incidents.
So, stitching it back together, information risk can be laboriously defined as 'Uncertainty involving or affecting information, normally the deliberate, incidental or accidental action of threats exploiting exposed vulnerabilities, causing adverse impacts'.
To put that more succinctly, information risk is 'risk pertaining to information'.
While the ISO27k standards neither define nor use 'information risk', 'information security risk' is defined in ISO/IEC 27000 as 'Potential that a threat will exploit a vulnerability of an asset or group of assets and thereby cause harm to an organisation'. The harm is described in terms of a loss of confidentiality, integrity or availability of information, mere technicalities. I prefer to take the organisation's perspective: reducing the number and severity of possible adverse business impacts is the primary objective of information security.
Is there a list of information risks?
The suggested resources are all generic, useful reminders of the general types of risk worth considering. Pore over your organisation’s incident records and past risk assessments for further inspiration, and work with management to identify and consider information risks in your particular business context.
Yes, several e.g.:
IT Grundschutz Catalogue (the baseline IT protection manual) includes an extensive threat catalogue, exhausting if not exhaustive;
ISO/IEC 27002, NIST SP800-53 and various other information security and privacy standards, laws and regulations are, in effect, incomplete information security control catalogues that may mention threats, vulnerabilities and impacts;
ISO/IEC 27005 identifies a few threats and vulnerabilities in an annex.
Mitre’s CVE (Common Vulnerabilities and Exposures) is a useful, well-regarded catalogue of cybersecurity (meaning primarily technological/IT system) vulnerabilities. Again, not totally comprehensive but close enough for government work.
Mitre’s CAPEC (Common Attack Pattern Enumeration and Classification) is a structured catalogue of cybersecurity ‘attacks’ i.e. how some threat agents exploit some vulnerabilities;
Good information security textbooks are worth studying too, for example:
Security Engineering by Ross Anderson is a well-respected source;
Cem Kaner’s book Testing Computer Software has a lengthy, structured appendix listing common software errors, some of which create vulnerabilities;
Building Secure Software by John Viega and Gary McGraw, plus Gary’s other books, discuss the concept of threat modelling to develop security specifications for application software;
The Security Development Lifecycle by Michael Howard and Steve Lipner outlines Microsoft’s approach to threat modelling using STRIDE (Spoofing identity, Tampering, Repudiation, Information disclosure, Denial of service and Elevation of privilege).
Most information risk analysis and management support tools, systems, methods and advisories include examples or lists of stuff to consider. And finally, don't forget Google and AI.
What is information risk management?
This is a complex, busy process, liable to go badly wrong. ISO27k provides a sensible management structure to help keep it on track.
'Management' indicates someone proactively addressing information risks on an ongoing basis, along with related governance aspects such as direction, control, authorisation and resourcing of the processes.
The first stage is to Identify potential information risks. Several factors or information sources feed in to that:
Vulnerabilities are the inherent weaknesses within our facilities, technologies, processes (including information risk management itself!), people and relationships, some of which are probably unrecognised or not fully appreciated;
Threats are the external actors and natural events that might cause incidents if they acted on vulnerabilities causing impacts [Note: malicious, fraudulent, inept, coerced or misled insiders can present threats: we are not purely concerned about evil hackers, malware and terrorists roaming the Interwebs!];
Assets are, specifically, information assets - valuable information content plus, to a lesser extent, the associated storage media, computer hardware devices etc.;
Impacts are the harmful effects or consequences of incidents affecting assets, damaging the organisation and its business interests, often also third parties;
Incidents range in scale from minor, trivial or inconsequential events up to disasters and outright catastrophes;
Advisories, standards etc. offer relevant warnings and guidance from organisations such as CERT, the FBI and NSA, ISO/IEC, journalists, bloggers and podcasters, technology vendors plus information risk and security professionals. Threat and vulnerability intelligence services can be useful, along with security advisories and patch notifications from software, hardware, service and information suppliers.
Next, the evaluate risks stage involves considering all that information in order to determine the significance of various risks, which in turn drives priorities for the next stage. The organisation’s appetite for risks is a major concern here, reflecting corporate strategies and policies as well as broader cultural drivers and personal attitudes of the people engaged in risk management activities.
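The evaluation stage can be sketched numerically. Below is a hypothetical illustration of a simple qualitative scoring approach (likelihood times impact on 1-to-5 scales, compared against an assumed risk appetite threshold); the risk names, values and threshold are invented for the example, not a prescribed method:

```python
# Illustrative qualitative risk evaluation: score = likelihood x impact,
# ranked and compared against an assumed corporate risk appetite threshold.

RISKS = [
    # (name, likelihood 1-5, impact 1-5) - invented values for illustration
    ("Ransomware outbreak", 3, 5),
    ("Laptop left on train", 4, 2),
    ("Data centre flood", 1, 4),
]

RISK_APPETITE = 8  # scores above this need treatment (assumed policy value)

def evaluate(risks, appetite):
    """Rank risks by likelihood x impact; flag those exceeding appetite."""
    scored = [(name, likelihood * impact) for name, likelihood, impact in risks]
    scored.sort(key=lambda item: item[1], reverse=True)
    return [(name, score, score > appetite) for name, score in scored]

for name, score, treat in evaluate(RISKS, RISK_APPETITE):
    print(f"{name}: score {score}" + (" - TREAT" if treat else " - accept/monitor"))
```

The ranked output is decision support, not a decision: as noted above, risk appetite and management judgment determine what actually gets treated.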
Treat risks involves avoiding, mitigating, sharing and/or accepting them. This stage involves both deciding what to do and doing it (implementing the risk treatment decisions).
Handle changes might seem obvious but it is called out due to its importance. Information risks are constantly in flux, partly as a result of the risk treatments, partly due to various other factors both within and without the organisation. The ISO27k way is risk-driven, such that the most significant risks at any point should be in hand … but there is always more to do, so this is an endless journey across shifting sands.
Finally, a reminder that the organisation often has to respond to external obligations such as legal and regulatory compliance plus market pressures, customer expectations, commercial contracts etc. These also change from time to time, and should be actively monitored.
What risk analysis method should we use?
Determine your own risk analysis, risk management and/or governance requirements and evaluate the methods, tools, products etc. carefully. There is further advice on how to select specific methods/tools in the next FAQ.
Since the ISO27k standards don't mandate a particular method, you can select whichever methods align with your organisation’s expertise and requirements.
Risk analysis methods are broadly categorised as quantitative (based on mathematical modelling and statistical analysis) or qualitative (experiential, subjective). ISO/IEC 27005 offers advice on selection and use of methods in the ISMS context but does not insist on any specifics.
It is perfectly acceptable, and often beneficial, to mix-n-match multiple methods. For instance, a high-level overview method might identify broad areas of concern (such as privacy), which can then be examined in detail using focused methods (e.g. privacy impact assessments).
Leverage the expertise of business departments such as Internal Audit, Risk Management, Health and Safety, Finance, Project or Programme Management and Operations, as their methods can often be applied to information risks. There is no need to abandon familiar tools simply for ISO27k. However, be mindful of discrepancies in results from different methods. Avoid simplistic approaches like choosing the least costly controls or addressing only the most obvious 'key' risks. Instead, use the analyses as decision support tools for management. Management must determine appropriate security investments, risk appetite and improvement timelines. This requires a combination of management vision, expert advice, and practical judgment.
Below is a very brief introduction to a number of information risk analysis and management methods, standards, guidelines and tools, plus some aimed at supporting Governance, Risk and Compliance and even Security Information and Event Management.
Analog Risk Assessment (ARA) is a deceptively straightforward method to analyse, discuss and consider risks subjectively and simplistically according to their relative probabilities of occurrence and levels of impact;
Calabrese’s Razor is a method developed by Chris Calabrese to help the Center for Internet Security prioritise technical controls in their security configuration guides. It helps evaluate and compare the costs and benefits for each control on an even footing;
COBIT from ISACA provides a comprehensive model guiding the implementation of sound IT governance processes/systems, including to some extent information security controls;
COSO ERM (Committee Of Sponsoring Organisations of the Treadway Commission Enterprise Risk Management framework) is a general structure/approach for managing all forms of organisational risk;
Delphi is a forecasting technique involving successive rounds of anonymous predictions with consolidation and feedback to the participants between each round;
DIY (Do It Yourself) methods, a genuine alternative, not just a straw man. DIY involves using risk analysis methods with which you or your organisation are already familiar, perhaps home-grown methods or even those that are not normally used to examine information risks (e.g. Delphi). Can your existing risk analysis methods, processes and tools be adapted for information risks?;
FMEA (Failure Modes and Effects Analysis) is commonly used in engineering design. It focuses on the possible ways in which a system might fail, almost regardless of the causes;
The UK’s Institute of Risk Management (IRM), Association of Insurance and Risk Managers (AIRMIC) and ALARM, The National Forum for Risk Management in the Public Sector, jointly released A Risk Management Standard back in 2002, for all forms of organisational risk, not just information risk;
ISO 31000 offers guidance on the principles and implementation of risk management in areas such as finance, chemistry, environment, quality, information security etc.;
ISO/IEC 27005 isn’t really a risk assessment or management method as such, more of a meta-method, an approach to choosing methods that are appropriate for your organisation;
Mehari is a free open-source risk analysis and management method in several European languages developed by CLUSIF (Club de la Sécurité de l'Information Français) and CLUSIQ;
NIST SP 800-30 “Risk Management Guide for Information Technology Systems” is a free PDF download from NIST. An accompanying guideline is also available and also free;
NIST SP 800-39 “Managing Risk from Information Systems - An Organisational Perspective” is another freebie from NIST;
OCTAVE (Operationally Critical Threat, Asset, and Vulnerability Evaluation) is CERT’s risk-based strategic assessment and planning technique for security. It takes a business rather than technology-centric view of security risks. OCTAVE Allegro is a quick version of OCTAVE;
Risk IT from ISACA complements their other excellent tools COBIT and ValIT;
Stochastic modelling methods using Markov chains, stochastic Petri nets, Monte Carlo simulation, Bayesian or other statistical techniques and probability theory are commonly applied to estimate uncertain risk values from incomplete data in the financial industry;
Verinice is a free open-source tool supporting the BSI IT-Grundschutz standards. It’s very nice.
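As a concrete taste of the stochastic modelling approach mentioned above, here is a minimal Monte Carlo sketch estimating Annualised Loss Expectancy (ALE). The frequency and impact figures are invented assumptions for illustration, not calibrated data, and this is not any particular commercial method:

```python
import math
import random

def poisson_sample(rng, lam):
    """Knuth's algorithm: sample a yearly incident count with mean lam."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while p > threshold:
        k += 1
        p *= rng.random()
    return k - 1

def simulate_ale(frequency, impact_low, impact_high, trials=50_000, seed=1):
    """Monte Carlo estimate of Annualised Loss Expectancy.

    Incidents per year are Poisson-distributed around 'frequency'; each
    incident's impact is drawn uniformly between low and high estimates.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        incidents = poisson_sample(rng, frequency)
        total += sum(rng.uniform(impact_low, impact_high) for _ in range(incidents))
    return total / trials

# Assumed scenario: one incident every two years, costing 10k to 250k:
ale = simulate_ale(frequency=0.5, impact_low=10_000, impact_high=250_000)
print(f"Estimated ALE: {ale:,.0f}")  # in the region of 65,000
```

The point of simulating rather than multiplying averages is that the full loss distribution (not just its mean) can be inspected, which matters when incident data is sparse and skewed.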
We are not selling, recommending or endorsing any of them. We haven’t even used most of them, personally, and we don't know your requirements except in very general terms.
OK, how should we select risk analysis methods?
Don’t get completely hung up on this: go with what you have and make it work for you, learning, refining, moving ahead.
Read ISO/IEC 27005 for starters and think carefully about what you want. What do you expect the method or tool to achieve for you? Which factors and/or features are most important? Are there any things the method or tool should not do (e.g. gobble-up excessive amounts of limited resources)?
Determine your requirements such as:
Quantitative or qualitative: opinions vary on the relative value of quantitative versus qualitative methods. Few information security or risk management professionals would recommend truly quantitative analysis of information risks in all circumstances due to the shortage of reliable data on incidents (probabilities and impacts), although they are potentially useful in some more narrowly-defined situations. One solution to this dilemma is to use quick/simple qualitative risk assessments followed by risk analyses on selected ‘high risk’ areas using more detailed qualitative or quantitative methods;
Scope: are you purely looking at “information risks” or risks in a broader sense, and what do you really understand by “information risks” anyway: are you in fact concerned about risks to information assets, or business risks that happen to involve information, or something else? Furthermore, which information assets are you concerned with? What will happen with the out-of-scope risks that could be just as significant for the organisation, especially if they remain unrecognised, unanalysed and untreated?
Scalability: are you looking to support a relatively simple analysis of risks for a single process or IT system, an organisation-wide analysis, or all of the above? Will you be completing the analysis just once or repeatedly, and if so how often?
Maintainability and support: some methods use AI/decision support software, whereas others are procedural or can be supported by generic tools such as spreadsheets. Clearly, therefore, they vary in the amount of technical expertise required to install, configure and maintain them. Home-grown tools can be more easily and cheaply modified in the light of your experiences whereas commercial tools tend to be slicker and more polished;
Usability: some methods and tools lead the user through the risk analysis process a step at a time, whereas others are more free-form but arguably assume more knowledge and expertise of the users. Some attempt to reduce the information gathering phase to simplistic self-completion questionnaires for risk non-specialists, others require competent risk analysts;
Value: simply put, value means the business benefits of the tool less the associated costs. Purchase price is not the only factor here.
Can you explain the SoA and RTP?
Don’t get hung up on the names and acronyms. Concentrate on their purpose, which is to clarify the relationship between your information risks and their treatments.
Let's assume that, despite reading ISO/IEC 27001, you are unsure.
The Statement of Applicability is a formal definition of the controls employed by your ISMS. There needs to be some rationale to explain your reasoning and persuade the auditors that important decisions to include or exclude controls from Annex A or from elsewhere were made not arbitrarily but rationally, according to the risks.
Be ready for some robust audit discussions if you decide not to implement common controls at all, blithely accepting significant risks. Likewise, be ready for some robust management discussions if you decide to implement all of Annex A simply because it’s an ISO standard, not because the controls are appropriate and necessary to mitigate (reduce) unacceptable risks.
The Risk Treatment Plan lists the risks identified and evaluated in your risk assessment, along with the associated treatments: unacceptable risks may be mitigated with controls listed in the SoA, or avoided, or shared with other organisations. Small risks may be willingly accepted if they fall within management’s risk appetite or if the controls would cost more than the anticipated incidents, while various other risks (such as the possibility of erroneous assumptions within, and hence decisions based upon, your risk analysis) are implicitly and necessarily accepted.
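The relationship between the RTP and the SoA can be sketched in miniature. This is a hypothetical illustration - the risks, treatments and Annex A control references are invented - showing how the SoA's included controls can be derived from, and justified by, the risk treatment decisions:

```python
# Illustrative Risk Treatment Plan: each risk gets a treatment decision,
# and mitigations point at the controls that will appear in the SoA.
RTP = [
    {"risk": "Phishing leading to credential theft",
     "treatment": "mitigate",
     "controls": ["A.5.17", "A.8.7"]},   # invented Annex A-style references
    {"risk": "Cloud provider outage",
     "treatment": "share",
     "controls": []},                     # shared via contract/insurance
    {"risk": "Minor printer misconfiguration",
     "treatment": "accept",
     "controls": []},                     # within assumed risk appetite
]

# The SoA's included controls fall out of the RTP, each with a
# risk-based rationale for inclusion rather than an arbitrary choice.
soa_controls = sorted({c for entry in RTP for c in entry["controls"]})
print(soa_controls)  # ['A.5.17', 'A.8.7']
```

Keeping the two documents linked this way gives auditors the rationale trail they look for: every included control traces back to a risk, and every unacceptable risk traces forward to a treatment.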
How should we handle our client's information risks?
This may be an opportunity to sell your client some security/risk consultancy services! Either way, have your pet lawyer take a very careful look at any contracts or SLAs relating to third party information assets in your care to be crystal clear about your information security obligations and liabilities.
The managers of, say, commercial shared data centre services should ideally involve (key) clients directly in (part of) the risk analysis. Helping client managers understand and elaborate the information risks relating to their assets should clarify what they expect and enable the supplier to appreciate those expectations – the priorities, for example. What comes first, and why?
If clients are unwilling or unable to engage fully with the risk analysis, managers should at least assess the information risks relating to the contracts and services from the supplier’s perspective, including the risk that clients may have unrealistic or inappropriate expectations about the information security services provided.
A serious information security incident involving the supplier would almost certainly damage customer relations, might lead to legal arguments over the contract/SLA and could either put clients out of business or see them defect to another supplier. Similar considerations apply in other circumstances where the organisation handles information assets belonging to third parties - customers’ personal data and credit card details, for instance.
What is the difference between risk assessment and audit?
Challenging the status quo can be a valuable, if cathartic, experience. At the end of the day, just remember that the primary aim of both activities is to improve the organisation, stimulating management to make changes for the better. These are change catalysts or opportunities to improve.
Risk assessment identifies and analyses potential risks, while an audit evaluates the effectiveness of existing controls at keeping risks within the corporate risk appetite.
Risk assessment tends to be a theoretical 'what if' exercise, often involving workshops, discussions and models to identify and explore inherent and residual risks. It is conducted by those familiar with the area, including risk managers and security experts.
An audit, on the other hand, is a more practical, hands-on 'show me' activity. Among other things, auditors examine and validate the controls in place to determine if they adequately address various risks – both in theory (if they worked as designed and documented, would they be sufficient?) and in practice (are they, in fact, working as designed and documented?).
A key distinction is independence: audits can only be conducted by independent auditors, providing an unbiased perspective. Auditors bring fresh eyes, challenge assumptions and identify blind spots.
Compliance audits, such as ISO 27001 certification audits, specifically assess adherence to regulations and standards. They also consider the risk of non-conformity: significant issues can not only prevent certification but undermine the ISMS, potentially putting the organisation’s entire approach at risk.
Finally, the risk assessment process itself is auditable, while auditors must also manage audit risks, such as failing to identify, evaluate and report critical issues appropriately.
Which compliance obligations are relevant to ISO27k?
The obligations or rules expressed formally in legal language tend to be minimalist, meaning that compliance alone may not protect the organisation’s business interests. Compliance is necessary but not sufficient. You may prefer to handle this separately from the ISMS.
The organisation's compliance with various obligations can be an important driver to implement an ISMS, not least because the ISMS can take some of the weight off management’s shoulders. Managers generally either accept the need to comply, or can be persuaded to do so in order to avoid the personal adverse consequences (typically fines, prison time and career limitations).
As to which obligations are relevant, there are loads of them! Although I Am Not A Lawyer, here is an incomplete listing of the general types or categories of laws, regulations etc. that have some relevance to information, information risk, information security and thus potentially the ISMS:
Building codes – structural integrity, resistance to fires, floods, earthquakes, fire exits …
Business records – financial reporting, tax, credit, banking, money laundering, company accounts …
Business continuity – critical infrastructure, resilience …
Classified information – governmental and military, spying, official secrets, terrorism, organised crime …
Commercial contracts – Non Disclosure Agreements, digital signatures, maintenance and support agreements, Internet/distance selling, invoicing, credit and payment, PCI-DSS, plus various other obligations on or towards business partners, suppliers, customers, advisors, owners …
Community relations – being a good neighbour, supporting the underprivileged …
Consumer protection – product designs, advertising, branding, warranties, fitness for purpose, merchantability, quality and security ...
Corporate governance – company structure, ownership and control, obligations of Officers, independent oversight/audits …
Cryptography – standards, laws and regs e.g. restrictions on use and export of strong crypto …
Defamation – libel, slander ...
Employment – disciplinary process, pre-employment screening/background checks, contracts of employment, codes of conduct …
Environmental – pollution, eco-friendliness, greenwashing …
Ethics – morals, cultural and religious aspects e.g. Sharia law;
Forensics – chain of custody, warrants and warrantless searches, admissibility ...
Fraud – identity fraud, misrepresentation, embezzlement, malfeasance …
Freedom of information – enforced disclosure, including ‘discovery’ in legal disputes;
Hacking – malware, ransomware/coercion, denial of service, unauthorised access …
Health and safety – safety-critical control systems, fire exits, building standards/codes, industrial control systems, working conditions, hazards …
Industry-specifics – some industries are tightly regulated, others less so …
Insurance – terms and conditions, excesses, disclosure of relevant facts ...
Intellectual property – copyright, trademarks, patents, DMCA, trade secrets, publication/disclosure etc. protecting both the organisation’s IP and that of third parties;
Permits – operating licenses in some industries and markets, software licenses …
Pornography – paedophilia, discriminatory/offensive materials, blackmail …
Privacy – data protection, personally identifiable information …
Surveillance – spying, wiretapping, CCTV, monitoring, investigation, forensics …
Technical – standards and interoperability e.g. ISO27k, TCP/IP, WPA2, Windows compatibility …
Telecommunications – networking, lawful/unlawful interception, mail fraud …
Theft – of hardware and media …
Trespass – right of access, right to exclude, ‘citizen’s arrest’ …
International – as well as domestic laws and regulations, those in other countries might also be applicable if your business has an international presence or uses cloud and other services hosted overseas …
++ Others: speak to your Legal/Compliance team about this.
By the way, beware changes to the legislation and ‘case law’ (where judges/courts interpret things in particular ways, setting precedents). Aside from being familiar with the obligations, someone needs to keep on top of the associated policies, contracts, agreements, standards and codes, plus awareness and training, compliance/conformity assessments and enforcement aspects. For example, do you have the policies and procedures in place for exceptions and exemptions? Do you need to check compliance and perhaps enforce your organisation’s obligations on third parties e.g. confidentiality agreements with business partners and former employees?
Warning: assuming you are an information security professional looking into this, be very wary of being expected or even perceived by colleagues and management as a legal expert. Even professional lawyers specialise because the field is too broad for anyone to be entirely competent across the board. Senior managers generally own and are accountable for corporate compliance. Don’t take on their mantle! By all means offer general advice and guidance but leave them with the compliance burden. For your and their protection, explicitly recommend that they seek the guidance of competent professionals. Once again, for good measure, IANAL and this FAQ is not legal advice.
What should we do about exceptions?
Key to this approach is personal accountability of Information/Risk Owners for adequately protecting their information assets. If senior management doesn't understand or support concepts such as exceptions, exemptions, accountability, responsibility, ownership, information assets and risk, then evidently the organisation has significant governance issues to address first!
First understand the vital difference between exceptions and exemptions*:
Exceptions are unauthorised noncompliance/nonconformity with requirements, typically identified by audits, management reviews, during the system design phase when developing software and processes, or revealed by information security incidents;
Exemptions are authorised noncompliance/nonconformity. Exemptions are the way to formalise risk management decisions when Information/Risk Owners explicitly accept specific identified risks on behalf of the organisation for legitimate business reasons.
For example, imagine that an IT systems audit has identified that system A is configured to accept passwords of at least 6 characters, while the corporate password standard mandates at least 8 characters. This is an exception that should be brought to the attention of the Information/Risk Owner for system A. The owner then considers the situation, weighs the risk to the organisation and to the information asset, takes advice from others and decides how to treat the risk. The preferred response is to bring the system into line with the policies. However, that is not always possible. If instead the decision is to accept the risk, an exemption to the specific policy requirement is granted, but - and this is the important bit - the Information/Risk Owner is held personally accountable by management for any security incidents relating to that exemption, by simple extension of their accountability for protecting their information assets.
Exemptions should be formalised e.g.:
The Information/Risk Owner should be required to sign a crystal-clear statement regarding their understanding and acceptance of the risk to their asset if the exemption is granted;
The exemption should be granted by being countersigned on behalf of management by an authoritative figure such as the CEO or CISO;
Optionally, the exemption may specify compensating controls (such as explicit guidance to users of system A to choose passwords of at least 8 characters in this case);
All exemptions should be formally recorded on a controlled corporate register;
All exemptions should be reviewed by the owners and management periodically (e.g. every year) and, if still required and justified, renewed using the same formal process as the initial authorisation. Typically exemptions may be renewed and continue indefinitely just so long as the owner is prepared to continue accepting the risk and management is prepared to accept the situation, but some organisations may impose limits (e.g. an exemption automatically expires after one year and cannot be renewed without a majority vote in favour by the Board of Directors).
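Those formalities lend themselves to a simple controlled register. Here is a hypothetical sketch of one register entry with an annual review check; the field names, people and dates are invented for illustration:

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class Exemption:
    """One entry in a controlled exemption register (illustrative fields)."""
    requirement: str                 # the policy clause not being met
    owner: str                       # accountable Information/Risk Owner
    approver: str                    # countersigning authority, e.g. CISO
    granted: date
    compensating_controls: list = field(default_factory=list)
    review_period: timedelta = timedelta(days=365)

    def due_for_review(self, today: date) -> bool:
        """True once the periodic review/renewal date has passed."""
        return today >= self.granted + self.review_period

register = [
    Exemption(
        requirement="Passwords must be at least 8 characters (system A allows 6)",
        owner="A. Owner",
        approver="CISO",
        granted=date(2024, 1, 15),
        compensating_controls=["User guidance: choose 8+ character passwords"],
    ),
]

overdue = [e for e in register if e.due_for_review(date(2025, 6, 1))]
print(len(overdue))  # 1 - this exemption is due for review/renewal
```

A spreadsheet works just as well in practice; the essentials are the named accountable owner, the countersignature, the compensating controls and the enforced review date.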
If there are loads of exceptions and especially exemptions to certain mandatory requirements, management really ought to reconsider whether the requirements are truly mandatory. If in fact they are, any current exemptions should be set to expire at some future point, forcing owners to select risk treatments other than ‘accept the risk’. Information Security should take up the challenge to help improve conformity. If the requirements are not in fact mandatory after all, the policies etc. should be revised accordingly.
* Note: organisations use different words for these two concepts, such as exceptions and waivers, or exemptions and waivers, or even exemptions and exceptions with their meanings reversed. The specific terms don’t particularly matter provided they are clearly defined, the distinction is clearly understood and they are used consistently in practice.
I’m confused about ‘residual risk’ …
Proactively and systematically managing residual risks indicates a mature or maturing ISMS. It suggests the organisation already has a grip on its unacceptable risks and is taking a sensible, realistic approach.
Residual literally means 'of the residue' or 'left over'. So, residual risk is the left-over risk remaining once risk treatments have been applied.
So, for example, suppose after risk assessment there are 3 risks (A, B and C): risk A is acceptable, B and C are not acceptable. After risk treatment, B becomes acceptable but C is still not acceptable. Which is the residual risk: just C? Or B and C?
It’s a trick question. In fact, A, B and C all leave some (residual) risk behind ...
Accepted risks are still risks: they don't cease to have the potential for causing impacts simply because management decides not to do anything about them! Acceptance merely means management doesn't believe they are worth reducing further. Management may be wrong (Shock! Horror!) - the risks may not be as they believe, or things may change;
Mitigated or controlled or managed risks are still risks: they are reduced but not eliminated, usually, and the controls may fail in action (e.g. antivirus software that does not recognise and block 100% of all malware, or that someone accidentally disables one day);
Eliminated risks are probably no longer risks, but what if the risk analysis was mistaken or the risks aren’t in fact 100% totally eliminated? What if the situation changes?
Avoided risks are probably no longer risks, but again the risk analysis may have been wrong, or someone may deliberately or inadvertently take the risk anyway;
Shared risks are reduced but are still risks, since the sharing may not turn out well in practice (e.g. if an insurance company declines a claim for some reason) and may not be adequate to completely negate the impacts (e.g. the insurance 'excess' charge). Remember that the manager/s who elected to share risks are personally accountable for those decisions if it all turns to custard, goes pear-shaped or hits the fan ...
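As a back-of-the-envelope illustration of why mitigated risks still leave residual risk, the left-over annual loss expectancy can be crudely estimated by scaling the inherent risk by how effective the control is and how reliably it actually operates. All figures here are invented assumptions:

```python
def residual_risk(inherent_ale, control_effectiveness, control_reliability=1.0):
    """Crude residual risk estimate.

    inherent_ale: annual loss expectancy with no controls (assumed figure).
    control_effectiveness: fraction of incidents the control stops when working.
    control_reliability: fraction of the time the control is actually operating
    (e.g. antivirus that someone accidentally disables one day).
    """
    effective = control_effectiveness * control_reliability
    return inherent_ale * (1 - effective)

# Antivirus stops ~95% of malware but is only running ~98% of the time:
print(round(residual_risk(100_000, 0.95, 0.98)))  # 6900
```

Even a strong, reliable control leaves something behind - which is exactly the point of the list above: residual risk is never zero, only (hopefully) acceptable.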
