FAQ: information risk management



This section of the ISO27k FAQ addresses common questions about Information Risk Management in the context of an ISO27k Information Security Management System:

 

Q: What is Information Risk Management?

A: I’m not being facetious when I say that IRM is the management of risks to information:

  • Management implies someone proactively, deliberately, explicitly and systematically identifying, assessing, evaluating and dealing with risks on an ongoing basis (coping with any changes), along with related governance aspects such as direction, control, authorization and resourcing of the process, risk treatments etc.;
  • Risk, in this context, is the possibility, the potential occurrence of events or incidents that might materially harm the organisation’s interests or interfere with the realisation of business objectives;
  • Information is the valuable meaning, knowledge and insight deriving from raw data such as the content of computer files, paperwork, conversations, expertise, intellectual property, art, concepts and so forth.

The process diagram sums it up:
 

[Diagram: the information risk management process - Identify, Evaluate risks, Treat risks, Handle changes, with External obligations feeding in]


The first stage of the process is to Identify potential information risks. Several factors or information sources feed into the Identify step, including:

  • Vulnerabilities are the inherent weaknesses within our facilities, technologies, processes (including information risk management itself!), people and relationships, some of which are not even recognized as such;
  • Threats are the actors (insiders and outsiders) and natural events that might cause incidents by exploiting vulnerabilities, causing impacts;
  • Assets are, specifically, information assets, in particular valuable information content but also, to a lesser extent, the related storage vessels, computer hardware etc.;
  • Impacts are the harmful effects or consequences of incidents and calamities affecting assets, damaging the organisation and its business interests, and often third parties;
  • Incidents range in scale from minor, trivial or inconsequential events up to calamities, disasters and outright catastrophes;
  • Advisories, standards etc. refers to relevant warnings and advice put out by myriad organisations such as CERT, the FBI, ISO/IEC, journalists, technology vendors plus information risk and security professionals (our social network).

The Evaluate risks stage involves considering/assessing all that information in order to determine the significance of various risks, which in turn drives priorities for the next stage. The organisation’s appetite or tolerance for risks is a major concern here, reflecting corporate strategies and policies as well as broader cultural drivers and personal attitudes of the people engaged in risk management activities.

Treat risks means avoiding, mitigating, sharing and/or accepting them. This stage involves both deciding what to do, and doing it (implementing the risk treatment decisions).

Handle changes might seem obvious but it is called out on the diagram due to its importance. Information risks are constantly in flux, partly as a result of the risk treatments, partly due to various other factors both within and without the organisation.

Down at the bottom of the diagram, we’ve acknowledged that the organisation often has to respond to External obligations such as compliance and market pressures or expectations.
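The stages above can be sketched as a minimal risk-register entry that a small organisation might keep. This is purely an illustrative sketch, not a prescribed ISO27k format: every field name and value here is an assumption.

```python
# Minimal risk-register sketch reflecting the Identify -> Evaluate -> Treat
# -> Handle changes cycle. Illustrative only; not an ISO27k-mandated format.
from dataclasses import dataclass, field
from enum import Enum


class Treatment(Enum):
    AVOID = "avoid"
    MITIGATE = "mitigate"
    SHARE = "share"
    ACCEPT = "accept"


@dataclass
class RiskEntry:
    asset: str                       # the information asset at risk
    threat: str                      # actor or event that might cause an incident
    vulnerability: str               # weakness the threat might exploit
    impact: str                      # harmful consequence if it occurs
    rating: str = "medium"           # evaluated significance (low/medium/high)
    treatment: Treatment = Treatment.ACCEPT
    review_notes: list[str] = field(default_factory=list)  # 'handle changes' log


# Identify a risk, evaluate it, decide a treatment, then revisit over time
risk = RiskEntry(
    asset="customer database",
    threat="external attacker",
    vulnerability="unpatched web server",
    impact="disclosure of personal data",
)
risk.rating = "high"                    # Evaluate
risk.treatment = Treatment.MITIGATE     # Treat
risk.review_notes.append("Q3: patching SLA introduced; re-rate at next review")
```

The point of the `review_notes` field is the Handle changes stage: a register entry is never finished, merely current.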

 

 

FAQ: “We are just starting our ISO27k program. Which information risk analysis method/s could we use?”

 

A: It is difficult to recommend particular methods or tools without knowing more about your organisation in terms of its maturity in risk analysis and information security management, its size and complexity, industry, ISMS status and so forth. While ISO/IEC 27005 offers general advice on choosing and using information risk analysis or assessment methods, the ISO27k standards do not specify any specific method, giving you the flexibility to select a method, or more likely several methods and/or tools, that suit your organisation’s requirements.

Many different information risk analysis methods and tools exist (see the list below for starters), in two main groups sharing broadly similar characteristics: the quantitative (mathematical) and qualitative (experiential) methods. None of them, not one, is explicitly required or recommended by the ISO27k standards, which give some guidance but leave the choice of method/s to users, depending on their requirements and factors such as their familiarity with certain methods. So compliance is not really a factor in the choice, except in the most general sense.

By the way, it is perfectly acceptable, advised even, for an organisation to use multiple risk analysis methods. Some are more suited to particular situations than others - for example, it might make sense to use a simple high-level overview method to identify aspects of concern, and then to change to other more detailed in-depth methods to examine those particular aspects more fully. Furthermore, some risk analysis methods are favoured by the experts in functions such as audit, risk management, health and safety, penetration testing, application design and testing, and business continuity management: there is no real benefit in forcing them to abandon their favourite methods and tools just to conform to ISO27k. In fact, the differing perspectives, experience and insight these methods, tools and experts bring could prove very valuable (e.g. health and safety people assess “hazards” using methods remarkably similar to ours, while safety-critical is conceptually the same as business-critical).

One thing to take care over, though, is how to resolve the inevitable discrepancies in the results from different methods. A crude policy such as “Pick whichever recommends the least costly controls and minimise only the obvious risks” is no better than “Pick the most comprehensive and minimise all the risks”. The analyses are merely decision support tools to guide management, who still need to make the vital decisions about how much security investment is appropriate, how much risk can be tolerated, how much certainty is really needed in the decision process, and when to make any needed information security improvements. Resolving such dilemmas requires management vision and experience, coupled with expert analysis/advice ... and gut feel. Good luck ... and don't neglect your contingency plans!

Below is a very brief introduction to a number of information risk analysis and management methods, standards, guidelines and tools, plus some aimed at supporting Governance, Risk and Compliance (GRC) and even Security Information and Event Management (SIEM). Please note that we are not selling or endorsing any of them. We haven’t even used most of them, personally. The short descriptions below are mostly drawn from the websites and should not be swallowed whole. You need to determine your own risk analysis, risk management and/or governance requirements and evaluate the methods, tools, products etc. carefully - there is further advice on how to select specific methods/tools in the next Q&A. Caveat emptor.

  1. Analog Risk Assessment (ARA) is a deceptively simple creative method to analyse, visualize, report, compare and consider risks subjectively according to their relative probabilities of occurrence and impacts. Use Probability Impact Graphs to represent and debate disparate risks on a directly-comparable basis;
  2. Calabrese’s Razor is a method developed by Chris Calabrese to help the Center for Internet Security prioritize technical controls in their security configuration guides, though it has wider application. It helps to evaluate and compare the costs and benefits for each control on an even footing. An interesting approach;
  3. COBIT from ISACA provides a comprehensive model guiding the implementation of sound IT governance processes/systems, including to some extent information security controls. It is widely used by SOX and IT auditors;
  4. COSO ERM (the Committee of Sponsoring Organizations of the Treadway Commission’s Enterprise Risk Management framework), published in 2004, is a widely used general structure/approach to managing all forms of organisational risk;
  5. Delphi is essentially a forecasting technique involving successive rounds of anonymous predictions with consolidation and feedback to the participants between each round. It can be applied to predicting information risks with no less chance of success than the other methods shown here;
  6. DIY (Do It Yourself) home-grown methods - see below;
  7. FMEA (Failure Mode and Effects Analysis) is a method commonly used in engineering design to examine the possible ways in which a system (or process or whatever) might possibly fail, impacting the organisation (or users, customers etc.). The specific causes of such failures are de-emphasized compared to other risk analysis methods: the focus is on outcomes, identifying and mitigating the most harmful effects regardless of cause, given that, in complex systems, it is generally impossible to determine all possible failure scenarios exhaustively;
  8. FMECA is FMEA with a criticality analysis to help home-in on the parts of the design most worthy of effort to reduce failures;
  9. The UK’s Institute of Risk Management, Association of Insurance and Risk Managers (AIRMIC) and ALARM, The National Forum for Risk Management in the Public Sector, jointly produced A Risk Management Standard way back in 2002. It encompasses all forms of organisational risk, not just information risk, using terms defined in ISO Guide 73. It is still maintained and is available for free in several languages;
  10. ISO 31000 offers guidance on the principles and implementation of risk management in general (not IT or information security specific). ISO 31000 is intended to provide a consensus general framework for managing risks in areas such as finance, chemistry, environment, quality, information security etc. It is highly regarded;
  11. ISO/IEC 27005 isn’t really a risk assessment or management method as such, more of a meta-method, an approach to choosing methods that are appropriate for your organisation. The 2022 update to this standard was a huge improvement - well worth a look;
  12. NIST SP 800-30 “Risk Management Guide for Information Technology Systems” is a free PDF download from NIST. An accompanying guideline is also free;
  13. NIST SP 800-39 “Managing Risk from Information Systems - An Organizational Perspective” is another freebie from NIST, funded by U.S. tax-payers;
  14. OCTAVE (Operationally Critical Threat, Asset, and Vulnerability Evaluation) is CERT’s risk-based strategic assessment and planning technique for security. It takes a business rather than technology-centric view of security risks. OCTAVE Allegro is, as the name suggests (to musicians if not the unfortunate owners of quite possibly the worst British car ever made - and, oh boy, that’s saying something), a quick version of OCTAVE;
  15. Risk IT from IT Governance Institute/ISACA is similar in style to COBIT and Val IT but focuses on risk;
  16. Stochastic modeling methods using Markov chains, stochastic Petri nets, Monte Carlo simulation, Bayesian or other statistical techniques and probability theory are commonly applied to estimate uncertain risk values from incomplete data in the financial industry. They have some potential for systematically examining information risks, but are uncommon in practice;
  17. Verinice is a free open-source tool supporting the BSI IT-Grundschutz standards.
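As a taste of the stochastic modeling approach in item 16, here is a hedged sketch of a Monte Carlo estimate of annual loss, assuming a Poisson incident frequency and lognormal incident severity. Every parameter value is an illustrative guess, which is precisely the catch with these methods: the outputs are only as good as the frequency and severity estimates fed in.

```python
# Monte Carlo sketch of annual loss: Poisson frequency, lognormal severity.
# All parameter values are illustrative assumptions, not real incident data.
import math
import random


def poisson(rng, lam):
    # Knuth's method for sampling a Poisson-distributed incident count
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1


def simulate_annual_loss(mean_incidents=2.0, severity_mu=9.0,
                         severity_sigma=1.2, trials=10_000, seed=42):
    rng = random.Random(seed)
    losses = []
    for _ in range(trials):
        n = poisson(rng, mean_incidents)   # incidents in one simulated year
        losses.append(sum(rng.lognormvariate(severity_mu, severity_sigma)
                          for _ in range(n)))
    losses.sort()
    return {"expected_annual_loss": sum(losses) / trials,
            "p95_bad_year": losses[int(0.95 * trials)]}
```

With made-up parameters this yields an expected annual loss and a 95th-percentile “bad year” figure; in practice the hard part is justifying those parameters from sparse incident data, which is one reason such methods remain uncommon in practice.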

We are not recommending the methods and products/tools listed above as such, merely providing some options for your consideration. If you know of other information risk analysis tools, products and methods worth including in this FAQ, please get in touch.

By the way, DIY is a genuine alternative, not just a straw man. It involves using risk analysis methods with which you or your organisation are already familiar, perhaps home-grown methods or even those that are not normally used to examine information risks (e.g. Delphi and stochastic modeling). Most if not all organisations have to examine and respond to all sorts of risks routinely. Many use informal/unstructured techniques such as risk workshops and brainstorming, coupled with more structured and rigorous methods as necessary. Maybe your existing risk analysis methods, processes and tools are already being used or could be adapted to examine information risks? Provided they are sufficiently documented, rational, comprehensive and stable (meaning the results are reasonably repeatable), the ISO/IEC 27001 auditors may well be persuaded that your organisation understands its information risks well enough to design a suitable management system.

That said, be wary of naive attempts to quantify and compare risks mathematically, for example using simple products or sums of risk factors such as threat, vulnerability and impact values. This is all figurative, informal arithmetic, not mathematically, let alone scientifically, sound. Problems arise from:

  • The values we assign to the risk factors, which are usually ordinal values on arbitrary and often non-linear scales;
  • Inherent uncertainties in our assessments of those values, not least because they can vary dramatically from day-to-day as well as according to context; and
  • Doubts about the validity or sufficiency of the chosen factors in calculating risk - are there other factors we don’t yet appreciate? Are they equally important?

Similar issues occur, by the way, with many information security metrics. People who are unfamiliar with statistics can easily get carried away by the numbers and assign great significance to minor differences that are well within the bounds of random noise. On top of that, the situations we are dealing with are inherently complex and difficult to model or analyse scientifically, so an apparent correlation between two or more factors, whether positive or negative, could simply be an anomaly, a pure coincidence, rather than a true causal relationship. This stuff is hard!

 

Implementation tip: check the ISO27k Toolkit for useful goodies.


 

 

FAQ: “How do we choose a risk analysis tool or method?”

 

A: Try the following tried-and-trusted almost universal spreadsheet-based method to evaluate your options and choose the tools, methods, software, cars, partners, holiday destinations, political parties, employers, employees, careers, lifestyles, widgets ...

First shortlist and look over the available methods and tools, thinking carefully about your requirements. What do you expect the method or tool to achieve for you? Which factors and/or features are most important? Are there any things that you would want your chosen method or tool not to do (e.g. gobble up excessive amounts of limited resources)? Consider aspects under headings such as:

  • Quantitative or qualitative: opinions vary on the relative value of quantitative versus qualitative methods. Few information security or risk management professionals would recommend truly quantitative analysis of information risks in all circumstances due to the shortage of reliable data on incidents (probabilities and impacts), although they are potentially useful in some more narrowly-defined situations. One solution to this dilemma is to use quick/simple qualitative risk assessments followed by risk analyses on selected ‘high risk’ areas using more detailed qualitative or quantitative methods;
  • Scope: are you purely looking at “information risks” or risks in a broader sense, and what do you really mean by “information risks” anyway: are you in fact concerned about risks to information assets (whatever that means), or business risks that happen to involve information, or something else? Furthermore, which information assets are you concerned with? These questions are very much linked to the scope of your ISMS and need to be thrashed out by management in order to compile your Statement of Applicability;
  • Scalability: are you looking to support a relatively simple analysis of risks for a single process or IT system, an organisation-wide analysis, or all of the above? Will you be completing the analysis just once or repeatedly, and if so how often? If you intend to gather and analyse vast amounts of data over time, you will probably prefer tools based on databases rather than spreadsheets;
  • Maintainability and support: some methods use clever decision support software to support those undertaking the analysis, whereas others are procedural or can be supported by generic tools such as spreadsheets. Clearly, therefore, they vary in the amount of technical expertise required to install, configure and maintain them. Home-grown tools can be more easily and cheaply modified in the light of your experiences compared to commercial tools (at least until the original developer departs, unless he/she made a conscious effort to document the system!), whereas commercial tools tend to be slicker and more polished. Commercial software designed with flexibility as a key goal may give the best of both worlds;
  • Usability: some methods and tools lead the user through the risk analysis process a step at a time, whereas others are more free-form but arguably assume more knowledge and expertise of the users. Some attempt to reduce the information gathering phase to simplistic self-completion questionnaires for risk non-specialists, others require competent risk analysts to collect the data;
  • Value: by this we mean the benefits to your organisation from the tool, offset by the costs of acquiring, using and maintaining the tool. Purchase price is just one factor. An expensive tool may be entirely appropriate for an organisation that will get loads of value from the additional features. A cheap or free tool may prove costly to learn, difficult to use and limited in the features it offers ... or it may be absolutely ideal for you. Your value judgment and final selection is the end result of the evaluation process. You may even decide to adopt more than one for different situations and purposes!

Now write down your evaluation criteria, preferably as rows in a spreadsheet. Talk to your colleagues and ideally peers in other organisations (such as members of the ISO27k Forum) who already use risk analysis tools/methods about the criteria and incorporate good ideas. Go back and look again at the tools/methods listed above and further refine your criteria, ideally into a ranked series ranging from “absolutely vital” down to “nice-to-haves”.

Add a ‘weighting’ column to your spreadsheet and fill it with a series of percentages that reflect the relative desirability of all criteria and add up to 100% (e.g. something really important might be weighted at say 10%, something entirely optional might be worth less than 1%). [If you are evaluating risk analysis tools/methods for distinctly different circumstances, create separate variant spreadsheets with the corresponding criteria and weightings for each.]

Add columns in which you will enter evaluation scores for each tool/criterion combination e.g.:

    0 = “hopeless”: tool/method does not satisfy this criterion at all;

    1 = “poor”: tool/method hardly satisfies this criterion;

    2 = “OK”: tool/method barely satisfies this criterion;

    3 = “good”: tool/method fully satisfies this criterion;

    4 = “outstanding”: tool/method exceeds our expectations with additional useful/valuable functions.

If you can’t decide whether something scores 2 or 3, it’s perfectly OK to score, say, 2½!

Add columns for comments against each tool/method, and a summary row for closing comments on each tool/method - trust me, comments will come in handy later.

Finally, insert mathematical functions to multiply each score by the corresponding weight and total each column, and your spreadsheet is ready to support the next step: evaluation.
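The weighting-and-scoring arithmetic described above is simple enough to sketch outside a spreadsheet too. The criteria, weights and scores below are purely illustrative assumptions; the weights add up to 100% (expressed as fractions) and the scores use the 0-4 scale, half-points allowed.

```python
# Weighted-scoring evaluation sketch: multiply each score by its criterion
# weight and total the 'column' for each candidate tool/method.

def weighted_total(weights: dict, scores: dict) -> float:
    """Sum of score x weight over all criteria; weights must total 100%."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 100%"
    return sum(weights[c] * scores[c] for c in weights)


# Hypothetical criteria and weights (percentages expressed as fractions)
weights = {"scope fit": 0.40, "usability": 0.35, "value": 0.25}

# Hypothetical candidates scored on the 0-4 scale; half-points are fine
candidates = {
    "Tool A": {"scope fit": 3, "usability": 2, "value": 4},
    "Tool B": {"scope fit": 2, "usability": 3.5, "value": 2},
}

ranked = sorted(candidates,
                key=lambda name: weighted_total(weights, candidates[name]),
                reverse=True)
# Tool A totals 2.9 and Tool B 2.525, so Tool A heads the ranking
```

The assertion on the weights catches the most common spreadsheet mistake: adjusting one weighting without rebalancing the rest.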

For the evaluation, start with a quick assessment and rough scoring of your list of tools/methods in order to weed out those that are very unlikely to meet your needs (i.e. low scores against high-ranked requirements), leaving you with a shortlist for further analysis.

You will most likely need to obtain evaluation versions of the shortlisted tools/methods to try them out - you might even go so far as to run mini trials or pilot studies, preferably using the same or similar scenarios in each case for fairness.

Continue looking at the shortlisted methods/tools and refining the scores until you have scores under every criterion for them all.

If you have followed the process diligently, the tools/methods that score the highest are your preferred ones (remember: you may end up using more than one). You are now all set to write your investment proposal, management report or whatever, adding and referring to the completed evaluation spreadsheet as an appendix. Those evaluation comments repay the effort at this stage. Consider incorporating sample reports, screenshots etc. from the tools/methods.

Don’t forget to secure and classify your evaluation spreadsheet and report! The information it contains (the criteria, the weightings, the scores and the comments) is valuable and deserves protection. Consider the information risks!

 

Implementation tip: don’t get too hung-up on the terminology or methods. If your organisation already does some form of risk analysis or assessment of its information and other risks, it is generally worth adopting the same or a similar approach at least at the start. Your colleagues are likely to be more comfortable with what they know, and hence it should be easier to get them to focus on the analysis rather than the method being used. Within reason you can also pick out useful parts of methods or processes piecemeal, rather than necessarily adopting the entire set. Remember, risk analysis is a tool, a step on the way not a destination in itself.


 

 

FAQ: “Is it OK to determine then add or multiply threat, vulnerability and impact ratings to calculate our information risks?”

 

A: Although commonplace, such an approach is mathematically invalid if, as is usually the case, your threat, vulnerability and impact ratings are of the form 1 = low, 2 = medium, and 3 = high. Using more categories and ratings, or adding instead of multiplying the values, doesn’t help. The point is that conventional arithmetic is inappropriate with numeric categories.

Values such as 1, 2 and 3 indicating counts or quantities of the instances of something are called cardinal numbers. The second value (2) indicates exactly twice the amount indicated by the first (1), while the third value (3) indicates exactly three times the first amount. 1.25 is a legitimate cardinal value. Conventional arithmetic works properly with cardinals.

Alternatively, numbers such as 1, 2 and 3 can indicate positions within a defined, ordered set of values, for example 1st, 2nd and 3rd places in a running race. These ordinal numbers tell us nothing about how fast the winner was going, nor how much faster she was than the runners-up: the winner might have led by a lap, or it could have been a photo-finish. It would be wrong to claim that the 3rd placed entrant was ‘three times as slow as the 1st’ unless you had additional information about their speeds, measured using cardinal values and units of measure: by themselves, their podium positions don’t tell you this. Some would have it that being 1st is all that really matters anyway: the rest are all losers! Conventional arithmetic doesn’t apply to ordinals such as threat, vulnerability or impact values of 1, 2 or 3 (or 0, 1, 2, 3, 4, 5 or whatever you happen to use).

Alternatively, 1, 2 and 3 might simply have been the numbers pinned on the runners’ shorts by the race organizers. It is entirely possible that runner number 3 finished first, while runners 1 and 2 crossed the line together. The fourth entrant might have hurt her knee and dropped out before the start, so the fourth runner across the line wears number 5! In this case, these are nominal numbers, labels that just happen to be numeric. Phone numbers and post codes are further examples. Again, it makes no sense to multiply or subtract phone numbers or post codes because they do not indicate quantities the way cardinal values do. If you treat a phone number as if it were a cardinal value and divide it by 7, all you have achieved is a bit of mental exercise: the result is meaningless. If you ring that number 7 times, you still will not be connected! Standard arithmetic makes no sense at all with nominals.

When we convert ordinal risk values such as low, medium and high, or green, amber and red, into numbers, they remain ordinal values, not cardinals, hence conventional arithmetic is inappropriate. If you convert back from ordinal numbers to words, does it make any sense to try to multiply something by “medium”, or add “two reds”? Two green risks (two 1’s) are not necessarily equivalent to one amber risk (a 2). In fact, it could be argued that the risk scale is non-linear, hence extreme risks are materially more worrisome than most mid-range risks, which in turn are of not much more concern than low risks. Luckily for us, real extremes tend to be quite rare!
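A two-line illustration of the problem, with entirely made-up ratings: two risks with very different profiles collapse to the same product, so the ‘score’ hides exactly the information a decision-maker needs.

```python
def naive_score(risk):
    # The commonplace - and misleading - threat x vulnerability x impact product
    return risk["threat"] * risk["vulnerability"] * risk["impact"]


# Hypothetical ratings on a 1-3 ordinal scale
risk_a = {"threat": 3, "vulnerability": 1, "impact": 1}  # capable attacker, hardened target, trivial impact
risk_b = {"threat": 1, "vulnerability": 1, "impact": 3}  # unlikely event, catastrophic impact

assert naive_score(risk_a) == naive_score(risk_b) == 3
# Identical products, yet most managers would treat these two risks very
# differently: the ordinal arithmetic has discarded the position on each
# scale and any non-linearity in what the ratings actually mean.
```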

Financial risk analysis methods (such as SLE/ALE, NPV or DCF) attempt to predict and quantify both the probabilities and outcomes of incidents as cardinal values, hence standard arithmetic applies - but don’t forget that prediction is difficult, especially about the future (said Niels Bohr, shortly before losing his shirt on the football pools). If you honestly believe your hacking risk is precisely 4.83 times your malware risk, you are sadly deluded, placing undue reliance on the numbers and predictions.
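By contrast, the SLE/ALE calculation mentioned above does use cardinal (monetary) values, so ordinary arithmetic is at least legitimate; the catch is how much confidence the estimates deserve. The figures below are illustrative guesses, nothing more.

```python
# SLE/ALE sketch with hypothetical figures: Annualised Loss Expectancy is
# Single Loss Expectancy times Annualised Rate of Occurrence.
sle = 250_000    # Single Loss Expectancy: estimated cost per incident
aro = 0.2        # Annualised Rate of Occurrence: one incident every five years
ale = sle * aro  # Annualised Loss Expectancy: 50,000 per year

# ALE gives a rough ceiling on what it is worth spending annually on controls
# for this risk ... provided you genuinely believe both estimates, error bars
# and all.
```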

 

Implementation tip: risk values calculated from numbered categories tell us only about the relative positions of the risks in the set of values, not how close or distant they are ... but to be fair that may be sufficient for prioritization and focus. Personally, a Red-Amber-Green spectrum or scale tells me all I need to know, with sufficient precision to make meaningful decisions in relation to the risks.

 

Note: attentive readers may have spotted that the method described above for evaluating risk management tools inappropriately applies simple arithmetic to category labels. Suck it up! It works! Do as I say and as I do!


 

 

FAQ: “We have taken over operations for a data centre which belongs to and was previously operated by our client. We have expanded our information asset inventory to include not just our own assets but also the data centre assets belonging to our client. How should we handle risk-assessing our client’s information assets?”

 

A: Ideally, work with your client relationship people to involve the client directly in the risk analysis. Helping your client’s management to understand and elaborate the information risks relating to their assets will clarify what they expect of your organisation in respect of information security services, and will ensure that your colleagues appreciate what is expected of them.

If the client is unwilling or unable to engage fully with the risk analysis, you should at least assess the information risks relating to the contract and services from your organisation's perspective, including the risk that the client may have unrealistic or inappropriate expectations about the information security services you are providing for them.

For example, you presumably take regular backups purely for your own operational reasons, routinely backing up the operating system and application software, configuration details, disk structures etc. However, the client may mistakenly believe that you are also backing up all their vital business data, even if they have never formally specified this as part of the contract or Service Level Agreement with your organisation ... you can probably see where this is headed. Imagine the fallout if something goes terribly wrong one day, for instance a disk fails or is accidentally overwritten. You should be able to replace the disk and restore the directory structure, but you may not be able to recover the client's data. Maybe you have a full-disk image backup, but it is several days or weeks old whereas the client thought you were doing real-time disk mirroring!

You are probably well advised to consider your client's information risks anyway, even if they don’t want to know. A serious information security incident involving the data centre will almost certainly damage your customer relations, could lead to legal arguments over the contract/SLA and in the worst case could put the client out of business.

Note that similar considerations apply in other circumstances where the organisation handles information assets belonging to third parties - customers’ personal data and credit card details, for instance. You may need to analyse and treat their risks on their behalf even if they are incapable or can’t be bothered to do so, since you just know they will try to take you to the cleaners if some disaster harms their precious information. They may claim that you have failed in an ‘implied duty of care’, a term so vague and ambiguous that the lawyers will have a field day.

 

Implementation tip: this may be an opportunity to sell your client some security/risk consultancy services! Either way, have your pet lawyer take a very careful look at any contracts or SLAs relating to third party information assets in your care to be crystal clear about your information security obligations and liabilities.


 

 

FAQ: “What is the difference between risk assessment and audit?”

 

A: Risk assessment is an activity to identify and characterise the inherent and/or residual risks within a given system, situation etc. (according to the scope of the assessment). It tends to be a somewhat theoretical hands-off exercise, for example one or more workshop sessions involving staff and managers within and familiar with the scope area plus other experts in risk and control, such as Risk Managers, Information Security Managers and (sometimes) Auditors, discussing and theorising about the risks.

While audit planning and preparation also normally involves assessing the inherent risks in a given system, situation, process, business unit etc. (again according to the scope), auditors go on to check and validate the controls actually within and supporting the process, system, organisation unit or whatever in order to determine whether the residual risks are sufficiently mitigated or contained. Audit fieldwork is very much a practical hands-on exercise.

Risk assessments are normally performed by the users and managers of the systems and processes in scope, whereas audits are invariably conducted by independent auditors. Auditor independence is more than simply a matter of organisation structure i.e. auditors not reporting to the business managers in charge of the areas being audited. More important is the auditors’ independence of mind, the ability to ‘think outside the box’. Whereas those closely involved in a process on a day-to-day basis tend to become somewhat blinkered to the situation around them through familiarity, auditors see things through fresh eyes. They have no problem asking dumb questions, challenging things that others take for granted or accept because they have long since given up trying to resolve them. They are also perfectly happy to identify and report contentious political issues, resourcing constraints and opportunities for improvements that, for various reasons, insiders may be reluctant even to mention to their management. Audits are arguably the best way to find and address corporate blind spots and control weaknesses that sometimes lead to significant information security incidents.

Compliance audits are a particular type of audit that assesses the extent to which the in-scope processes, systems etc. comply with applicable requirements or meet their obligations laid down in laws, regulations, policies and standards. In the case of ISMS certification audits, for instance, certification auditors from an accredited certification body check that the ISMS complies with and fulfils the requirements in ISO/IEC 27001. There is also an element of risk assessment in compliance audits, however, since noncompliance can vary in gravity between purely inconsequential (e.g. trivial spelling mistakes in information security policies) and highly material (e.g. a complete lack of documented information security policies). Issues at the lower end of the scale (as determined by the auditors) may not necessarily be reported while those at the higher end will definitely be reported to management and will probably result in a refusal to certify the ISMS as compliant until they are adequately resolved.

The risk assessment process is potentially auditable, by the way, while auditors are also concerned about audit risks (for example the possibility that their sampling and checking may fail to identify or highlight something truly significant, such as a rogue trader).
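To make that sampling point concrete, here is a minimal sketch (in Python, with made-up numbers) of one facet of audit risk: the chance that a simple random sample misses the one rogue item in a population.

```python
# Illustrative only: probability that a simple random sample contains
# none of the 'rogue' items in a population (a classic hypergeometric
# calculation). The population and sample sizes below are invented.
from math import comb

def p_miss(population: int, rogues: int, sample: int) -> float:
    """Probability the sample includes none of the rogue items."""
    return comb(population - rogues, sample) / comb(population, sample)

# Auditing 25 of 1,000 transactions, with a single rogue trade present:
print(f"{p_miss(1000, 1, 25):.1%} chance the sample misses it entirely")
```

With those invented numbers there is a 97.5% chance the rogue trade never appears in the sample at all, which is one reason competent auditors worry about audit risk and rely on more than sampling alone.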

 

Implementation tip: challenging the status quo can be a valuable, if cathartic, experience. At the end of the day, just remember that the primary aim of audits is to improve the organisation, stimulating management to make changes for the better. Effective auditing includes but goes beyond pure compliance checking and the rather negative aura associated with that. It is the ultimate change catalyst.


 

 

FAQ: “Is threat assessment, threat modeling, threat analysis, vulnerability assessment, vulnerability modeling, penetration testing, business impact analysis, threat-vulnerability analysis, IT auditing ... or whatever ... the same as risk analysis, risk modeling, risk assessment ... or whatever ... ?”

 

A: Yes and no. Strictly speaking, these are all different, or rather they should all be interpreted differently but, in practice, there is much variation in the way the terms are used. In your particular organisation, situation, classroom or what-have-you, one or more of these terms may well be in common use, meaning something more or less specific. The terms may even be formally defined in some manner, for example in the organisation’s risk or security policies, procedures and standards, in laws and regulations, in contracts etc. That’s all very well but the people using the terms are not all experts in this field, and to be fair even the experts sometimes disagree, with good reason.

Consider the meaning of risk for example. As far as I personally am concerned, risk means both (1) the combination or coincidence of one or more threats acting on one or more vulnerabilities to cause one or more impacts on something (i.e. normally the organisation, but sometimes risk relates to an individual business unit, system, person, location, information or other asset etc., or to several); and (2) an estimate of the probability and impact of some harmful event, incident, situation etc. ISO/IEC 27000 has versions of those two definitions, plus several others:

    risk

    effect of uncertainty on objectives

    Note 1 to entry: An effect is a deviation from the expected — positive or negative.

    Note 2 to entry: Uncertainty is the state, even partial, of deficiency of information related to, understanding or knowledge of, an event, its consequence, or likelihood.

    Note 3 to entry: Risk is often characterized by reference to potential “events” (as defined in ISO Guide 73:2009, 3.5.1.3) and “consequences” (as defined in ISO Guide 73:2009, 3.6.1.3), or a combination of these.

    Note 4 to entry: Risk is often expressed in terms of a combination of the consequences of an event (including changes in circumstances) and the associated “likelihood” (as defined in ISO Guide 73:2009, 3.6.1.1) of occurrence.

    Note 5 to entry: In the context of information security management systems, information security risks can be expressed as effect of uncertainty on information security objectives.

    Note 6 to entry: Information security risk is associated with the potential that threats will exploit vulnerabilities of an information asset or group of information assets and thereby cause harm to an organisation.

 

“Effect of uncertainty on objectives” is the official - but perhaps not the clearest and most helpful - definition for the ISO27k standards, hence all those explanatory notes. It is very generic.

Anyway, moving swiftly on, threat modelling, analysis or assessment normally explores the range of threats that are of some concern to the organisation, identifying, evaluating and sizing them, assessing their capabilities, resourcing, motivation, objectives etc.  The analytical process usually focuses on active/deliberate threats, although accidents, mistakes and natural disasters are at least as capable of causing harm. The same goes for external and internal threats, combinations of threats (e.g. looters following a flood) and as-yet-unknown threats: a comprehensive threat model, picture or landscape will cover all possibilities.

Vulnerability assessment, often conducted in the technical sphere but actually a more broadly applicable method, is about finding and assessing the weak points (again in organisations, systems, locations, people, processes etc.) that might be exploited, assessing them also in terms of severity, exposure, nature, obviousness etc. Vulnerability assessment is at the heart of penetration testing, application system security testing etc.

Business impact analysis, often conducted in the context of business continuity management, is about working out the likelihood and scale of consequences to the organisation and its business, business interests, business partners and other stakeholders if various incidents came to pass.

Bringing the results of those three analyses together is one way to assess, analyse or model risks – but often we just go directly to an assessment of credible risk scenarios based on the kinds of damaging incidents that have happened (to us or others).  Auditors and risk professionals are good at this stuff, along with security/risk-aware managers etc.

Personally, I find the Analog Risk Assessment method helpful, directly considering and discussing both the probability and the business consequences of various kinds of incident to figure out which are the scariest. Some people prefer quantitative and “semi-quantitative” methods … and actually there is merit in using a combination of methods for a more comprehensive view of risk - which is pretty much what ISO/IEC 27005 advises.
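As a simple illustration of the semi-quantitative approach (not the Analog Risk Assessment method itself), risks rated for probability and impact can be scored and ranked. The risks, ratings and scoring formula below are entirely hypothetical:

```python
# Hypothetical semi-quantitative risk ranking: rate probability and
# business impact on simple 1-5 scales, score, then sort. Real methods
# vary considerably; this just shows the mechanics.
risks = [
    {"name": "Ransomware outbreak",   "probability": 3, "impact": 5},
    {"name": "Laptop theft",          "probability": 4, "impact": 2},
    {"name": "Insider data leak",     "probability": 2, "impact": 4},
    {"name": "Cloud provider outage", "probability": 2, "impact": 3},
]

for r in risks:
    r["score"] = r["probability"] * r["impact"]  # one of many possible formulas

for r in sorted(risks, key=lambda r: r["score"], reverse=True):
    print(f'{r["name"]:<22} P={r["probability"]} I={r["impact"]} score={r["score"]:>2}')
```

Even a toy example like this exposes the judgement calls involved: whether to multiply or add, how to break ties, and whether a frequent-but-minor risk really ranks alongside a rare-but-serious one.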

Finally, I’ll just briefly mention that terms such as ‘analysis’, ‘assessment’ and ‘modeling’ can be interpreted and used differently.

 

Implementation tip: if you find all these risk-related terms and methods confusing, join the club! They are very commonly misunderstood, misinterpreted and mistaken. Read and think carefully about the formal definitions in ISO/IEC 27000, and indeed in other standards and books, for a more accurate and complete picture. Be careful in how you express yourself on risk matters bearing in mind that other people may not share your particular interpretation or understanding of the terms.


 

 

FAQ: “How should management define the organisation’s risk appetite?”

 

A: Apart from certain limited circumstances, most “real world” information risks cannot be objectively, rationally and accurately calculated or measured mathematically. We're dealing with an unbounded problem space and imperfect knowledge of it. At best some “knowable” risks can be estimated and ranked, but even this process is critically dependent on how the risks are framed or scoped (including how risks or information assets are accumulated or grouped together), and on who does the assessment and how, while other “unknowable” and hence unpredicted risks are almost certainly Out There waiting to bite us on the bum (which is what contingency planning is all about). It's a matter of probabilities and complex interdependencies so simple mathematics don't help: risks aren’t simply additive or accumulative.

But that is not to say that risk assessment, measurement and comparison are totally pointless, rather that the results should be treated with a great deal of caution since there are clearly significant margins for error. Large differences in calculated probabilities or impacts of certain information risks and incidents may be meaningful, whereas small differences may not. Where you draw the line between big and small is down to your own experience in this area, your trust in the numbers and analysis, the reasons for differentiating them, and gut feel.

There is a perspective effect too. From a senior executive’s point of view, impacts that involve them personally going to prison, being demoted or sacked, or suffering big hits on their executive bonus schemes through stock price crashes, are likely to register, even when probabilities drop from “probable” to “possible”. Compliance with laws and regulations tends to fall into this category. From an individual data subject’s perspective, impacts involving unauthorized disclosure of their most personal details are likely to be off the scale yet they may not understand or be concerned about probabilities.

And there’s still more to consider in terms of selecting appropriate risk treatments. Few information security controls absolutely reliably and comprehensively mitigate risks. Even “strong” encryption is fallible, often due to implementation or key management flaws and sometimes due to cryptanalysis or blind luck. Most risk treatments help to reduce if not eliminate specific risks, and a few (such as contingency planning and having an effective ISMS) help reduce unspecified risks.

 

Implementation tip: given the above, it may not be realistic for us to expect management to define their 'risk appetite' in general policy terms but, faced with individual situations, someone needs to make judgement calls about the risks and controls. Risk analysis helps frame and make those decisions but doesn't often give cut-and-dried answers.


 

 

FAQ: “Which compliance obligations are relevant to information security and ISO27k?”

 

A: There are loads of them! Although I Am Not A Lawyer, just to get your brainstorming started here’s a simple but incomplete listing of the general types or categories of laws, regulations and contracts/agreements that have some relevance to information security and ISO27k:

  • Banking & finance e.g. financial reporting, tax, credit, money laundering, company accounts, credit cards (PCI-DSS and more) …
  • Business continuity, critical national infrastructure …
  • Commercial contracts & agreements e.g. confidentiality agreements, digital signatures, product guarantees, advertisements/offers/promises, maintenance and support agreements, Internet/distance selling, invoices, PCI-DSS again, plus other obligations with business partners, suppliers, customers, advisors, owners etc.
  • Corporate governance, obligations on officers, independent oversight/audits, company structure …
  • Cryptography – standards, laws and regs e.g. restrictions on use & export of strong crypto
  • Defamation, libel, slander ...
  • Employment e.g. disciplinary process, pre-employment screening/background checks, contracts of employment, codes of conduct …
  • Environmental e.g. monitoring for polluting discharges & limits
  • Ethics, morals, cultural and religious aspects e.g. Sharia law
  • Fraud, identity theft, misrepresentation, embezzlement …
  • Freedom of information – enforced disclosure …
  • Hacking, malware, denial of service, unauthorized access to information systems and networks …
  • Health and safety e.g. safety-critical control systems, fire exits, building standards/codes, industrial control systems, working conditions, hazards …
  • Insurance and risk e.g. terms & conditions, excesses, disclosure of relevant facts ...
  • Intellectual property rights - copyright, trademarks, patents, DMCA, trade secrets …
  • Military/governmental stuff: spying, official secrets & classification, terrorism, organized crime …
  • Permits and licenses to operate (in some industries and markets) …
  • Porn, pedophilia, discriminatory/offensive materials, threatening behavior, coercion …
  • Privacy, data protection, personally identifiable information …
  • Technical standards and interoperability e.g. ISO27k standards (!), TCP/IP, WPA, Windows compatibility, Java compliance ...
  • Wiretapping, surveillance, CCTV, monitoring, investigation, forensics …
  • Others ...

As if that list is not enough already, besides domestic laws and regulations you should also consider whether the laws, regs etc. in other countries might be applicable.

Oh and by the way, we’re on shifting sands, constantly evolving through changes to the legislation and emergent ‘case law’.

Aside from being familiar with all the obligations, someone needs to be on top of the associated policies, contracts, agreements, standards, codes, awareness/education/training, compliance assessments and enforcement aspects.  For example, do you have the policies and procedures in place to deal with exceptions and exemptions? Do you need to check compliance and perhaps enforce your organisation’s obligations on third parties e.g. confidentiality agreements with suppliers or business partners?

 

Implementation tip: personally, I favour the approach of treating this as a risk management issue i.e. analyse and consider the potential threats (e.g. investigation or discovery), the vulnerabilities (e.g. various practical and economic constraints on the extent of your compliance), and the impacts (e.g. enforcement actions, penalties, bad publicity, increased oversight).  The organisation (plus the individuals within it) has choices, strategic options, to make about when and how it complies with its obligations, in other words how it treats the risks. Full compliance is not necessarily appropriate in every situation, potentially creating opportunities for commercial advantage (cutting corners to cut costs). Furthermore, “full compliance” is not always entirely possible, in the same way that “complete security” is an oxymoron. Security is asymptotic.

 

Warning: assuming you are an information security professional looking into this stuff, be very wary of being expected, or even perceived by your colleagues (management especially), to be a legal expert: even qualified professional lawyers specialise within the field because it is too broad for anyone to be entirely competent across the whole lot. In an organisational context, the ‘officers’ of the corporation (normally senior management, execs and non-execs) are the primary owners of most of the compliance issues. They are the ones who are primarily accountable for the organisation’s compliance, or lack of it. Unless it fits, don’t take on their mantle! By all means offer general advice and guidance but be circumspect or cautious. Leave them very firmly carrying the compliance burden and, for your and their protection, explicitly recommend that they seek competent legal advice. Once again, for good measure, IANAL and this is not legal advice.


 

 

FAQ: “How should we handle exceptions?”

 

A: You first need to understand the vital difference between exceptions and exemptions*:

  • Exceptions are unauthorized noncompliances with mandatory requirements, typically identified by compliance or other audits, management reviews, during the design phase when developing software and processes, or revealed by information security incidents;
  • Exemptions are authorized noncompliances with mandatory requirements. Exemptions can formalize management decisions to accept identified information risks.

For example, imagine that an IT systems audit has identified that system A is configured to accept passwords of at least 6 characters, while the corporate password standard mandates at least 8 characters. This is an exception that should be brought to the attention of the Information Asset Owner for system A. The IAO then considers the situation, weighs the risk to the organisation and to his/her information asset, takes advice from others, and decides how to treat the risk. The preferred response is to bring the system into line with the policies. However, that may not be possible right now. If instead the IAO’s decision is to accept the risk, an exemption to the specific policy requirement is documented and granted, but - and this is the important bit - the IAO is held personally accountable by management for any security incidents relating to that exemption, by simple extension of their accountability for protecting their information assets.

Exemptions should be formalized e.g.:

  • The IAO should be required to sign a crystal-clear statement regarding their understanding and acceptance of the risk to their asset if the exemption is granted;
  • The exemption should be granted by being countersigned on behalf of management by an authoritative figure such as the CEO or CISO;
  • Optionally, the exemption may specify compensating controls (such as explicit guidance to users of system A to choose passwords of at least 8 characters in this case);
  • All exemptions should be formally recorded on a controlled corporate register;
  • All exemptions should be reviewed by IAOs and management periodically (e.g. every year) and, if still required and justified, renewed using the same formal process as the initial authorization. Typically exemptions may be renewed and continue indefinitely just so long as the IAO is prepared to continue accepting the risk and management is prepared to accept the situation, but some organisations may impose limits (e.g. an exemption automatically expires after one year and cannot be renewed without a majority vote in favour by the Board of Directors).
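A controlled register with a review cycle might be sketched as follows. The field names, dates and one-year review period are illustrative assumptions, not requirements from ISO/IEC 27001:

```python
# Minimal sketch of a corporate exemption register with periodic review.
# All names, dates and the one-year review period are made-up examples.
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class Exemption:
    requirement: str                  # the mandatory requirement not met
    asset: str                        # the information asset affected
    owner: str                        # the IAO accepting the risk
    approved_by: str                  # countersigning authority, e.g. CISO
    granted: date
    compensating_controls: list = field(default_factory=list)
    review_period: timedelta = timedelta(days=365)

    def due_for_review(self, today: date) -> bool:
        return today >= self.granted + self.review_period

register = [
    Exemption("Passwords must be 8+ characters", "System A",
              "Information Asset Owner, system A", "CISO", date(2024, 1, 15),
              ["Guidance to system A users to choose 8+ character passwords"]),
]

for e in (x for x in register if x.due_for_review(date(2025, 3, 1))):
    print(f"Review due: {e.requirement} on {e.asset} (owner: {e.owner})")
```

The point of automating the review date is simply that overdue exemptions surface themselves, rather than quietly persisting forever.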

If there are loads of exceptions and especially exemptions to certain mandatory requirements, management really ought to reconsider whether the requirements are truly mandatory. If in fact they are, any current exemptions should be set to expire at some future point, forcing IAOs to use risk treatments other than ‘accept the risk’. Information Security should take up the challenge to help IAOs improve compliance. If the requirements are not in fact mandatory after all, the policies etc. should be revised accordingly.

* Note: your organisation may use different words for these two concepts (e.g. ‘waivers’) or even flip them around. The terms don’t particularly matter, provided they are defined, the distinction is clearly understood, and they are used consistently. If the terms are used loosely and inconsistently, that strongly suggests a lack of accountability which is a governance issue.

 

Implementation tip: key to this approach is personal accountability of IAOs for adequately protecting/securing their information assets. If management doesn't really understand or support concepts such as exceptions, exemptions, accountability, responsibility, ownership, information assets and risk, then the organisation has more important issues to address, and the rest is moot!


 

 

FAQ: “Is there a comprehensive catalogue of information risks?”

 

A: Since each organisational situation and context is unique and dynamic, the only comprehensive catalogue of information risks may be the one you compile and maintain yourself - and, despite your best efforts, even that may unfortunately have significant errors and omissions. Uncertainty is the very essence of risk.

Published at the end of 2023, a basic (neither comprehensive nor detailed) checklist in the ISO27k Toolkit specifies 80 information risks, most but not all of which could be classed as information security risks (a curious term not actually defined in the ISO27k standards). A more elaborate information risk catalogue describing over 200 risks, from which the shortlist of 80 was selected, is available from SecAware.com ... but, to be honest, not even that is truly comprehensive.

A few published catalogues cover various elements or aspects of information risk, such as common threats and vulnerabilities, or cyber-risks. ISO27k Forum members have used the following:

Good information security textbooks are worth checking too, for example:

  • Cem Kaner’s Testing Computer Software has a lengthy, structured appendix listing common software errors, some of which create security vulnerabilities. Despite its great age, many if not all of those vulnerabilities persist even in modern software. “Those who do not study history are condemned to repeat it”!
  • Building Secure Software plus many of Gary McGraw’s other books discuss the concept of threat modeling to develop security specifications for application software;
  • The Security Development Lifecycle by Michael Howard and Steve Lipner outlines Microsoft’s approach to threat modeling using STRIDE (Spoofing identity, Tampering, Repudiation, Information disclosure, Denial of service and Elevation of privilege) - again it’s not a complete list of threats but a reasonable starting point, a prompt.

Most information risk analysis and management support tools, systems, methods and advisories include examples if not lists of stuff to consider.

Finally, Google is your friend.

 

Implementation tip: those are all generic catalogues. They may be useful reminders of the general types of stuff worth considering in your risk analyses but it is worth brainstorming with colleagues from Information Security, “the business”, and related functions such as Risk Management, Compliance, Legal, Health & Safety, IT, HR, Operations etc. to develop more specific lists of risks that are relevant to your organisation. Pore over your incident records and past risk assessments for clues and inspiration. Are you brave enough to publish your own information risk catalogue/s on the corporate intranet to remind workers of the wide range of issues of concern to Information Security and the business, inviting them to comment and contribute?


 

 

FAQ: “Our third party penetration testers recently found 2 medium risk and 7 low risk vulnerabilities. I disagree with the ratings and want to challenge the medium risks (some old software) before they report to the Board. What do you think?”

 

A: ‘Low/medium risk vulnerability’ doesn't actually make sense. Fair enough, your pen testers have identified some technical vulnerabilities, but that's not the same as risks to the organisation. To be classed as risks, there would also have to be threats and impacts:

  • Threats could be, for example, just the general threat of non-specific network hacks or malware, or something more significant such as your organisation being a high profile target, likely to be attacked specifically by more competent and resourceful hackers.
  • Impacts depend on what those servers are used for, how they are connected on your network, and the projected business effects and costs that successful compromises would cause.

Finally, you need to consider the cost and perhaps additional risks of mitigating the vulnerabilities. I've no idea what upgrading or replacing the products would cost, nor what effects that might have on the rest of your IT. I would at least consider compensating controls such as additional, closer monitoring and slick responses instead of upgrades. In other words, look at the full range of risk treatments.

With additional information on these wider aspects of risk, management should be able to make better informed decisions about what, if anything, needs to be done to treat these risks or whether other risks are of greater concern.
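One crude way to combine those factors is a vulnerability x threat x impact rating. The scales, thresholds and example figures below are illustrative assumptions, not a standard method:

```python
# Hypothetical re-rating of a technical vulnerability as a business risk,
# factoring in threat level and business impact. The 1 (low) to 3 (high)
# scales and the banding thresholds are arbitrary illustrations.
def risk_rating(vuln_severity: int, threat_level: int, business_impact: int) -> str:
    score = vuln_severity * threat_level * business_impact  # range 1..27
    if score >= 18:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

# A 'medium' technical finding may be a low business risk on a low-value
# server facing only generic threats ...
print(risk_rating(vuln_severity=2, threat_level=1, business_impact=1))  # low
# ... or high where the organisation is a prime target and the server is
# business-critical.
print(risk_rating(vuln_severity=2, threat_level=3, business_impact=3))  # high
```

The same technical finding lands in different bands depending on context, which is precisely why a pen tester's severity rating alone is not a risk rating.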

 

Implementation tip: third party security testers, like IT auditors, are independent of the organisation and hence often see things in a new light. They bring experience and knowledge of the outside world. This is a valuable perspective that insiders lack, so don’t just dismiss what they tell you out of hand without considering it properly and ideally discussing it openly with them. However, their independence means they may not fully appreciate the business context for information security, for example competing investment priorities. It is your management’s role to take decisions and allocate resources in the best interests of the organisation, so give them the information they need to do their job.


 

 

FAQ: “I’m confused about ‘residual risk’. For example, after risk assessment there are 3 risks (A, B and C): risk A is acceptable, B and C are not acceptable. After risk treatment, B becomes acceptable but C is still not acceptable. Which is the residual risk: just C? Or B and C?”

 

A: Residual literally means 'of the residue' or 'left-over'. So, residual risk is the left-over risk remaining after all risk treatments have been applied. It’s the risk kicking around in the bottom of the bucket after you’ve tipped out the rest. However, in your example, A, B and C all leave some (residual) risk behind.

  • Accepted risks are still risks: they don't cease to have the potential for causing impacts simply because management decides not to do anything about them. Acceptance means management doesn't think they are worth reducing. Management may be wrong (Shock! Horror!) - the risks may not be as they believe, or they may change (e.g. if novel threats appear or new vulnerabilities are being exploited);
  • Mitigated or controlled risks are still risks: they are reduced but not eliminated, usually, and the controls may fail in action (e.g. antivirus software that does not recognize and block 100% of all malware, or that someone accidentally disables one day);
  • Eliminated risks are probably no longer risks, but even then there remains the possibility that your risk analysis was mistaken (e.g. perhaps you only eliminated part of the risk, or perhaps the risk materially changed since you assessed and treated it), or that the controls applied may not be as perfect as they appear (again, they may fail in action);
  • Avoided risks are probably no longer risks, but again there is a possibility that the risk analysis was wrong, or that the risk may not be completely avoided (e.g. in a large business, there may be small business units out of management's line of vision, still facing the risk, or the business may later decide to get into risky activities it previously avoided);
  • Shared risks are reduced but are still risks, since the transferal may not turn out well in practice (e.g. if an insurance company declines a claim for some reason) and may not negate the risks completely (e.g. the insurance 'excess' charge). Remember that the manager/s who made the decision to transfer the risk are accountable for that decision if it all goes pear-shaped ...

... and in fact the same point about accountability applies to all decisions made by everyone. If a manager does not explicitly treat an identified risk, or arbitrarily accepts it without truly understanding it, they are in effect saying “I do not believe this risk is of concern”: that is a management decision for which they can be held to account.

The overall point is that you need to keep an eye on residual risks, review them from time to time, and where appropriate improve/change the treatments if the residuals are excessive.
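To put rough numbers on the idea, residual risk can be pictured as inherent risk discounted by imperfect control effectiveness. The multiplicative model and figures below are illustrative assumptions only; real residual risk is rarely this neatly calculable:

```python
# Illustrative model: residual risk = inherent risk x (1 - control
# effectiveness). The figures are invented.
def residual(inherent: float, effectiveness: float) -> float:
    """effectiveness in [0, 1); 1.0 would mean a (mythical) perfect control."""
    return inherent * (1.0 - effectiveness)

print(residual(inherent=20.0, effectiveness=0.9))  # mitigated: ~2.0 remains
print(residual(inherent=8.0, effectiveness=0.0))   # accepted: all 8.0 remains
```

Notice that only a perfectly effective control would leave zero residual risk, and perfect controls do not exist, so something is always left in the bucket.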

[Aside: before any risk treatment is applied or ignoring all risk treatments, the risk is known as the inherent risk. Oh and denied risk is that which someone determines is simply incredible or so unlikely/remote that it is practically non-existent – like for instance the possibility of a pair of planes crashing into both of the World Trade Center twin towers ...]

 

Implementation tip: managing residual risks, systematically and explicitly, is a sign of a mature ISMS since it implies that management is taking a sensible, realistic approach towards managing information risks, including those believed to be appropriately treated. There is a strong link here between risk, security, incident and business continuity management.


Copyright © 2024 IsecT Ltd. Contact us re Intellectual Property Rights