Risky Business

It’s always tempting to go to Twitter with a half-formed idea. Always tempting and always a mistake. I (re-)learned this lesson at the end of last month when I tweeted a half-formed idea about risk assessment of all things. My takedown was swift and brutal: first from my old friend @ukcivilservant (aka the excellent Martin Stanley) and then from the anonymous @SimonVonDulwich account.

Never, never, never [ask] a question you don’t already know the answer to.

Harper Lee, To Kill a Mockingbird

But I’m getting ahead of myself. The story begins when I was reviewing the risk register of a charity that I’m involved with. The way the risks had been calculated looked odd. After some investigation, I discovered the approach being followed was based on official Charity Commission guidance (CC26).

My confirmation bias kicked in. I have long been suspicious of the way politicians and officials misuse numbers in public life. In my view, numbers and measurement are essential when it comes to understanding complex social problems. But (and it’s an important but) they can only ever help to ask good questions. Numbers never provide definitive answers. And yet, whether it’s funding formulae, league tables, deprivation indices or – as in this case – risk assessments, this is exactly what people seem to want: to replace human judgement with a nice, simple one-dimensional number.

For every complex problem there is an answer that is clear, simple, and wrong.

H. L. Mencken

Please don’t misunderstand me. As an aid to decision makers, the standard approach to risk assessment has much to recommend it. For each risky event faced by an organisation, consideration is given to two separate and distinct factors: (1) the likelihood of the event occurring, and (2) the impact if it does occur. Following the standard mathematical approach to such things:

Risk = Expected impact = (Likelihood of event) × (Impact of event)

In an ideal world, the likelihood assessment would be a percentage between zero and 100, and the impact of different events would all be measured on some agreed continuous scale. (Economists would no doubt like the common scale to be money but it certainly doesn’t have to be.)

In the real world, decision makers often follow a simplified rule of thumb, typically reducing each dimension to a five-point scale. Then, for each risk faced by the organisation, the process is for the decision maker to:

  1. make their best estimate of the likelihood, x, from 1-5
  2. make their best estimate of the impact, y, from 1-5
  3. calculate the risk as the product of likelihood and impact, risk = x.y

The risk ends up in one of 25 boxes in a 5×5 grid, and the final stage in the process is to overlay a RAG rating that represents the relative severity of each risk. The Charity Commission recommends the following RAG rating in CC26:

RED: “major or extreme/catastrophic risks that score 15 or more”
AMBER: “moderate or major risks that score between 8 and 14”
GREEN: “minor or insignificant risks scoring 7 or less”
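
To make the method concrete, here is a minimal sketch in Python (the rag helper and the grid printout are my own illustration; the thresholds are the CC26 bands quoted above):

```python
def rag(score):
    """Map a risk score to the CC26 RAG bands quoted above."""
    if score >= 15:
        return "RED"
    if score >= 8:
        return "AMBER"
    return "GREEN"

# Build the 25-box grid: risk = likelihood x times impact y, each 1-5.
for y in range(5, 0, -1):  # impact, highest row printed first
    row = "  ".join(f"{rag(x * y):5}" for x in range(1, 6))
    print(f"I={y}: {row}")
```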

Now these risk estimates are just that: estimates. And it is the task of the decision maker to decide whether they agree with the estimate and how they ought to respond to it. They could choose to ignore the risk; they could insure against it; they might try to mitigate it; they could seek to transfer it to a third party. The list is endless – and at the discretion of the decision maker.

So far, so good? Well, yes, but there’s a fly in the ointment. It turns out that decision makers are not particularly good at estimating probabilities accurately, especially the likelihood of rare events. CC26 sums it up like this:

“In recent years, methodologies for measuring risk impact and likelihood have developed further. Many organisations now take account of events that are rare or unprecedented, where the rules are unknown or rapidly changing or where risks are driven by external factors beyond their control. These risks which have very high impact and very low likelihood of occurrence are now accepted by many as having greater importance than those with a very high likelihood of occurrence and an insignificant impact. In these cases, the concept of impact and the likelihood of risks occurring and their interaction should be given prominence in both the risk assessment and risk management processes. Using the method outlined in the previous paragraph, they would have scored the same.” [emphasis added]

In other words, special care is needed for very high impact risks even when the apparent likelihood is very low. Now we get to the heart of the matter. The Charity Commission has rightly recognised that some risks require greater attention than other risks despite trustees making their best efforts to estimate the various likelihoods and impacts.

How to resolve this anomaly? Well, a generous interpretation of “very low” likelihood and “very high” impact would suggest the Commission is drawing trustees’ attention to the four boxes where likelihood is rated 1 or 2 and impact is rated 4 or 5. And a similarly generous interpretation of “prominence” would be to red-rate all four boxes.

But this is not, in fact, the path chosen by the Commission. Instead, it recommends keeping the RAG rules as they are but replacing the risk formula, x.y, with x.y + y. In its words:

“The [new] formula multiplies impact with likelihood then adds a weighting again for impact. The effect is to give extra emphasis to impact when assessing risk.”

Now it is trivial to observe that, since x.y + y = (x+1).y, the only substantive difference between the new grid and the first one is that the RAG colour scheme has been shifted one column to the left. That is all that has been achieved. And that is where my original, ill-fated Twitter journey began:

In effect, the process being recommended by the Charity Commission is for trustees to:

  1. make their best estimate of the likelihood, x, from 1-5
  2. make their best estimate of the impact, y, from 1-5
  3. artificially increase the likelihood estimate beyond where they think it should be, x* = x+1
  4. calculate a revised risk score based on this inflated likelihood, risk* = x*.y
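
Since x.y + y factorises as (x+1).y, the two descriptions are the same process. A minimal sketch confirming the equivalence over every box of the grid:

```python
# The revised score x*y + y factorises as (x+1)*y: scoring with the
# new formula is identical to inflating the likelihood by one notch.
for x in range(1, 6):
    for y in range(1, 6):
        assert x * y + y == (x + 1) * y
print("x.y + y == (x+1).y holds for every box in the grid")
```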

And so there I was last month, patting myself on the back for a geeky algebraic tweet, when out of nowhere came @ukcivilservant and @SimonVonDulwich to tell me I had got it all wrong.

Now, to be clear, I was certainly not suggesting that the underlying risks had somehow changed. Indeed my entire point was that they hadn’t but that the new formula distorted all the risks, not just the very low likelihood/very high impact ones. However, it’s fair to say I was more interested in the underlying algebra than I was in the detail of CC26 Chapter 4 (A Risk Assessment Model).

Consequently, I was caught somewhat off guard when I was so firmly challenged by my two critics. Not wanting to embarrass myself further on Twitter, I DM’d @SimonVonDulwich to try and understand his objection.

Two things intrigued me about his analysis. The first was his reference to an “L=3” risk, which didn’t seem to meet the Commission’s criteria of “very low” likelihood. The second was his focus on the effect of trustees making marginal changes to their assessment of likelihood and impact, which again seemed outside the scope of the Commission’s concerns.

Which brings me back to the revised formula. Is its effect limited to the Commission’s stated ambition of giving prominence to “risks which have very high impact and very low likelihood of occurrence”?

The answer, of course, is no. The revised formula affects the distribution of all the risks in the grid, regardless of their anticipated likelihood and impact. For example, the “L=3, I=4” risk that @SimonVonDulwich mentioned, which clearly lies outside the Commission’s intentions, nevertheless moves from being an amber risk under the ancien régime to a red risk now.

On the other hand, the risk with the very lowest likelihood and the very highest impact (i.e. the one in the top-left corner of the 5×5 grid), which clearly represents the central focus of the Commission’s concerns, is only scored as an amber risk under the new system.
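
Both cases are easy to check against the CC26 bands; here is a minimal sketch (the rag helper simply encodes the thresholds quoted earlier):

```python
def rag(score):
    # CC26 bands: 15 or more red, 8-14 amber, 7 or less green
    return "RED" if score >= 15 else "AMBER" if score >= 8 else "GREEN"

for x, y in [(3, 4), (1, 5)]:
    old, new = x * y, x * y + y
    print(f"L={x}, I={y}: {old} ({rag(old)}) -> {new} ({rag(new)})")
# L=3, I=4: 12 (AMBER) -> 16 (RED)
# L=1, I=5: 5 (GREEN) -> 10 (AMBER)
```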

All of which brings me back to my prejudice about the way numbers are used in public life. The attraction of simple formulae over common sense can be overwhelming. Pop the numbers in – bish, bash, bosh – out comes an answer at the other end. Except in this case, the answer the Commission provides singularly fails to address the very question it set itself. Instead of asking trustees to make the best possible assessment of risks – and then to draw attention to exceptional cases worthy of special attention – the Commission’s guidance distorts the assessment of every risk.

There are no routine statistical questions, only questionable statistical routines.

Sir David Cox

So how should charities assess risk in a way that is both simple and transparent? I think we have already seen the answer.

The three steps are:

  1. Assess each risk fairly based on trustees’ best assessment of likelihood and impact.
  2. Plot each risk on a 5×5 grid in the usual way.
  3. Apply a colour coding to the grid that directs attention to the risks of greatest concern.

The key thing is that step 3 can be changed in any way the Commission or trustees see fit. It could, for example, look something like this:
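
As a minimal sketch (the particular rule here – the original score bands, plus a red rating for the four very low likelihood/very high impact corner boxes – is my own illustrative assumption, not anything the Commission prescribes):

```python
def overlay(x, y):
    # Illustrative assumption: original CC26 score bands, but also
    # red-rate the corner boxes with likelihood 1-2 and impact 4-5.
    score = x * y
    if score >= 15 or (x <= 2 and y >= 4):
        return "RED"
    if score >= 8:
        return "AMBER"
    return "GREEN"

for y in range(5, 0, -1):  # impact, highest row printed first
    print(f"I={y}: " + "  ".join(f"{overlay(x, y):5}" for x in range(1, 6)))
```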

It really doesn’t matter what RAG overlay is applied to the grid as long as it’s appropriate to the issues at hand in that organisation.

I don’t know if @ukcivilservant and @SimonVonDulwich would agree with my analysis and the conclusions I have drawn. Perhaps they think that asking charity trustees to place greater weight on an “L=3, I=4” risk than on an “L=1, I=5” risk is acceptable, even if it wastes trustees’ time and energy and is expressly not what the Charity Commission wants.

I hold to a simpler discipline:

  1. measure what you can, as accurately as you can;
  2. apply judgement to your measurements;
  3. take appropriate action.

1 Comment

  1. Well done Richard. I do agree with you.
    And the problem you identify isn’t confined to charities. Every significant organisation nowadays has a risk register, but such registers are all too often little more than tick boxes which generate little or no internal challenge.
