It has long been known that people generally prefer a bet with known odds to one where the odds themselves are uncertain. The classic example is a choice between two bets.
Imagine two urns. In the first urn there are 100 balls, 50 white and 50 red. If you reach into the urn and pull out a red ball, I’ll give you £100. In the second urn there are also 100 balls, but the proportion of red to white balls is unknown. If you reach in and pull out a red ball, I’ll still give you £100.
Assuming I’m not trying to cheat you (I’m not, honest), which bet would you rather take? When surveyed, people tend to take (or pay more to take) the first bet, even though, with no reason to think the second urn is stacked either way, the two bets offer the same chance of winning. This preference is called ambiguity aversion, and apparently Keynes knew about it. It is the foundation of the Ellsberg paradox, which uses slightly more complex bets to produce a genuine paradox, rather than just a preference.
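If you want to convince yourself that the two bets really are equivalent, a few lines of simulation will do it. This is just a sketch of my own (not from the ambiguity-aversion literature), and it assumes that every mix of red and white balls in the second urn is equally likely:

```python
import random

N_TRIALS = 100_000

def draw_known_urn():
    """Urn 1: exactly 50 red and 50 white balls."""
    return random.random() < 0.5  # red with probability 0.5

def draw_ambiguous_urn():
    """Urn 2: unknown mix; here I *assume* every number of red balls
    from 0 to 100 is equally likely before a ball is drawn."""
    n_red = random.randint(0, 100)
    return random.random() < n_red / 100

known = sum(draw_known_urn() for _ in range(N_TRIALS)) / N_TRIALS
ambiguous = sum(draw_ambiguous_urn() for _ in range(N_TRIALS)) / N_TRIALS

print(f"Chance of red from the known urn:     {known:.3f}")    # about 0.5
print(f"Chance of red from the ambiguous urn: {ambiguous:.3f}")  # also about 0.5
```

Both come out at about a half, so the expected payout is £50 either way; the preference for the first urn is about the ambiguity, not the odds.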
Why am I looking at this stuff? Well, I’ve an idea that this kind of thing might have an influence on the way that we look at abrupt climate change: particularly “low probability – high impact” events. Looking at that literature, there are very few studies that try to pin down the probability of such an event occurring. When they do try, they often end up with very wide bounds on the probability estimates, as the dynamics of the systems are often little known. In our recent review paper, we argued that it might be better to classify these as “high uncertainty – high impact” events. Nevertheless, these bounds (and best guesses) often come with probabilities that it would be difficult to describe as “low”. I mean, would you get on a plane if there was a 1 in 10 (or 100, or 1000) chance of it crashing?
There might be lots wrong with these probability estimates, but assuming that they are in the right ballpark, why is the literature on cost-benefit analysis, impact studies, adaptation and so on for abrupt climate change so thin? Because we are averse to the ambiguous? Because we (as scientists) don’t like to comment and speculate on risks that we feel we don’t know enough about? Because we don’t like to be seen to be crying wolf?
In a lovely paper, Fox and Tversky (1995) look at ambiguity aversion from a different perspective. Using a series of simple studies, they find that the effect tends to disappear if you don’t immediately contrast an ambiguous bet with an unambiguous one. If you ask one group of people to pay to bet on the first urn, and another group to bet on the second urn, they pay about the same. Does this mean that if we want people to seriously consider abrupt climate change, we should ask them to clear their minds of the other, less uncertain issues they might be focussed on?
Hi Doug, nice post! It’s quite relevant to the IPCC’s updated approach to <a href="http://www.ipcc.ch/pdf/supporting-material/uncertainty-guidance-note.pdf">uncertainty and confidence</a> for AR5. Two key points made in their guidance note are:<blockquote>Presentation of findings with “low” and “very low” confidence should be reserved for areas of major concern, and the reasons for their presentation should be carefully explained.</blockquote>and<blockquote>Confidence should not be interpreted probabilistically</blockquote>where confidence = evidence × agreement.

One of the things that the "Key Vulnerabilities" chapter in WG2 needs to think about for the FOD is how to deal with high-uncertainty issues such as abrupt changes, and whether these should be presented if they are viewed as "low confidence". Interesting.
I’m often struck by people who claim that they don’t gamble, unaware that risk is something we all manage every day when we cross the road.
Doug, interesting, but I’m not certain (ha!) what you believe the alternative to inaction is, or even if you believe there is an alternative?

I can see it’s valuable to recognise that ambiguity aversion may be in play in a general sense, to see how it might influence the decisions we make or don’t make, but beyond that I’m struggling to understand what else you’d advocate.

I suppose in an abstract sense there may be circumstances where there is more than one legitimate course of action and these courses of action have roughly the same "cost". In that situation, I guess you could dip into your pool of high-impact-but-uncertain issues and identify that one or more of them would, as a side effect, be mitigated by a particular choice. IOW, a proper application of the precautionary principle.
Hi mrsean2k, thanks for responding.

I guess I’m not advocating much, beyond making more and better studies of the mode, probability, and costs of low-probability, high-impact events. I’m quite surprised that there isn’t more focus on these systems. If I were making climate policy, I’d want to know as much as possible about them. I guess many people (including scientists) are implicitly disregarding the probabilities that the studies come up with as way too high. That is fine: I can think of a number of reasons that the probability estimates might be too high. However, I’d want people to go away and come back with more carefully worked out estimates of probability.
"Looking at that literature, there are very few studies that try and pin down the probability of such an event occuring."The reason for this is just that it’s not possible to define probabilities for rare future events. What is the probability of an alien invasion in the next 50 years? It’s a meaningless question. How do you define probability? There are two ways to determine probabilites. One is by calculation: I can calculate that if I pick up a pen at random from the pile on my desk, the probability of getting the green pen is one in nine. The other is measurement of probability by repeated observation: for example the probability of it raining in Exeter on April 17th is 0.36, or whatever, based on historical observations.In the case of abrupt changes in climate such as the ‘youger dryas’ event, we can’t calculate its probability because we don’t know what were the physical processes that caused it, and we don’t have enough repeated observations to measure it. So any attempt to ascribe a probability is not very meaningful.It’s a shame that your recent paper is paywalled, with no preprint available, and in a journal that my university regards as too obscure to subscribe to. Please could you send me a copy?
Hello Paul, thanks for the comment.

You are missing another way of defining probability: as a subjective degree of belief. This sounds controversial but has a long history, and has become a major school of thought in statistical circles. The crux of the idea is that probability is a state of the mind of a person, given what they know (the evidence available to them), rather than a tendency of the outside world. In this case, you can construct probabilities from reasonable betting odds.

The kicker is that subjective probabilities must also be coherent (two people with the same evidence would judge the same probability). In the case of your pens, this means that you and I would judge the same probability of picking up a green pen from your desk. If you could only see one half of the pens, and I could see the other half, then, well, we might come up with different probabilities.

There is a good argument for saying that this is the way that people use probabilities in the real world. Many events of interest are one-off, and highly uncertain, and yet people seem to bet on them (whether with money or otherwise).

So although assigning probabilities to low-probability, high-impact events is *difficult*, it isn’t meaningless. In our societal policies, I would argue that we are doing such a thing anyway. I think we have more evidence for the processes which might govern abrupt climate change than we do about an alien invasion, so I would expect our probability estimates to be more meaningful. That’s not to say they are right!

There is a good summary of subjective probability here, http://understandinguncertainty.org/node/88, and I recommend looking around the rest of the website too.

My apologies about the paper: in my ideal world, all papers would be free to the public. You live and learn. I would recommend WIREs Climate Change to your university, mind; it is new, but it has some great stuff. I’m happy to send you a copy. I will set up an email address for such things.
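To make “reasonable betting odds” a bit more concrete, here is a minimal sketch of my own (the numbers are made up for illustration, not taken from any study): if the most you would pay for a ticket that returns £100 when an event happens is £20, your implied subjective probability for that event is 20/100 = 0.2.

```python
def probability_from_stake(max_stake: float, payout: float) -> float:
    """Implied subjective probability: the most you would pay for a ticket
    that returns `payout` if the event happens, divided by `payout`."""
    return max_stake / payout

def probability_from_odds_against(odds_against: float) -> float:
    """Convert bookmaker-style odds ('odds_against to 1 against') to a probability."""
    return 1.0 / (1.0 + odds_against)

# Made-up example: £20 is the most you would pay for a £100 ticket.
print(probability_from_stake(20, 100))    # 0.2
print(probability_from_odds_against(4))   # 4 to 1 against -> 0.2
```

The arithmetic is the same whether the event is a coin toss or a one-off like an abrupt change in the climate system; the hard part is deciding what odds are reasonable, not the conversion.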
Doug, that all sounds a bit woolly and subjective to me – I’m interested in hard science. You might be interested in http://www2.lse.ac.uk/publicEvents/events/2012/03/20120308t1830vOT.aspx:

"A substantial literature on risk perception demonstrates the limits of human rationality, especially in the face of catastrophic risks. Human judgment, it seems, is flawed by the tendency to overestimate the magnitude of rare but evocative risks, while underestimating risks associated with commonplace dangers…."
Hi Paul,

It is subjective, but I assure you that any woolliness is probably in my description, rather than in the science itself.

Thanks for the link, that does indeed look worth attending. I’ve been interested in how the perception of risk might influence expert elicitation for catastrophic events for some time now, so I’m not unaware of this fascinating literature. The objective, of course, is to take the biases in human judgement into account when informing decision makers about catastrophic risk.

For some more technical (or ‘hard science’) background on subjective probabilities, I would recommend <a href="http://www.amazon.com/Probability-Theory-Science-T-Jaynes/dp/0521592712">E.T. Jaynes’ classic</a>, and for eliciting expert judgement on uncertainty, I would have a look at <a href="http://eu.wiley.com/WileyCDA/WileyTitle/productCd-0470029994.html">Uncertain Judgements</a> (full disclosure, Tony O’Hagan was co-supervisor on my PhD).

If you would like a copy of the paper mentioned before, please email me at climatestats, on google’s domain.