Internal variability in surface temperature and the hiatus

Our paper Quantifying the likelihood of a continued hiatus in global warming is published today in Nature Climate Change. Here is the New Scientist take, the Carbon Brief take, and the Met Office Research News article.

Chris Roberts took on a huge task, processing massive amounts of data in the CMIP5 climate model archive, and leading the analysis on what I think is a taut paper.

We look at how long internal variability might contribute to a temporary hiatus or slowdown in global surface warming. How long could cooling due to internal variability counteract the sort of warming caused by human influences that we have seen over the last few decades? In part, it is motivated by our paper from last year [Palmer & McNeall 2014], looking at the consequences of energy sloshing around naturally in the Earth system.

This kind of question has been looked at before [e.g. Santer et al. 2009, Knight et al. 2009, Meehl et al. 2013, Maher et al. 2014].  We’ve taken things further by using a large multi-model collection of simulations, using some nifty stats, looking at conditional probabilities of hiatus continuation, splitting out internal heat rearrangement from top-of-atmosphere changes, looking at the probability of accelerated warming, and checking the surface patterns of warming vs cooling decades.

From my perspective, the headline results are that 1) you would expect natural variability to counteract 20 years of warming in only about 1% of the model simulations, but that 2) if you’ve already seen 15 years of a cooling contribution, the chances that it will continue for another 5 years are surprisingly high (the best estimate is 1 in 6, but it could be up to 1 in 4).
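As a back-of-envelope illustration of why the conditional probability is so much higher than the unconditional one, here is a toy Monte Carlo sketch. It is emphatically not the paper’s method: internal variability is stood in for by an AR(1) process, and the persistence, noise level, and assumed forced warming rate are all made-up illustrative numbers.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy illustration of the conditional-probability effect, NOT the paper's
# method. A "cooling contribution" is a window where the internal-variability
# trend cancels an assumed forced warming rate. All parameters are invented.
phi, sigma = 0.6, 0.1            # AR(1) persistence and noise std (K)
forced = 0.02                    # assumed forced warming rate (K/yr)
n_members, n_years, start = 20000, 140, 100

noise = rng.normal(0.0, sigma, (n_members, n_years))
x = np.zeros((n_members, n_years))
for t in range(1, n_years):
    x[:, t] = phi * x[:, t - 1] + noise[:, t]

def trends(segments):
    """Least-squares trend of each row of a 2-D array."""
    t = np.arange(segments.shape[1])
    return np.polyfit(t, segments.T, 1)[0]

cool15 = trends(x[:, start:start + 15]) < -forced
cool20 = trends(x[:, start:start + 20]) < -forced

p20 = cool20.mean()                   # unconditional probability
p20_given_15 = cool20[cool15].mean()  # conditional on 15 years of cooling
print(f"P(20-yr) = {p20:.3f}, P(20-yr | 15-yr) = {p20_given_15:.3f}")
```

Because the variability is positively autocorrelated and the two windows share their first 15 years, the conditional probability comes out far higher than the unconditional one, which is the qualitative effect in the paper.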


Of course, the most likely outcome after a 15 year cooling contribution from internal variability is a return to a warming contribution.


All of this is slightly tricky to interpret in terms of the recent slowdown in global surface warming. It is strongly suspected that other factors (solar cycle, volcanoes, see e.g. Huber & Knutti 2014) have played a part in the slowdown, although the exact split between external forcing and internal variability is disputed [you remember the fuss over Marotzke & Forster 2015, right?].

In this paper, we are only studying the statistics of a hypothetical and simulated slowdown, not the slowdown that we are in now. Further, if internal variability has been a large contributor to the real system, it’s unclear which particular “type” or “mode” of variability is chiefly responsible, and whether all the models are able to simulate it well.

Nevertheless, I think that in a situation where there is an active debate as to the causes, extent, and even existence of a warming slowdown, we offer some useful, quantitative, multi-model context on how much, and for how long, internal variability could contribute.

One of the bits of the paper that I’m most chuffed with is buried in the supplementary material. We convinced ourselves pretty early on that adding natural variability from a control run to early 21st Century simulations of a transient run was not a completely crazy thing to do.

An issue here is that any non-linearities in the climate system might invalidate your stats. For example, variability in a system without summer sea ice (which might feasibly happen later in this century) could be fundamentally different from a system where it is present. This would mean that you couldn’t just add the variability from the control run and look at the statistics of no-or-negative warming.

One reviewer spotted this, and, rightly, asked for more evidence than we had presented in the original draft. We did a considerable amount of new analysis (Rank histograms! Statistical tests!), and eventually convinced everyone that the assumption of linearity was valid for our analysis.
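For readers curious what a rank-histogram consistency check looks like in practice, here is a minimal sketch with synthetic stand-in data. The numbers and distributions below are invented for illustration (the paper’s analysis was done on actual CMIP5 output); the idea is that if transient-run residuals are statistically exchangeable with control-run variability, each residual’s rank within a control sample should be uniformly distributed.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins for detrended anomalies: a "control run" sample and
# residuals from a "transient run". Here both are drawn from the same
# distribution, so the exchangeability assumption holds by construction.
n_cases, n_control = 1000, 19
control = rng.normal(0.0, 0.1, (n_cases, n_control))
transient = rng.normal(0.0, 0.1, n_cases)

# Rank of each transient residual within its control sample (0..n_control).
ranks = (control < transient[:, None]).sum(axis=1)
counts = np.bincount(ranks, minlength=n_control + 1)

# Chi-square statistic against a flat histogram; for 19 degrees of freedom
# the 5% critical value is about 30.1, so values well below that give no
# evidence against exchangeability.
expected = n_cases / (n_control + 1)
chi2 = ((counts - expected) ** 2 / expected).sum()
print(f"chi-square = {chi2:.1f} (19 dof)")
```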

Roberts, C.D., M.D. Palmer, D.J. McNeall & M. Collins (2015) Quantifying the likelihood of a continued hiatus in global warming, Nature Climate Change, DOI:10.1038/nclimate2531

Thanks to Matt Palmer and Chris Roberts for comments on an earlier version of this post.

Update 24th February 2015

The paper is discussed at New Scientist, the Carbon Brief, Met Office Research News, Is Nerd, Reddit, University of Exeter, ReportingClimateScience.com, Quartz, Climate Central, Motherboard [Vice]

9 comments

  1. Doug,
    I think this is interesting. And also worthwhile – people on both sides ought to want to have an idea what they expect to happen given what they believe about the fidelity of these models.

    By following an existing hiatus…. is "hiatus" defined as "any period with a trend < 0 K/year"? (Some blog posts in some places describe it as "warming not statistically significant using red noise", but I assume you didn't use such a tortured definition. I would not have.)

    I did download a bunch of model data some time back (when I had only AR4 stuff) and did convince myself that the 'weather noise' in models was about the same during control runs, industrial periods and afterwards. No… I don't remember where I put the stuff – but will probably find it. People will want to know things like "after a hiatus as 'deep' as the current one", and so on. Obviously, one has to define a hiatus to answer the question.

    Also, fwiw, my impression is the big problem with all this is that some of the models "weather noise" (for lack of a better word) looks inconsistent with earth weather. And the screwier looking models can dominate the answer to the sort of question you are asking. Maybe sometime later I'll post more clearly on that.

    But I know I did find that with models, after a long 'down' trend due to natural variability, you tend to have an 'up' trend (as would be necessary if weather is not a "random walk"). (For short trends, this was not necessarily the case.) So the probability of an upcoming 5-year hiatus after a 15-year hiatus seems consistent with what I remember seeing.

    Anyway…. obviously, I haven't read the paper. If you could send it to me, I'd love that!

  2. Nullius in Verba

    That looks like an interesting and useful paper to me! Well done!

    The high probability of a 15 year hiatus continuing another 5 years is not a big surprise to me. Consider the simple example of a sequence of events occurring independently with probability 0.82. The probability of 15 in a row is 5.1%. The probability of 20 in a row is 1.9%. The probability of 20 in a row given that 15 have already occurred is 37%, the same as the probability of 5 in a row.

    Obviously, trends are not independent year to year, but going from a 5% probability to a 1% probability is not such a big jump – multiplying by an independent 20% event would do it.
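    The arithmetic in the independent-events example above is easy to verify (a sanity check, nothing more):

```python
p = 0.82
p15 = p ** 15       # probability of 15 successes in a row, ~5.1%
p20 = p ** 20       # probability of 20 in a row, ~1.9%
cond = p20 / p15    # P(20 in a row | first 15 occurred) = p**5, ~37%
print(f"{p15:.3f}, {p20:.3f}, {cond:.3f}")
```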

    I did wonder why you said the hiatus had lasted the 15 years this century rather than the 17 years since the El Nino peak in 1998. 15 is a nice round number, but the difference sort of matters. Not that I see it as a problem – it’s easy enough given all the graphs you’ve presented to interpolate.

    But the most obvious use I can see for the work is in figure S1 in the SI, which it seems to me could be used to validate the models, eliminate those that have been falsified by observations, and thereby improve the reliability of new estimates based on what’s left.

    Looking at a 17 year hiatus and 0.2 C/decade trend, 17 out of the 23 models are contradicted by reality at a 99% significance level and a couple (?) more at the 95% level. (Of course, whether this is enough to reject them depends on what your priors are.) I gather this subset is the set of models not already falsified by their inability to model El Nino, is that right? If so, the combination of the two tests makes considerable progress in eliminating models that don’t match reality, which according to Popper is how science works! Excellent news!

    What climate trend predictions are made by the models that are left?

  3. […] of a continued hiatus in global warming (Robert et al. 2015). You can read more about it on Doug’s blog, but the core result is probably illustrated in the table on the left. It shows the probability of […]

  4. Steven Mosher

    “Also, fwiw, my impression is the big problem with all this is that some of the models “weather noise” (for lack of a better word) looks inconsistent with earth weather. And the screwier looking models can dominate the answer to the sort of question you are asking. Maybe sometime later I’ll post more clearly on that.”

    ya, that is pretty much true from what we saw looking at a bunch of different temporal and spatial scales. Like you I would have to go dig that work up.

    There were some models that were just screwy looking. Technical term. Say, where trends from adjacent grid cells were radically different, while other models gave smoother fields.

    ha and since you know that ‘smoother fields’ is a hot topic for us.. we will probably return to have a look.

  5. […] Co-author Doug McNeall’s blog post […]

  6. Suppose we bet on the roll of a die that I provided. You win $5 when six comes up and I win $1 when it doesn’t. You have rolled the die 15 times and six has never come up. There is a 6.4% chance of that happening with a fair die, so you are worried whether or not I provided you with a fair die. I tell you not to be surprised if no six appears on the next 5 rolls – the chances of that happening are 40%. Would you think my statement of fact was intended to INFORM or DECEIVE you about the situation?

    Climate scientists have provided ordinary citizens with climate models that predict that the chance of no warming in 5 years is 28%, roughly 28%^2 (7.8% vs your value of 9-10%) for 10 years, 28%^3 for 15 years (but you omit this value from your table of results). I’m seriously worried that your models are overestimating warming. You now inform me that climate models predict that the chance of no warming for the next 5 years is 25%. Don’t ask me to believe that the INTENT of your statement is to inform the public about the reliability of your models. We both know that 20 years without warming will occur <1% of the time AND THAT THIS IS THE ONLY RELEVANT FACT THE PUBLIC NEEDS TO KNOW to evaluate the reliability of your models.

    If it WERE really important that the public know that a 25% chance exists that there will be no warming in any 5-year period, you should have been telling the public that since 1990. The same goes for a 9% chance of no warming over 10 years. However, we both know that short-term changes are meaningless in the context of climate change. It made sense to ignore these possibilities in the past and TODAY. The real issue is whether warming from doubled CO2 will amount to 2, 3 or 4 degC. Actually, the hiatus has pretty much eliminated the possibility of 4 degC – especially if sensitivity to aerosols is in the AR5's likely range – but the possibility exists that unforced variability is underestimated by models. (Naturally-forced variability is another complication.)

    We both know that most, if not all, climate models show little persistence on a decadal time scale. Above, I could calculate roughly the correct probability by using the formula 28%^n (where n is the number of five-year periods), because each five-year period is effectively an independent event. You didn't need to analyze the CMIP5 data to know that the probability of the hiatus continuing for five more years would be about 25% (within error of the probability for observing the first five-year period, 28%). You can, of course, fool the public into thinking that your sophisticated analysis of the significant probability of the pause continuing is telling them something new. Just like I could deceive you with information about a 40% chance of five more rolls of a die without a six.
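    For reference, the probabilities quoted in this comment can be reproduced directly (the 28% figure is the commenter's premise, not a number from the paper):

```python
p_no_six_15 = (5 / 6) ** 15   # no six in 15 rolls of a fair die, ~6.5%
p_no_six_5 = (5 / 6) ** 5     # no six in the next 5 rolls, ~40%

p_5yr = 0.28                  # assumed P(no warming over 5 years)
p_10yr = p_5yr ** 2           # ~7.8%, treating periods as independent
p_15yr = p_5yr ** 3           # ~2.2%
print(f"{p_no_six_15:.3f}, {p_no_six_5:.3f}, {p_10yr:.4f}, {p_15yr:.4f}")
```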

    IMO, this kind of BS is hurting the reputation of climate science among the sophisticated public. I read your sensible and useful replies to Doug Keenan's partial nonsense. (A random-walk model is an absurd model for global warming. So is the IPCC's linear AR1 model. Neither of you mentioned that warming – defined as the difference in the means around 1990 and 2000 – has negligible possibility of including zero.) But next time, I'll remember this crude propaganda.

    Sorry for the rant. All of the BS from BOTH SIDES about the hiatus is driving me nuts. We have a 60+ year record of warming potentially-attributable to rising GHGs; 40+ years with satellite data. Any periods shorter than these are cherry-picking. The best estimate for ECS for this period is 2 degC. Observations from ERBE and CERES show that climate models do a lousy job of predicting changes in SWR and LWR for clear and cloudy skies associated with TEN annual warming cycles. Manabe et al, http://www.pnas.org/cgi/doi/10.1073/pnas.1216174110 Couldn't your attention be directed more profitably towards solving this problem?

  7. […] primo, uscito su Nature (ma uno degli autori lo spiega sul suo blog), compie un’analisi statistica delle simulazioni climatiche focalizzando l’attenzione […]

  8. […] My colleague Chris Roberts points out that the necessary comparison is with the expected warming (and associated distribution) from the models, not from the somewhat arbitrary threshold of “zero” warming. Here’s a comparison from our paper on the future of the hiatus, from earlier this year: […]

  9. Based on the mean flux of radiation …

    (a) The effective temperature of the Sun’s radiation reaching the surface of Earth is about -40°C. Yes, minus 40.

    (b) The effective temperature of the Sun’s radiation reaching the surface of Venus is about -140°C.

    (c) The effective temperature of all the radiation from Earth’s atmosphere to its surface is about 3°C.

    Because these planets are rotating spheres, the actual mean temperature that any of the above radiation could achieve is a few degrees colder than would be achieved with uniform orthogonal flux striking a flat non-reflecting surface. The reason for this relates to the fact that the achieved temperature is only proportional to the fourth root of the flux. So, because the flux varies with the angle of incidence, flux that is above the mean achieves only a relatively small increase in temperature above that achieved by the mean flux.

    From this it is obvious that the mean temperatures of the surfaces of Earth and Venus are not achieved by direct radiation into those surfaces. Some relatively small regions on Earth may rise in temperature due to direct solar radiation, but overall, the observed global mean temperature cannot be explained by solar radiation. Atmospheric radiation would also not keep the mean temperature above freezing point (0°C) either.

    Hence we need to consider a totally different paradigm (based on entropy maximization and the laws of thermodynamics) which can and does explain the actual observed temperatures, not only for Earth and Venus, but for all planets and even the regions below any solid surface. Correct physics produces correct results that agree with data from the real Solar System.

    The breakthrough has come in this 21st Century and the science stands up to the test, being supported by copious evidence from planetary data, studies and experiments such as outlined at http://climate-change-theory.com so you will learn what is really happening if you read and study such.
