UPDATE: I’ve added a comment by Marcel Crok to the end of the post.
The climate models being used by the IPCC for their upcoming 5th Assessment Report show little skill at simulating a number of variables. We can now add Scandinavian land surface air temperature anomalies to the list.
Figure 1 compares the UKMO CRUTEM4-based land surface air temperature anomalies for Scandinavia, starting in January 1930, with the CMIP5-archived multi-model ensemble mean of the climate models prepared for the IPCC’s upcoming 5th Assessment Report (AR5). Both the data and the model outputs are available to the public through the KNMI Climate Explorer. The coordinates used were 55N-72N, 2E-30E. The climate model hindcasts are based on the historic forcings and the RCP6.0 scenario. Both the data and the model outputs have been smoothed with 13-month running-average filters to reduce the monthly variations (what some would call weather noise). Also note that the linear trends were calculated from the monthly data and model outputs, not the smoothed versions. As shown, according to the models, if greenhouse gases were responsible for the warming of land surface air temperature anomalies in Scandinavia, they would have warmed at a rate about twice the rate observed. Or, phrased another way, since 1930, Scandinavian land surface air temperatures warmed at a rate that was half the rate simulated by the models. Not a very good showing on the part of the models.
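For readers who want to reproduce this kind of processing themselves, here is a minimal sketch in Python with NumPy. It uses a synthetic placeholder series rather than the actual CRUTEM4 data (which come from the KNMI Climate Explorer), and it shows the two steps described above: a 13-month centered running-average filter for display, and a linear trend fitted to the unsmoothed monthly values.

```python
import numpy as np

# Synthetic monthly anomaly series standing in for the CRUTEM4 data.
# The true trend built into this placeholder is 0.1 deg C per decade.
rng = np.random.default_rng(0)
months = np.arange(12 * 30)                            # 30 years of monthly data
anoms = (0.1 / 120) * months + rng.normal(0, 0.5, months.size)

# 13-month centered running average (the "weather noise" filter)
window = np.ones(13) / 13
smoothed = np.convolve(anoms, window, mode="valid")

# Linear trend fitted to the UNsmoothed monthly data, as in the figures
slope_per_month, intercept = np.polyfit(months, anoms, 1)
trend_per_decade = slope_per_month * 120               # 120 months per decade
```

The key detail is that the trend is computed from the raw monthly series; the smoothing is only for presentation.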
But it gets worse.
Look closely at the surface temperature data in Figure 1. There’s a very obvious upward shift in the data in the late 1980s. It may be hard to see with the model output in Figure 1, so I’ve highlighted the 2-year period of January 1987 to December 1988 in Figure 2.
From 1930 through 1986, Scandinavian land surface air temperatures cooled. (Note the large dip and rebound at the time of World War II.) Then there was a very strong warming surge in 1987 and 1988, which was followed by the period of 1989 to present when surface temperatures don’t appear to have warmed much at all. So let’s compare the models to the data for the periods before and after that upward shift.
Figure 3 compares the modeled and observed Scandinavian land surface temperature anomalies for the period of January 1930 to December 1986. Scandinavian surface temperatures cooled at a rate of -0.132 deg C per decade, but the models say they should have warmed at a rate of +0.058 deg C per decade. The models missed the mark by about 0.19 deg C per decade. Or, phrased another way, based on the linear trends, the models say Scandinavian temperatures should have warmed about 0.33 deg C from 1930 to 1986, when surface temperatures there actually cooled approximately 0.75 deg C.
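For those checking the arithmetic, the quoted decadal trends convert to total changes over the 57-year span (January 1930 through December 1986) like this:

```python
# Convert the decadal trends quoted above into total changes over
# January 1930 - December 1986 (57 years = 5.7 decades).
decades = 57 / 10

model_trend = 0.058      # deg C per decade (CMIP5 multi-model mean)
obs_trend = -0.132       # deg C per decade (CRUTEM4 observations)

model_change = model_trend * decades   # about +0.33 deg C
obs_change = obs_trend * decades       # about -0.75 deg C
```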
Then there’s the period of January 1989 to present. The climate model simulations indicate that Scandinavian land surface temperature anomalies should have warmed about 1.3 deg C over the past 23+ years (based on the linear trend) if manmade greenhouse gases were responsible for the warming. But, based on the linear trend of the observed temperatures, Scandinavian land surface air temperatures have not warmed.
WHAT CAUSED THE SHIFT?
It’s probably a combination of a couple of natural factors. I did a quick search for papers that explained the shift but didn’t find anything conclusive. Scandinavian visitors may know of some and hopefully they’ll provide us with links.
The Arctic Oscillation appears to have had a strong influence around that time. The Arctic Oscillation Index is based on sea level pressures north of 20N. Wikipedia has a good overview here. Also see the NOAA webpage here, and the Arctic Oscillation Index data here. According to the annual Arctic Oscillation Index data, Figure 5, there was a significant spike in 1988 and 1989. The change in the Arctic Oscillation was most pronounced during the winter months, regardless of whether you define winter as December to February (Figure 6) or January to March (Figure 7).
For the period of 1985 to 1995, the Arctic Oscillation Index correlates very well with Scandinavian land surface air temperatures. See Figure 8. (The correlation maps were created at the KNMI Climate Explorer.)
(Note that I’ve also marked on the correlation map the coordinates used in this post for the data and model outputs.)
But using the full term of the Arctic Oscillation Index data, 1950 to present, the correlation between Scandinavian land surface temperature anomalies and the Arctic Oscillation is not as strong, Figure 9. That’s why I noted that it was probably influenced by a number of natural factors.
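The correlations behind Figures 8 and 9 are ordinary Pearson correlations. Here is a toy sketch of how such a correlation coefficient is computed; the two series below are synthetic stand-ins, not the real AO index or CRUTEM4 data, and the signal and noise levels are arbitrary assumptions.

```python
import numpy as np

# Synthetic stand-ins for 11 annual values (e.g. 1985-1995):
# temperatures built to be strongly AO-driven for illustration.
rng = np.random.default_rng(1)
ao_index = rng.normal(0, 1, 11)
temps = 0.8 * ao_index + rng.normal(0, 0.3, 11)

# Pearson correlation between the two series
r = np.corrcoef(ao_index, temps)[0, 1]
```

Over a short, well-matched window the coefficient comes out high; extend the series with years where other factors dominate and it drops, which is the pattern seen going from Figure 8 to Figure 9.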
Some of you may be concerned that the shift shows up in the Scandinavian surface temperatures from January 1987 to December 1988, while the spike appears in the Arctic Oscillation Index data in 1988 and 1989. Sorry, I had used monthly data for the surface temperatures and annual data for the Arctic Oscillation Index. With the Scandinavian surface temperature anomalies in annual form, Figure 10, the shift appears in 1988 and 1989.
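Converting the monthly anomalies to annual form for Figure 10 is just a calendar-year average. A quick sketch with placeholder numbers:

```python
import numpy as np

# Two years of placeholder monthly values (24 months)
monthly = np.arange(24, dtype=float)

# Collapse to calendar-year means: reshape to (years, 12) and average
annual = monthly.reshape(-1, 12).mean(axis=1)
```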
Regardless of the cause of the shift, the cooling of Scandinavian land surface temperatures before the shift, and the flat temperatures after it, were not captured by the climate models.
STANDARD BLURB ABOUT THE USE OF THE MODEL MEAN
We’ve published numerous posts that include model-data comparisons. If history repeats itself, proponents of manmade global warming will complain in comments that I’ve only presented the model mean in the above graphs and not the full ensemble. In an effort to suppress their need to complain once again, I’ve borrowed parts of the discussion from the post Blog Memo to John Hockenberry Regarding PBS Report “Climate of Doubt”.
The model mean provides the best representation of the manmade greenhouse gas-driven scenario—not the individual model runs, which contain noise created by the models. For this, I’ll provide two references:
The first is a comment made by Gavin Schmidt (climatologist and climate modeler at the NASA Goddard Institute for Space Studies—GISS). He is one of the contributors to the website RealClimate. The following quotes are from the thread of the RealClimate post Decadal predictions. At comment 49, dated 30 Sep 2009 at 6:18 AM, a blogger posed this question:
If a single simulation is not a good predictor of reality how can the average of many simulations, each of which is a poor predictor of reality, be a better predictor, or indeed claim to have any residual of reality?
Gavin Schmidt replied with a general discussion of models:
Any single realisation can be thought of as being made up of two components – a forced signal and a random realisation of the internal variability (‘noise’). By definition the random component will [be] uncorrelated across different realisations and when you average together many examples you get the forced component (i.e. the ensemble mean).
To paraphrase Gavin Schmidt, we’re not interested in the random component (noise) inherent in the individual simulations; we’re interested in the forced component, which represents the modeler’s best guess of the effects of manmade greenhouse gases on the variable being simulated.
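Schmidt’s point can be illustrated with a toy example: give many synthetic “runs” the same forced signal plus independent noise, and the ensemble mean tracks the forced signal far better than any single run does. The signal shape, noise level, and run count below are arbitrary assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 200)
forced = 0.5 * t                                   # shared forced signal
noise = rng.normal(0.0, 0.2, (50, t.size))         # internal variability
runs = forced + noise                              # 50 model "realisations"

ensemble_mean = runs.mean(axis=0)

# RMS error relative to the forced signal: one run vs. the ensemble mean
rms_single = np.sqrt(np.mean((runs[0] - forced) ** 2))
rms_mean = np.sqrt(np.mean((ensemble_mean - forced) ** 2))
```

Averaging N independent realisations shrinks the noise by roughly the square root of N, which is why the ensemble mean is taken as the forced component.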
The quote by Gavin Schmidt is supported by a similar statement from the National Center for Atmospheric Research (NCAR). I’ve quoted the following in numerous blog posts and in my recently published ebook. Sometime over the past few months, NCAR elected to remove that educational webpage from its website. Luckily the Wayback Machine has a copy. NCAR wrote on that FAQ webpage that had been part of an introductory discussion about climate models (my boldface):
Averaging over a multi-member ensemble of model climate runs gives a measure of the average model response to the forcings imposed on the model. Unless you are interested in a particular ensemble member where the initial conditions make a difference in your work, averaging of several ensemble members will give you best representation of a scenario.
In summary, we are definitely not interested in the models’ internally created noise, and we are not interested in the results of individual responses of ensemble members to initial conditions. So, in the graphs, we exclude the visual noise of the individual ensemble members and present only the model mean, because the model mean is the best representation of how the models are programmed and tuned to respond to manmade greenhouse gases.
The list of model-data posts that show models performing poorly grows and grows. We can add Scandinavian land surface temperatures to the list of variables the CMIP5 climate models show no skill at simulating. The others include:
And as we recently illustrated and discussed in the post Meehl et al (2013) Are Also Looking for Trenberth’s Missing Heat, the climate models used in that study show no evidence that they can simulate how warm water is transported from the tropics to the mid-latitudes at the surface of the Pacific Ocean. So why should we believe they can simulate warm water being transported to depths below 700 meters without warming the waters above 700 meters?
Marcel Crok writes:
Jos de Laat (of KNMI) and I (a science writer) have just published a paper on the European temperature shift of 1987/1988. It is titled A Late 20th Century European Climate Shift: Fingerprint of Regional Brightening? and can be downloaded (freely) at http://www.scirp.org/journal/acs/
We link it to the NAO and to the transition from dimming to brightening.