A Quick Look at the HADGEM2-ES Simulations of Sea Surface Temperatures

There’s been an online disagreement between skeptics and the UKMO in recent days. See Bishop Hill’s post here, and Nic Lewis’s full discussion here. Nic’s discussion was also referenced in David Rose’s article in the Mail on Sunday. The UKMO’s response to David’s article is here.

I was asked to take a look at how well (better written: how poorly) the HADGEM2-ES simulates sea surface temperatures.

For the first few comparisons, we’ll be examining the Reynolds OI.v2 sea surface temperature data (anomalies) and the model mean of the HADGEM2-ES’s 3 ensemble members for the satellite era; i.e., the period of January 1982 to August 2013. During this period, the model outputs include historic simulations (hindcasts) through 2005 and projections from 2006 to 2013. I’ve used the RCP6.0 scenario for the projections because it’s close to the A1B scenario that was popular in past IPCC reports.
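For readers who want to reproduce this kind of comparison, here is a minimal sketch of the anomaly and model-mean calculation. It is not the actual workflow used for the figures: the three member series are synthetic placeholders standing in for global-mean SST series downloaded from the KNMI Climate Explorer, and the base period is an assumption.

```python
# Minimal sketch (synthetic placeholders, not the actual KNMI downloads):
# convert monthly SST series to anomalies and form the 3-member model mean.
import numpy as np
import pandas as pd

months = pd.date_range("1982-01", "2013-08", freq="MS")  # Jan 1982 to Aug 2013
rng = np.random.default_rng(0)

# Stand-ins for the three HADGEM2-ES global-mean SST series (deg C).
members = [pd.Series(20.0 + 0.01 * rng.normal(0, 1, len(months)).cumsum(),
                     index=months) for _ in range(3)]

def to_anomaly(series, base_start="1982-01", base_end="2011-12"):
    """Remove the monthly climatology computed over an assumed base period."""
    base = series.loc[base_start:base_end]
    climatology = base.groupby(base.index.month).mean()
    return series - series.index.month.map(climatology).to_numpy()

anomalies = [to_anomaly(m) for m in members]
model_mean = pd.concat(anomalies, axis=1).mean(axis=1)  # the "model mean"
print(model_mean.tail())
```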

Let’s start with Global sea surface temperature data and the model simulations of it.

GLOBAL

The model mean of the HADGEM2-ES shows more than double the warming rate of the observed global sea surface temperatures from January 1982 to August 2013. See Figure 1.

Figure 1

PACIFIC OCEAN

For the model-data comparison of the Pacific (60S-65N, 120E-80W), see Figure 2. The HADGEM2-ES simulated a warming rate of about 0.19 deg C/decade for the sea surface temperature of the Pacific from January 1982 to August 2013, but the observed Pacific sea surface temperatures warmed at a rate less than 1/3 of the rate guesstimated by the UKMO’s HADGEM2-ES.

Figure 2
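The warming rates quoted here and in the figure legends are ordinary linear least-squares trends expressed in deg C per decade. A minimal sketch of that calculation is below; it is run on a synthetic monthly series with a known trend rather than on the actual data, and the helper name is only illustrative.

```python
# Minimal sketch: least-squares linear trend of a monthly series in deg C/decade,
# checked against a synthetic series built with a known 0.19 deg C/decade trend.
import numpy as np
import pandas as pd

def trend_per_decade(series):
    """Ordinary least-squares slope of a monthly pandas Series, in deg C/decade."""
    t_years = (series.index - series.index[0]).days / 365.25  # elapsed years
    slope_per_year = np.polyfit(t_years, series.to_numpy(), 1)[0]
    return 10.0 * slope_per_year

months = pd.date_range("1982-01", "2013-08", freq="MS")
synthetic = pd.Series(0.019 * np.arange(len(months)) / 12.0, index=months)
print(f"{trend_per_decade(synthetic):.3f} deg C/decade")  # ~0.190
```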

Also, apparently the UKMO didn’t get the memo that the sea surface temperatures of the Pacific Ocean stopped warming 2 decades ago. See Figure 3.

Figure 3

The Pacific Ocean covers more of the surface of the Earth than all of the continental land masses combined. When you look at a globe or a global map, it’s the big blue thing (see Figure 4) that stretches almost halfway around the globe at the equator. It’s difficult to miss. Maybe the UKMO is simulating the Pacific Ocean on some other planet, where sea surface temperatures are warmed by manmade greenhouse gases.

Figure 4

The map of the Pacific Ocean is available from mapsof.net.

MULTIDECADAL VARIABILITY

We examined the multidecadal variations in the sea surface temperature anomalies of the North Atlantic and North Pacific in a recent post: Multidecadal Variations and Sea Surface Temperature Reconstructions.

In the following graphs, we’re presenting the UKMO’s new and improved HADSST3 data and the HADGEM2-ES simulations of sea surface temperatures for the Northern Hemisphere. In the graphs, the models and data have been detrended over the period of January 1870 to June 2013. The data and models have been smoothed with 61-month filters to reduce the large variations caused by El Niño and La Niña events.
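For anyone who wants to replicate that processing, the sketch below shows the two steps on a synthetic monthly series: a linear detrend over the full record, then a centred 61-month running mean. It illustrates the method only; it does not use the HADSST3 or HADGEM2-ES data themselves.

```python
# Minimal sketch of the processing described above: detrend a monthly series
# over Jan 1870 to Jun 2013, then smooth with a centred 61-month running mean.
# The input is synthetic (trend + ~65-year wiggle + ENSO-ish monthly noise).
import numpy as np
import pandas as pd

months = pd.date_range("1870-01", "2013-06", freq="MS")
n = len(months)
rng = np.random.default_rng(1)
sst_anom = pd.Series(0.005 * np.arange(n) / 12.0
                     + 0.2 * np.sin(2 * np.pi * np.arange(n) / (65 * 12))
                     + rng.normal(0, 0.15, n),
                     index=months)

# 1. Remove the linear trend fitted over the full period.
t = np.arange(n)
trend_fit = np.polyval(np.polyfit(t, sst_anom.to_numpy(), 1), t)
detrended = sst_anom - trend_fit

# 2. Suppress El Nino / La Nina wiggles with a 61-month centred running mean.
smoothed = detrended.rolling(window=61, center=True).mean()
print(smoothed.dropna().iloc[:3])
```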

In Figure 5, it’s very obvious that the HADGEM2-ES model mean does not fully capture the cooling that took place from the late 1870s to 1910 and, as a result, underestimates the warming that occurred from 1910 to the early 1940s. Over the second “cycle”, the HADGEM2-ES model mean cools and rebounds about a decade earlier than the observations. All in all, the HADGEM2-ES does show multidecadal variability, though it’s out of sync with the real world.

Note also that the data appear to have peaked around 2004/05, while the models continue warming on their merry ways.

Figure 5

Do the models continue to show multidecadal variations into the future? Nope. See Figure 6.

Figure 6

And for those concerned that the model mean presents an average and therefore doesn’t fully capture the multidecadal variations of the individual ensemble members, see Figures 7, 8 and 9.

Figure 7

# # #

Figure 8

# # #

Figure 9

The individual simulations are poor representations of the observed sea surface temperatures, and as a result, so is the model mean. The average of bad simulations is not going to be a good simulation, no matter how strongly the climate science community believes an average will be a good representation. The average simply smooths out the internal noise within the models, noise that is not a true representation of coupled ocean-atmosphere processes.
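A toy example makes the point about averaging. In the sketch below (synthetic numbers, not HADGEM2-ES output), three runs share the same forced trend but have independently timed multidecadal wiggles; the average keeps the trend while largely cancelling the wiggles.

```python
# Toy illustration: averaging runs whose multidecadal variability is out of
# phase flattens that variability while preserving the common forced trend.
import numpy as np

years = np.arange(1870, 2014)
forced = 0.005 * (years - years[0])           # common forced warming (deg C)
rng = np.random.default_rng(2)

runs = []
for _ in range(3):
    phase = rng.uniform(0, 2 * np.pi)         # each run's own timing
    wiggle = 0.15 * np.sin(2 * np.pi * (years - years[0]) / 65.0 + phase)
    runs.append(forced + wiggle)

ensemble_mean = np.mean(runs, axis=0)
print("Multidecadal amplitude, single run   :", round(np.std(runs[0] - forced), 3))
print("Multidecadal amplitude, ensemble mean:", round(np.std(ensemble_mean - forced), 3))
```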

SOURCE

The data and model outputs are available through the KNMI Climate Explorer.

25 Comments
BBould
September 17, 2013 6:43 am

HADGEM2-ES simulates sea surface temperatures.
A model simulates sea surface temp and they use it as gospel? You have got to be kidding me.

Latitude
September 17, 2013 6:51 am

well why not….they are using models to tell them what CO2 does too
…actual measurements be damned (unless you can get away with jiggling them too)

Rgeo
September 17, 2013 7:18 am

AGW – All Gone Wrong
CAGW – Catastrophically All Gone Wrong

September 17, 2013 7:26 am

Bob Tisdale:
Thank you for a good article. However, I think the more important fact is the conclusion of Bishop Hill’s post linked from your article.
It also assesses the same dispute and its concluding paragraphs say

The aerosol cooling that appears in the Met Office model when climate sensitivity is low is very strong, completely inconsistent with recent observationally-based estimates of aerosol forcing. In fact it is so strong that it would also have prevented most of the temperature rise seen since industrialisation. In other words, the virtual climate produced with these settings of the model doesn’t match the real one. This means any scenario in which climate sensitivity is low – as indeed the observational studies suggest it is – gets downweighted in the final analysis to the extent that it doesn’t really show up in the final results. The effect is essentially to rule out low climate sensitivity as a possibility.
I’ll say that again in a slightly different way. The Met Office’s model, one used to generate the official climate projections, has big temperature rises built in a priori.

As I have repeatedly stated on WUWT, as long ago as 1999 I published a paper which reported that same problem with the Met Office’s model.
(ref. Courtney RS, ‘An assessment of validation experiments conducted on computer models of global climate using the general circulation model of the UK’s Hadley Centre’, Energy & Environment, Volume 10, Number 5, pp. 491-502, September 1999).
More recently, in 2007, Kiehl published a paper that assessed 9 GCMs and two energy balance models.
(ref. Kiehl JT, ‘Twentieth century climate model response and climate sensitivity’, GRL, vol. 34, L22710, doi:10.1029/2007GL031383, 2007).
Kiehl found the same as my paper except that each model he assessed used a different aerosol ‘fix’ from every other model. This is because they all ‘run hot’ but they each ‘run hot’ to a different degree.
Richard

Sweet Old Bob
September 17, 2013 7:58 am

Looks like another stab at the hockeystick meme.

September 17, 2013 9:12 am

Fascinating how every model representation by UKMO of future temps is always warmer and never allowed to dip even a minute fraction of a degree. UKMO has failed to predict the weather accurately for the past three years, even with their “Supa-puta”, thereby embarrassing themselves and destroying their own credibility through their religious adherence to the CAGW doctrine.
They are way past repair and can only be fixed by the total and complete removal of everyone responsible for their guessing-game excuse for forecasts and their completely irrelevant graphs that once again only confirm their ineptitude, their inability to function normally.
Great work, Bob.

Pamela Gray
September 17, 2013 9:25 am

Thanks Bob. Your post sent me on a search for research related to GCMs and SST interactions. I found plenty. I am reminded of the Y2K issues with MS-DOS/Windows computers. The issue was buried in miles and miles of code, and no one knew exactly where to find all the errors, how to find them, or what they would look like. Mac operating systems did not have this particular error in their code and thus were considered immune to the Y2K issue. To be sure, while the Y2K error was a known potential problem from its inception, it also created downstream errors in the code that were unexpected and thus hard to find and fix entirely.
Some of the research I found was about how the biased forcings driving the GCMs create downstream errors and biases inside the string of coupled programs that can be difficult to find or even describe. There may be errors happening they don’t even know about. Some papers are on the strengths and weaknesses within the models themselves (such as cloud feedbacks), or on how they are strung together.
This has led to a plethora of articles examining these issues. We have oldsters and youngsters in the ivory tower combing through the models and driving them with all kinds of forced “scenario” input settings to find the elusive errors and unintended (or otherwise) biases that are producing both the wide ranges between runs and between models and the ever-rising heat that is not materializing in the observations.
Here is just one of many pdfs I have found on this very interesting topic.
http://www.met.reading.ac.uk/~ericg/publications/Lloyd_al_JC11r.pdf
It sort of reminds me of the diagnostic work I have watched in a local messy, grease-monkeyed, yet highly trained, genius-populated machinery shop dedicated to working on broken-down, ever more complicated farming and forestry equipment. The errors are hard to find, especially when you don’t even know exactly what you are looking for, and they are getting harder and harder to get to. Yet with dogged diligence and a tinkerer’s mindset, combined with a continuously trained keen eye for where the errors might be, the behemoth is eventually fixed and out the door.
So where do they think the errors are, after a quick walk down journal lane? Understanding that it is the input dials and data that make many of the global circulation models a “Global Warming” model, there are three places. First, the input forcings and fudge factors need to be fixed (for example, the CO2-related warming input calculation and aerosol fix need to change); next, the GCM weaknesses need to be fixed (e.g., cloud feedbacks); and finally, how the various models are strung together needs to be fixed.
The general opinion also found in the articles is that it could take decades to fix the well-buried GCM errors. However, some tinker-minded researchers have suggested an approach used by mechanics: hobble it back together with duct tape. In other words, fix the coupling codes in a way that also fixes the GCM weaknesses known to exist. Do that first. Then re-run the thing with and without the biased forcing driver inputs to see if the simulated climate acts like a sensitive adolescent youngster, or performs like the hardy aged individuals we find frequenting this blog.

Pamela Gray
September 17, 2013 9:58 am

Yes I agree. And I think that cloud couplings with oceanic parameters that create more or less DSR may be a weakness in the GCMs themselves. Might that be fixed with a fix in the couplings between the atmospheric and oceanic models? In other words, duct tape?

Pamela Gray
September 17, 2013 9:59 am

Would love to see the calculation string for DLR and the undergirding research for that calculation.

September 17, 2013 10:10 am

Bob, thanks for the data, just wonderful.

george e. smith
September 17, 2013 10:16 am

Well, the astounding lack of any correspondence between the blue (as in planet Earth) real observed numbers (Fig. 1) and the made-up red model numbers is not a total surprise. When you make stuff up, it usually doesn’t agree with anything real.
But now I find myself with an even more intriguing mystery in Fig. 1 Red.
If you run the same model over again, does it still make up the exact same set of faux numbers; or does it give a completely different and unrelated set of fictional output ?
And if it does exactly replicate the same red curve; just how does it do that ??

September 17, 2013 10:17 am

Aerosol cooling does not fly. Volcanic activity has been quite low over the past several years, as shown by volcanic aerosol optical thickness charts.

Pamela Gray
September 17, 2013 12:17 pm

Do they re-run several times and take the average? Or is it a single run one time through? When I did my brainwave research, the equipment was coded to collect 1000 runs and average the entire thing. Random noise cancelled out while elicited brainwaves rose out of the noise. My understanding of GCMing is that they are sampled many times and averaged to give a final graphable data image plus error bars. Otherwise there is no way to use the results for statistical comparisons.

Pamela Gray
September 17, 2013 12:24 pm

OMG! They flew a satellite into a global circulation model! Now that is cool. Bob you are going to love this paper. Novel approach to testing the models. Very cool experiment!
http://kiwi.atmos.colostate.edu/pubs/flying.pdf

September 17, 2013 12:38 pm

I saw a comment about “Macs were immune to Y2K”, which is true I suppose, if you don’t use any software with any date tracking in it.
Mac hardware was fine, but it was never a hardware bug in the first place outside of some old DOS boxes.
Just had to correct a bit of silly misinformation, back to your scheduled discussion of the oceans and models.

Pamela Gray
September 17, 2013 2:13 pm

Max, more than a few of us had those old DOS boxes. It is true that the newer versions already had a patch in place and the old ones were patched up before Y2K struck.

KNR
September 17, 2013 2:24 pm

It will not change at UKMO until the top level of management goes. They are fully committed political animals; they see their job not as using the best science but as using the best ‘means’ to achieve support for ‘the cause’, because they consider that approach is best for the Met Office.

Kev-in-Uk
September 17, 2013 2:59 pm

KNR says:
September 17, 2013 at 2:24 pm
As a taxpayer and UK citizen, I feel I must publicly apologise for our pseudoscientific MetOffice.
Unfortunately, there is only one word to describe the UKMO management
It begins with W
has an S at the end
and sounds like ‘anchor’, in the middle……

RoHa
September 17, 2013 8:31 pm

From those graphs it looks as though there is something wrong with the sea. What can we do to fix it?

September 17, 2013 10:06 pm

For comparisons of model trend to data trend, where weather is involved, I like to consider two methods of determining linear trends. One is the usual one, where the best-fit straight line is drawn so that root-mean-square deviations from it are minimized. The other is where average (absolute) deviations from it are minimized.
It seems to me that considering average deviation rather than RMS deviation reduces (but does not eliminate) the influence of “outlier events”. I consider “outlier and semi-outlier events” to be somewhat common when the notorious weather of the atmosphere and oceans comes into play, and I think it would be good to downplay these extreme, brief outbursts.
As for how I see trends by “eyeball estimate”, roughly minimizing average as opposed to RMS deviations, in Figures 1 and 2:
Figure 1 model trend: reduce from 0.185 to ~0.172–0.175 deg C/decade
Figure 1 data trend: increase from 0.083 to ~0.09–0.091 deg C/decade
Figure 2 model trend: reduce from 0.192 to ~0.189 deg C/decade
Figure 2 data trend: increase from 0.059 to ~0.067–0.068 deg C/decade
It appears to me that the models considered do indeed perform poorly, but the degree of their failure is slightly overstated. I advise both sides of the manmade climate change debate to make their statements on the cautious side, lest they lose credibility for being caught overstating their cases.
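For anyone who wants to try both trend methods, here is a rough sketch comparing an ordinary least-squares fit with a least-absolute-deviations fit on a synthetic series containing a few deliberate outliers; the LAD fit is done with a generic minimizer, not any particular published routine.

```python
# Rough sketch of the two trend methods described above: ordinary least squares
# (minimizes RMS deviations) versus least absolute deviations (minimizes the
# average deviation), on a synthetic series with a few large "outlier events".
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
t = np.arange(380) / 120.0                       # time in decades (380 months)
y = 0.08 * t + rng.normal(0, 0.08, t.size)       # 0.08 deg C/decade plus noise
y[rng.choice(t.size, 6, replace=False)] += 0.5   # a few brief warm outbursts

ols_slope, ols_intercept = np.polyfit(t, y, 1)   # RMS-minimizing fit

def mean_abs_dev(params):
    slope, intercept = params
    return np.mean(np.abs(y - (slope * t + intercept)))

lad = minimize(mean_abs_dev, x0=[ols_slope, ols_intercept], method="Nelder-Mead")
print(f"OLS trend: {ols_slope:.3f} deg C/decade")
print(f"LAD trend: {lad.x[0]:.3f} deg C/decade")
```

With outliers that are mostly warm spikes, the LAD slope typically comes out a little lower than the OLS slope, which is the direction of adjustment the commenter describes.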

DaveS
September 18, 2013 4:42 am

Kev-in-Uk says:
September 17, 2013 at 2:59 pm
But Dr Betts has assured us that the principal output of the MO is real science…..

John Edmondson
September 18, 2013 11:25 am

Thanks Bob, nice work. Total sh*te from the met office, no surprises there. Good to see the Met Office putting my hard earned money to such good use.
I think the Met office would do better to use seaweed as the basis for their long term forecasts, it would certainly help the ever growing national debt. I don’t think Exeter is far from the sea, so at least the carbon footprint for the journey would be quite low.

David Cage
September 25, 2013 11:50 pm

Surely any model that does not explain the hot spots that pop up in the Arctic region but not in most of the world is not a viable model at all. These hot spots, obvious in the NASA sea anomaly data in AMSRE_SSTAn_M, clearly are a major influence on global sea temperatures, so how come modellers in that field feel able to ignore them when any computer modeller in another field would consider them a starting point in any investigation? For climate scientists to suggest that these could be caused by a global warming effect would suggest that a modern climate science degree is lower in standard than an Ordinary-level science qualification of the fifties, where we did an experiment, aged about twelve, to show that heat spreads from high to low.