Preview of CMIP5/IPCC AR5 Global Surface Temperature Simulations and the HadCRUT4 Dataset

Guest post by Bob Tisdale

INTRODUCTION

As a preview of the upcoming Intergovernmental Panel on Climate Change (IPCC) 5th Assessment Report (AR5), due out in September 2013, this post takes a brief look at the multi-model ensemble mean of the Coupled Model Intercomparison Project Phase 5 (CMIP5) simulations of global surface temperature anomalies through the year 2100. The four new scenarios are shown and discussed. We’ll also compare CMIP5 and CMIP3 hindcasts of the 20th Century to see whether there have been any improvements in how well climate models simulate the rates at which global surface temperatures warmed and cooled since 1901. For the observational data in another comparison, we’ll use a weighted average of the Met Office’s new HADSST3 and CRUTEM4 surface temperature datasets to approximate the HadCRUT4 data, which have yet to be released formally in an easy-to-use format.

The KNMI Climate Explorer Monthly CMIP5 scenario runs webpage was used as the source of the RCP global surface temperature hindcast and projection data. Keep in mind that it’s still a little early. As KNMI notes:

The collection here changes almost daily, it is not definitive by any means. The CMIP5 system itself is in flux at the moment.

But this post will give us a reasonable idea of the direction the researchers are taking the hindcasts and projections.

A REMINDER

Figure 1 is Figure SPM.5 from the Summary for Policymakers of Working Group 1 of the Intergovernmental Panel on Climate Change’s (IPCC’s) 4th Assessment Report (AR4). It shows hindcasts and projections of global surface temperatures for a number of scenarios. The scenarios are explained on page 18 of the linked Summary for Policymakers. Scenario A1B is commonly referenced. In fact, that is the only scenario provided as merged hindcast-projection data (the first 3 fields) at the Monthly CMIP3+ scenario runs webpage at the KNMI Climate Explorer. For a full-sized version of the IPCC’s Figure SPM.5, see here. As shown, for scenario A1B, the models are projecting a rise in surface temperatures (relative to the base years of 1980 to 1999) of about 2.8 deg C.

Figure 1

CMIP5 PROJECTIONS OF GLOBAL SURFACE TEMPERATURE ANOMALIES

The Lawrence Livermore National Laboratory (LLNL) Program for Climate Model Diagnosis and Intercomparison (PCMDI) maintains archives of the climate models used in the IPCC’s assessment reports. These archives are known as the Coupled Model Intercomparison Project (CMIP). The 3rd phase archive (CMIP3) served as the source of climate models for the IPCC AR4, and the 5th phase archive (CMIP5) is the source of models for the IPCC’s upcoming 5th Assessment Report (AR5).

It appears the IPCC will be presenting four scenarios in AR5, and those scenarios are called Representative Concentration Pathways, or RCPs. The World Meteorological Organization (WMO) writes on the Emissions Scenario webpage:

The Representative Concentration Pathways (RCP) are based on selected scenarios from four modelling teams/models working on integrated assessment modelling, climate modelling, and modelling and analysis of impacts.  The RCPs are not new, fully integrated scenarios (i.e., they are not a complete package of socioeconomic, emissions, and climate projections). They are consistent sets of projections of only the components of radiative forcing (the change in the balance between incoming and outgoing radiation to the atmosphere caused primarily by changes in atmospheric composition) that are meant to serve as input for climate modelling. Conceptually, the process begins with pathways of radiative forcing, not detailed socioeconomic narratives or scenarios. Central to the process is the concept that any single radiative forcing pathway can result from a diverse range of socioeconomic and technological development scenarios. Four RCPs were selected, defined and named according to their total radiative forcing in 2100 (see table below). Climate modellers will conduct new climate model experiments using the time series of emissions and concentrations associated with the four RCPs, as part of the preparatory phase for the development of new scenarios for the IPCC’s Fifth Assessment Report (expected to be completed in 2014) and beyond.

Table 1.1: Overview of Representative Concentration Pathways (RCPs)

RCP 8.5: Rising radiative forcing pathway leading to 8.5 W/m² in 2100
RCP 6: Stabilization without overshoot pathway to 6 W/m² at stabilization after 2100
RCP 4.5: Stabilization without overshoot pathway to 4.5 W/m² at stabilization after 2100
RCP 3-PD2: Peak in radiative forcing at ~3 W/m² before 2100 and decline

NOTE: RCP 3-PD2 is listed as “RCP 2.6” at the KNMI Climate Explorer Monthly CMIP5 scenario runs webpage, and it will be referred to as RCP 2.6 in this post.

Further information about the individual RCPs can be found at the International Institute for Applied Systems Analysis (IIASA) webpage here.

Figure 2 compares the multi-model means of the global surface temperature hindcasts/projections for the four RCPs, starting in 1861 and ending in 2100. (The use of the model mean was discussed at length in the post Part 2 – Do Observations and Climate Models Confirm Or Contradict The Hypothesis of Anthropogenic Global Warming?, under the heading of CLARIFICATION ON THE USE OF THE MODEL MEAN.) The base years are 1980 to 1999, the same as those used by the IPCC in AR4. Also listed in the title block are the numbers of models and ensemble members that make up each model mean as of this writing; as noted above, those numbers are subject to change. Based on the models that presently exist in the CMIP5 archive at the KNMI Climate Explorer, the IPCC’s projected rises in global surface temperature by the year 2100 in AR5 should range from about 1.3 deg C for RCP 2.6 to a whopping 4.4 deg C for RCP 8.5. At 2.7 deg C in 2100, RCP 6.0 projects about the same surface temperature warming as SRES A1B, and if memory serves, the SRES A1B forcing in 2100 was about 6.05 W/m², comparable to RCP 6.0.
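If you’d like to reproduce this type of comparison from the KNMI Climate Explorer downloads, the sketch below shows the re-baselining step in Python. It is a minimal sketch, assuming annual, two-column (year, temperature) text files and hypothetical file names, not a description of any particular download:

```python
import pandas as pd

def load_annual_series(path):
    # Assumed layout: whitespace-separated year and global-mean temperature,
    # with '#' comment lines, as Climate Explorer text exports typically use.
    data = pd.read_csv(path, sep=r"\s+", comment="#", header=None,
                       names=["year", "temp"])
    return data.set_index("year")["temp"]

def to_anomalies(series, base_start=1980, base_end=1999):
    # Re-baseline to the AR4-style 1980-1999 base period used in Figure 2.
    return series - series.loc[base_start:base_end].mean()

# Hypothetical file names, one multi-model-mean series per scenario.
files = {"RCP2.6": "rcp26_tas_mean.txt", "RCP4.5": "rcp45_tas_mean.txt",
         "RCP6.0": "rcp60_tas_mean.txt", "RCP8.5": "rcp85_tas_mean.txt"}

for scenario, path in files.items():
    anoms = to_anomalies(load_annual_series(path))
    print(scenario, round(anoms.loc[2100], 2), "deg C above 1980-1999 in 2100")
```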

Figure 2

Notice, however, that RCP 6.0 has received the least attention from the modelers, even though it is about the same as SRES A1B. Based on the number of models at the KNMI Climate Explorer, RCP 6.0 has so far been simulated by only 13 models with a total of 28 ensemble members, while RCP 8.5 is getting the most attention: 29 models with 59 ensemble members. Is the IPCC going to follow suit and spend most of its time discussing RCP 8.5 in AR5? The projected warming of RCP 8.5 appears to be in the neighborhood of the old SRES A1FI.

Note: For a detailed comparison of SRES and RCP projections, refer to Rogelj et al. (2012), Global warming under old and new scenarios using IPCC climate sensitivity range estimates, and its Supplementary Information.

CMIP5 HINDCASTS OF GLOBAL SURFACE TEMPERATURE ANOMALIES

The model mean of the CMIP5 simulations of 20th Century global surface temperature anomalies for each of the four RCPs is shown in Figure 3. The data run from 1901 to 2012. The base years for anomalies, here and for the remainder of this post, are 1901 to 1950, which are the base years the IPCC used for its Figure 9.5 in AR4. All but RCP 6.0 are closely grouped; RCP 6.0 diverges from the others starting at about 1964. Is this caused by the limited number of models simulating RCP 6.0? It’s still early; the modeling groups have some time to submit models to CMIP5 for inclusion in AR5.

Figure 3

In Figure 4, the model mean of the global surface temperature anomaly hindcasts of the 12 models used by the IPCC in Figure 9.5 cell a of AR4 has been added. The RCP hindcasts of global surface temperature anomalies appear to differ most from the AR4 hindcast during the 1960s and 70s, as though the newer RCP-based models are exaggerating the impacts of the eruption of Mount Agung in 1963/64. Other than that period, the model mean of the newer RCP-based models appears to mimic the older model mean.

Figure 4

CMIP3 VERSUS CMIP5 DURING WARMING AND COOLING/FLAT TEMPERATURE PERIODS

In AR4, the IPCC identified four periods during the 20th Century when global surface temperatures rose and when they remained flat or cooled slightly. Refer to Chapter 3 Observations: Surface and Atmospheric Climate Change. Those periods are loosely defined by the IPCC as follows:

Clearly, the changes are not linear and can also be characterized as level prior to about 1915, a warming to about 1945, leveling out or even a slight decrease until the 1970s, and a fairly linear upward trend since then (Figure 3.6 and FAQ 3.1).

We have in past posts used HadCRUT3 land plus sea surface temperature anomalies, the same dataset presented by the IPCC in AR4 for comparisons to models, and have further clarified those warming and “flat temperature” periods. The years that marked the transitions were 1917, 1944, and 1976.

For the following four comparison graphs of CMIP3- and CMIP5-based global temperature anomaly hindcasts, we’ll use RCP 8.5, for the simple reason that it is the scenario that has been modeled most often and has the most ensemble members. For the CMIP3 data, we’ll use the multi-model ensemble mean of the 12 models the IPCC used in their Figure 9.5 cell a.

Figures 5 through 8 compare global surface temperature anomaly hindcasts and linear trends of the CMIP3 (20C3M) and CMIP5 (RCP 8.5) multi-model means over the 20th Century (1901-2000). The data have been broken down into the two warming and two “flat temperature” periods. The linear trends of the CMIP3- and CMIP5-based models are reasonably close during the early “flat temperature” period (1901-1917), the early warming period (1917-1944), and the late warming period (1976-2000). Any changes in the forcings used by the modelers during those periods do not appear to have had any major impact on the rates at which modeled global surface temperatures warmed. On the other hand, as shown in Figure 6, there is a significant difference in the trends during the mid-20th Century “flat temperature” period (1944-1976). The CMIP3 hindcast shows a slight positive trend during this period, while the CMIP5 (RCP 8.5) hindcast shows a moderate rate of cooling.
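For readers who want to check these period-by-period trends themselves, here is a rough sketch of the calculation in Python. The input dictionaries are placeholders for the annual CMIP3 and CMIP5 model-mean anomalies; the period boundaries are the ones used in this post:

```python
import numpy as np

# The two warming and two "flat temperature" periods used in Figures 5 through 8.
PERIODS = [(1901, 1917), (1917, 1944), (1944, 1976), (1976, 2000)]

def decadal_trend(years, temps, start, end):
    # Least-squares linear trend in deg C per decade over [start, end] inclusive.
    years = np.asarray(years, dtype=float)
    temps = np.asarray(temps, dtype=float)
    mask = (years >= start) & (years <= end)
    return 10.0 * np.polyfit(years[mask], temps[mask], 1)[0]

def compare_trends(cmip3_mean, cmip5_mean):
    # Both arguments are hypothetical {year: anomaly} dictionaries of model means.
    y3, t3 = zip(*sorted(cmip3_mean.items()))
    y5, t5 = zip(*sorted(cmip5_mean.items()))
    for start, end in PERIODS:
        print(f"{start}-{end}: CMIP3 {decadal_trend(y3, t3, start, end):+.3f}, "
              f"CMIP5 {decadal_trend(y5, t5, start, end):+.3f} deg C/decade")
```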

Figure 5

Figure 6

Figure 7

Figure 8

COMPARISON OF CMIP5 HINDCAST TO APPROXIMATION OF HADCRUT4 SURFACE TEMPERATURE DATA

We presented and discussed the recent updates to the Hadley Centre’s HADSST3 sea surface temperature anomaly dataset here, and introduced the recent updates to their CRUTEM4 land surface temperature anomaly dataset here. Unfortunately, the Hadley Centre has not yet released its new HadCRUT4 land plus sea surface temperature data through its HadCRUT4 webpage in a form that’s convenient to use. We can, however, approximate the global HadCRUT4 data using a weighted average of HADSST3 and CRUTEM4 data, with the same land/ocean weighting as the older HadCRUT3 data. To determine that weighting, I used annual HADSST2, CRUTEM3, and HadCRUT3 data from 1901 through 2011, adjusting the weights until the linear trend of the weighted average of HADSST2 and CRUTEM3 matched the trend of the HadCRUT3 data. The weighting determined in this way was 28.92% land surface temperature and 71.08% sea surface temperature, and it has been used in the approximation of the HadCRUT4 data that follows.

Note: The CRUTEM4 data is available at the Hadley Centre’s webpage here, specifically the annual data here, and the HADSST3 data is available through the KNMI Climate Explorer here.
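To make the weighting procedure concrete, here is a minimal Python sketch of it. The arrays are placeholders for the annual 1901-2011 anomalies; the scan simply looks for the land fraction whose blended trend best matches the HadCRUT3 trend, which is the approach described above:

```python
import numpy as np

def linear_trend(years, temps):
    # Least-squares slope in deg C per year.
    return np.polyfit(np.asarray(years, float), np.asarray(temps, float), 1)[0]

def find_land_weight(years, crutem3, hadsst2, hadcrut3):
    # Scan land/ocean weights and keep the one whose blended trend over
    # 1901-2011 best matches the HadCRUT3 trend over the same years.
    target = linear_trend(years, hadcrut3)
    weights = np.arange(0.0, 1.0001, 0.0001)
    errors = [abs(linear_trend(years, w * np.asarray(crutem3) +
                               (1.0 - w) * np.asarray(hadsst2)) - target)
              for w in weights]
    return weights[int(np.argmin(errors))]

def approximate_hadcrut4(crutem4, hadsst3, land_weight=0.2892):
    # Blend the newer datasets with the weight found above (about 28.9% land).
    return land_weight * np.asarray(crutem4) + (1.0 - land_weight) * np.asarray(hadsst3)
```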

Figure 9 compares the approximated HadCRUT4 land plus sea surface temperature data to the four RCP-based hindcasts from 1901 to 2006. The end date of 2006 is dictated by the HADSST3 data, which (as of this writing) still has not been brought up to date by the Hadley Centre. The models appear capable of reproducing the rate at which global temperatures warmed during the late warming period of 1976 to 2006, but they still appear unable to reproduce the rates at which global temperature anomalies warmed and cooled before that. Let’s check.

Figure 9

We’ll again use the multi-model ensemble mean of the CMIP5-based RCP 8.5 global surface temperature hindcast available through the KNMI Climate Explorer, simply because that’s the scenario the modelers have simulated most. Figures 10 through 13 compare the linear trends of the model mean to the approximated HadCRUT4 global surface temperatures during the two warming periods and two “flat temperature” periods acknowledged by the IPCC. Starting with the late warming period (Figure 10), the models do a reasonable job of approximating the rate at which global surface temperatures warmed. But based on the model mean, the CMIP5-based hindcasts of the 20th Century are:

1. not able to simulate the rate at which global surface temperatures cooled from 1944 to 1976 (Figure 11),

2. incapable of simulating how quickly global surface temperatures warmed from 1917 to 1944 (Figure 12); the observations warmed at a rate more than three times faster than the models simulate (a quick check of that ratio is sketched after this list), and,

3. not capable of simulating the low rate at which global surface temperatures warmed from 1901 to 1917 (Figure 13).
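The “three times faster” figure in item 2 is simply the ratio of two least-squares trends computed over the same years. A minimal sketch of that check follows; the series are placeholders for the approximated HadCRUT4 data and the RCP 8.5 model mean:

```python
import numpy as np

def period_slope(years, temps, start, end):
    # Least-squares warming rate (deg C per year) over [start, end] inclusive.
    years = np.asarray(years, float)
    temps = np.asarray(temps, float)
    mask = (years >= start) & (years <= end)
    return np.polyfit(years[mask], temps[mask], 1)[0]

def obs_to_model_ratio(obs_years, obs, model_years, model, start=1917, end=1944):
    # Ratio of observed to modeled warming rates for the early warming period.
    return period_slope(obs_years, obs, start, end) / period_slope(model_years, model, start, end)
```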

Figure 10

Figure 11

Figure 12

Figure 13

CLOSING

This was a preview. The intent was to give an idea of the direction of the IPCC’s projections of future global surface temperatures and a glimpse at the hindcasts to see if there have been any improvements. According to the schedule listed in the IPCC/CMIP5 AR5 timetable, papers to be included in the IPCC’s 5th Assessment Report (AR5) are to be submitted by July 31, 2012. Therefore, for projections of future global temperatures, there may be a few models that have not yet made it to the CMIP5 archive at the KNMI Climate Explorer. The IPCC also could select specific models for its presentation of 20th Century global surface temperatures, as it did in AR4. But a good number of models and ensemble members already exist in the AR5 archive at the KNMI Climate Explorer, and adding a few models should not alter the multi-model ensemble mean results too much.

I won’t speculate whether the IPCC intends to make RCP 8.5 the primary scenario in its discussions of future climate, but the modelers sure did seem enthusiastic about it, with its projection of a 4.4 deg C rise in global temperatures by 2100.

With respect to the simulations of the 20th Century, it appears the modelers did change some forcings during the mid-20th Century “flat temperature” period, in an effort to force the models to show more of a decrease in temperature between 1944 and 1976. Yet the models still have difficulties simulating the rates at which global surface temperatures warmed and cooled since 1901. Compared to the weighted average of HADSST3 and CRUTEM4 data (used to approximate HadCRUT4 global surface temperature data), the models are still only able to simulate the rate at which global surface temperatures rose during the late 20th Century warming period of 1976 to 2006. They still cannot simulate the rates at which global surface temperatures warmed and cooled before 1976.

As illustrated and discussed in my book and in a number of posts over the past few months (see here, here, here, here, and here), for many reasons it is very difficult to believe the IPCC’s claim that most of the warming in the late 20th Century was caused by manmade greenhouse gases. One of those reasons: there have been two warming periods since 1901. As further illustrated in this post, the increases in manmade greenhouse gases and other forcings caused modeled global surface temperatures (the RCP 8.5-based multi-model mean of the CMIP5/AR5 climate models) to warm at a rate during the late warming period that is 3+ times faster than during the early warming period. Yet the observed global surface temperatures during the late warming period, based on the approximation of HadCRUT4 data, warmed at a rate that was only 27% higher than during the early warming period.

And those who have read my book or my posts for the past three years understand that most if not all of the rise in satellite-era global sea surface temperatures can be explained as the aftereffects of strong El Niño-Southern Oscillation (ENSO) events. That further contradicts the IPCC’s claims about the anthropogenic cause of the warming since 1976.

MY FIRST BOOK

My recently published book is available in pdf and Kindle formats. See If the IPCC was Selling Manmade Global Warming as a Product, Would the FTC Stop their deceptive Ads?

SOURCES

The data sources for this post are linked within it.

COMMENTS
tallbloke
April 5, 2012 7:23 am

Oh how the future historians of science will laugh.

Chuck
April 5, 2012 7:27 am

Yet the models still have difficulties simulating the rates at which global surface temperatures warmed and cooled since 1901.
This I see as key. If the models don’t hindcast the warming and cooling of the 20th century correctly then the models have little or no predictive use.

Steven Mosher
April 5, 2012 7:41 am

for hindcasts you should know that all rcps have the same forcing. you should use then all.

Curfew
April 5, 2012 7:43 am

“the modelers sure did seem enthusiastic about it, with its projections of a 4.4 deg C rise in global temperatures by 2100”
Hehe…..I bet!

Tenuk
April 5, 2012 7:44 am

Interesting. I don’t think any of the GCMs will ever get it right while they continue to make the assumption that CO2 has a major effect on temperature. Instead, they should be trying to get to grips with natural climate change by developing the tools to understand spatio-temporal chaos.

April 5, 2012 7:52 am

It has been several years since I posted a version of this explanation of how the accuracy of hindcasting past temperatures with the computer models is meaningless. I think I should resurrect my explanation here.
Each computer model is composed of dozens of mathematical equations representing known scientific laws, theories, and hypotheses. Each equation has one or more constants. The constants associated with known laws are very well defined. The constants associated with known theories are generally accepted but probably some of them may be off by a factor of 2 or more, maybe even an order of magnitude. The equations representing hypotheses, well, sometimes the hypotheses are just plain wrong. Then each of these equations has to be weighted against each other for use in the computer models, so that adds an additional variable (basically an educated guess) for each law, theory, and hypothesis. This is where the models are tweaked to mimic past climate measurements.
The SCIENTIFIC METHOD is: (1) Following years of academic study of the known physical laws and accepted theories, and after reviewing some data, come up with a hypothesis to explain the data. (2) Develop a plan to obtain and analyze new data. (3) Collect and analyze the data, this may even require new technology not previously available. (4) Determine if the hypothesis is correct, needs refinement, or is wrong. Either way, new data is available for other researchers. (5) Submit results, including data, for peer review and publication.
The output of the computer models run out nearly 90 years forward is considered to be data, but it is not a measurement of a physical phenomenon. Also, there is no way to analyze this so called data to determine if any or which of the hypotheses in the models are correct, need refinement, or are wrong. Also, this method cannot indicate if other new hypotheses need to be generated and incorporated into the models. IT JUST IS NOT THE SCIENTIFIC METHOD.
The worst flaw in the AGW argument is the treatment of GCM computer-generated outputs as data. They then use it in follow-on hypotheses. For example, if temperature rises by X degrees in 50 years, then Y will be affected in such-and-such a way, resulting in Z. Then the next ‘scientist’ comes along and says, well, if Z happens, the effect on W will be a catastrophe. “I need (and deserve) more money to study the effects on W.” Hypotheses, stacked on hypotheses, stacked on more hypotheses, all based on computer outputs that are not data, using a process that does not lend itself to proof using the SCIENTIFIC METHOD. Look at their results; IF, MIGHT, and COULD are used throughout their news-making results. And when one of the underlying hypotheses is proven incorrect, well, the public only remembers the doomsday results two or three iterations down the hypotheses train. The hypotheses downstream are not automatically thrown out and can even be used for more follow-on hypotheses.

Patrick Davis
April 5, 2012 7:53 am

Future historians might have a laugh at this too…
http://climaterealists.com/index.php?id=9400

Victor Barney
April 5, 2012 7:58 am

[snip – you’ve been BANNED for your over the top and IN ALL CAPS POLITICAL/RELIGIOUS RHETORIC – get out and stay out – Anthony]

Steve Oregon
April 5, 2012 8:03 am

The models have difficulty?
Not nearly the difficulty the modelers do.
What a pity taxpayers are funding so many layers and arms of sloppy science run amok and the useless busy work for countless bureaucrats and academics.
Add it all up and what does the public get out of it? Nothing.
Worse yet we are left to only imagine what those vast resources could have achieved had they been appropriated and utilized by honorable people.
But this is now and looking back. What of tomorrow? Is there no way to curb the waste and redirect public resources to where need and legitimacy can produce what producers of those resources prefer? Genuine and spectacular progress.

Editor
April 5, 2012 8:06 am

Thanks, Anthony.

KR
April 5, 2012 8:08 am

“…it appears the modelers did change some forcings during the mid-20th Century “flat temperature” period, in an effort to force the models to show more of a decrease in temperature between 1944 and 1976.” (emphasis added)
That’s, well, an interesting statement – Forcing estimates get updated due to better data regarding those forcings, not to modify model results. The changes seen are the results of both refined measurements and improved modeling of the physics.
Unless you have some support for that claim, I would have to consider it both unreasonable and a smear on the people doing the research.

Editor
April 5, 2012 8:09 am

Steven Mosher says: “for hindcasts you should know that all rcps have the same forcing. you should use then all.”
I’m using the RCP with the greatest number of models and ensemble members. Averaging all of the ensemble members from all of the RCPs would then cause the models that simulated all of the RCPs to carry more weight than the models that didn’t.

Hoser
April 5, 2012 8:45 am

If you get the right answers for the wrong reasons, then even if you can hindcast, the forecasts are unlikely to be correct.

FrankK
April 5, 2012 8:48 am

I don’t really see why you bother with this, Bob. The “measured” temperatures on which these models attempt to “calibrate” are just fictitious concoctions, particularly from 1999 onwards. Where are the “calibrations” of the lower atmospheric temp record? Let’s see how well they are simulated.
I’d have more confidence in remedies brewed up by a witchdoctor than these model “projections”.

catweazle666
April 5, 2012 8:53 am

Still pretending they can produce computer models of non-linear open-ended chaotic systems which are inter alia subject to extreme sensitivity to initial conditions, are they?
Jolly good.
Carry on….

gallopingcamel
April 5, 2012 8:57 am

When it comes to computer models, GIGO (Garbage In, Garbage Out) still rules.

Brandon
April 5, 2012 9:07 am

The key element is the slope difference in the 1917-44 period. The models hindcast less than a third of the observed warming when they don’t have CO2 driving the change. That one period alone is enough to invalidate the models and show a CO2 bias in the warming from the late ’70s and in future projections. When you remove the CO2 forcing from the models they fail to show “natural” variation. If we assume (and it is possible based on this data) that 0.15 per decade is natural, then we are adding 0.05/decade above the expected natural variation in the 1976-2006 period. That translates to a 0.5-0.6 increase in 100 years over the expected “natural” increase. Or, of the 0.6 warming in 1976-2006, only 0.15 of that warming is likely caused by increased CO2.
These models are not proof of serious global warming, but actual evidence against it. There is no way around the fact that the early 20th century warming was 75% of the late 20th century warming trend, without CO2. No matter how you try to spin or just pretend this problem doesn’t exist, it won’t go away.

Stephen Wilde
April 5, 2012 9:17 am

“most if not all of the rise in satellite-era global sea surface temperatures can be explained as the aftereffects of strong El Niño-Southern Oscillation (ENSO) events”
Absolutely right.
The next step is to ascertain how all that energy got into the oceans to fuel both that period of strong El Ninos AND cause a continuing rise in ocean heat content despite those strong discharges of energy to the air.
I have set out my views on that elsewhere.

Latitude
April 5, 2012 9:19 am

And those who have read my book or my posts for the past three years understand that most if not all of the rise in satellite-era global sea surface temperatures can be explained as the aftereffects of strong El Niño-Southern Oscillation (ENSO) events.
====================================================
Bob, all of the rise in satellite global sea surface temperatures…..
….can be explained as the aftereffects of adjustments to the satellite outputs after the launch of Envisat
read the 2008 Envisat working papers……………….

Steve C
April 5, 2012 9:51 am

My overall impression is of “just another” mishmash of calculations based on the usual mutilated data; it’s certainly not much like a passable representation of 20th century temperatures.
As for that WMO introduction …

“The Representative Concentration Pathways (RCP) are based on selected scenarios from four modelling teams/models working on integrated assessment modelling, climate modelling, and modelling and analysis of impacts.”

… it gave me “model overload”. Five times in one sentence! Ye Gods and little fishes! Still, fair warning, I suppose.

hagendl
April 5, 2012 9:58 am

Thanks Bob for the update.
Sometime, please comment on the chaotic uncertainty and the number of runs reported.
See Fred Singer Addressing the Disparity between Climate Models and Observations: Testing the Hypothesis of AGW, Conference on Global and Regional Climate Variability, Santa Fe, NM Oct 31-Nov 4, 2011

the 5 runs of the Japan MRI model show trends ranging from 0.042 to 0.371 K/decade. . . .
In a synthetic experiment we show that at least 40 runs (of 20-yr length) are necessary to get convergence of the ‘cumulative ensemble-mean – and >20 runs of 40-yr long runs. . . .
1. The US-CCSP report shows major differences between observed temp trends and those from GH models. These disagreements are confirmed and extended by Douglass et al [in IJC 2007] and by NIPCC 2008. Claims of “consistency’” between models and obs by Santer et al [in IJC 2008] are shown to be spurious
2. IPCC-4 [2007] climate models use an insufficient number of runs to overcome “chaotic uncertainty”
3. We find no evidence in support of the surface warming trend claimed by IPCC-4 as evidence for AGW

Editor
April 5, 2012 10:06 am

Latitude says: “Bob, all of the rise in satellite global sea surface temperatures…..
….can be explained as the aftereffects of adjustments to the satellite outputs after the launch of Envisat”
The AVHRR and AMSR sensors used for the Reynolds OI.v2 sea surface temperature data I was referring to are housed in NOAA satellites. Envisat is a European Space Agency satellite.

Nick Stokes
April 5, 2012 10:07 am

“for many reasons, it is very difficult to believe the IPCC’s claim that most of the warming in the late 20th Century is caused by manmade greenhouse gases. One of the reasons: there were two warming periods since 1901.”
That’s a non sequitur. But in fact, despite common assertions, CO2 forcing was quite substantial during the early 20th Century. Here’s a plot of forcing and Hadcrut during that time.
In the AR4 SPM, the IPCC summarised this:
“It is very unlikely that climate changes of at least the seven centuries prior to 1950 were due to variability generated within the climate system alone. A significant fraction of the reconstructed Northern Hemisphere inter-decadal temperature variability over those centuries is very likely attributable to volcanic eruptions and changes in solar irradiance, and it is likely that anthropogenic forcing contributed to the early 20th-century warming evident in these records.”

David L. Hagen
April 5, 2012 10:09 am

For Singer’s publication summarizing chaos or run uncertainty see: NIPCC vs. IPCC. Addressing the Disparity between Climate Models and Observations: Testing the Hypothesis of Anthropogenic Global Warming (AGW), Interim Science Update, 2011 S. Fred Singer, Presented at Majorana Conference in Erice, Sicily, August 2011

April 5, 2012 10:10 am

Latitude, are you saying that the satellite data is now undergoing adjustment? As in Hansen-style adjustment? I’ve expected it to happen, but I am unaware of it up to now.

Konrad
April 5, 2012 10:15 am

AR, AR , AR, AR5 splutter , clunk….
Hoax won’t restart? Shouldn’t be a problem. Just ignore the satellite temps, hold the aerosol button firmly down and pump the press release a few times… should restart easy. But don’t worry if it doesn’t, the route 20 sustainability bus to Rio should be along shortly. They should be able to give you a ride…

Patrick Davis
April 5, 2012 10:34 am

“Nick Stokes says:
April 5, 2012 at 10:07 am”
Your link to “forcings” is garbage! Why do you post this rubbish?

P. Solar
April 5, 2012 10:58 am

There’s a detailed discussion with John Kennedy of the Hadley Centre here:
http://judithcurry.com/2012/03/15/on-the-adjustments-to-the-hadsst3-data-set-2/#comment-187595
discussing how Hadley processing removes more than half the variation from the record and the circular logic being applied in “validating” these adjustments.
Bob, you are generously comparing the models to data that has already been adjusted to better fit the models. The models are then used to “validate” the adjustments, which not surprisingly “works”.

Editor
April 5, 2012 11:01 am

Nick Stokes says: “That’s a non sequitur.”
Would you have felt differently if you had not clipped the rest of the paragraph, or if I had used a semicolon instead of a period between “there were two warming periods since 1901” and “As further illustrated in this post…”?

cui bono
April 5, 2012 11:08 am

Thanks Bob, Anthony. Very comprehensive and clear!
And loads of questions, not specifically for Bob, but for the model-defenders and others:
(1) “HADSST3 data, which still (as of now) has not been brought up to date by the Hadley Centre.” Why the hell not? Have they got something better to do? And if we know (we do, don’t we) what data points they’re using, and if we know (ditto) how they’re putting them together to create the series, could this not be done by a retiree with an Excel spreadsheet?
(2) “Climate modellers will conduct new climate model experiments ” (from the IPCC). Will these “experiments” explain the early 20th century warming or the mid-20th century cooling? Rhetorical question, as you’ve made clear.
(3) What are the current figures for RCP radiative forcing pathways for the 20th Century in W/m²?
Exactly how realistic or outlandish is 8.5 W/m²?
(4) What are the estimates for climate sensitivity of the models? Are they roughly the same, or a wide spread? How much is this changed by adjusting the parameters in the same model and re-running?
(5) Why does AGW always come into play in the 1970s/80s in these scenarios, rather than before? And if, as in Nick Stokes citation from the IPCC (April 5, 2012 at 10:07 am) “it is likely that anthropogenic forcing contributed to the early 20th-century warming evident in these records”, how do we explain the cooling from the 40s to the 70s?
PS: my personal econometric model predicts economic growth for the UK in 2100 will be 0.78539%. Now where’s that Nobel prize?

Ed Hawkins
April 5, 2012 11:47 am

I do think you need to show the ensemble spreads as well as the mean. You might not expect the ensemble mean to match the observations because of natural climate variability – what is more important is whether the ensemble members reliably encompass the observations.
Ed.

KNR
April 5, 2012 12:41 pm

The bottom line remains: no AGW, then no IPCC. Anyone want to guess if the turkeys will vote for Christmas this year?

Ian W
April 5, 2012 12:54 pm

tallbloke says:
April 5, 2012 at 7:23 am
Oh how the future historians of science will laugh.

Roger I don’t think so – the past will have been adjusted to match whatever the ‘Team’ of the day has the media saying.

Chas
April 5, 2012 1:01 pm

Nick Stokes you said “But in fact, despite common assertions, CO2 forcing was quite substantial during the early 20th Century. Here’s a plot of forcing and Hadcrut during that time.”
-Thanks for pointing to this new plotter
However, if you average ALL forcings from 1900 to 1940 they are approx. zero (-0.07 W/m²),
whilst the average of all forcings over the period 1960 to 2010 is ~0.75 W/m².
Despite this, the warming is not substantially different over the two periods, as Bob points out.
What is a non sceptical explanation for this ?

Follow the Money
April 5, 2012 1:49 pm

“I won’t speculate whether the IPCC intends to make RCP 8.5 its primary forcings for its discussions of future climate, but the modelers sure did seem enthusiastic about it”
It’s not science, per se. It is gravy train science. They are getting tingly with excitement like dogs hearing a can opener whirring.
Here is an appropriate video for “gravy train.” The table scene could be photo shopped with the heads of famous gravy train warmists:

Here’s one for a different dog food brand:

michael hart
April 5, 2012 2:43 pm

Thanks for doing all this, Bob. It’s good to know there are people on the case.

Doug Proctor
April 5, 2012 3:42 pm

If, by 2015, the global mean temp drops to 0.4C, as some suggest, AR5 will be in trouble; the only way out will be to say that 25 years is what you need for a trend.
That will work: Mann and Jones will be headed into retirement by then, speaking engagements only. Same with Hansen.

Doug Proctor
April 5, 2012 3:44 pm

Typo on previous:
If, by 2015, the global mean temp drops by 0.6C, i.e. a 0.2C drop, as some suggest, AR5 will be in trouble; the only way out will be to say that 25 years is what you need for a trend.
That will work: Mann and Jones will be headed into retirement by then, speaking engagements only. Same with Hansen.

Nick Stokes
April 5, 2012 3:46 pm

Chas says: April 5, 2012 at 1:01 pm
Chas, here’s the corresponding plot of total forcing vs Hadcrut 3. It also rises in the early 20th cen. It’s true that the later rise is steeper, but interrupted by down spikes from volcanoes, which were mostly absent in the earlier period.

Nick Stokes
April 5, 2012 3:48 pm

Chas says: April 5, 2012 at 1:01 pm
Oops, link problem. Here (I hope) is total forcing vs Hadcrut 3.

jimash1
April 5, 2012 3:50 pm

” A significant fraction of the reconstructed Northern Hemisphere inter-decadal temperature variability over those centuries is very likely attributable to volcanic eruptions and changes in solar irradiance, and it is likely that anthropogenic forcing contributed to the early 20th-century warming evident in these records.””
Because… those other factors don’t exist anymore ?

John Trigge
April 5, 2012 4:07 pm

Re figure 2 – CMIP5 Global Surface Temperature Anomaly Simulations:
Colour [sic – I’m an Aussie] me confused.
How is it that the hindcasts can be so close to each other from 1860 to the present but diverge so much, and so quickly, in their forecasts?
Is this a case of ‘tweaking’, fudge factors, ‘forcing the models’, etc., to match hindcasts to measurements, then using different formulas for forecasts?

Latitude
April 5, 2012 5:04 pm

Bob Tisdale says:
April 5, 2012 at 10:06 am
Latitude says: “Bob, all of the rise in satellite global sea surface temperatures…..
….can be explained as the aftereffects of adjustments to the satellite outputs after the launch of Envisat”
The AVHRR and AMSR sensors used for the Reynold OI.v2 sea surface temperature I was referring to are housed in NOAA satellites. Envisat is a European Space Agency satellite.
=============================
Bob, I’m aware of that..Envisat didn’t show what they thought it should show (the first 22 passes showed sea level/temps falling), so they used Jason/s as reference…
Once they finally got Envisat to show what they wanted it to show….they back tuned Jason to match it………..
…it’s all in the adjustments…..in the 2008 working papers
James has several posts about it on his blog… put it there, and asked you to read it at one time

April 5, 2012 5:39 pm

If the data doesn’t match the output of the computer models, the data must be wrong and the satellite has to be adjusted until it matches the output of the computer. I see. Garbage out, garbage in, garbage out… a complete and self-contained recycling process in the best Green Tradition. Thanks for clearing that question up, Bob Tisdale. It explains a lot of things.
I just hope someone is keeping the real numbers noted down somewhere so that we can go back to use them again when the madness passes away.

April 5, 2012 5:58 pm

In this thread, commenters have advanced fanciful ideas regarding the process by which a model is statistically tested. Contrary to popular opinion, this process is not an IPCC-style “evaluation.”
In the actual process, the predicted outcomes of events are compared to the observed outcomes of the same events in a sampling of events that are drawn from the underlying statistical population. For the IPCC climate models, this process cannot take place because: a) the models make “projections” rather than the required predictions and b) the IPCC has not yet told us what the population is.

P. Solar
April 5, 2012 8:31 pm

John Trigge says “Is this a case of ‘tweaking’, fudge factors, ‘forcing the models’, etc to match hindscasts to measurements then using different formulas for forecasts?”
I again refer people to this discussion with J. Kennedy of UK Hadley Centre:
http://judithcurry.com/2012/03/15/on-the-adjustments-to-the-hadsst3-data-set-2/#comment-187595
Models are optimised to reproduce as best they can the historic surface record from 1960-1990. Speculative “bucket” adjustments, which are applied to the actual surface records and reduce the variations in the data before that period by over 50%, are then “validated” by comparison to computer model hindcasts. Models optimised on such a short period, almost by definition, do not produce long-term variability, a point that Bob is underlining here. So the models agree better with the “corrected” surface temps than with the real data.
This is taken to be a “validation” of the adjustments which then become the new “historic record” against which the models are tested and developed.
The authors of this methodology seem unable to see the circular logic. At least John Kennedy has not come back on that criticism…
Even after reducing the variability in the original SST data, the models are still unable to reproduce the long-term variability; this is Bob’s main point. They do not catch long-term variability since they have no mechanism to produce one. The feeble agreement they get on some individual runs is noise and random variation, not correct modeling.
In science you adjust your model to fit the data. In climate science you adjust data to fit the model. That is the fundamental reason their models have failed so thoroughly since the end of that last century.

P. Solar
April 5, 2012 8:41 pm

A snip from the discussion at JudithCurry.com that I referred to:
Greg Goodman | March 27, 2012 at 9:02 am |
Good day John [Kennedy],
“The first statement that the adjustments remove the majority of the variation from the majority of the record is not one I contest although I disagree with you about what that means. ”
OK, so we are agreed on my first point about the extent of the changes.
“Your characterisation of the assumptions made in the analysis as “speculation and hypothesis” is your choice of words. I would say that hypothesis is a fair description. ”
We are also agreed that it is hypothesis and you chose not to agree on “speculation”.

LazyTeenager
April 6, 2012 12:41 am

Ahhhhh!!!! so Bob is still trying the strategy of ignoring the values ( which agree very well) and emphasizing the gradients.
Let me explain why this is misleading. Gradients/trends/slopes are calculated from the differences in values. This means that trends are very sensitive to noise and random variation in the values. So it’s quite possible to select artificial short ranges in the time series that maximise the trend differences and thereby exaggerate the differences between the data sets.
People who want to make trend comparisons that are not misleading and which are valid will typically incorporate some least squares fitting process into calculation of the gradient. This will typically apply some convolution kernel of sufficient width to the data to suppress random noise.
I don’t believe Bob had done that [snip]

DirkH
April 6, 2012 1:51 am

LazyTeenager says:
April 6, 2012 at 12:41 am
“Ahhhhh!!!! so Bob is still trying the strategy of ignoring the values ( which agree very well) and emphasizing the gradients.
Let me explain why this is misleading. Gradients/trends/slopes are calculated from the differences in values. This means that trends are very sensitive to noise and random variation in the values. So it’s quite possible to select artificial short ranges in the time series that maximise the trend differences and thereby exaggerate the differences between the data sets.”
Let me explain why this “explanation” from our lazy teenager is stupid, wrong, misleading, and you shouldn’t believe him.
A linear trend is computed by fitting a line through an interval of a time series, minimizing the sum of the squares of the differences of the trend line to the data at each point. So all the data points in the interval exert an influence on the slope of the trend line.
Do we know other operators that share this property? Yes, for instance moving averages. What do we know about moving averages with regards to their frequency response? Yes, they are LOW PASS filters; meaning that they DAMPEN the high frequencies.
Lazy teenager, you’ve been too lazy again; please repeat your signal theory lessons over Easter and we’ll have an exam after that.
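A quick check in Python makes the point; the series is synthetic, so nothing here depends on any particular temperature record:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1901, 2001)
true_slope = 0.006                          # deg C per year, known by construction
noisy = true_slope * (years - years[0]) + rng.normal(0.0, 0.1, years.size)

fitted_slope = np.polyfit(years, noisy, 1)[0]
print(f"true: {true_slope:.4f}  fitted: {fitted_slope:.4f} deg C/yr")
# Every point in the interval pulls on the fitted line, so the random noise is
# suppressed and the underlying slope is recovered to within a small error.
```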

DirkH
April 6, 2012 2:11 am

Further reading for the teenager; this is actually a nicely done page even though it is from wikipedia:
http://en.wikipedia.org/wiki/Linear_regression
Of course, their funny attitude about everything shines through at the end:
“Environmental science
[icon] This section requires expansion.
Linear regression finds application in a wide range of environmental science applications. In Canada, the Environmental Effects Monitoring Program uses statistical analyses on fish and benthic surveys to measure the effects of pulp mill or metal mine effluent on the aquatic ecosystem”
Yeah, we most definitely need more examples about how linear regression is used in environmental science. Spoil a perfectly good page with some politically correct drivel, uh, and maybe, we need a picture of an oiled seagull on the page about the Riemannian manifold. /sarc

Editor
April 6, 2012 2:48 am

LazyTeenager says: “Ahhhhh!!!! so Bob is still trying the strategy of ignoring the values ( which agree very well) and emphasizing the gradients…”
Ahhhhh!!!! so LazyTeenager is still trying the strategy of misdirection, which doesn’t work. With respect to the remainder of your comment, you know very well, or SHOULD know, that adjusting temperature data for ENSO and volcanic eruptions, minimizing their noise, has little impact on these trend comparisons. We’ve shown and discussed this already. Refer to the discussion under the heading of ENSO- AND VOLCANO-ADJUSTED OBSERVATIONS AND MODEL MEAN GLOBAL SURFACE TEMPERATURE DATA from this post:
http://bobtisdale.wordpress.com/2011/12/12/part-2-do-observations-and-climate-models-confirm-or-contradict-the-hypothesis-of-anthropogenic-global-warming/
And your closing comment of “I don’t believe Bob had done that [snip]”, with respect to the trend analysis, broadcasts your ignorance of the methods employed by the producer of the spreadsheet software (EXCEL) I use to create the graphs.
Someone making a comment on a blog usually studies a subject before making erroneous statements, unless that commenter is simply trying to mislead the readers, as you’ve tried with your comment.
Goodbye, LazyTeenager.

P. Solar
April 6, 2012 6:18 am

Bob, I think the LazyTeenager’s attack does not show much understanding of signal processing or stats but he is not totally wrong.
Firstly, I have been encouraging you for years (well, it seems like it) to use a real filter instead of a running mean, and have pointed out its crappy and misleading frequency response. It seems despite the huge amount of time you put into all this you are not prepared to get beyond clicking a button in Excel. Please work out how to apply a real filter (you can even do it in Excel if you really must). I’ve posted on your blog, so you have my email. I can send you an example of a filter in Excel if you wish.
Also, if you want to study rate of change then do so directly by differentiating, not by sloppy averaging, LSQ, etc. If your data is continuous and equally spaced, all you need to do is take the difference of each successive pair of points.
Any difference in rate of change will then stand out as a vertical offset and won’t depend on your choice of period over which you calculate your slope. That would remove some lazy critics.
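Something like this, as a rough sketch (the series name is a stand-in for whichever annual record you are plotting; the Hanning window is just one example of a proper low-pass kernel):

```python
import numpy as np

def rate_of_change(years, temps, window=11):
    # Year-to-year differences give the rate of change directly (deg C per year).
    years = np.asarray(years, float)
    temps = np.asarray(temps, float)
    dT = np.diff(temps) / np.diff(years)
    # Smooth with a real low-pass kernel rather than a flat running mean.
    kernel = np.hanning(window)
    kernel /= kernel.sum()
    return years[1:], np.convolve(dT, kernel, mode="same")
```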
There’s plenty to criticise in these models and you are basically correct. I’d like to see you make a more convincing job of it.
Best regards.

P. Solar
April 6, 2012 6:34 am

Dirk: Do we know other operators that share this property? Yes, for instance moving averages. What do we know about moving averages with regards to their frequency response? Yes, they are LOW PASS filters; meaning that they DAMPEN the high frequencies.
Yes, R-M is a low-pass filter; trouble is, it’s also a high-pass filter, as and when it feels like it.
http://oi41.tinypic.com/nevxon.jpg
Now look at the same data filtered with these two filters (done in excel 😉 ).
http://i44.tinypic.com/351v6a1.png
And, yes, that really did start off from the same column in my spreadsheet, though you’d hardly believe it to look at the results.
Look at what happens to the running means in 1940 and 1960, for example. Now if you’re going to say someone’s model does not match the data, you’d do well not to start by using a filter that turns a peak into a trough or bends it sideways.
It’s sad to see how many people with letters after their names make the same mistake as well as doing illegitimate OLS regression on scatter plots and getting totally aberrant values for climate sensitivity.
RUNNING MEAN MUST DIE!

Editor
April 6, 2012 5:13 pm

P. Solar: We’ve been through this before. I present data in ways that are easily reproducible by laypersons so that they can duplicate and verify it. A running-mean filter is commonly used in climate science, regardless of your preference.
If a reader wishes to use different methods, like using another method to determine linear trends, that’s fine. I’ve initiated that investigation.
Regards

April 6, 2012 6:15 pm

Nick Stokes says:
April 5, 2012 at 3:48 pm
Chas says: April 5, 2012 at 1:01 pm
Oops, link problem. Here (I hope) is total forcing vs Hadcrut 3.

Nick
Your link still doesn’t explain the warming up to 1940. However, I think it’s worse than that. I’m guessing that “ALL Forcings” includes obsolete solar data. I think Leif Svalgaard would argue about the change in solar activity in the early 20th century.
It’s back to the drawing board, Nick.

climateprediction
April 7, 2012 7:47 am

The Pacific Decadal Oscillation (PDO) is a likely explanation for the negative and positive deviation periods between the hindcasts and observations. Notice that the deviation periods last about 30 years, as does the PDO half-cycle, and the direction matches the PDO cycles. Since we have entered a period of negative PDO, the models will accordingly overestimate the warming over the 2007-2037 period.

Editor
April 7, 2012 12:30 pm

climateprediction says: “The Pacific Decadal Oscillation (PDO) is a likely explanation for the negative and positive deviation periods between the hindcasts and observations…”
There is no mechanism through which the PDO can alter global surface temperatures. The PDO does NOT represent the sea surface temperature of the North Pacific, north of 20N. The PDO is actually inversely related to the sea surface temperature anomalies of the North Pacific.

P. Solar
April 8, 2012 12:48 am

Bob Tisdale says:
April 6, 2012 at 5:13 pm
P. Solar: We’ve been through this before. I present data in fashions that are easily reproducable by laypersons so that they can duplicate and verify. A running-mean filter is commonly used in climate science, regardless of your preference.
You probably do that because you are a layperson yourself.
That everyone can reproduce and “verify” a bad method hardly seems to be valid reasoning, especially because you don’t point out the shortcomings; you are just inviting others to copy your own mistakes. Last time you said it was “easy to understand”, an equally poor excuse. In fact it is easy to *misunderstand*, because if you do not look at the frequency response (and most laypersons would not even know what one is), it is easy to imagine you are applying a valid low-pass filter.
Just look at the 1970’s on this graph, the running mean actually gets the peaks and troughs 100% upside down !! http://i44.tinypic.com/351v6a1.png
You defend using that kind of filter to show the work of others is not reproducing the troughs and peaks in the right places. Hardly credible.
This is not a case of personal preference, as you try to suggest. There are several filters you could choose to use if you could be bothered. How can you justify using a filter that distorts the data to the point of inverting peaks and troughs, as can be seen in the example plots I posted above, to criticise the work of others?

Editor
April 8, 2012 1:24 pm

P. Solar says: “Just look at the 1970′s on this graph, the running mean actually gets the peaks and troughs 100% upside down !! http://i44.tinypic.com/351v6a1.png
“You defend using that kind of filter to show the work of others is not reproducing the troughs and peaks in the right places. Hardly credible.”
Your linked example does not show the raw data. Yet you somehow claim the troughs and peaks are not in the right places. Kinda tough to confirm your claims, P. Solar.

climateprediction
April 9, 2012 1:40 pm

Bob Tisdale says….There is no mechanism through which the PDO can alter global surface temperatures. The PDO does NOT represent the sea surface temperature of the North Pacific, north of 20N. The PDO is actually inversely related to the sea surface temperature anomalies of the North Pacific.
With all due respect, I don’t accept any of your arguments. The underlying mechanisms for the PDO and ENSO are not well understood. That is not proof that they don’t exist. ENSO doesn’t represent the sea surface temperatures north of 20N either, but it correlates well with global temperatures, and it correlates well with the PDO in that there is a high ratio of La Niñas to El Niños during cool PDO periods and vice versa. And for the sake of determining the effects on global temperatures, how can the fact that the PDO is inversely related to the sea surface temperatures of the North Pacific be any more significant than the fact that the PDO correlates closely with global temperatures?

Richard S Courtney
April 13, 2012 10:46 am

John Trigge:
I write because nobody has answered your question at April 5, 2012 at 4:07 pm which asks:
“Re figure 2 – CMIP5 Global Surface Temperature Anomaly Simulations:
Colour [sic – I’m an Aussie] me confused.
How is it that the hindcasts can be so close to each other from 1860 to the present but diverge so much, and so quickly, in their forecasts?
Is this a case of ‘tweaking’, fudge factors, ‘forcing the models’, etc., to match hindcasts to measurements, then using different formulas for forecasts?”
The simple answer is, yes. Today Allan MacRae and I have discussed this in more detail in the WUWT thread at
http://wattsupwiththat.com/2012/04/11/pat-michaels-on-the-death-of-credibility-in-the-journal-nature/
and the pertinent discussion begins with my post at April 13, 2012 at 1:55 am.
That discussion includes this quotation from Kiehl’s 2007 paper, which is another formulation of your question:
“The question is: if climate models differ by a factor of 2 to 3 in their climate sensitivity, how can they all simulate the global temperature record with a reasonable degree of accuracy. Kerr [2007] and S. E. Schwartz et al. (Quantifying climate change–too rosy a picture?, available at http://www.nature.com/reports/climatechange, 2007) recently pointed out the importance of understanding the answer to this question. Indeed, Kerr [2007] referred to the present work and the current paper provides the ‘‘widely circulated analysis’’ referred to by Kerr [2007]. This report investigates the most probable explanation for such an agreement. It uses published results from a wide variety of model simulations to understand this apparent paradox between model climate responses for the 20th century, but diverse climate model sensitivity.”
The discussion concludes with my statements that say;
“Long after my paper about the Hadley GCM, in 2007 Kiehl (see reference in my above post) showed that all other climate models also ‘ran hot’ but by different amounts. And he showed that they each adopt the aerosol fix. But they each adopt a different amount of aerosol cooling to compensate for the different degree of ‘ran hot’ they each display.
This need for a unique amount of aerosol cooling in each climate model proves that at most only one (and probably none) of the models emulates the climate system of the real Earth (there is only one Earth).”
Simply, the models each emulate a different (and unreal) climate system so they indicate different reactions to the same input change to the climate system, and they are especially sensitive to changes in the projected ratio of anthropogenic aerosol and GHG emissions.
I think this will be clear if you read the discussion.
Richard

rsnautilus
May 4, 2012 4:49 am

Hi! My question is not related to the content of your post, but to a figure mentioned. Can you tell me, where I can get the data of figure SPM 5 so that I can redraw it on my own? Xls or csv would do the job. Thanks in advance for your help! Felix