No Matter How the CMIP5 (IPCC AR5) Models Are Presented, They Still Look Bad

UPDATE: I’ve added a comment to the end of the post about the use of 1990 as the start year.

# # #

After an initial look at how the IPCC elected to show their model-data comparison of global surface temperatures in Chapter 1, we’ll look at the CMIP5 models a couple of different ways. And we’ll look at the usual misinformation coming from SkepticalScience.

Keep in mind, the models look best when surface temperatures are presented on a global land-plus-sea surface temperature basis. On the other hand, climate models cannot simulate, in any way, shape or form, sea surface temperatures or the coupled ocean-atmosphere processes that drive their warming and cooling.

# # #

There’s a big hubbub about the IPCC’s change in their presentation of the model-data comparison for global surface temperatures. See the comparison of before and after versions of Figure 1.4 from the IPCC’s 5th Assessment Report (my Figure 1). Steve McIntyre commented on the switch here. (Cross post at WattsUpWithThat here.) Judith Curry discussed it here. The switch was one of the topics in my post Questions the Media Should Be Asking the IPCC – The Hiatus in Warming. And everyone’s favorite climate alarmist Dana Nuccitelli nonsensically proclaimed the models “much better than you think” in his posts here and here, as if that comparison of observed and modeled global surface temperature anomalies were a true indicator of model performance. (More on Dana’s second post later.)

Figure 1

Much of what’s presented in the IPCC’s Figure 1.4 is misdirection. The models presented from the IPCC’s 1st, 2nd and 3rd Assessment Reports are considered obsolete, so the only imaginable reason the IPCC included them was to complicate the graph, redirecting the eye from the fact that the CMIP3/AR4 models performed poorly.

Regardless, what it boils down to is this: the climate scientists who prepared the draft of the IPCC AR5 presented the model-data comparison with the models and data aligned at 1990 (left-hand cell), and that version showed the global surface temperature data below the model ranges in recent years. Then, after the politicians met in Stockholm, that graph was replaced by the one in the right-hand cell. There they used the base years of 1961-1990 for the models and data, and they presented individual AR4 model outputs instead of a range. With all of those changes, the revised graph shows the data within the range of the models…but way down at the bottom edge, with the models that showed the least amount of warming. Regardless of how the model-data comparison is presented, the models look bad…they just look worse in the original version.

While that revised IPCC presentation is how most people will envision model performance, von Storch et al. (2013) found that the two most recent generations of climate models (CMIP3/IPCC AR4 and CMIP5/IPCC AR5) could NOT explain the cessation of warming.

Bottom line: If climate models can’t explain the hiatus in warming, they can’t be used to attribute the warming from 1975 to 1998/2000 to manmade greenhouse gases and their projections of future climate have no value.

WHAT ABOUT THE CMIP5/IPCC AR5 MODELS?

Based on von Storch et al. (2013), we would not expect the CMIP5 models to perform any better on a global basis. And they haven’t. See Figures 2 and 3. The graphs show the simulations of global surface temperatures for the period of 1950 to 2035: the outputs of the 25 individual climate models stored in the CMIP5 archive (thin curves) and the mean of all of the models (thick red curve). Also illustrated is the average of GISS LOTI, HADCRUT4 and NCDC global land-plus-sea surface temperatures from 1950 to 2012 (blue curve). In Figure 2, the models and data are presented as annual anomalies with the base years of 1961-1990, and in Figure 3, the models and data were zeroed at 1990.

Figure 2

# # #

Figure 3

Note how the models look worse with the base years of 1961-1990 than when they’ve been zeroed at 1990. Curious.

The data and model outputs are available through the KNMI Climate Explorer.
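
For anyone who wants to reproduce both presentations from that data, here is a minimal sketch of the two baselining methods (Python; the function and array names are mine, not KNMI’s, and you would download and load the annual series as NumPy arrays yourself):

import numpy as np

def to_anomalies(series, years, base_start=1961, base_end=1990):
    # Figure 2 style: express a series as departures from its 1961-1990 mean.
    base = (years >= base_start) & (years <= base_end)
    return series - series[base].mean()

def zero_at_year(series, years, year=1990):
    # Figure 3 style: shift a series so it passes through zero at one year.
    return series - series[years == year][0]

Apply one function or the other to the observations and to every model curve before plotting; the only difference between Figures 2 and 3 is which of the two is used.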

NOTE: Every time I look at a model-data comparison of global land-plus-sea surface temperatures now, I’m reminded that the modelers had to double the observed rate of warming of sea surface temperatures over the past 31 years to get the modeled and observed land surface temperatures to align with one another. See my post Open Letter to the Honorable John Kerry U.S. Secretary of State. That’s an atrocious display of modeling skills.

UNFORTUNATELY FOR DANA NUCCITELLI, HE DOES NOT APPEAR TO BE KIDDING

In his post Why Curry, McIntyre, and Co. are Still Wrong about IPCC Climate Model Accuracy, Dana Nuccitelli stated (my boldface):

Global mean surface temperature data are plotted not in absolute temperatures, but rather as anomalies, which are the difference between each data point and some reference temperature. That reference temperature is determined by the ‘baseline’ period; for example, if we want to compare today’s temperatures to those during the mid to late 20th century, our baseline period might be 1961–1990. For global surface temperatures, the baseline is usually calculated over a 30-year period in order to accurately reflect any long-term trends rather than being biased by short-term noise.

It appears that the draft version of Figure 1.4 did not use a 30-year baseline, but rather aligned the models and data to match at the year 1990. How do we know this is the case? Up to that date, 1990 was the hottest year on record, and remained the hottest on record until 1995. At the time, 1990 was an especially hot year. Consequently, if the models and data were properly baselined, the 1990 data point would be located toward the high end of the range of model simulations. In the draft IPCC figure, that wasn’t the case – the models and data matched exactly in 1990, suggesting that they were likely baselined using just a single year.

Mistakes happen, especially in draft documents, and the IPCC report contributors subsequently corrected the error, now using 1961–1990 as the baseline. But Steve McIntyre just couldn’t seem to figure out why the data were shifted between the draft and draft final versions, even though Tamino had pointed out that the figure should be corrected 10 months prior. How did McIntyre explain the change?

Dana’s powers of observation are obviously lacking.

First, how do we know the IPCC “aligned the models and data to match at the year 1990”? Because the IPCC said they did. The text for the Second Order Draft discussing Figure 1.4 stated:

The projections are all scaled to give the same value for 1990.

So Dana Nuccitelli didn’t need to speculate about it.

Second, Figure 4 is a close-up view of the “corrected” version of the IPCC’s Figure 1.4, focusing on the models and data around 1990. I’ve added a fine line marking that year. And I’ve also altered the contrast and brightness of the image to bring out the model curves during that time. Contrary to the claims made by Nuccitelli, with the 1961-1990 base years, “the 1990 data point” WAS NOT “located toward the high end of the range of model simulations”.

Figure 4

“Mistakes happen?” That has got to be the most ridiculous comment Dana Nuccitelli has made to date. There was no mistake in the preparation of the original version of Figure 1.4. The author of that graph took special steps to make the models align with the data at 1990, and they aligned very nicely, converging on a pinpoint. And the IPCC stated in the text that the “projections are all scaled to give the same value for 1990.” There’s no mistake in that, either.

The only mistake here is Dana Nuccitelli’s misrepresentation of reality. Nothing new there.

# # #

UPDATE: As quoted above, Dana Nuccitelli noted (my boldface):

At the time, 1990 was an especially hot year. Consequently, if the models and data were properly baselined, the 1990 data point would be located toward the high end of the range of model simulations.

“Especially hot?” Utter nonsense.

Dana appears to be parroting Tamino from Tamino’s blog post here.

The reality: 1990 was an ENSO-neutral year, according to NOAA’s Oceanic NINO Index. Therefore, “1990 was…” NOT “…an especially hot year”. It was simply warmer than previous years because surface temperatures were warming then. I’m not sure why that’s so hard a concept for warmists to grasp. The only reason it might appear warm is that the 1991-94 data were noticeably impacted by the eruption of Mount Pinatubo.
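
For reference, NOAA classifies each overlapping three-month season by its ONI value; here is a bare-bones sketch of that convention (the ±0.5°C thresholds are NOAA’s; collapsing the official five-consecutive-seasons rule to a per-value check is my simplification):

def enso_state(oni_value, threshold=0.5):
    # NOAA's convention: warm (El Nino) episodes at or above +0.5 C,
    # cool (La Nina) episodes at or below -0.5 C, neutral in between.
    # The official definition also requires five consecutive qualifying
    # overlapping seasons, which this per-value check omits.
    if oni_value >= threshold:
        return "El Nino"
    if oni_value <= -threshold:
        return "La Nina"
    return "neutral"

Run 1990’s seasonal ONI values through that and, per the claim above, each one comes back neutral.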

Tamino was simply playing games with data as Tamino likes to do, and Dana Nuccitelli bought it hook, line and sinker.

Or Dana Nuccitelli hasn’t yet learned that repeating bogus statements doesn’t make them any less bogus.

October 4, 2013 10:11 am

Dana is just awful.

October 4, 2013 10:13 am

Doesn’t Dana work for Big Oil? We will call him Oily-Dan from now on.

DHR
October 4, 2013 10:22 am

And how do the models compare with the un-homogenized (original) surface temperature data sets?

G. Karst
October 4, 2013 10:23 am

The games people play

Sweet Old Bob
October 4, 2013 10:32 am

Alarmists have cried WOLF! WOLF! WOLF! so long that it sounds like WOOF! WOOF! WOOF! (and sometimes like YIPE! YIPE! YIPE!) Are these models the porch they are preparing to crawl under?

daniel
October 4, 2013 10:38 am

Reading the Tamino post he refers to, I think what he meant was that the draft version, starting from 1990, was the one that should have been aligned differently and thus treated an especially warm year as a normal one. Or am I reading it wrong?

katabasis1
October 4, 2013 10:41 am

Is this another contender for “worst distortion ever” by Dana? –
“the IPCC says that humans have most likely caused all of the global warming over the past 60 years.”
http://www.skepticalscience.com/ipcc-ar5-human-caused-global-warming-confidence.html
Given how carefully many of the IPCC statements are worded I would have thought that if that is actually the case, they would have said as much.

Pippen Kool
October 4, 2013 11:00 am

This was all hashed out over at McIntyre’s site, in the comments section, where there are several people who seem to know what they are talking about. Using the 1990 temp as the ref was clearly a mistake in the first graph; the new graph corrects that by setting the start to the trend line. The bottom line is that it was changed to a more logical starting point, whether or not you think the first graph was a mistake.
Now, whether you like the spaghetti graph or not is personal taste, but the actual world temp is inside the models’ envelope, albeit on the low side. I don’t think that justifies the title of this post.

rogerknights
October 4, 2013 11:09 am

Bob Tisdale: “Then, after the politicians met in Stockholm, . . . .”

They aren’t politicians, they’re dyed-in-the-wooly greenie-regulators:

http://wattsupwiththat.com/2013/08/31/can-the-ipcc-do-revolutionary-science/
Guest Essay by Barry Brill
On 23-26 September, scores of representatives of the world’s Environment Ministries are scheduled to meet in Stockholm . . . .

October 4, 2013 11:13 am

A notable feature of these models is that none of them make predictions or a predictive inference. Thus, none of them are falsifiable or convey information to a policy maker about the outcomes from his or her policy decisions.

Latitude
October 4, 2013 11:15 am

Using the 1990 temp as the ref was clearly a mistake in the first graph; the new graph corrects that by setting the start to the trend line.
==
So they did it the first time to show the models were right…
…and changed the second one to show the models were right.
They can’t both be right…and in that case, they both show the models were wrong again.

Theo Goodwin
October 4, 2013 11:16 am

The Nuccitelli Principle 1: If the IPCC publishes something that deeply embarrasses the IPCC then some mistake happened in the IPCC.
Corollary 1: If some mistake happened in the IPCC and something deeply embarrassing to the IPCC was published then the IPCC is not responsible for the content of the deeply embarrassing thing that was published.
The Nuccitelli Principle 2: Mistakes happen.
Conclusion: The IPCC is not responsible for its deeply embarrassing publications.

Matt Skaggs
October 4, 2013 11:18 am

Pippen,
Since you read the comments at CA, you must have seen my analogy:
“The soccer player launches the penalty kick and it misses the goal to the right by one foot. Tamino sprints along the end line with his measuring tape and discovers that the goal was actually placed three feet closer to the left corner of the field than the right. Now that the discrepancy has been rectified, we are being told that the proper thing to do is credit the kicker with the goal.”
Let’s see if we can fit your statement to the analogy:
“Using the [original location of the goal] as the ref was clearly a mistake [when the ball was kicked]; the new [location] corrects that by setting the [goal where it should have been]. The bottom line is that it was changed to a more logical [location], whether or not you think the first [kick missed the goal].
Now either you [think it was a goal or you don’t], but the [kick was actually inside the envelope of where the goal should have been, so it should be credited as a goal].”
Seems to fit OK.

Theo Goodwin
October 4, 2013 11:18 am

Pippen Kool says:
October 4, 2013 at 11:00 am
You are reporting half the debate at McIntyre’s site. The glib half.

October 4, 2013 11:20 am

Fig 9.8 in chapter 9 of AR5 shows the correct comparison between CMIP5 models and the observed temperature trend. The discrepancy after 1998 is very clear. The graph itself can be seen here. This is particularly clear in the comparison between measured and predicted temperature trends from 1998 to 2012.
Using the same parlance as the IPCC, we can state: it is “extremely unlikely” that AR5 models can explain the hiatus in global warming (at 95% confidence)!

October 4, 2013 11:21 am

please read lucia on how to “zero” models. you’ve bodged it as badly as tamino

Zek202
October 4, 2013 11:44 am

What happens to the models if the earth starts to cool again? Could the models account for that? Would the cooling be anthropogenic?

leon0112
October 4, 2013 11:46 am

Pippen Kool – I agree about the 1990 versus the 30 year part of the discussion on McIntyre’s site. However, the professor from Duke pretty much destroys the spaghetti chart. And it isn’t personal taste.

Gail Combs
October 4, 2013 11:49 am

Terry Oldberg says: @ October 4, 2013 at 11:13 am
A notable feature of these models is that none of them make predictions or a predictive inference. Thus, none of them are falsifiable or convey information to a policy maker about the outcomes from his or her policy decisions.
>>>>>>>>>>>>
Great, Good. Not only can’t the models make PREDICTIONS, but the earth has stopped warming for the past couple of decades in spite of a continued increase in CO2, suggesting saturation of the greenhouse effect, or at least a major slowdown due to the logarithmic nature of the ‘Forcing’ allowing negative feedbacks to swamp the effect of CO2.
Geologists looking into the factors causing the descent into glaciation proclaim that CO2, instead of being a cause for alarm, is saving us from glaciation.
The latest IPCC says not only can they not come up with a climate sensitivity but that there is no increase in droughts, hurricanes, tornadoes etc. etc. Other reports show the world is greening. Agricultural crops have higher yields per acre.
The crisis has been called off, CO2 is saving the earth, let’s all go home and celebrate.

October 4, 2013 11:56 am

I’m sorry to appear confused, but it makes sense to fix the model to 1990, especially for FAR. Anything before this is hindcasting – i.e. not real – and used for initialisation. After 1990 is projection. The key is picking a long enough period to be the baseline, but essentially all that matters is that your model matches the real data at 1990. It doesn’t matter if that year was cold or hot – that’s the year you use.
The same applies for SAR, TAR and AR4. The data should only be presented for the projection part, not the hindcast.
Personally I think that the first graph was fine. It showed enough detail and conveyed a clear enough message, rather than the hodgepodge of the second. Adding more error and squiggles demonstrates that you know LESS than before – hardly congruent with the 95% certainty.

JimS
October 4, 2013 12:09 pm

@Zek202, who said: “What happens to the models if the earth starts to cool again? Could the models account for that? Would the cooling be anthropogenic?”
Now those are excellent questions. If we could only get a response from the IPCC for the record and hold it accountable to the answers it gives, because global temperatures could very well decline for the next few decades. As far as I can see, the IPCC cannot accommodate any such cooling given the models it uses.

chris y
October 4, 2013 12:20 pm

You know the temperature in 1990. You should zero the models to the known temperature in 1990.
Each model has an uncertainty range.
Each model is the result of hundreds of runs to get to the best performance.
There are now enough years to start tossing most of the models into the rubbish bin.
The IPCC should pick the model that comes closest to the actual data, and report the predicted climate sensitivity, aerosol forcings, etc for that model. I suspect the crisis is much less than we thought.
The rest is handwaving to maintain grant support for the modeling groups, and retain the high-end predictions, as silly as they are at this stage.

Bryan A
October 4, 2013 12:23 pm

Another interesting DATA shift is apparent in the “Figure 1” side-by-side comparison. The 1990 FAR has a Temp Anomaly of almost 0.3 as the starting point in the AR4 graph, but the 1990 FAR anomaly starting point has been shifted to <0.2 in the AR5 Spaghetti Chart. Must be how they lowered the bar.

JDN
October 4, 2013 12:33 pm

I have to disagree. The trick was eliminating the error bars on the observed data and zooming out on the scale of the graph. No error bars allow them to plot a rising mean trend line, but it would be obvious that there is no rising mean for the last 15 years if the error bars were added back. They are making the data points as inconspicuous as possible so that your eye only sees the trend line. And for some reason zooming out also gives you the impression that the trend line is right.
Someone should help out the IPCC by recoloring their graph for them. If it becomes known that the color scheme of a graph is essential to its acceptance, well, maybe they might have to add the error bars back themselves.

Resourceguy
October 4, 2013 12:40 pm

Too late, the global emissions scheme for airlines is moving ahead. Remember to follow the money always.

catweazle666
October 4, 2013 2:00 pm

Jon Gebarowski says:
“Doesn’t Dana work for Big Oil?”
Yes, he does indeed.
” We will call him Oily-Dan from now on.”
Why not use the name he is known by in the Big Oil business – Drillbit?

October 4, 2013 2:13 pm

If I pointed the tip of my pen at today’s temperature and drew a bunch of squiggly lines in the same general direction as the last 100 years, I would have a more accurate spaghetti graph “projection” than 99% of the model runs.
Their giant swath of possible future predictions includes such a wide variety of possibilities it’s like saying the temperature tomorrow will be between 0 and 100F. And then they still got it wrong.

RC Saumarez
October 4, 2013 2:19 pm

Predictions make statements about the future.
The IPCC predictions, I mean projections, were explicit. OK, these don’t fit data so we’ll do a post hoc redefinition of the projections.
This is part of the shifting sands of post-normal science and would be ethically and intellectually unacceptable in other branches of science. Now that PNS is getting into real difficulty, let’s hope that we can retreat into traditional science.

Bill Illis
October 4, 2013 4:10 pm

Some people just “like” to mislead themselves into believing the climate models have been accurate so far.
Sorry to burst your self-made bubble, but they are not.
The only accurate global warming predictions made so far are from climate models that have FLAT temperature increases. All 1 of them, and this one just has huge decadal variability.
The RCP 4.5 scenario from IPCC AR5 has temperatures at 0.76C this month (using a 1961-1990 baseline). Wake me up when Hadcrut4 gets up to 0.76C – current trends have that happening in about 20 years.

Pamela Gray
October 4, 2013 4:17 pm

Some say that the IPCC model ensembles make projections based on scenarios of CO2 emissions and therefore cannot be falsified or called predictions because they do not in any way resemble reality. Dead dog. Won’t bark. Dead horse. Stop beating it.
Common sense trumps semantics every time.

Bill H
October 4, 2013 5:00 pm

Figure 1 should read: Before Manipulation and After Manipulation.
These people have no shame. “We’re going to plot the observations after we warm them up a bit”…is there any level to which they will not stoop to continue the lie?

Richard M
October 4, 2013 5:01 pm

Bob is correct. 1990 is an especially good year as it was ENSO neutral all year. In many ways it could not have been better for a baseline. The IPCC clearly made the change for political purposes.

Two Labs
October 4, 2013 6:07 pm

Statistically, there was nothing wrong with choosing 1990 as the base year. Nothing wrong with choosing the 61-90 average, either. But if changing the base year (or range) changes the forecast result significantly, that’s a statistical red flag.
From what I could tell, the IPCC simply increased the confidence range of the AR4 forecasts so that post-2010 average temps could fall within that range. Since these confidence ranges are not calculated statistically, the IPCC is certainly free to do this – but not free to do it without admitting that they are less confident in their modeling. Too bad they weren’t honest about that…

Jeff Alberts
October 4, 2013 6:39 pm

catweazle666 says:
October 4, 2013 at 2:00 pm
” We will call him Oily-Dan from now on.”
Why not use the name he is known by in the Big Oil business – Drillbit?

He’s too obtuse for such a name to stick.

Jeff Alberts
October 4, 2013 6:41 pm

Pippen Kool says:
October 4, 2013 at 11:00 am
Now, whether you like the spaghetti graph or not is personal taste, but the actual world temp is inside the models’ envelope, albeit on the low side. I don’t think that justifies the title of this post.

There is no global temperature. It’s an utterly meaningless statistical construct.

megawati
October 4, 2013 7:17 pm

Somebody please explain how it can be allowable at all to offset a curve or zero it at some arbitrary year, ex post facto. Is the issue here not what the curves showed at the time they were first published?

gopal panicker
October 4, 2013 8:07 pm

an amazing amount of supercomputer time wasted on these nonsense models

Leo
October 4, 2013 8:15 pm

Our present reality over the past 70 years appears to me to lie within the noise cast by so many of these very sophisticated, quantitative models. From my experience of reservoir production modeling, which can be tweaked to provide a very large range of possible outcomes…the ones you tend to believe are the ones that fall out from first principles, with minimal assumptions. They are directionally correct with the least amount of forcing or curve fitting. From my experience, if the trend is wrong, it is time to go back and revisit your assumptions. What strikes me is that if someone (a public company, for example, with public shareholders) was paying for the directional accuracy of these climate models to predict the future physical and therefore financial behaviour of a producing asset, a lot of these scientific types would be out of business very quickly.
Leo

Greg Goodman
October 4, 2013 9:44 pm

Richard Betts of Hadley Centre commented on Climate Audit, saying the revised AR5 Figure 1.4 presented it “just as” done in AR4, and provided a link: http://www.ipcc.ch/publications_and_data/ar4/wg1/en/figure-1-1.html
However, if we look at that graph, we note considerable differences in how the ranges of predictions from the various reports overlap compared to how they are shown in AR5.
It’s not “just as”; there is wholesale shifting of not only the observational data but also the individual reported projections.
It is pretty obvious that if you can find a logic that allows shifting all the data and projections up and down, it is a trivial result that they overlap. It demonstrates nothing about the data but a lot about the revisionist nature of the IPCC.
Who was it who said: “The future is certain; it is only the past that is unpredictable”?

dalyplanet
October 4, 2013 9:53 pm

Thank you Clivebest 11:20

temp
October 4, 2013 10:15 pm

Matt Skaggs says:
October 4, 2013 at 11:18 am
Ever hear the saying “moving the goal posts”?
What is the “correct location of the goal”? To the cultists it’s wherever the ball goes in. That’s the only reference point that matters.
The non-stop moving of goal posts – such that suddenly you need 30 years of flat temps for it to be a trend, but only 7-12 years of warming to be a trend – is classic goal-post moving.

Michael Asten
October 4, 2013 10:39 pm

I fear the IPCC authors made the mistake with their earlier AR5 draft but are not letting on. If I take AR4 WG1 Fig 1.1 and overlay it on AR5 WG1 Fig 1.4, then the uncertainty bounds for TAR temperature projections overlay reasonably closely. However, as pointed out above, the draft figure (now abandoned) for AR5, as annotated by Steve McIntyre, does not show the TAR uncertainty bounds as overlaying. So rather than a fudge in revising AR5, perhaps a sloppy author made a mistake in preparing the earlier Fig 1.4 of AR5, then fixed it for the current final draft. That said, I don’t excuse the use of the spaghetti plot – I take a somewhat uncharitable view that use of a completely different plot format may have been a ploy to hide an earlier error, and allow a bit of disinformation to circulate.
I find it very curious that IPCC authors (unlike accountants) feel no need at all to provide comparisons of results for the current time period versus the equivalent for the past time period. An accountant who changed formats, baselines, etc. and deliberately ignored past results/projections would be shot at dawn, professionally speaking. What a pity scientists can’t enforce similar standards.

barry
October 5, 2013 12:15 am

1990 was a warm year in all data sets. Here’s the HADCru record.
(Had4 global temps)
I plotted the trend prior to the apparent slowdown beginning in 1998, and from 1972 so as not to overemphasise the 1990 anomaly.
To show the problem with centring on 1990, here is a straightforward adjustment centring the trend on 1990, but this time I’ll include all the years 1972 through 2012. (period chosen because the slope has strong statistical significance).
(example)
If the blue line was the averaged linear trend from the models (it isn’t of course), this shows why baselining comparisons on a year that lies off the long-term trend presents problems.
You could show the same problem using a cooler year baseline (1985).
(example)
Now the post-2000 temps appear warmer – that can’t be right!
A better way would be to select a statistically significant long-term trend from observed data (eg, from 1950 through 1999), and choose a year that lies on or close to that trend. If you baseline the model ensemble average to that year, you’d at least avoid the problem of biasing the results on a single year anomaly that was warmer or cooler than average.
1982 seems like a good choice.
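
barry’s recipe is easy to code up; a minimal sketch (the function name and the 0.02 deg C tolerance are my choices, with years and obs as NumPy arrays of calendar years and annual anomalies):

import numpy as np

def on_trend_years(years, obs, start=1950, end=1999, tol=0.02):
    # Fit an OLS line over the long base period, then list the years
    # whose observed anomaly lies within tol (deg C) of the fitted line.
    mask = (years >= start) & (years <= end)
    slope, intercept = np.polyfit(years[mask], obs[mask], 1)
    fitted = slope * years + intercept
    return years[np.abs(obs - fitted) <= tol]

Baseline the model ensemble and the data at any of the returned years, and no single warm or cool anomaly biases the comparison.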

Rabe
October 5, 2013 1:18 am

megawati, you are right. I wondered why they didn’t use 2013.

October 5, 2013 1:23 am

There is one clear fix in the new IPCC graph, and that is the AR4 predictions. These were made after 2000, and if you look at the “before politicians” graph you see how well they track the data from 1990 – the consequent downward trend and then the rise up to 2000. Tamino had to leave AR4 out of his “re-alignment” for this reason. Both the AR4 and AR5 model predictions are above the data. The clever optical illusion in the new graph is to move down FAR, SAR and TAR and smudge everything out with bland colors so this contradiction is invisible.

Reply to  clivebest
October 5, 2013 9:13 am

clivebest:
Those “predictions” aren’t predictions but rather are projections. While predictions are falsifiable and convey information to a policy maker about the outcomes from his or her policy decisions, projections are non-falsifiable and convey no information to a policy maker. Thus, to distinguish between predictions and projections is important.

Another Gareth
October 5, 2013 1:29 am

By revising the chart to zero the models at 1990 it makes the warming before then look like a return to a normal rather than a dangerous shift from a previous normal. The IPCC has sacrificed the observed warming pre-1990 in order to protect the models from appearing to be falsified.
Is this something sceptics could exploit? We need to insist that the IPCC be consistent – they can say the warming pre-1990 is nothing exceptional and the models are still worthy of consideration *or* that the pre-1990 warming is the beginning of a man made climate trend and admit the models are not good enough. They cannot say both (but they will).

October 5, 2013 2:39 am

Bob,
In your depiction of temperature as average of GISS, HADCRUT4 and NCDC, the region around 2010 shows as higher than 1998. It does not show higher on, for example, RSS. There are reasons to expect a difference, as we know, but this is a rather critical difference when one comes to look at the hiatus.
I’m still left with an impression that the small positive slope upwards in the averaged data is, in part, due to adjustments +/- UHI and the difficulty of assessing it.
Therefore, I have a preference for the UAH or RSS data over surface-based observation, particularly because the satellite data has a better chance over the poles, Africa & Sth America.
If you could see in detail how the Aussie record is adjusted by the time the adjusters finish with it, I’d think you might have similar preferences.
So, do you have a strong reason to stick with the average?

Cheshirered
October 5, 2013 2:57 am

Dana doesn’t like it when people question him or his orthodoxy. Almost every post of mine – currently on pre-mod’ at The G, gets deleted now, even the funny ones that take just a little dig at him or agw.
What’s happening here is that as one alarmist claim after another turns to rubble, the louder they squeal and shout. Diversion tactics. (“If the law *and* the evidence are against you – bang the table.”) Hence the current spate of ‘worse than we ever thought possible’ articles.
They’re losing the argument because the data isn’t falling their way, and they know they’re losing.

barry
October 5, 2013 3:05 am

Somebody please explain how it can be allowable at all to offset a curve or zero it at some arbitrary year, ex post facto. Is the issue here not what the curves showed at the time they were first published?

Models are not run baselined to recent temps, so you have to make a choice. My two cents about that choice is here.

mwhite
October 5, 2013 3:28 am

“Let’s be honest – the global warming debate isn’t about science”
http://www.theguardian.com/environment/climate-consensus-97-per-cent/2013/oct/04/global-warming-debate-not-about-science#comment-27639487
Dana Nuccitelli

barry
October 5, 2013 3:51 am

mwhite @ here.

“Let’s be honest – the global warming debate isn’t about science”
http://www.theguardian.com/environment/climate-consensus-97-per-cent/2013/oct/04/global-warming-debate-not-about-science#comment-27639487
Dana Nuccitelli

I wonder how many will read the full article, which includes,
“The scientific evidence on human-caused global warming is clear. Opposition stems from politics, not science.”
and
“There are of course open questions yet to be answered by climate scientists – precisely how sensitive the climate is to the increased greenhouse effect, for example.”

October 5, 2013 4:20 am

barry quotes Nutticelli:
“The scientific evidence on human-caused global warming is clear.”
That is a baseless assertion.
There is no testable, measurable scientific evidence proving that human CO2 emissions are the cause of global warming. None.
What is it about “none” that barry and Nutticelli don’t understand?

Richard M
October 5, 2013 5:14 am

barry says:
October 5, 2013 at 12:15 am
1990 was a warm year in all data sets.

barry, thanks for showing your religious approach to science. When you start calling an ENSO neutral year “warm” it is obvious you have given up on logic.

October 5, 2013 5:27 am

barry on October 5, 2013 at 3:51 am

mwhite @ here.
“Let’s be honest – the global warming debate isn’t about science”
http://www.theguardian.com/environment/climate-consensus-97-per-cent/2013/oct/04/global-warming-debate-not-about-science#comment-27639487
Dana Nuccitelli

I wonder how many will read the full article, which includes,
“The scientific evidence on human-caused global warming is clear. Opposition stems from politics, not science.”
and
“There are of course open questions yet to be answered by climate scientists – precisely how sensitive the climate is to the increased greenhouse effect, for example.”

– – – – – – –
barry,
You, of course, may wonder that.
I, on the other hand, wonder how any reasonably normal rational human being cannot see that it is clear that there is little credibility in exclamations like this: AGW is unambiguous in the scientifically documented observational record.
I pity Nuccitelli, it is a difficult time to be an apprentice apologist trying to ‘rationalize’ an excuse for the IPCC’s publicly exposed integrity failure.
John

Bill Illis
October 5, 2013 6:10 am

Comment from Jochem Marotzke of the Max Planck Institute in a presentation at the Royal Society about the IPCC report.
“As a result of the hiatus, explained Marotzke, the IPCC report’s chapter 11 revised the assessment of near-term warming downwards from the “raw” CMIP5 model range. It also included an additional 10% reduction because some models have a climate sensitivity that’s slightly too high.”
http://environmentalresearchweb.org/cws/article/news/54904

barry
October 5, 2013 7:47 am

1990 was preceded by the strong 1988/89 La Nina and followed by the eruption of Mount Pinatubo. Therefore, 1990 stands out.

Even detrended, 1990 is a warmer year than average.
http://www.woodfortrees.org/plot/hadcrut4gl/from:1972/to:1999/mean:12/detrend:0.482/plot/hadcrut4gl/from:1972/to:1999/trend/detrend:0.482

But it was an ENSO-neutral year, and as a result, it was a prime year to start a model-data comparison, because it was NOT exceptionally warm in response to an El Nino.

ENSO is not the only factor that accounts for interannual global temperatures. I’m not persuaded that we should baseline to the ENSO indices alone. I still think it’s better to determine a long-term temperature trend, and baseline by selecting a year that lies on the trend, which evens out all the wiggles in the long run, not just ENSO.
If, say, the above-the-trend warmth of 1990 was caused by massive, once-a-century solar flare activity, it would not be reasonable to use 1990. Seeing as we don’t know what caused 1990 to pop out above the trend, we are left to make a purely statistical decision. If ENSO is a vital consideration, then select a year that satisfies both requirements – it must be ENSO neutral and lie on the long-term trend line. That should not be hard to do if ENSO is overwhelmingly the principal driver of interannual fluctuations. ENSO indices are, after all, trendless over the long term – by design. And it also has the virtue of being less biased by other interannual influences.
(I didn’t introduce Nuticelli’s article here, nor would I have. I don’t think it’s a good article, but I took more exception to the slanted way in which it was introduced, as if Nuticelli thinks the debate should be political. He’s saying the opposite. At the same time, Nuticelli and SkS certainly have a political agenda. And ‘political’ is not referring to governments, but the political ideology of individuals.)

barry
October 5, 2013 7:48 am

“And it also has the virtue of being less biased by other interannual influences.”
“it” = “this method”

Pamela Gray
October 5, 2013 7:58 am

There should be at least four sets of graphs, each one depicting the modeled output for the 4 different model ensembles (FAR, SAR, TAR, and AR4), marking the hindcasting period and then changing colors to mark the beginning of the “projection” period. The range of runs should be shaded in. Plot the average and range of real observations and add them to the graph. Statistical error bars should be calculated and depicted for both models and real observations. If anomalies and robustness are important, then the climatological average should be more than 30 years. It should be at least 50. These researchers shouldn’t be afraid of doing this. That they are speaks volumes about their own doubts.
Why four? There are 4 different investigations here, each with two parts: hindcast and projection periods. So there should be 4 separate graphs which clarify the two-phased experiments of each model ensemble. Why more than four? Because within the ensembles, it is possible that input parameter scenarios may be different, i.e. CO2 percent increase stays at zero, or increases by 1 percentage point each year, or increases by 2 percentage points each year, etc.
The way the current graph of either version is done leaves out important methodological information.

Steve Obeda
October 5, 2013 8:05 am

If the current “pause” is due to natural variation, then the forecasts for the next 20 years should show a much steeper increase than they did five years ago. That’s because we’ll soon have not only the reversal of the natural variation but also the cumulative effects of the CO2, no?

barry
October 5, 2013 9:06 am

Regardless of how the model-data is presented, the models looked bad…they just look worse in the original version.

Yes, they do. The graphic from the leaked report is 25 years long, and emphasises the recent apparent downturn. The approved graphic is 85 years long (40 more years of hindcast, 20 more of forecast), and therefore gives more context. As global climate change is a long-term (multi-decadal) phenomenon, the second graphic is more appropriate. Regardless of whether scientists or politicians changed it.

barry
October 5, 2013 12:07 pm

Those “predictions” aren’t predictions but rather are projections. While predictions are falsifiable and convey information to a policy maker about the outcomes from his or her policy decisions, projections are non-falsifiable and convey no information to a policy maker.

Falsifiable predictions are a function of science, not policy-making. They are called projections because the policy makers wanted to know what might happen under different forcing scenarios. So they are given a series of ranges – CO2 increase at various different rates, or stabilising at a certain value. This provides more, not less information to policy makers. Commonly decision-makers on any issue at least want to know the ‘best case/worst case’ scenario to get an idea of the range. Individuals frequently weigh decisions on this basis for ordinary life stuff. We try to pick options that balance cost and outcome.

Reply to  barry
October 5, 2013 4:46 pm

Barry:
Thanks for giving me an opportunity to clarify. It is a fact that no events underlie the IPCC climate models. However, it is by counting events of various descriptions that one arrives at the entities which statisticians call “frequencies.” The ratio of two frequencies of particular descriptions is called a “relative frequency.” A relative frequency is the empirical counterpart of a probability. As there are no frequencies or relative frequencies, there are no probabilities. It is by comparison of probability values to relative frequency values that a model is falsified. Thus, the claims that are made by the IPCC climate models are not falsifiable. Also, as “information” is defined in terms of probabilities, “information” is not a concept for the IPCC climate models.
Predictions have a one-to-one relationship with events. As no events underlie the IPCC climate models, there can be no predictions from them. As there are no predictions, the methodology of the associated research cannot truthfully be said to be “scientific.”

October 5, 2013 1:05 pm

barry says:
“…decision-makers on any issue at least want to know the ‘best case/worst case’ scenario…”
That is not what the IPCC does. When have they ever made a “best case scenario”?
‘Best case’ is that a couple of degrees of global warming is a net benefit to humanity. ‘Best case’ is that more CO2 is beneficial to the biosphere.
Give it up, barry. The IPCC never provides a “best case scenario”. Their scenarios go from very, very bad, to Catastrophic.

wrecktafire
October 5, 2013 1:15 pm

I’m with JDN: the zoom out makes the flat spot look much less “significant” (in the subjective sense).
http://www.amazon.com/How-To-Lie-With-Charts/dp/1419651439

barry
October 5, 2013 7:55 pm

Terry,
I disagree that models are not falsifiable. But they are complex, and describe much more than a one to one relationship. A failure of a particular component of climate models (say, the replicability of cloud behaviour) only tells us that cloud modeling is poor (or falsified, if you want to express it in a binary way). Other components do well, like predicting the cooling of the stratosphere. Should I assume you are focussed exclusively on the evolution of global surface temperatures?
Most commenters in the mainstream (such as realclimate) agree that if something like the trajectory of surface temperatures deviated over a sufficient amount of time from the models, then the ability of models to predict surface temps would be falsified.
Predictions and events are not always a one to one relationship, especially not for modeling of complex systems exhibiting chaotic tendencies. Most modeling is probabilistic. There is usually a range given in the prediction. Falsifying occurs not when the real trajectory deviates from the central estimate, but when it consistently falls outside the range.
The envelope for an ensemble at a particular rate of CO2 rise is fairly broad, but not infinite. A year or two of temps outside the envelope would not falsify the models, but a decade of annual temperatures centred around the 0.3% probability range would falsify the models that had the same forcings trajectory as the real world.
Seems to me that people get disgruntled that falsification hasn’t been conceded yet, based on the last few years lying near the bottom of the envelope. But they are too hasty. Time is an important component of climate model prediction/projections. On a related note, 5, 10, or 15 years of an apparent flat trend of global surface temperatures is not falsification of AGW. Plenty of commenters in the debate aligned with the mainstream view (eg, Tamino) have stated what they think would be the conditions – how long with no global warming, or how many years outside the range – that would falsify predictions and put current understanding of AGW into serious doubt.
Regarding the oft-cited trend from 1998 – the huge el Nino anomaly – my own conditions for falsifying understanding of the relationship between global temp change and CO2 increase is this: 25 years is a fair length of time to get a statistically significant trend from surface data, so if the global surface temperature has not increased by a statistically significant margin from 1998 to 2023, then the central estimates of the relationship of CO2/global temps have been falsified.
This is assuming that no freakish, non-CO2 events have an influence (this cuts both ways, whether a strong forcing event warms or cools the planet late in the trend), just the normal interannual fluctuations.
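
barry’s 1998-2023 criterion can be written down directly; a minimal sketch (my function name; a fuller version would widen the slope uncertainty to account for autocorrelated residuals):

from scipy.stats import linregress

def trend_is_significant(years, temps, alpha=0.05):
    # OLS trend in deg C per decade, with its p-value; "significant"
    # means the slope is statistically distinguishable from zero.
    result = linregress(years, temps)
    return result.slope * 10.0, result.pvalue, result.pvalue < alpha

By this criterion, a p-value for the 1998-2023 annual anomalies that never drops below alpha would falsify the central estimates barry describes.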

Reply to  barry
October 6, 2013 9:09 am

Barry:
Thanks for taking the time to reply. In the literature of climatology, “predict” and “prediction” are polysemic. In other words, they have more than one meaning. When a word changes meaning in the midst of an argument, this argument is an example of an “equivocation.” By logical rule, one cannot draw a proper conclusion from an equivocation. To draw an IMPROPER conclusion is the equivocation fallacy. By drawing conclusions from equivocations, climatologists are repeatedly guilty of instances of the equivocation fallacy in making arguments about global warming. For details, please see my peer-reviewed article at http://wmbriggs.com/blog/?p=7923 .
The equivocation fallacy may be avoided through disambiguation of the terms of the language in which an argument is framed, such that each term of significance to the conclusion is monosemic (has a single meaning). When this is done in reference to arguments about global warming, logically valid conclusions emerge about the nature of the research that is described by the IPCC in its recent assessment reports. One such conclusion is that the methodology of this research was not truly scientific (ibid).
Many of the methodological shortcomings of global warming climatology stem from the absence of reference by the models to the events that underlie them. In the absence of these events it is not possible for one of these models to make a predictive inference. Thus, it is not possible for one of them to make an unconditional predictive inference, that is, “prediction.” A predictive inference is an extrapolation from one observable state of nature to another; conventionally, the first of the two states is called the “condition” while the second is called the “outcome.” In a “prediction,” the condition is observed and the outcome is inferred.
In the falsification of a model, one or more predicted probability values belonging to outcomes are shown not to match observed relative frequency values of the same outcomes in a randomly selected sampling of the events. Absent these events, to falsify a model is obviously impossible.
By the way, events are the entities upon which probabilities are defined. Absent these events, there is no such thing as a probability. Mathematical statistics, which incorporates probability theory as a premise, is out the window.

Sedron L
October 5, 2013 8:43 pm

If there was anything to Bob Tisdale’s book, it would have been put out by a real publisher, and not via a vanity press.

Patrick
October 5, 2013 11:26 pm

I wonder what the graph in figure 1.4 would look like if the temperature scale was not so granular? My guess would be it would not look scary enough.

urederra
October 6, 2013 3:22 am

Bob Tisdale says:
October 4, 2013 at 12:44 pm
Steven Mosher says: “please read lucia on how to “zero” models. you’ve bodged it as badly as tamino”
I haven’t bodged anything. I presented it exactly as I wanted to present it.

So, can we say that AR4 graphs were Mann-made whereas AR5 graphs are con-Taminated?

rogerknights
October 6, 2013 4:58 am

Sedron L says:
October 5, 2013 at 8:43 pm
If there was anything to Bob Tisdale’s book, it would have been put out by a real publisher, and not via a vanity press.

Tisdale’s book has lots of color charts in it. Bob has explained that printing a color book would cost $40, so sales would be low.

barry
October 6, 2013 8:17 pm

Bob,

barry says: “Even detrended, 1990 is a warmer year than average.”
Of course it is. What parts of the impacts of the eruptions of El Chichon and Mount Pinatubo don’t you understand?

The 1990 temperature anomaly was well above the trend, no matter what statistically significant linear period you choose. You can’t wish that away by pointing at other indices or events.

Reply to  barry
October 6, 2013 8:48 pm

barry:
Please identify the events that underlie the IPCC climate models of AR4 and AR5.

barry
October 6, 2013 9:15 pm

Terry,
Yes, agreement on the definition of terms is vital (as is context); discussions like ours often lead to a semantic quagmire.
GCMs are sets of equations based on physics, parametrised processes and (for hindcasting) observed forcing indices. Please define ‘events’ – not in the general scope of knowledge, but specifically regarding climate.

Sedron L
October 7, 2013 9:36 am

Bob Tisdale wrote:
Everyone’s buying ebooks.
Again: If there were anything to your book, it would have been published by a real publisher. Self-publications is easy. Publishers have standards.
I’m not buying your book because I have seen too many basic and trivial errors from you on this blog. Your work isn’t even peer reviewed — the minimum necessary to ensure basic standards of scholarship. Afraid to try and play in the big leagues?

October 7, 2013 9:44 am

Sedron L,
Of course you won’t buy Bob’s book, because you might learn something.
And please, troll elsewhere. There are many more climate books written by non-pal-reviewed authors than by pal-reviewed authors.
You also don’t give a single example of “trivial errors”. Name one “error”. Put up or shut up.

October 7, 2013 6:33 pm

The reality: 1990 was an ENSO-neutral year, according to NOAA’s Oceanic NINO Index. Therefore, “1990 was…” NOT “…an especially hot year”. It was simply warmer than previous years because surface temperatures were warming then. I’m not sure why that’s so hard a concept for warmists to grasp. The only reason it might appear warm is that the 1991-94 data were noticeably impacted by the eruption of Mount Pinatubo.

This is nonsense. If we truncate the temperature record at 1990 – before Pinatubo – we see that 1990 was the warmest year ever in the instrumental record in both GISS and HADCRUT.
Got that? No year preceding 1990 was warmer.

October 7, 2013 6:57 pm

Kevin O’Neill says:
October 7, 2013 at 6:33 pm
The GISS & HadCRUT data have been shamelessly manipulated. Some years in the 1930s were warmer than 1990, not to mention lots of years between c. AD 950 & 1250, not covered by those “adjusted” figures.

barry
October 8, 2013 8:17 pm

Bob, if you run a regression up to 1990 – the ENSO-neutral year – you avoid the trend-flattening Pinatubo event, and still 1990 is above the trend line. By about 0.1 deg.
graph
(I even ran a trend from 1982 – the El Chichon explosion – to pre-Pinatubo, so that fiddling with data gave the volcanic effects the best chance of increasing the trend. 1990 was even warmer than the trend by that method).

If you were to volcano adjust the data, 1990 may not fall exactly on the trend line, but it is nowhere near the 0.1 deg offset chosen by Tamino.

If you’ve done that sufficiently to estimate the result, you could update your post or share the results here. Otherwise it’s guesswork.
Alternatively, adjust the temperature record by subtracting volcanos and ENSO and see what results.
But if you do that the temperatures go up in the latter part of the record and are no longer outside model results.
http://contextearth.com/2013/10/04/climate-variability-and-inferring-global-warming/

barry
October 8, 2013 8:21 pm

I’d be interested to see your results, Bob, for defluctuating the record of volcano effects – but do it over the whole record, so that the results are not skewed by other short-term fluctuations, or at least from 1950, so that we have a strongly significant trend period to work with. And as ENSO is a primary contributor to interannual global temperatures, subtracting that, too, from the temperature record would give a better approximation to the underlying warming trend, no?

October 8, 2013 9:04 pm

barry:
The notion that one can “defluctuate” the global temperature time series from the volcano or ENSO effect by subtracting this effect from the global temperature is logically and scientifically flawed. In logic and in science, the most that can theoretically be accomplished is for an observable but unobserved state of nature to be inferred from an observed state of nature. Thus, for example, it is conceivable for the observable but unobserved state “time averaged over 30 years global temperature greater than the median” to be inferred from the observable state “time averaged over 30 years CO2 concentration greater than the median.” As neither the volcano nor ENSO effect is observable, neither effect can properly be subtracted from the global temperature in arriving at the defluctuated global temperature.

RACookPE1978
Editor
October 8, 2013 9:39 pm

from barry says:
October 8, 2013 at 8:21 pm

I’d be interested to see you results, Bob, for defluctuating the record of volcano effects – but do it over the whole record, so that the results are not skewed by other short-term fluctuation, or at least from 1950, so that we have a strongly significant trend [period] to work with.

and
Terry Oldberg says:
October 8, 2013 at 9:04 pm
OK.
So, look carefully at the WUWT Solar Page:
http://wattsupwiththat.com/reference-pages/solar/
On that page:
Apparent Atmospheric Transmission of Solar Radiation at Mauna Loa, Hawaii
There are explicit solar radiation “drops” from three different volcanoes. Two, of course, are greater than the first in Guam (southern hemisphere!), and all are “measured” at the Hawaii observatory: up across the equator.
But! To attempt to show ANY relationship between solar radiation and the earth’s climate or temperature history over time, you MUST include known volcano eruptions.
Now, HOW you do that, and HOW MUCH each eruption changes the potential inbound solar radiation?
But you DO have to show those impacts in the temperature record, and you cannot excuse the temperature record (post 1996, for example!) or model failures by claiming volcanic eruptions that do NOT show up on a similar clarity measurement.

barry
October 9, 2013 4:12 am

Terry,
Your comment implies that Bob’s whole argument is untenable. I disagree.
Removing noise from trends is a regular process in statistical analyses. Seasonal adjustment is a very common process, applied for understanding economic trends, short-term sea level trends and a host of other applications. While we don’t know what causes every fluctuation in global temperature, we know that strong el Ninos cause warm years, and volcanos and strong la Ninas cause cool years. Removing estimated noise from the trend brings us closer to what the signal actually is. Not perfectly, but better.
Volcanic effects are observable, both in the temperature record and from aloft, where satellites have observed the change in radiance through the atmosphere (there are posts on satellite-observed changes to radiative forcing from volcano emissions at this site). It is one of the corroborating features of modeling that includes volcanic forcing. Hansen’s 1988 model successfully predicted the amplitude and duration of a Pinatubo-like event (but not, of course, the timing, which is essentially random). Models that include the aerosol loading for Pinatubo in hindcasts all feature a dip very similar to what actually happened. ENSO has a number of corroborating indices, not just the excellent agreement with interannual temps when a strong ENSO event occurs. The observed data doesn’t perfectly capture the anomalies, but well enough to distinguish it from (and improve) the long-term signal.
This is implicit in Bob’s thesis, which hinges on volcanic and ENSO effects on the temperature record. Do you think his premises are flawed?

barry
October 9, 2013 4:23 am

RACookPE1978,
I hope Terry read your post, as you pointed out another observation of volcanic effects on global temperature from the ground.
I tend to agree with your thesis, but I would go further. If you want to isolate the solar influence on global temperature, don’t just filter out volcano events, also filter out ENSO (and any other known influence).
You’ll end up with an approximation, of course, but it would be an improvement on no filtering at all.

barry
October 9, 2013 4:47 am

Addendum to my last post to Bob.
Yes, if you use a short-term linear trend, the volcanic events could make it lower. But if you use a long-term, statistically significant trend, these effects will be barely noticeable. That is the method I first argued for – to remove the potential bias of a single year’s fluctuation, baseline according to a long-term average or trend.
(trend without Pinatubo + trend with Pinatubo)
Now, the amazingly close agreement for that 20-year period is a bit of a fluke. You could choose another (longer) period and you could see more difference, but it wouldn’t be much. Eg,
example
But no matter which way you slice the observed temperature record, 1990 pops out over the trend.
But if you do remove ENSO and volcanic effects, you’d have to follow through on the exercise and compare the new filtered series with the models, which is probably a good idea in its own right if you want to compare recent trends over periods that are not statistically significant. I’d wager that the filtered series would be statistically significant from 1996/7/8 for any data set.
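
A quick sketch of that with/without-Pinatubo comparison (my function; treating 1991-1994 as the volcano-affected span is an assumption):

import numpy as np

def trend_per_decade(years, temps, skip=None):
    # OLS trend in deg C per decade, optionally excluding a span of
    # years, e.g. skip=(1991, 1994) to drop the Pinatubo-affected period.
    keep = np.ones(len(years), dtype=bool)
    if skip is not None:
        keep = (years < skip[0]) | (years > skip[1])
    slope, _ = np.polyfit(years[keep], temps[keep], 1)
    return slope * 10.0

Over a long, statistically significant period, trend_per_decade(years, temps) and trend_per_decade(years, temps, skip=(1991, 1994)) should differ only slightly, which is barry’s point.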

richardscourtney
October 9, 2013 4:52 am

barry:
At October 9, 2013 at 4:23 am you suggest

also filter out ENSO (and any other known influence).

How?
Richard

barry
October 9, 2013 8:26 am

Richard,

also filter out ENSO (and any other known influence).
How?

Various methods are described in the scientific literature.
eg, http://iopscience.iop.org/1748-9326/6/4/044022/pdf/1748-9326_6_4_044022.pdf
Alternatively, ask Bob Tisdale, who argues in his article that global temps should be seen through the ENSO filter. If he reads the current conversation, perhaps he’ll explain how that can be done, answering your question.
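
The linked paper’s approach is a multiple linear regression against known short-term influences; a bare-bones sketch of the idea (variable names are mine; the published analysis uses monthly data and lagged indices, which this omits):

import numpy as np

def remove_short_term(temps, enso, aod, tsi):
    # Regress temperature on a linear trend plus ENSO, volcanic-aerosol
    # and solar indices, then subtract the fitted index contributions,
    # leaving an estimate of trend plus residual noise.
    n = len(temps)
    X = np.column_stack([np.arange(n), enso, aod, tsi, np.ones(n)])
    coefs, *_ = np.linalg.lstsq(X, temps, rcond=None)
    return temps - X[:, 1:4] @ coefs[1:4]

Different index choices and lags yield somewhat different “adjusted” series, which is the objection richardscourtney raises below.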

richardscourtney
October 9, 2013 10:14 am

barry:
I write to congratulate you on your evasion at October 9, 2013 at 8:26 am.
I asked you how to fulfill your suggestion that said

also filter out ENSO (and any other known influence).

and you have replied

Various methods are described in the scientific literature.
eg, http://iopscience.iop.org/1748-9326/6/4/044022/pdf/1748-9326_6_4_044022.pdf
Alternatively, ask Bob Tisdale, who argues in his article that global temps should be seen through the ENSO filter. If he reads the current conversation, perhaps he’ll explain how that can be done, answering your question.

Yes, “Various methods are described in the scientific literature” and that illustrates my point. The “various methods” each provide different results because nobody really understands the effect of ENSO on global temperature.
In my opinion Bob Tisdale provides a better understanding of ENSO than is available “in the scientific literature” but I doubt he would be willing to provide the quantification which you suggest.
Science starts from admitting what we don’t know. Climastrology assumes whatever it wants to ascribe instead of trying to replace ignorance with knowledge.
Richard

barry
October 9, 2013 3:49 pm

Richard,
How different are the other results, in your opinion? Could you link the ones you are familiar with, pointing out the magnitude of the differences?