Shades of Foster Grant

http://libertyboy.files.wordpress.com/2009/11/grant-foster.jpg?resize=553%2C415
Are those shades he's wearing, or blinders? Image from libertyboy.wordpress.com

Tamino Misses The Point And Attempts To Distract His Readers

By Bob Tisdale

The obvious intent of my recent post “17-Year And 30-Year Trends In Sea Surface Temperature Anomalies: The Differences Between Observed And IPCC AR4 Climate Models” was to illustrate the divergence between the IPCC AR4 projected Sea Surface Temperature trends and the trends of the observations as presented by the Hadley Centre’s HADISST Sea Surface Temperature dataset. Tamino has written a response with his post “Tisdale Fumbles, Pielke Cheers.” Obviously he missed the point of my post. Since he does not address this divergence, his post is simply a distraction. That fact is blatantly obvious, and everyone reading his post will realize it, though it is doubtful his faithful followers will call his attention to it. Tamino resorts to smoke and mirrors once again. But let’s look at a few of the points he tries to make.

Tamino objects to this statement that is included on all of the graphs in the “17-year and 30-year trends post”:

The Models Do Not Produce Multidecadal Variations In Sea Surface Temperature Anomalies Comparable To Those Observed, Because They Are Not Initialized To Do So. This, As It Should Be, Is Also Evident In Trends.

The reason I included that statement was because I have illustrated and discussed the lack of multidecadal variability in the IPCC AR4 models in earlier posts and I wanted to draw the readers’ attention to the difference between the trends of the model mean and the observed trends. It’s really that simple.

Tamino makes the following statement toward the end of the post:

“There are definitely problems with the models. For one thing, they don’t reproduce the rapid warming of sea surface temperature from 1915 to 1945 as strongly as the observed data indicate. But overall they’re not bad, and the amount of natural variability they show is realistic.”

But the fact that, “For one thing, they don’t reproduce the rapid warming of sea surface temperature from 1915 to 1945 as strongly as the observed data indicate,” means the Sea Surface Temperatures of the models also don’t flatten from 1945 to 1975 as the observations do. It’s those two portions of the multidecadal variations in sea surface temperatures that are known to be missing in the models. That’s what’s being referred to on each of the graphs in red. The models capture the rise in temperature from 1975 to 2000, but they do not capture the rise and flattening from 1910 to 1975.

Tamino presents a comparison of 30-year trends for HADISST, the model mean, and the 9 runs of the GISS Model ER, which I’ve reproduced here as Figure 1. He then writes:

Note that the individual model runs show much more variability than the multi-model mean. In fact they show variability comparable to that shown by the observed data.

I’ve highlighted a portion of his graph in Figure 1 that he obviously overlooked. Look closely at the significant rise in trends of the HADISST data in the early 20th century, and then the equally impressive decline in trends. Do any of the GISS model runs produce the “Multidecadal Variations In Sea Surface Temperature Anomalies Comparable To Those Observed” during the early part of the 20th century? No. So thank you for confirming one of my points, Tamino. It also contradicts your nonsensical statement, “In fact they show variability comparable to that shown by the observed data.”

Figure 1
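For readers who want to reproduce this kind of chart themselves, a trailing 30-year trend series can be computed from annual anomalies with a moving least-squares fit. The sketch below is a minimal illustration of the general technique, not the exact code behind Figure 1 or my post, and the toy series is my own construction:

```python
import numpy as np

def rolling_trend(anoms, window):
    """Least-squares slope over each trailing `window`-year span
    of an annual anomaly series (units: per year)."""
    x = np.arange(window)
    return np.array([
        np.polyfit(x, anoms[i:i + window], 1)[0]   # keep the slope only
        for i in range(len(anoms) - window + 1)
    ])

# A toy series warming at a constant 0.01 deg C per year recovers
# that slope in every 30-year window.
toy = 0.01 * np.arange(120)
trends = rolling_trend(toy, 30)
print(trends[0], len(trends))   # slope of 0.01; 91 windows
```

The same function with `window=17` gives the 17-year trends discussed in the original post.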

Tamino also goes into a detailed discussion of how the model mean can obscure any multidecadal variations in the individual model runs. But note that he doesn’t use the actual model runs. He uses “Artificial Models”. Refer to Figure 2. Artificial models?

Figure 2
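For what it’s worth, the cancellation effect Tamino demonstrates with his artificial models is easy to reproduce with synthetic series. The sketch below is my own construction, not Tamino’s code: several series share a trend but carry a 60-year oscillation at a random phase, and the oscillation largely cancels in the mean.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(140)                 # e.g. annual data, 1870-2009
n_runs = 20
trend = 0.005 * years                  # common forced warming trend
period = 60.0                          # ~60-year multidecadal oscillation

# Each "run" shares the trend but carries the oscillation at a random phase.
runs = np.array([
    trend + 0.2 * np.sin(2 * np.pi * years / period + rng.uniform(0, 2 * np.pi))
    for _ in range(n_runs)
])
mean_run = runs.mean(axis=0)

def osc_std(series):
    """Standard deviation of the series after removing a linear trend."""
    resid = series - np.polyval(np.polyfit(years, series, 1), years)
    return resid.std()

# The oscillation is large in any single run but mostly cancels in the mean.
print(osc_std(runs[0]), osc_std(mean_run))
```

This only shows that averaging out-of-phase runs suppresses the oscillation; it says nothing about whether the actual AR4 runs contain variability of the observed timing and magnitude in the first place, which is the question at issue.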

Why doesn’t Tamino use the real models instead of artificial ones? Because then Tamino would have to show you that the majority of the models do not have multidecadal variations in trend that are similar in timing, frequency, and magnitude to the observation-based SST data. Refer to Animation 1.

Animation 1

I could have provided that animation in my post, but I elected not to present it because it added no value to the post.

CLOSING

As I noted earlier, Tamino’s post is simply a distraction from my post “17-Year And 30-Year Trends In Sea Surface Temperature Anomalies: The Differences Between Observed And IPCC AR4 Climate Models”, which showed the divergence between the trends of the IPCC AR4 model mean for global Sea Surface Temperatures and the observed Sea Surface Temperature trends.

Tamino makes a few statements in his post that I will be happy to agree with:

There are definitely problems with the models.

And:

Certainly the models need more work.

Thanks for the opportunity to call attention to my post once again, Tamino.

81 Comments
Camburn
November 20, 2011 10:08 am

Funny, Mr. Tisdale. I posted on Grant’s site to thank him for stating the obvious, but for some reason the post hasn’t shown up.
And that last graph in Grant’s post to your article is priceless.
I do have a question tho.
How do you get a mean value way above the multiple model run values? The methodology is not explained so that a layman could understand this.

Anteros
November 20, 2011 10:12 am

The eagle-eyed among us will have spotted the empty wine glass in the photo. This doesn’t excuse dogmatic fundamentalism or the demonisation of dissenters on his very closed-minded blog, but it may well explain the inability to distinguish between reality and the products of an Xbox.

REPLY:
No, that’s a water glass, typical of what many hotels that host conferences use. I’ve had many like that myself at tables during conferences. I may disagree with what Mr. Foster publishes, but I won’t suggest he’s drinking while presenting. That’s a bridge too far. – Anthony

Ryan Welch
November 20, 2011 10:13 am

So, does Tamino suffer from Cognitive Dissonance, or Confirmation Bias, or both?

Craig Moore
November 20, 2011 10:19 am

It grows ever more tiresome to see such misdirection. We have the EU forbidding bottled water companies from claiming water hydrates. We have Australia sending out Sharia-like police to stop businesses from claiming their prices are rising due to carbon taxes. Perhaps Naomi Klein captures the ultimate purpose: http://www.thenation.com/article/164497/capitalism-vs-climate

Responding to climate change requires that we break every rule in the free-market playbook and that we do so with great urgency. We will need to rebuild the public sphere, reverse privatizations, relocalize large parts of economies, scale back overconsumption, bring back long-term planning, heavily regulate and tax corporations, maybe even nationalize some of them, cut military spending and recognize our debts to the global South.

Thanks, Bob, for continuing to educate the rest of us on the truth about the over-hyped claims. Whatever choices are at hand to make the future world better than today, I hope those decisions are made from the best science known rather than from propaganda.

bubbagyro
November 20, 2011 10:21 am

Ryan:
No, he is using the famous yet effective “Chewbacca Defense”.
“Look at the monkey—look at the crazy monkey!”

Camburn
November 20, 2011 10:41 am

Ryan:
No, Tamino seems to be getting it.
“There are definitely problems with the models”
And then:
“Certainly the models need more work”
I can even understand that. OK… I will admit that it seemed to take an awful long time for someone as smart as Tamino thinks he is to realize this. I give him kudos for reading Mr. Tisdale and learning what Mr. Tisdale and the rest of us have known for two decades.

D. Cohen
November 20, 2011 10:50 am

What is an “artificial model”? Aren’t all models artificial?

Editor
November 20, 2011 10:53 am

Camburn says: “How do you get a mean value way above the multiple model run values? The methodology is not explained so that a layman could understand this.”
Please expand on your question and refer to a graph.

Olavi
November 20, 2011 10:53 am

Camburn says:
November 20, 2011 at 10:08 am
Funny Mr. Tisdale. I posted on Grant’s site to thank him for stating the obvious, but for some reason the post hasn’t shown up.
And that last graph in Grant’s post to your article is priceless.
I do have a question tho.
How do you get a mean value way above the multiple model run values? The methodology is not explained so that a layman could understand this.
The answer is in Figure 2, which shows all the model runs.
Good post again Bob. 🙂

FergalR
November 20, 2011 10:55 am

Fantastic work as always Mr. Tisdale.
But I’m a little curious – would that be the same mysterious Foster Tamino that the BBC’s Richard Black refers to as “the enigmatic climate blogger who runs the Open Mind site and keeps his identity deeply under wraps”?!?
Who could that conundrum of a man really be?
http://tinyurl.com/6pnww28
http://www.jamstec.go.jp/frcgc/research/d5/jdannan/comment_on_schwartz.pdf
REPLY: He’s not under wraps and hasn’t been for some time, though why he keeps the moniker presently is a curiosity. His identity is published right on the sidebar at his website; see his book “NOISE”, which has his name on it. He self-outed. – Anthony

George E. Smith;
November 20, 2011 10:57 am

“”””” Tamino makes the following statement toward the end of the post:
“There are definitely problems with the models. For one thing, they don’t reproduce the rapid warming of sea surface temperature from 1915 to 1945 as strongly as the observed data indicate. But overall they’re not bad, and the amount of natural variability they show is realistic.”
But the fact that “For one thing, they don’t reproduce the rapid warming of sea surface temperature from 1915 to 1945 as strongly as the observed data indicate” means the Sea Surface Temperatures of the models also don’t flatten from 1945 to 1975 as the observations do, and it’s those two portions of the multidecadal variations in sea surface temperatures that are known to be missing in the models. “””””
“”””” they don’t reproduce the rapid warming of sea surface temperature from 1915 to 1945 as strongly as the observed data indicate. “””””
Talk about weasel words. What the hell is wrong with saying: ‘They don’t reproduce the rapid warming of sea surface Temperature from 1915 to 1945.’
Other than the (presumably) OBSERVED ‘rapid warming of sea surface Temperature from 1915 to 1945’, what OTHER evidence for that warming is Tamino suggesting as a reason for saying “as strongly as the observed data indicate”?
Isn’t the WHOLE IDEA of modelling to reproduce the OBSERVED DATA? Nothing else matters!
Yeah I would say that all that extra garbage is a distraction. Glad you are on the case Bob.

NyqOnly
November 20, 2011 10:57 am

“Why doesn’t Tamino use the real models instead of artificial ones?”
Gosh – I thought he explained EXACTLY why in his post. Perhaps you missed it. He used artificial models that INTENTIONALLY matched the natural variability to demonstrate that when you average those models the result does NOT match the natural variability. This is a simple, logical demonstration that EVEN IF the models were perfect, the technique you used would not show a good match between a multi-model mean and the natural variability.
It is a good point. It would be interesting to see you address it and it is odd that you currently seem to not understand it.

George E. Smith;
November 20, 2011 11:06 am

I noticed the lack of reproducibility in Tamino’s model. So how come a model gets different results each time you run it (his GISS-ER Realizations)? Some great model, if you can’t depend on the answer.

DirkH
November 20, 2011 11:15 am

FergalR says:
November 20, 2011 at 10:55 am
“But I’m a little curious – would that be the same mysterious Foster Tamino that the BBC’s Richard Black refers to as “the enigmatic climate blogger who runs the Open Mind site and keeps his identity deeply under wraps“?!?”
Great, Fergal; the investigative minds of the Beeb! LMAO… Missed that!

kadaka (KD Knoebel)
November 20, 2011 11:17 am

Uh-oh Bob, I watched your Animation and kept track. While apparently showing Ensemble Members 0 to 31, the numbers 18 and 30 are missing. Hope there’s a good explanation or Tami’s Troupe will pounce on you for cherrypicking!

Mac the Knife
November 20, 2011 11:17 am

Bob,
Your ‘Animation 1’ is priceless! Any idea how much money, time, and human effort was wasted on these 30 ‘Ensemble Member’ hindcasts, which seem to have all the hindcast accuracy of a 3-year-old scrawling on the wall with their favorite crayon? If I tried to use analogous model results, perhaps to convince my management that we must spend huge sums of money on a radical diversion from current aircraft design, I would be dismissed as incompetent and a danger to the corporation. I would be pointedly encouraged to seek medical evaluation for my irrational behavior.
The only aspect of your post I disagree with is your statement “Obviously he missed the point of the post.” It sure appears to me that Mr. Tamino is a sophist. He could not refute your analysis and chose to attack through niggling distraction. This is a classic tactic of the sophist school of debate. If you can’t refute the facts and conclusions, then distract attention to some obtuse sideshow or personally attack the messenger. Then pretend these irrelevancies are valid ‘reasons’ to dismiss the solid science at the core of the debate. Perhaps you were just being momentarily kind to Mr. Tamino, though?
Thank You, for your many solid contributions to this debate!
MtK

Camburn
November 20, 2011 11:20 am

NygOnly:
I think you miss the point. Even artificial models have problems, along with the currently used models. What is there to address???

Camburn
November 20, 2011 11:24 am

NygOnly:
The point is that the models as structured, artificial and supposedly non-artificial, cannot hindcast reliably. This proves the structural flaws of the models as written.
We need to examine the models, evaluate the models, rewrite the models so that they work. As the models are currently presented they are useless. Tamino has proved that point in his post.

D. J. Hawkins
November 20, 2011 11:33 am

Bob Tisdale says:
November 20, 2011 at 10:53 am
Camburn says: “How do you get a mean value way above the multiple model run values? The methodology is not explained so that a layman could understand this.”
Please expand on your question and refer to a graph, please.

I believe he’s referring to Figure 1. At around 1970, all the individual model runs displayed appear to be well below the graphed multi-model mean.

Al Gore's Holy Hologram
November 20, 2011 11:43 am

Makes no difference what Tamino thinks. He is to climate science what a homeopath is to medicine – in a different reality, with a mind-bending idea that zero to trace amounts of a molecule can affect an organism or ecosystem many millions of times larger, even when far below recognised toxic levels.

DirkH
November 20, 2011 11:43 am

NyqOnly says:
November 20, 2011 at 10:57 am
““Why doesn’t Tamino use the real models instead of artificial ones?”
Gosh – I thought he explained EXACTLY why in his post. Perhaps you missed it. He used artifical models that INTENTIONALLY matched the natural variability to demonstrate that when you average those models the result does NOT match the natural variability.”
Nyq Only; if it is possible to create an “artificial model” (whatever that is) that “matches the natural variability” (I guess you want to say that it shows the right amplitude of the multidecadal variability), and we then run the model several times, randomly initialized, we would get those multidecadal swings out of phase and so on, and an averaging would lead to some cancelling out of those swings. OK, I can follow you there.
But let’s think a step further. Imagine somebody wanted to use such a model to forecast, say, a future climate. Would this person not then choose the model runs that are in phase with the observations, discard the ones that are out of phase, and declare the initial states of the runs that are in phase to be the more correct initialisations?
Surely.
But what we have is a bunch of modelers who continue to throw all kinds of random runs of random models into one big average, calling it the ensemble mean, and keep on insisting that that’s the way you get reliable future forecasts.
Obviously that’s a rather unwise approach if one wants to arrive at a future forecast, no?
I think what they really want to achieve with this climatological sausage-machine approach is explicitly to arrive at forecasts that cannot be validated by comparison with observations – they use this averaging as protection against scrutiny, to protect their funding. They have learned that as long as they can’t be questioned, the money will flow.

Editor
November 20, 2011 11:50 am

NyqOnly says: “Gosh – I thought he explained EXACTLY why in his post. Perhaps you missed it.”
I replied to your comment at my blog, NyqOnly. There’s no reason to post it in two places.
Here’s what I wrote there in reply:
I did not miss it. It’s a distraction from the actual models and a distraction from my post. Nothing more, nothing less.

Jack Greer
November 20, 2011 11:59 am

NyqOnly says:
November 20, 2011 at 10:57 am
… This is a simple, logical, demonstration that EVEN IF the models were perfect, the technique you used would not show a good match between a multi-model mean and the natural variability.
It is a good point. It would be interesting to see you address it and it is odd that you currently seem to not understand it.

Bingo. But if you’re trying to get BT to admit the process by which he analyzes data is, to put it kindly, faulty … which it is … Good luck with that.
REPLY: Typical. If you are trying to get Jack Greer to agree with anything published on this blog, good luck with that. You have an MO that precedes you. You complain about anything and everything here. – Anthony

Paul Vaughan
November 20, 2011 12:06 pm

Animation 1 is straight-up flat-out creepy. They really believe in that stuff??

Dave
November 20, 2011 12:12 pm

I posted this on the Tamino blog
.
Hi Foster grant aka Tamino.
Would you dare publish this?
I read your post and see a lot of smoke and mirrors. The climate models’ input data are established in advance for a predetermined outcome.
As any reasonably intelligent person with an open mind would know and accept, in the real world the models used by engineers and other practical professions rely on mechanisms and endless reviews, based on real scientific and mathematical backstops, that are there to prevent contaminated theories or desired results from polluting the models’ outcomes.
Otherwise industry and commerce would grind to a halt; computers and communication devices, mechanical applications, planes, ships, and satellites would not function; buildings would collapse. But for climatologists with a warmist bent, flights of fantasy and theoretical models are the normal fare and the only way to achieve the very obvious fraudulent outcomes.
Honest real-world scientists, engineers, and architects can’t and won’t get away with manipulated data or results. The whole global warming industry has become a what’s-in-it-for-me snicker fest. Durban will once again show the hypocrisy and show us all what a total fraud it has become.

Editor
November 20, 2011 12:12 pm

D. J. Hawkins says: “I believe he’s refering to Figure 1. At around 1970, all the individual model runs displayed appear to be well below the graphed multi-model mean.”
Figure 1 is Tamino’s graph, not mine. The models he has shown are only the GISS-ER models. I believe the model mean in Tamino’s graph is of all 30+ AR4 models, so the GISS models represent only about a third of the models in the model mean.

Editor
November 20, 2011 12:16 pm

Jack Greer says: “Bingo. But if you’re trying to get BT to admit the process by which he analyzes data is, to put it kindly, faulty … which it is … Good luck with that.”
Jack Greer, refer to the post that’s the basis for this discussion:
http://bobtisdale.wordpress.com/2011/11/19/17-year-and-30-year-trends-in-sea-surface-temperature-anomalies-the-differences-between-observed-and-ipcc-ar4-climate-models/
Under the heading of ABOUT THE GRAPHS IN THIS POST, I discussed what they presented and in doing so, I explained how they were created. If you’re having trouble with that explanation, please advise.

JohnWho
November 20, 2011 12:17 pm

There are definitely problems with the models.
And:
Certainly the models need more work.

Oh, geez –
it’s a shame we are just finding this out.
/sarc
🙂

PaulR
November 20, 2011 12:18 pm

But there is no match between individual models and the natural variability either. They don’t work either singly or averaged together.

November 20, 2011 12:24 pm

George E. Smith; says: November 20, 2011 at 10:57 am
“Isn’t the WHOLE IDEA of modelling, to reproduce the OBSERVED DATA; nothing else matters !”

No, the whole idea of modelling is to figure out what may happen in the future.
Models are based on physics. They can only be expected to reproduce observed data insofar as that data does reflect the physics. Two things happen:
1. The data is noisy. I plotted here three different measures of SST vs the model mean. The difference between the model mean and the observations is comparable to the difference amongst the observations.
2. There are events that we know will occur, and have some idea how often, but don’t know when. Volcanoes are an obvious example. The various oscillations are another. A physical model may reproduce these, but not be specific about the phase. The physics doesn’t tell you that. So when you average over several models, this event information gets lost.

Jack Greer
November 20, 2011 12:28 pm

REPLY: Typical. If you are trying to get Jack Greer to agree with anything published on this blog, good luck with that. You have an MO that precedes you. You complain about anything and everything here. – Anthony

Anthony,
Do you agree the method of analysis used by Bob is faulty, or don’t you?
It’s not okay to simply say “pay no attention to my analysis, here’s the valid point I was trying to convey”.
… and I’ve already expressed the types of posts on your blog that I find valuable.

REPLY:
I agree with his analysis and conclusion, you don’t so we are worlds apart – Anthony

Gail Combs
November 20, 2011 12:33 pm

Jack Greer says:
November 20, 2011 at 11:59 am
NyqOnly says:
November 20, 2011 at 10:57 am
… This is a simple, logical, demonstration that EVEN IF the models were perfect, the technique you used would not show a good match between a multi-model mean and the natural variability.
It is a good point. It would be interesting to see you address it and it is odd that you currently seem to not understand it.
Bingo. But if you’re trying to get BT to admit the process by which he analyzes data is, to put it kindly, faulty … which it is … Good luck with that.
_________________________________
So if you cannot get a useful “multi-model mean”, then why do the climate scientists use it?
More importantly, if you cannot get a useful “multi-model mean” (about 50 or so trends), then how the heck can you take thousands of temperature readings from all over the world, put them through a sausage machine (called a computer program), and come up with anything of useful meaning?
Either you can get a useful mean or you cannot.

November 20, 2011 12:40 pm

Using multiple model runs averaged together to create a model mean, then finding the trend of that mean and projecting it into the future as the basis for a prediction of future climate is nonsense.
It makes as much sense as going into a casino, recording all the winning numbers on a roulette table, averaging them together, finding the trend in the mean of all the winning numbers and then projecting that into the future to predict that on August 4th 2035 at 3:14 pm the winning number on that roulette wheel will be 14 Red.
Just what are these guys smoking?
Larry

Editor
November 20, 2011 12:58 pm

kadaka (KD Knoebel) says: “Uh-oh Bob, I watched your Animation and kept track. While apparently showing Ensemble Members 0 to 31, the numbers 18 and 30 are missing. Hope there’s a good explanation or Tami’s Troupe will pounce on you for cherrypicking!”
Thanks for picking up on that, kadaka. I forgot to explain that in the post. The source model data from KNMI for ensemble members 18 and 30 are each missing data. Ensemble Member 18 is missing data in 1918 and, of course, it prevents EXCEL from calculating the trends for 30 years before and after.
http://i39.tinypic.com/2ij2c78.jpg
Ensemble Member 30 is missing data in 1928, and likewise, it prevents EXCEL from calculating the trends for 30 years before and after.
http://i42.tinypic.com/qsjnk4.jpg
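The spreadsheet behavior is easy to mirror in code: a single missing year anywhere in a 30-year window leaves that window’s trend undefined unless the gap is explicitly masked. A sketch with toy data, not the KNMI series or my actual spreadsheet:

```python
import numpy as np

def window_trend(anoms, start, window=30):
    """Slope over anoms[start:start+window]; NaN if any year is missing."""
    seg = np.asarray(anoms[start:start + window], dtype=float)
    if np.isnan(seg).any():
        return float('nan')            # no trend computable across the gap
    return np.polyfit(np.arange(window), seg, 1)[0]

series = list(0.01 * np.arange(60))    # toy anomalies warming at 0.01/year
series[10] = float('nan')             # one missing year, as in member 18

print(window_trend(series, 0))        # nan: the window straddles the gap
print(window_trend(series, 11))       # ~0.01: this window is clean
```

Every window covering the missing year returns NaN, which is why the 30-year trends both before and after the gap drop out.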

Dave Springer
November 20, 2011 1:00 pm

Ruined by teh blinking. Again. Thanks.
REPLY: Then don’t look at it and don’t comment about it. The animation stays, get over it, Anthony

Jon P
November 20, 2011 1:04 pm

“Just what are these guys smoking?”
Who knows, but whatever it is they want to pay for it with tax dollars.

November 20, 2011 1:06 pm

Santer’s PR release says.
They find that tropospheric temperature records must be at least 17 years long to discriminate between internal climate noise and the signal of human-caused changes in the chemical composition of the atmosphere.
Ignoring the unscientific ‘at least’, Santer is saying the models predict that AGW will always show in 17 years of tropospheric temperature data.
Therefore, 17 years without significant warming falsifies the models, which is what Bob showed in the SST record.
Whether the models can reproduce natural variability, which Santer calls ‘internal climate noise’, is secondary, and IMO not a significant issue.

jorgekafkazar
November 20, 2011 1:19 pm

Tamino makes the following statement toward the end of the post: “…overall they’re not bad,..”
Right. Not bad. Meaningless.
“…the amount of natural variability they show is realistic…”
They go up and down. Real data go up and down. So the models are realistic.

Editor
November 20, 2011 1:20 pm

Jack Greer says, “Do you agree the method of analysis used by Bob is faulty, or don’t you?”
There’s nothing wrong with the analysis, Jack. It’s your perception of it that’s faulty. I provided you with a link in an earlier comment that explained what was presented. Sorry if you can’t grasp it.

Jack Greer
November 20, 2011 1:29 pm

Tisdale says:
November 20, 2011 at 12:16 pm
Tamino is saying that averaging model runs and then commenting on the ability of those averages to show natural multidecadal variability, initialized to do so or not, indicates a lack of understanding of how natural variability should be analyzed in the context of models. You could alter your post to neutralize that valid point.

manicbeancounter
November 20, 2011 1:29 pm

Bob,
You write “Tamino resorts to smoke and mirrors once again.”
For those readers who have not come across this character before, might I refer them to a couple of examples of his work.
In July 2011, Tamino attempted a hatchet job on a study of Australian Sea Levels. This study indicated that sea level rises had “consistent trend of weak deceleration” in the period 1940 to 2000.
http://manicbeancounter.wordpress.com/2011/08/01/tamino-on-australian-sea-levels/
In July 2010, Tamino attempted a hatchet job on AW Montford’s “The Hockey Stick Illusion” at Real Climate blog. http://www.realclimate.org/index.php/archives/2010/07/the-montford-delusion/
Steve McIntyre’s replies included a re-posting of a piece from two years earlier, in which he had answered most of Tamino’s points. http://climateaudit.org/2010/07/25/repost-of-tamino-and-the-magic-flute/
One discredited tree-ring analysis that Tamino tried to defend was the Gaspé series. This I analysed on my own blog. http://manicbeancounter.wordpress.com/2010/07/24/tamino-v-montford-on-the-gaspe-series/

bubbagyro
November 20, 2011 1:35 pm

Jorge: The models really are not too bad. They virtually reproduce the data almost to a high degree, if not somewhat. I think you are picky. Some of the peaks on the whole nearly on average are close to the original data, virtually congruent with the observational data points, virtually to a large, if not significant degree, notwithstanding the precision of the artificial models.
[/sarc]

DocMartyn
November 20, 2011 1:39 pm

The model runs provide a mean for each year and the variance around the mean. The various temperature series all have global means and tiny variances around the mean.
If the 90% confidence levels of the models and the various ‘global’ temperature series do not overlap, we can state that the models do not match reality at the 1% level.
The models clearly fail.

bubbagyro
November 20, 2011 1:40 pm

Jorge: forgot to put (/sarc).

November 20, 2011 1:42 pm

Bob,
As you well know models will probably never get the timing right. that’s the initialization problem.
Further if modelers did ‘fiddle” with the initialization states to get the timing correct people would howl.
one way forward is to look at
1. the distribution of all 17 and 30 years trends in ALL the model runs.
2. the distribution of all 17 and 30 year trends in observations.
that will give some insight as to whether or not models have similar variability.
or look at amplitudes.
But as long as you focus on the timing issue you really cant make the best argument.
what you are showing is a logical consequence of the starting conditions imposed
on the test
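The comparison suggested above, looking at the distribution of all 17- and 30-year trends rather than their timing, can be sketched as follows. The series here are toy stand-ins of my own construction, not actual model output or HADISST data:

```python
import numpy as np

rng = np.random.default_rng(1)

def all_trends(series, window):
    """Every trailing `window`-year least-squares slope in an annual series."""
    x = np.arange(window)
    return np.array([np.polyfit(x, series[i:i + window], 1)[0]
                     for i in range(len(series) - window + 1)])

n_years = 140
# Toy stand-ins: the "observations" carry a 60-year oscillation on top of
# the trend and noise; the "model runs" have the trend and noise only.
obs = (0.005 * np.arange(n_years)
       + 0.1 * np.sin(2 * np.pi * np.arange(n_years) / 60.0)
       + rng.normal(0, 0.05, n_years))
models = [0.005 * np.arange(n_years) + rng.normal(0, 0.05, n_years)
          for _ in range(5)]

obs_trends = all_trends(obs, 30)
model_trends = np.concatenate([all_trends(m, 30) for m in models])

# Similar spreads would suggest comparable internal variability; here the
# oscillation widens the distribution of the observed 30-year trends.
print(np.std(obs_trends), np.std(model_trends))
```

Comparing the spreads of the two trend distributions sidesteps the phase problem entirely: it asks only whether the models produce swings of the right size, not whether they occur at the right time.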

DirkH
November 20, 2011 2:03 pm

Jack Greer says:
November 20, 2011 at 1:29 pm
“Tamino is saying the method of averaging model runs and then commenting on the ability of model run averages to demonstrate natural multidecadal variability, initialized to do so or not, indicates a lack of understanding of how natural variability in the context of models s/b analyzed.”
The IPCC is constantly publishing ensemble means and bases its forecasts on them. You say they’re invalid? Tell the IPCC!

Michael Jankowski
November 20, 2011 2:06 pm

“…He’s not under wraps and hasn’t been for sometime, though why he keeps the moniker presently is a curiousity…”
Because he wants to be envisioned as Tamino from Mozart’s “the Magic Flute” – a handsome prince who goes through a number of trials and triumphs over all.

DirkH
November 20, 2011 2:06 pm

steven mosher says:
November 20, 2011 at 1:42 pm
“As you well know models will probably never get the timing right. that’s the initialization problem.”
Why don’t the IPCC climate scientists discard the init states that lead to a wrong timing, and select the init states that lead to a more correct timing? Would that not improve the forecast?
I’ll tell you why they don’t do it: it would make them vulnerable. The “ensemble mean” is a smokescreen.

November 20, 2011 2:27 pm

manicbeancounter says: November 20, 2011 at 1:29 pm
Bob… writes “Tamino resorts to smoke and mirrors once again.” For those readers who have not come across this character before, might I refer them to a couple of examples of his work.

Haha. Here’s some more. IIRC I felt honoured that Tamino thought it worth his while to respond twice to my 2009 piece Circling the Arctic. Unfortunately Tamino’s pages before March 2010 have vanished. But I used ammo from him as good evidence – when put in context. He had his use, even if it was not what he planned.

Editor
November 20, 2011 2:38 pm

Steve Mosher says: “But as long as you focus on the timing issue you really cant make the best argument.”
Steve, this post is a result of a distraction created by Tamino. I really had no reason to respond to Tamino, other than to call attention to the “17-year and 30-year trend post” once again. We’ve discussed initialization and the reasons for the models’ inabilities to recreate the multidecadal variations of the instrument temperature record many other times. There was no reason for me to discuss it in the “17-year and 30-year trend post”.
Regards

HankH
November 20, 2011 2:51 pm

NyqOnly says:
November 20, 2011 at 10:57 am
This is a simple, logical, demonstration that EVEN IF the models were perfect, the technique you used would not show a good match between a multi-model mean and the natural variability.

If the artificial models were “perfect” and initialized perfectly, then the multi-model ensemble mean would be a good match to the natural variability. Attempting to match natural variability with a multi-model ensemble mean is meaningless when the models are on a phase walk, because the endpoint analysis technique requires agreement on the phase relationship of the signal. In a multi-model ensemble output, each individual model is phase-additive or phase-subtractive with respect to the natural variability it is constrained to, producing an ensemble mean that is unlikely to agree with natural variability. That all makes logical sense.
However, all Tamino is doing is a bunch of statistical handwaving to prove a well-understood behavior of complex waveforms – something any RF or signal-processing engineer would understand intuitively. But this useless exercise in no way proves Bob’s technique is wrong. As such, it does seem to be a distraction in that, while his assertions about model ensembles are valid, they serve no relevant application to Tamino’s conclusion.

Iskandar
November 20, 2011 3:02 pm

What is a multi-model mean? A useless measure to hide the inconvenient truth that the models do not live up to their expectations. So the models are run numerous times, with different parameters, so that in the end any unforeseen change in the climate system will be covered.
Models are useless
until you get a grasp of the system you are modelling, e.g. the performance of silicon chips, however complicated they may be.
But weather is too complex: two days ahead is an impossible task, let alone climate at 30+ years. The people working on climate predictions are making a mockery of their own profession, since they are the ones who know how catastrophically wrong they and their models are.
This is not a question of garbage in, garbage out; the thing you are throwing your garbage into is itself garbage.
Garbage^3 would be a proper designation.
I really do not give a penny for the models.
And I refuse to pay for my energy more, based on these silly models.

Camburn
November 20, 2011 3:06 pm

Nick Stokes @ 12:14
“Models are based on the physics that we DO know” would be a better statement to make.
The inability to hindcast a known phenomenon demonstrates that the physics incorporated into climate models lacks a multitude of fundamental understandings.
When there are large-scale shifts in climate, such as the approximately 60-year oscillation, and the models can’t hindcast them even though the oscillation is known, it shows how little veracity their predictions of future events of any kind warrant.

November 20, 2011 3:09 pm

Those of you who have come here to try and defend “the enigmatic climate blogger who runs the Open Mind site and keeps his identity deeply under wraps” should go to his site and donate.
Of course, to do this, you’ll be supporting “Peaseblossom’s Closet” and the donation is for “Mistletoe”.
WUWT?
At least here, if you donate, you KNOW it’s going to surfacestations.org.
The only “Peaseblossom” I know of is a character in “A Midsummer Night’s Dream”. There, the character is listed as one of Titania’s fairy servants.
Oh, well…

Jack Greer
November 20, 2011 3:09 pm

@DirkH & Gail,
Evaluating ensemble runs to estimate a mean is a different analysis from evaluating the characteristics of model-run variability around the mean and over time. If the timing of variability between models and runs is not precisely in sync, and there’s no reason to expect it would be, out-of-phase averaging basically results in cancellation of variability in the averaged results … like a high-frequency filter. Steven Mosher described what might be a more fitting approach to evaluating variability in modeled analyses.
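The cancellation point is easy to sketch numerically. A toy illustration only, with synthetic sinusoids rather than model output; the 60-year period, unit amplitude, and 20-run count are arbitrary assumptions:

```python
# Toy sketch: 20 "runs" with the same amplitude and period but random
# phases. Averaging them largely cancels the oscillation, so the
# ensemble mean shows far less variability than any single run.
import numpy as np

t = np.linspace(0, 120, 1201)   # 120 "years", two full cycles
period = 60.0                   # a nominal 60-year oscillation
rng = np.random.default_rng(0)

runs = [np.sin(2 * np.pi * t / period + rng.uniform(0, 2 * np.pi))
        for _ in range(20)]
ensemble_mean = np.mean(runs, axis=0)

print(f"single-run std:    {np.std(runs[0]):.3f}")        # ~0.71
print(f"ensemble-mean std: {np.std(ensemble_mean):.3f}")  # much smaller
```

Had the phases been identical, the ensemble mean would reproduce the full oscillation; it is the phase spread, not the averaging itself, that removes the variability.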

Bill Illis
November 20, 2011 3:21 pm

The biggest issue is how far off the models are right now compared to the recent sea surface temperature trends.
The 0.02C per decade actual results of HadISST is very far from the 0.15C per decade predicted by the modelers (other datasets might be a little higher than 0.02C per decade but they are still far off the predictions).
Let’s remember that AR4 climate models had access to actual numbers up until about 2004. So the only part that they were actually predicting was the last 7 years in which they predicted rapidly rising sea surface temperatures. They went down considerably instead.
No climate model that I am aware of, has provided an accurate predicted trend yet (for anything, including surface temperatures, sea surface temperatures, lower troposphere or ocean heat content).
I don’t understand why they believe they are on the right track. If the models are supposed to represent the physics, then either they have gotten the physics wrong or the models do not, in fact, represent the physics (but rather what the pro-AGW modelers want the climate to do).
Last week, ocean SSTs went down to just 0.08C, the AMO went into the negatives at -0.05C and the La Nina continued developing at -0.8C. The trend will soon be even farther off the models.

Iskandar
November 20, 2011 3:26 pm

Greer,
You have defined in the model the frequency at which you sample. There is no need to reassess that. It is simply Nyquist. You can change the sampling in your model to fit the observations, if you please.
And running stupid models with new initialisations, so as to extract the ones that have skill at backcasting, is fraud.

Hoser
November 20, 2011 3:33 pm

The answer to the criticism of Fig 1 is Animation 1. Out of 30 models, 3 or 4 are somewhat in agreement with the HadISST measured 30-yr anomaly trend in the period 1940–1950 (# 19, 20, 25, 26). About a third are somewhat concordant between 1970 and 2000. Why is anyone excited about this phony garbage data? Because the political issues are important, not any scientific ones. For what it’s worth, I’d consider it a complete embarrassment to have any of these models as a prominent part of my scientific career. Although possibly interesting, they are still just toys.

David Ball
November 20, 2011 3:33 pm

Computer modelers know about computers. They do not know or understand the climate at all.
Computers have their uses, but we are a LONG way from being able to model the climate with anything approximating reality (even short-term weather prediction is extremely difficult).
To base policy on models is absolute insanity.

Iskandar
November 20, 2011 3:33 pm

“Ensemble runs”
This is probably the most bullshit argument that can be made.
If a model has any skill at predicting, it should be perfect at hindcasting. Once such a model is established, it will be seeded with parameters for every parametrized property of the model, and using these parameters, the predictions of the model are presented.
Not an average of 20 or so runs of any model, which is complete garbage.
I really do not understand why people with any form of scientific education give any credibility to this form of utter nonsense. This is Las Vegas-style reasoning: as long as we can make a profit from it, we support it.

Matt G
November 20, 2011 3:39 pm

Steve Mosher says: “But as long as you focus on the timing issue you really cant make the best argument.”
The timing issue only reveals that the model has no idea what the change in the rough sine wave is or how it occurs. If the model knew what the mechanisms are and could follow them, the timing wouldn’t be an issue.

Iskandar
November 20, 2011 3:50 pm

The difference between the model outputs and the recorded temperatures, as depicted in Fig 1 of this post, is really telling. It tells me that the modellers are allowed to take certain aspects of climate into consideration but are told not to incorporate other inconvenient truths. The field of climate modelling has become so politicised that it would cost one’s career to speak out against the common belief.
And a belief it is, in ever stronger terms.

Editor
November 20, 2011 3:53 pm

Jack Greer says: “Tamino is saying the method of averaging model runs and then commenting on the ability of model run averages to demonstrate natural multidecadal variability, initialized to do so or not, indicates a lack of understanding of how natural variability in the context of models s/b analyzed.”
Jack, thanks for the change in tone. You’re still missing something. My post…
http://bobtisdale.wordpress.com/2011/11/19/17-year-and-30-year-trends-in-sea-surface-temperature-anomalies-the-differences-between-observed-and-ipcc-ar4-climate-models/
..illustrated that the trends of the Sea Surface Temperature data are dropping while the IPCC AR4 model mean trend is rising. I summarized that in the Table I provided in the closing to that post.
http://i44.tinypic.com/bg678o.jpg
What you think I’ve failed to address has no bearing on the intent and outcome of that post.
Regards

Iskandar
November 20, 2011 4:05 pm

Bottom line, as far as I am concerned:
Models are useless, they can be tweaked to any desirable output.
No model has any solid and confirmed hindcasting ability.
These two observations lead to the following conclusions:
No policy should be based on model output!
No more money to modelling studies! Complete waste of money!

Theo Goodwin
November 20, 2011 4:53 pm

Nick Stokes says:
November 20, 2011 at 12:24 pm
George E. Smith; says: November 20, 2011 at 10:57 am
“Isn’t the WHOLE IDEA of modelling, to reproduce the OBSERVED DATA; nothing else matters !”
“No, the whole idea of modelling is to figure out what may happen in the future.
Models are based on physics. They can only be expected to reproduce observed data insofar as that data does reflect the physics. Two things happen:
1. The data is noisy. I plotted here three different measures of SST vs the model mean. The difference between the model mean and the observations is comparable to the difference amongst the observations.
2. There are events that we know will occur, and have some idea how often, but don’t know when. Volcanoes are an obvious example. The various oscillations are another. A physical model may reproduce these, but not be specific about the phase. The physics doesn’t tell you that. So when you average over several models, this event information gets lost.”
Your words do not reflect an understanding of what a model is. Our solar system is a model of the rigorously formulated physical hypotheses that were created by Newton and that have been surpassed by Einstein’s work. A model is a set of objects that render true all the statements (rigorously formulated hypotheses) contained in some physical theory such as Newton’s Theory of Gravitation. It is impossible to specify a model without reference to the set of statements that it renders true.
In the street language of the so-called science of climate all that the phrase “model of climate” might mean is a simulation generated by a computer that reproduces all relevant observations recorded by climate scientists. Reproducibility is the only standard of correctness available to the modelers. It is the only standard because there is no way to specify a model of Earth’s climate for the obvious reason that there is no set of physical hypotheses that are reasonably well confirmed and that can be used to both explain and predict climate changes. Therefore, any model that fails to reproduce all recorded observations is a failed model. The models that Mr. Tisdale discusses in this post and the earlier post that gave rise to this post fall way short of reproducing the past and are tinker toys. They embody wonderful hunches about climate but they are hunches only and not science.
If my statements about the formal specification of a model are difficult to understand, please note that they are fully in line with common sense and even the common sense of so-called climate scientists. When you run a computer model and generate a simulation of past recorded observations of climate, isn’t your goal to simulate exactly the recorded set of observations? Your goal is to reproduce the recorded observations. (The key word here is ‘reproduce’.) If that is not your goal, please explain what your goal is when you create a simulation of past observations.
Finally, you write “The physics doesn’t tell you that.” Sir, the physics is the science. You cannot find something in your model that is not in your physics.

H.R.
November 20, 2011 5:00 pm

“If the trend don’t fit, you must quit.”

JPeden
November 20, 2011 5:42 pm

Bill Illis says:
November 20, 2011 at 3:21 pm
“I don’t understand why they [Climate Science Modelers] believe they are on the right track?”
Because they are not doing real science to begin with, and they know it; and they are being rewarded for what they are doing instead. They continue to act as if they are on the right track because they know this alone will keep fooling people who believe they are real scientists doing real science, simply because they say so and are treated that way by other groups of people, both wittingly and unwittingly. Their primary function is instead to serve as propagandists, making “perception” into “reality” so that they can all continue to benefit in obvious ways, at the expense of others like us.
I didn’t expect that to be the case either: that actual scientists would not be doing real science, or that they could get away with it. But it’s really that simple.

Editor
November 20, 2011 6:08 pm

Tamino makes a few statements in his post that I will be happy to agree with:
There are definitely problems with the models.
And:
Certainly the models need more work.

Don’t forget:

Pretty damn good model, right? It should be – we designed it to be correct.

There was also

First let’s look at some artificial data. We’ll start with the global average temperature from GISS and smooth it:

At first I thought that was a dig at GISS, but I guess he’s referring to the smoothed data as artificial.

Stephen Rasey
November 20, 2011 7:31 pm

In the three chart suites on this page there are three different Y-axes. The first is in deg K/year, the second in deg C of anomaly, and the third in deg C/decade. The first and third at least use slightly different units for the same thing, but the second is the integral of the first and third.
This practice does NOT lead to clarity. Graphs are a visual, pattern-recognition means of conveying information. The change of units detracts from valid pattern recognition.

Kim Moore
November 20, 2011 8:16 pm

Foster picked the wrong character from Mozart’s opera when he named himself Tamino. The character he was looking for is Papageno.
“The Papageno character is designed to show the immaturity and manipulability of man—recalling to mind Kant’s famous imperative: Enlightenment is ‘man’s emergence from his self-inflicted tutelage.’ ”
(Helmut Perl’s The Case of Mozart: Testimony about a Misunderstood Genius)

November 20, 2011 10:40 pm

Not an avarage of 20 or so runs of anymodel, which is complete garbage.
Indeed it is. Models are digital software and completely deterministic. Run the same software n times with the same inputs and you will get n identical results.
Except to the extent that pseudo-random functions have been programmed into the software, I assume to simulate natural variability.
Any and all variability in model output is wholly the result of these programmed pseudo-random functions. There is no other possible source of variability. To pretend it has meaning is utter nonsense.
When you average model runs, all you are doing is averaging this artificial randomness. Pseudo-science is too nice a word for what they are doing.
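That averaging point can be illustrated with a deliberately crude sketch. This takes the comment's premise literally (a fixed deterministic trend plus seeded pseudo-random noise); it is not how a real GCM generates internal variability, and the trend, noise level, and run count are made-up numbers:

```python
# Each "run" is the same deterministic trend plus noise from a seeded
# pseudo-random generator. Averaging N runs keeps the trend but shrinks
# the noise by roughly 1/sqrt(N).
import numpy as np

years = np.arange(100)
trend = 0.015 * years            # deterministic component, deg C

def run(seed):
    rng = np.random.default_rng(seed)
    return trend + rng.normal(0.0, 0.2, size=years.size)

runs = np.array([run(s) for s in range(20)])
mean20 = runs.mean(axis=0)

noise_single = np.std(runs[0] - trend)   # ~0.2
noise_mean = np.std(mean20 - trend)      # ~0.2 / sqrt(20)
print(noise_single, noise_mean)
```

Under this premise, the averaged output converges toward the programmed trend while the programmed "variability" shrinks away.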

MikeN
November 21, 2011 12:30 am

Previously, Tamino responded to my claim that the models were a more sophisticated curve fitting, by saying no they represent everything we know about the physical world. When I pointed out all the variables available for tuning in a paper about MIT’s EPPA model v 1.0 the denizens there responded that I should be looking at a different model, as if the MIT model was of no informative value.

Editor
November 21, 2011 2:17 am

Stephen Rasey says: “In the three chart suites on this page there are three different Y-axis…This practice does NOT lead to clarity.”
I prepared the third graph (animation). The y-axis in it is the same as I used in the “17-year and 30-year trend post” that initiated this post. The first two graphs were prepared by Tamino.

November 21, 2011 3:12 am

steven mosher says:
November 20, 2011 at 1:42 pm
Bob,
As you well know models will probably never get the timing right. that’s the initialization problem.
Further if modelers did ‘fiddle” with the initialization states to get the timing correct people would howl.
one way forward is to look at
1. the distribution of all 17 and 30 years trends in ALL the model runs.
2. the distribution of all 17 and 30 year trends in observations.
that will give some insight as to whether or not models have similar variability.
or look at amplitudes.
But as long as you focus on the timing issue you really cant make the best argument.
what you are showing is a logical consequence of the starting conditions imposed
on the test”
Mr Mosher, if there is a physics reason for the somewhat observed 60-year peak-to-peak swing, and the models “will probably never get the timing right,” then the models are very capable of showing a false trend (false from the standpoint of what the earth’s climate is actually doing) for over 30 years. If the models do not know why this cyclic variability occurs, or where we are in it, then logically they will fail to predict the future. The model mean, shown in Figure 4 of the original post, shows a continuous and invariable rise (with very minor blips, flat or a little down) for 100 years, from 1975, a curious time, to 2075. The downturn in the model mean after 2075 is also rather strange.
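For what it’s worth, the trend-distribution diagnostic Mosher describes in the quoted comment could be sketched like this. Both series below are synthetic stand-ins (an invented “observations” series with a 60-year cycle and an invented “model” series without one); every number here is an assumption for illustration:

```python
# Collect every 17-year least-squares trend in each series and compare
# the spreads of the two trend distributions, ignoring timing entirely.
import numpy as np

def rolling_trends(series, window=17):
    """Least-squares slope of every `window`-point segment."""
    t = np.arange(window)
    return np.array([np.polyfit(t, series[i:i + window], 1)[0]
                     for i in range(len(series) - window + 1)])

rng = np.random.default_rng(1)
years = np.arange(140)
obs = (0.005 * years + 0.1 * np.sin(2 * np.pi * years / 60)
       + rng.normal(0, 0.05, years.size))                 # with 60-yr cycle
model = 0.005 * years + rng.normal(0, 0.05, years.size)   # no 60-yr cycle

obs_tr = rolling_trends(obs)
mod_tr = rolling_trends(model)
print(f"obs-like 17-yr trend spread:   {obs_tr.std():.4f}")
print(f"model-like 17-yr trend spread: {mod_tr.std():.4f}")
```

A series containing a multidecadal cycle shows a wider spread of 17-year trends than one without, regardless of whether the cycles line up in time, which is why comparing trend distributions sidesteps the timing issue.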

ldlas
November 21, 2011 3:28 am

Mr. Mosher
Show me that the models work.
Do a guest post instead of making lame (lazy) comments.

Lars P.
November 21, 2011 3:45 am

The models can’t reproduce the history for very simple and obvious reasons: they do not include multidecadal variations, as these are not yet well understood and not accepted as such by the team, since they could explain the whole recent warming as natural variation. Treating them as just random variations is wrong, but the modelers have no theory and cannot develop one within their own definition of what counts as an acceptable theory. This leaves them passively waiting for developments from other, skeptical scientists and trying to fit them as far as possible into their own theory.
The models are based on the greenhouse effect adding the extra warming of the surface relative to an equivalent blackbody, which is a wrong assumption; the Earth is a mix of atmosphere and solid-and-liquid surface acting as a radiating body. Their defined theoretical temperature does not apply to the surface of the Earth.
They ignore the oceans as a direct influence on the temperature difference from a blackbody. The oceans, warmed by the sun down to 200 m and losing heat only at the very surface skin, are the subsystem that defines 90% of Earth’s temperature. This is totally different from a rocky planet with an atmosphere. How can they be so ignorant as to treat the oceans only as heat storage and redistribution? Long-range infrared from the atmosphere exchanges heat only with the top 0.0001 m of the ocean.
So, by playing with aerosols and some variations, the models can reproduce some pattern of warming up to a limit, but the longer the time period, the more obvious their bias toward excessive warming becomes.
Tamino is fighting from a scientifically indefensible position, and he should know it.

November 21, 2011 3:46 am

In fact, Figure 4 shows a flat trend until 1975. It misses the 1945 peak at every point and runs flat for 50 years. The questionable adjustments to the land surface datasets appear to mimic the models’ SST anomaly mean: relatively flat until CO2 kicks in. In the observational SST chart there is a rise (missed by the models because it was too early for CO2 to kick in), an equal fall (missed by the models because they do not show that much natural variability), another rise matching the previous natural rise (this rise caught by the models because of how they attribute the physics of CAGW), and what has the appearance of another fall, time will tell (but so far missed by the models, because to them CO2 dominates).
Looking at the trends, it is no wonder they are starting to say it may be twenty years before the “discernible” human influence on climate is identified.

November 21, 2011 11:34 am

“They find that tropospheric temperature records must be at least 17 years long to discriminate between internal climate noise and the signal of human-caused changes in the chemical composition of the atmosphere.”
That is hilarious! Cicadas’ preference for a 17-year brood cycle is not based on lucky numbers.

November 22, 2011 1:11 am

Tamino is a tool.
He’s that guy that points fingers at everyone else about doing whatever, but will always have three fingers pointing back at himself.
He also reminds me of those water snakes.
The more you squeeze it the more it tries to get away.
Tamino is no different.
You would have better luck changing the mind of an atheist to believe in God than getting this real denier of climate change to admit he is a cherry-picking, finger-pointing, truth-evading, snake-in-the-grass has-been.
And that’s me being nice.
Snip away!

November 22, 2011 8:46 am

Blaming the warming on the oceans is like blaming cars for population. “Look, every time I see a car stop in the parking lot, a person pops out. Obviously cars produce people.”
That’s auto-correlation and auto-eroticism.

LazyTeenager
November 22, 2011 8:41 pm

Why doesn’t Tamino use the real models instead of artificial ones? Because then Tamino would have to show you that the majority of the models do not have multidecadal variations in trend that are similar in timing, frequency, and magnitude of the observation-based SST data. Refer to Animation 1.
—————
Beats me. I look at that Animation 1 and see good agreement between many of the models and the observed sea surface temperatures. I can’t quite establish, though, whether these are multiple models or multiple runs of one model. I’ll have to look into that.
My judgement is informed by the idea that even if you had 2 real earths running side by side you would not get an exact match between the sea surface temperatures.