
Tamino Misses The Point And Attempts To Distract His Readers
By Bob Tisdale
The obvious intent of my recent post “17-Year And 30-Year Trends In Sea Surface Temperature Anomalies: The Differences Between Observed And IPCC AR4 Climate Models” was to illustrate the divergence between the IPCC AR4 projected Sea Surface Temperature trends and the trends of the observations as presented by the Hadley Centre’s HADISST Sea Surface Temperature dataset. Tamino has written a response with his post “Tisdale Fumbles, Pielke Cheers.” Obviously he missed the point of the post. Since he does not address this divergence, his post is simply a distraction. That fact is blatantly obvious. Everyone reading his post will realize this, though it is doubtful his faithful followers will call his attention to it. Tamino resorts to smoke and mirrors once again. But let’s look at a few of the points he tries to make.
Tamino objects to this statement that is included on all of the graphs in the “17-year and 30-year trends post”:
The Models Do Not Produce Multidecadal Variations In Sea Surface Temperature Anomalies Comparable To Those Observed, Because They Are Not Initialized To Do So. This, As It Should Be, Is Also Evident In Trends.
The reason I included that statement was because I have illustrated and discussed the lack of multidecadal variability in the IPCC AR4 models in earlier posts and I wanted to draw the readers’ attention to the difference between the trends of the model mean and the observed trends. It’s really that simple.
Tamino makes the following statement toward the end of the post:
“There are definitely problems with the models. For one thing, they don’t reproduce the rapid warming of sea surface temperature from 1915 to 1945 as strongly as the observed data indicate. But overall they’re not bad, and the amount of natural variability they show is realistic.”
But the fact that “For one thing, they don’t reproduce the rapid warming of sea surface temperature from 1915 to 1945 as strongly as the observed data indicate” means the Sea Surface Temperatures of the models also don’t flatten from 1945 to 1975 as the observations do, and it’s those two portions of the multidecadal variations in sea surface temperatures that are known to be missing in the models. That’s what’s being referred to on each of the graphs in red. The models capture the rise in temperature from 1975 to 2000, but they do not capture the rise and flattening from 1910 to 1975.
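The kind of trend series discussed throughout this exchange, a least-squares slope computed over every overlapping 30-year window, can be sketched as follows. The anomaly series below is synthetic (a small linear trend plus a hypothetical 60-year oscillation), purely to illustrate the calculation; it is not HADISST data.

```python
# Sketch: 30-year rolling trends from an annual anomaly series.
# The data are synthetic stand-ins, not HADISST observations.
import numpy as np

years = np.arange(1900, 2011)
# Hypothetical anomalies: slow warming plus a multidecadal oscillation
anoms = 0.005 * (years - 1900) + 0.1 * np.sin(2 * np.pi * (years - 1900) / 60.0)

window = 30  # trend window in years
trends = []
for i in range(len(years) - window + 1):
    # Least-squares slope over each 30-year window, in deg C per year
    slope = np.polyfit(years[i:i + window], anoms[i:i + window], 1)[0]
    trends.append(slope)
trends = np.array(trends)

# Date each trend to the end year of its window, as in the post's graphs
end_years = years[window - 1:]
```

With a multidecadal oscillation in the input, the rolling trends rise and fall around the long-term slope, which is the behavior the post's graphs plot.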
Tamino presents a comparison of 30-year trends for HADISST, the model mean, and the 9 runs of the GISS Model ER, which I’ve reproduced here as Figure 1. He then writes:
Note that the individual model runs show much more variability than the multi-model mean. In fact they show variability comparable to that shown by the observed data.
I’ve highlighted a portion of his graph in Figure 1 that he obviously overlooked. Look closely at the significant rise in trends of the HADISST data in the early 20th century, and then the equally impressive decline in trends. Do any of the GISS model runs produce the “Multidecadal Variations In Sea Surface Temperature Anomalies Comparable To Those Observed” during the early part of the 20th century? No. So thank you for confirming one of my points, Tamino. It also contradicts your nonsensical statement, “In fact they show variability comparable to that shown by the observed data.”
Figure 1
Tamino also goes into a detailed discussion of how the model mean can obscure any multidecadal variations in the individual model runs. But note that he doesn’t use the actual model runs. He uses “Artificial Models”. Refer to Figure 2. Artificial models?
Figure 2
Why doesn’t Tamino use the real models instead of artificial ones? Because then Tamino would have to show you that the majority of the models do not have multidecadal variations in trend that are similar in timing, frequency, and magnitude to the observation-based SST data. Refer to Animation 1.
Animation 1
I could have provided that animation in my post, but I elected not to present it because it added no value to the post.
CLOSING
As I noted earlier, Tamino’s post is simply a distraction from my post “17-Year And 30-Year Trends In Sea Surface Temperature Anomalies: The Differences Between Observed And IPCC AR4 Climate Models”, which showed the divergence between the trends of the IPCC AR4 model mean for global Sea Surface Temperatures and the observed Sea Surface Temperature trends.
Tamino makes a few statements in his post that I will be happy to agree with:
There are definitely problems with the models.
And:
Certainly the models need more work.
Thanks for the opportunity to call attention to my post once again, Tamino.



If the artificial models were “perfect” and perfectly initialized, then the multi-model ensemble mean would be a good match to the natural variability. Attempting to match natural variability with a multi-model ensemble mean is meaningless when the models are on a phase walk and the endpoint-analysis technique requires agreement on the phase relationship of the signal. In a multi-model ensemble, each individual model’s output is phase-additive or phase-subtractive relative to the natural variability that model is constrained to, producing an ensemble mean that is unlikely to agree with natural variability. That all makes logical sense.
However, all Tamino is doing is statistical handwaving to prove a well-understood behavior of complex waveforms – something any RF or signal-processing engineer would understand intuitively. This useless exercise in no way proves Bob’s technique is wrong. As such, it does seem to be a distraction: while the assertions about model ensembles are valid, they have no relevant application to Tamino’s conclusion.
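The phase-cancellation argument in these two comments can be illustrated with a toy calculation: several synthetic “runs” that each carry the same-amplitude 60-year oscillation, but at independent random phases, average to a series whose oscillation largely cancels. Everything here is made up for illustration; no real model output is involved.

```python
# Sketch of phase cancellation in an ensemble mean. Each synthetic "run"
# free-runs at its own random phase, so averaging cancels the oscillation.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(1900, 2011, dtype=float)

n_runs = 20
runs = []
for _ in range(n_runs):
    phase = rng.uniform(0, 2 * np.pi)   # each run has its own phase
    runs.append(0.2 * np.sin(2 * np.pi * t / 60.0 + phase))
runs = np.array(runs)

ensemble_mean = runs.mean(axis=0)
# Each individual run swings +/- 0.2; the out-of-phase average swings far less.
```

The peak-to-peak range of the ensemble mean comes out much smaller than that of any individual run, which is exactly the "cancellation of variability" the comments describe.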
What is a multi-model mean? A useless measure to hide the inconvenient truth that models do not live up to their expectations. Therefore we run the models numerous times, with different parameters, so in the end we will be able to cover any unforeseen modification of the climate system.
Models are useless.
Until you have a firm grasp of the system you are modelling, e.g., the performance of silicon chips, however complicated they may be.
But weather is too complex: two days is an impossible task, let alone climate at 30+ years. The people working on climate predictions are making a mockery of their own profession, since they are the ones who know how catastrophically wrong they and their models are.
This is not just a question of garbage in, garbage out; that which you are throwing your garbage into is, in itself, garbage.
Garbage^3 would be a proper designation.
I really do not give a penny for the models.
And I refuse to pay for my energy more, based on these silly models.
Nick Stokes@12:14
“Models are based on the physics that we DO know” would be a better statement to make.
The inability to hindcast a known phenomenon demonstrates that the physics incorporated into climate models lacks a multitude of fundamental understandings.
When you have large-scale shifts in climate, such as the approximately 60-year oscillation, yet can’t hindcast them even when the oscillation is known, it shows how little veracity predictions of future events of any type warrant.
Those of you who have come here to try and defend “the enigmatic climate blogger who runs the Open Mind site and keeps his identity deeply under wraps” should go to his site and donate.
Of course, to do this, you’ll be supporting “Peaseblossom’s Closet” and the donation is for “Mistletoe”.
WUWT?
At least here, if you donate, you KNOW it’s going to surfacestations.org.
The only “Peaseblossom” I know of is a character in “A Midsummer Night’s Dream”. There, the character is listed as one of Titania’s fairy servants.
Oh, well…
@DirkH & Gail,
Evaluating ensemble runs to estimate a mean is a different analysis than evaluating the characteristics of model-run variability around the mean and over time. If the timing of variability between models and runs is not precisely in synch, and there’s no reason to expect it would be, out-of-phase averaging basically cancels the variability in the averaged results … like a low-pass filter that strips out the out-of-phase fluctuations. Steven Mosher described what might be a more fitting approach to evaluate variability in modeled analysis.
The biggest issue is how far off the models are right now compared to the recent sea surface temperature trends.
The 0.02C per decade actual results of HadISST is very far from the 0.15C per decade predicted by the modelers (other datasets might be a little higher than 0.02C per decade but they are still far off the predictions).
Let’s remember that AR4 climate models had access to actual numbers up until about 2004. So the only part that they were actually predicting was the last 7 years in which they predicted rapidly rising sea surface temperatures. They went down considerably instead.
No climate model that I am aware of has provided an accurate predicted trend yet (for anything, including surface temperatures, sea surface temperatures, lower troposphere or ocean heat content).
I don’t understand why they believe they are on the right track? If the models are supposed to represent the physics, then they have gotten the physics wrong, or the models do not, in fact, represent the physics (but rather represent what the pro-AGW modelers want the climate to do).
Last week, ocean SSTs went down to just 0.08C, the AMO went into the negatives at -0.05C and the La Nina continued developing at -0.8C. The trend will soon be even farther off the models.
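For readers who want to check this sort of number themselves, a trend in deg C per decade is just an ordinary least-squares slope rescaled. The sketch below fits a slope to synthetic monthly anomalies built with a 0.02 C/decade trend plus noise; both the series and the 0.02 figure are stand-ins, not actual HadISST values.

```python
# Sketch: expressing a fitted linear trend in deg C per decade.
# Synthetic monthly anomalies, not actual observational data.
import numpy as np

rng = np.random.default_rng(42)
months = np.arange(12 * 30)            # 30 years of monthly samples
years_frac = months / 12.0             # time axis in years

true_trend = 0.02 / 10.0               # 0.02 C per decade, in C per year
anoms = true_trend * years_frac + rng.normal(0.0, 0.1, size=months.size)

slope_per_year = np.polyfit(years_frac, anoms, 1)[0]
slope_per_decade = slope_per_year * 10.0   # rescale C/year -> C/decade
```

Over 360 monthly points the fitted slope lands close to the built-in 0.02 C/decade despite the noise.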
@Jack Greer,
You have defined in the model at what frequency you sample. There is no need to reassess that. It is simply Nyquist. Please change the sampling in your model to fit the observations.
And running stupid models with new initialisations, so as to extract the ones that have skill in backcasting, is fraud.
The answer to the criticism of Fig 1 is Animation 1. Out of 30 models, 3 or 4 are somewhat in agreement with the HadISST measured 30-yr anomaly trend in the period 1940–1950 (# 19, 20, 25, 26). About a third are somewhat concordant between 1970 and 2000. Why is anyone excited about this garbage phony data? Because the political issues are important, not any scientific ones. For what it’s worth, I’d consider it a complete embarrassment to have any of these models as a prominent part of my scientific career. Although possibly interesting, they are still just toys.
Computer modelers know about computers. They do not know or understand the climate at all.
Computers have their uses, but we are a LONG way from being able to model the climate with anything approximating reality (even short-term weather prediction is extremely difficult).
To base policy on models is absolute insanity.
“Ensemble runs”
This is probably the most bullshit argument that can be made.
If a model has any skill at predicting, it should be perfect at hindcasting. Once such a model is established, it will be seeded with parameters for every parametrized property of the model. And using these parameters, the predictions of the model are presented.
Not an average of 20 or so runs of any model, which is complete garbage.
I really do not understand why people with any form of scientific education give any credibility to this form of utter nonsense. This is Las Vegas-style reasoning: as long as we can make a profit off it, we support it.
Steve Mosher says: “But as long as you focus on the timing issue you really cant make the best argument.”
The timing issue only reveals that the model has no idea what and how the change in the rough sine wave occurs. If the model knows what the mechanisms are and can follow them, the timing wouldn’t be an issue.
The difference between the model outputs and the recorded temperatures, as depicted in Fig 1 of this post, is really telling. It informs me that the modellers are allowed to take into consideration some aspects of climate, but are told not to incorporate other inconvenient truths. The area of climate modelling has become so politicised that it will cost one’s career to speak out against the common belief.
And belief it is, in ever stronger terms.
Jack Greer says: “Tamino is saying the method of averaging model runs and then commenting on the ability of model run averages to demonstrate natural multidecadal variability, initialized to do so or not, indicates a lack of understanding of how natural variability in the context of models s/b analyzed.”
Jack, thanks for the change in tone. You’re still missing something. My post…
http://bobtisdale.wordpress.com/2011/11/19/17-year-and-30-year-trends-in-sea-surface-temperature-anomalies-the-differences-between-observed-and-ipcc-ar4-climate-models/
…illustrated that the trends of the Sea Surface Temperature data are dropping while the IPCC AR4 model mean trend is rising. I summarized that in the Table I provided in the closing to that post.
http://i44.tinypic.com/bg678o.jpg
What you think I’ve failed to address has no bearing on the intent and outcome of that post.
Regards
Bottom line, as far as I am concerned:
Models are useless, they can be tweaked to any desirable output.
No model has any solid and confirmed hindcasting ability.
These two observations lead to the following conclusions:
No policy should be based on model output!
No more money to modelling studies! Complete waste of money!
Nick Stokes says:
November 20, 2011 at 12:24 pm
George E. Smith; says: November 20, 2011 at 10:57 am
“Isn’t the WHOLE IDEA of modelling, to reproduce the OBSERVED DATA; nothing else matters !”
“No, the whole idea of modelling is to figure out what may happen in the future.
Models are based on physics. They can only be expected to reproduce observed data insofar as that data does reflect the physics. Two things happen:
1. The data is noisy. I plotted here three different measures of SST vs the model mean. The difference between the model mean and the observations is comparable to the difference amongst the observations.
2. There are events that we know will occur, and have some idea how often, but don’t know when. Volcanoes are an obvious example. The various oscillations are another. A physical model may reproduce these, but not be specific about the phase. The physics doesn’t tell you that. So when you average over several models, this event information gets lost.”
Your words do not reflect an understanding of what a model is. Our solar system is a model of the rigorously formulated physical hypotheses that were created by Newton and that have been surpassed by Einstein’s work. A model is a set of objects that render true all the statements (rigorously formulated hypotheses) contained in some physical theory such as Newton’s Theory of Gravitation. It is impossible to specify a model without reference to the set of statements that it renders true.
In the street language of the so-called science of climate all that the phrase “model of climate” might mean is a simulation generated by a computer that reproduces all relevant observations recorded by climate scientists. Reproducibility is the only standard of correctness available to the modelers. It is the only standard because there is no way to specify a model of Earth’s climate for the obvious reason that there is no set of physical hypotheses that are reasonably well confirmed and that can be used to both explain and predict climate changes. Therefore, any model that fails to reproduce all recorded observations is a failed model. The models that Mr. Tisdale discusses in this post and the earlier post that gave rise to this post fall way short of reproducing the past and are tinker toys. They embody wonderful hunches about climate but they are hunches only and not science.
If my statements about the formal specification of a model are difficult to understand, please note that they are fully in line with common sense and even the common sense of so-called climate scientists. When you run a computer model and generate a simulation of past recorded observations of climate, isn’t your goal to simulate exactly the recorded set of observations? Your goal is to reproduce the recorded observations. (The key word here is ‘reproduce’.) If that is not your goal, please explain what your goal is when you create a simulation of past observations.
Finally, you write “The physics doesn’t tell you that.” Sir, the physics is the science. You cannot find something in your model that is not in your physics.
“If the trend don’t fit, you must quit.”
Bill Illis says:
November 20, 2011 at 3:21 pm
“I don’t understand why they [Climate Science Modelers] believe they are on the right track?”
Because they are not doing real science to begin with, and they know it; and they are getting rewarded for what they are doing instead. They continue to act like they are on the right track because they know this alone will continue to fool people who believe they are really acting like real scientists doing real science, simply because they say so and are treated that way by other groups, both wittingly and unwittingly. Their primary function is instead to act as propagandists, making “perception” into “reality” so that they can all continue to benefit in obvious ways, at the expense of others like us.
I didn’t expect that to be the case either, that actual scientists would not be doing real science or that they could get away with it. But it’s really that simple.
Don’t forget:
There was also
At first I thought that was a dig at GISS, but I guess he’s referring to the smoothed data as artificial.
In the three chart suites on this page there are three different Y-axes. The first is in Deg K / Year, the second in Deg C from anomaly, and the third in Deg C / decade. Now the first and third at least use slightly different units for the same thing, but the second is the integral of the first and third.
This practice does NOT lead to clarity. Graphs are a visual, pattern recognition, means of conveying information. The change of units detracts from valid pattern recognition.
Foster picked the wrong character from Mozart’s opera when he named himself Tamino. The character he was looking for is Papageno.
“The Papageno character is designed to show the immaturity and manipulability of man—recalling to mind Kant’s famous imperative: Enlightenment is ‘man’s emergence from his self-inflicted tutelage.’ ”
(Helmut Perl’s The Case of Mozart: Testimony about a Misunderstood Genius)
Not an average of 20 or so runs of any model, which is complete garbage.
Indeed it is. Models are digital software and completely deterministic. Run the same software n times with the same inputs and you will get n identical results.
Except to the extent that pseudo-random functions have been programmed into the software (to simulate natural variability, I assume).
Any and all variability in model output is wholly the result of these programmed pseudo-random functions. There is no other possible source for variability. To pretend it has meaning is utter nonsense.
When you average model runs, all you are doing is averaging this artificial randomness. Pseudo-science is too nice a word for what they are doing.
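The determinism point can be demonstrated with a trivial stand-in for a model: run it twice with the same seed and the outputs are bit-identical; run-to-run spread appears only when the pseudo-random forcing is seeded differently. This is a toy, not a claim about how any actual GCM structures its code.

```python
# Toy illustration of determinism: identical seed -> identical output;
# run-to-run spread comes only from the seeded pseudo-random component.
import numpy as np

def toy_run(seed, n_years=100):
    """Hypothetical stand-in: a deterministic forced trend plus seeded noise."""
    rng = np.random.default_rng(seed)
    forcing = 0.01 * np.arange(n_years)                    # deterministic part
    weather = 0.05 * rng.normal(0.0, 1.0, n_years).cumsum()  # seeded "weather"
    return forcing + weather

a = toy_run(seed=1)
b = toy_run(seed=1)   # same code, same inputs, same seed
c = toy_run(seed=2)   # differs only because the seed differs
```

Here `a` and `b` are identical element for element, while `c` diverges, which is the whole of the variability being averaged in an ensemble of such runs.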
Previously, Tamino responded to my claim that the models were just more sophisticated curve fitting by saying no, they represent everything we know about the physical world. When I pointed out all the variables available for tuning in a paper about MIT’s EPPA model v1.0, the denizens there responded that I should be looking at a different model, as if the MIT model was of no informative value.
Stephen Rasey says: “In the three chart suites on this page there are three different Y-axis…This practice does NOT lead to clarity.”
I prepared the third graph (animation). The y-axis in it is the same as I used in the “17-year and 30-year trend post” that initiated this post. The first two graphs were prepared by Tamino.
steven mosher says:
November 20, 2011 at 1:42 pm
Bob,
As you well know models will probably never get the timing right. that’s the initialization problem.
Further if modelers did ‘fiddle” with the initialization states to get the timing correct people would howl.
one way forward is to look at
1. the distribution of all 17 and 30 years trends in ALL the model runs.
2. the distribution of all 17 and 30 year trends in observations.
that will give some insight as to whether or not models have similar variability.
or look at amplitudes.
But as long as you focus on the timing issue you really cant make the best argument.
what you are showing is a logical consequence of the starting conditions imposed
on the test.
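The comparison Mosher sketches in points 1 and 2 could look something like this: pool every overlapping 17- or 30-year trend from all runs into one distribution and compare it with the distribution of trends in the observations, so that timing drops out and only the spread matters. All series below are synthetic placeholders, not model output or observations.

```python
# Sketch of Mosher's suggestion: compare the distribution of all windowed
# trends across model runs with that of the observations. Synthetic data only.
import numpy as np

rng = np.random.default_rng(7)

def all_window_trends(series, window):
    """Slopes of every overlapping `window`-year segment, in C per year."""
    x = np.arange(window, dtype=float)
    return np.array([np.polyfit(x, series[i:i + window], 1)[0]
                     for i in range(len(series) - window + 1)])

n_years = 140
# Ten hypothetical "runs": a common trend plus run-specific red noise
runs = [0.006 * np.arange(n_years) + 0.1 * rng.normal(0, 0.08, n_years).cumsum()
        for _ in range(10)]
obs = 0.006 * np.arange(n_years) + 0.1 * rng.normal(0, 0.08, n_years).cumsum()

model_trends = np.concatenate([all_window_trends(r, 30) for r in runs])
obs_trends = all_window_trends(obs, 30)

# Comparing the spreads (e.g. standard deviations) of the two distributions
# asks whether the runs show comparable variability, whatever the phase.
spread_models, spread_obs = model_trends.std(), obs_trends.std()
```

Because the distributions are pooled over all start years, no model run has to get the timing of any oscillation right for the comparison to be meaningful.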
Mr Mosher, if there is a physics reason for the somewhat observed 60-year peak-to-peak swing, and the models “will probably never get the timing right”, then the models are very capable of showing a false trend (false from the standpoint of what the Earth’s climate is actually doing) for over 30 years. If the models do not know why or where we are in this cyclic variability, then logically they will fail to predict the future. The model mean, shown in Figure 4 of the original post, shows a continuous and almost invariable rise (with very minor blips, flat or slightly down) for 100 years, from 1975, a curious time, to 2075. The downturn in the model mean after 2075 is also rather strange.
Mr. Mosher
Show me that the models work.
Do a guest post instead of making lame (lazy) comments.