By Steve Goddard
Yesterday, the Guardian reported:
Meteorologists have developed remarkably effective techniques for predicting global climate changes caused by greenhouse gases. One paper, by Stott and Myles Allen of Oxford University, predicted in 1999, using temperature data from 1946 to 1996, that by 2010 global temperatures would rise by 0.8C from their second world war level. This is precisely what has happened.
Huh?
The temperature rise since WWII reported by CRU is 0.4C (not 0.8C), and it occurred prior to the date of the study. Climate models use thousands of empirically derived back-fit parameters. Given that fact, the only thing remarkable is that their prediction was so far off the mark. Their forecast is the equivalent of me predicting today that Chelsea won 12-0 yesterday: off by a factor of two, and after the fact.
I recently attended a meeting of weather modelers, who told me that their models are effective for about 72 hours, not 60 years. GCMs use the same underlying models as weather forecasters do, plus more parameters which may vary over time.
h/t to reader M White

@Leif Svalgaard
‘It is standard practice in most science that if you state a number like 0.8, it could mean anything between 0.750000… and 0.849999… If one really had it down to a hundredth of a degree, it would be stated 0.80.’
I’m betting you mean in most climate science, and no wonder climate science is a bunch of crap. People are concerned with tenths of a degree but think it’s OK to scrap the hundredths, disregarding the uncertainty between the hundredths and the tenths.
If scientists aren’t concerned with the minute details, as you think, then we can all go ahead and use the politician’s version of accuracy: about one degree.
StevenGoddard wrote:
“GCMs use the same core code as weather models. They have to. They are modeling the same processes.”
Yes, I know they do; however, for weather forecasting they are used to predict the exact trajectory of the atmosphere, while in climate prediction they are used to simulate weather with statistical properties governed by our understanding of climate physics. The same type of models, used in very different ways.
“The code you imagine would be completely useless. Weather/climate consists of massive amounts of detail.”
Yes, and the reason climate modellers don’t attempt to predict weather is because (amongst other things) it would be impossible to measure the initial conditions in sufficient detail. Which is why they simulate weather instead of predicting it. I have made that point several times.
Did you look at the double pendulum example I linked to earlier, which explains why the long term statistical behaviour of chaotic systems can be predicted via simulation, even though the exact trajectory can’t be predicted?
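If not, here is a minimal numerical sketch of the same point (my own illustration, not anything from a GCM), using the logistic map at r = 4 as a much simpler stand-in for the double pendulum:

```python
import numpy as np

def trajectory(x0, n, r=4.0):
    """Iterate the chaotic logistic map x -> r*x*(1-x) n times from x0."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        xs[i] = x
    return xs

a = trajectory(0.3, 100_000)
b = trajectory(0.3 + 1e-10, 100_000)  # a tiny perturbation of the initial state

# The exact trajectories diverge almost immediately (no forecast skill)...
print("difference at step 100:", abs(a[100] - b[100]))

# ...but the long-term statistical behaviour of the two runs agrees closely.
print("long-run means:   ", a.mean(), b.mean())  # both close to 0.5
print("long-run std devs:", a.std(), b.std())    # both close to 0.354
```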
“Again, it is quite clear that you are not familiar with how GCMs work.”
Blogosphere bluster, nothing more.
Leif Svalgaard says:
August 15, 2010 at 9:53 pm
Jaye says:
August 15, 2010 at 9:05 pm
Yes, but in the absence of substantive comments, as one’s “corrections” approach minutiae, one has likely lost the argument.
There really is no argument:
1) The starting year for the analysis is 1946.
2) There are strong indications that 2010 will be the hottest year ‘ever’ [ http://www.physorg.com/news200991063.html ].
3) From 1) and 2) it follows that the increase is 0.8C, and the article referenced in the Guardian is correct [to the extent that our data is correct – but Goddard assumes that too, so no argument there either].
1) The analysis was for the years 1946 through 1996, not “from 1946 on” or for individual year comparisons, like 1946 and 1998 or 1946 and 2010.
2) the “hottest year” is not what the prediction presumes to address, according to the Guardian article: “predicted in 1999, using temperature data from 1946 to 1996, that by 2010 global temperatures would rise by 0.8C from their second world war level.”
Paraphrased, this means that BY 2010 GLOBAL TEMPERATURES will rise by 0.8C.
This can only be taken to mean the mean global temperature. And as Steven has identified more than once, and on the CRU graph he posted, mean temps have not risen by 0.8C. Just by eyeballing the graph, no more than a 0.6C increase can be seen from 1946 to 2009.
3) Actually, no. That one year meets the prediction is not relevant, as the 1998 temperature, from the year before the prediction was made, should demonstrate. Look to Steven’s claims about mean temperatures, or just look at the graph at top, as he advised.
1DandyTroll says:
August 16, 2010 at 3:21 pm
I’m betting you mean in most climate science
No, I meant in science, generally. See, e.g. http://www.ionsource.com/Card/number/number.htm
With all due respect to Steven Goddard, Leif Svalgaard and all the other learned folk here, I must pose a question:
How many pedants can dance together on the head of a pin while ignoring the rotten foundation upon which the pin is standing? The HAD-CRU database is corrupted, as confessed in the harry_read_me.txt file.
– “But what are all those monthly files? DON’T KNOW, UNDOCUMENTED. Wherever I look, there are data files, no info about what they are other than their names. And that’s useless …” (Page 17)
– “As far as I can see, this renders the (weather) station counts totally meaningless.” (57)
– “COBAR AIRPORT AWS (data from an Australian weather station) cannot start in 1962, it didn’t open until 1993!” (71)
– “What the hell is supposed to happen here? Oh yeah — there is no ’supposed,’ I can make it up. So I have : – )” (98)
Read more: Harry Read Me, THE Climategate report | The Daily Inquirer http://www.thedailyinquirer.net/harry-read-me-the-climategate-report/127123
GIGO. It should be called the global garbage anomaly.
You guys are arguing about 0.4 deg. or 0.8 deg. of Garbage whose provenance we know nothing about.
And this doesn’t even address the garbage coming out of NOAA or GISS. There is an inordinate faith in technology and data that borders on zealotry. It leads to the logical fallacy of misplaced precision. People err, machines malfunction. As honorable people we seek the truth, but we easily mislead ourselves. I don’t get too excited about tenths of degrees per decade. My thermometer is calibrated in 2-degree increments, for crying out loud. We just had a 40-degree F swing today.
OTOH, thanks to all for the great reading and deep thinking.
Leif Svalgaard says:
August 16, 2010 at 2:58 pm
“Glenn says:
August 16, 2010 at 1:26 pm
Say Leif, if 2010 turns out to be say +0.762C or +0.786 over the 1946 anomaly, will you claim the prediction of 0.8C came true for 2010?
Absolutely, wouldn’t you? wouldn’t any reasonable person?”
No to both. And you have no excuse for the mistakes you have made, despite being corrected. Had the *mean* global temperature “by 2010” met or exceeded 0.8C, I would. Of course we are (or should be) critiquing the Guardian article, which published the prediction and figure. Even comparing individual years, anything less than 0.8C is, well, not quite 0.8C. Why do you think that unreasonable?
“Glenn says:
August 16, 2010 at 1:33 pm
“Most”? What science does not follow this “standard practice”?
Here is one example:
stevengoddard says:
August 15, 2010 at 4:30 pm
“Average HadCrut anomaly during that period was -0.010125.”
The ‘accuracy’ of this number is not commensurate with the number of decimals shown. A more ‘standard’ way of expressing that number might have been -0.010, but perhaps this was not meant to be ‘science’.”
Thanks for not answering that simple and direct question, Leif. The article claimed that by 2010 global temperatures would “rise by 0.8C” above WWII temps. The article claimed that the scientists used data from 1946–1996 to make the prediction. If the figure 0.8C was used, it would mean 0.8C or above, not something less than that, regardless of the individual data values used to arrive at the prediction, unless they specifically identified a range. The Guardian article did not.
Your attitude here has been to show support for the language in the Guardian article.
Dikran Marsupial
You need to think about how positive feedbacks work. Hansen’s models predicted that Antarctic ice would recede, decreasing albedo, causing further ice loss and warming the ocean.
In fact, the opposite has occurred. The models have to calculate very precise details within each grid cell, or they will head off into the weeds. Which is exactly what they do (head off into the weeds).
Here is an example of how GCMs use Monte Carlo techniques.
Within a grid cell and layer, a certain cloud fraction is input. Rather than trying to precisely identify the positions of clouds for radiative transfer calculations, the modelers assign a random number (based on the cloud fraction) to each greenhouse gas component.
The average LW absorption over a series of time steps works out about the same as if the cloud positions are modeled precisely.
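A toy version of that scheme, with made-up absorption values and cloud fraction (an illustration of the technique, not code from any actual GCM):

```python
import numpy as np

rng = np.random.default_rng(42)

cloud_fraction = 0.3  # fraction of the grid cell covered by cloud (made up)
absorb_clear = 0.60   # hypothetical LW absorption, clear sky
absorb_cloudy = 0.95  # hypothetical LW absorption, overcast

# Instead of tracking exact cloud positions, draw cloud presence at random
# from the cloud fraction on each time step.
n_steps = 10_000
cloudy = rng.random(n_steps) < cloud_fraction
absorption = np.where(cloudy, absorb_cloudy, absorb_clear)

# Averaged over many steps, the Monte Carlo result matches the exact
# fraction-weighted average.
exact = cloud_fraction * absorb_cloudy + (1 - cloud_fraction) * absorb_clear
print("Monte Carlo average:", absorption.mean())  # ~0.705
print("exact average:      ", exact)              # 0.705
```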
Glenn says:
August 16, 2010 at 3:53 pm
Even comparing individual years, anything less than 0.8C is, well, not quite 0.8C. Why do you think that unreasonable?
Because the usual practice is that when you write 0.8 it means anything between 0.75 and 0.85. If you meant a very precise number of 0.8, you would write 0.80 or 0.800 or whatever you think the error is.
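In code, that convention amounts to this (illustrative values only):

```python
# Any value in [0.75, 0.85) reads as "0.8" when stated to one decimal place.
for t in (0.762, 0.786, 0.804, 0.849):
    print(t, "->", round(t, 1))  # every one of these prints 0.8

# Only writing 0.80 or 0.800 would claim precision beyond a tenth of a degree.
```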
Your attitude here has been to show support for the language in the Guardian article.
What has attitude to do with simple arithmetic? What is your attitude? I simply point out that they forecast from a 1946 base out to 2010, and that there are good indications that they will get it right. Simple as that. This does not automatically mean that they will be right for the right reason. Or do you fear that they might?
We don’t really understand how radiation and energy flow in and out of the Earth system. Has anyone actually explained this adequately yet? Trenberth is invoking a mysterious “negative radiative feedback” of -2.8 watts/m2 to cover the lack of warming/heat accumulation observed to date.
I don’t know why we should have such an unchangeable faith in the climate models. Modeling the climate would be a tough enough job on its own, let alone the fact that all the factors are still not understood yet.
The predictions to date are, let’s say, about one-third to one-half right. That is about all we know right now. What is the reason for this error margin? Over-confidence would be one of the reasons.
Next March, when temperatures have fallen 0.2C to 0.3C, we’ll see if someone will try to start over.
stevengoddard writes:
“You need to think about how positive feedbacks work. Hansen’s models predicted that Antarctic ice would recede, decreasing albedo, causing further ice loss and warming the ocean.”
So what? The point is that GCMs don’t work by predicting the weather, but by simulating weather and taking the ensemble average to cancel the stochastic “weather noise”, leaving an estimate of the forced climate change. Hence, it is irrelevant that we can’t predict the weather more than 72 hours in advance; climate modellers know that perfectly well, and their approach does not assume that they can. That really is basic stuff, GCM 101.
It is ironic that an article criticising a bit of sloppy journalism contains a far more egregious error than the target of the OP.
stevengoddard says:
“Here is an example of how GCMs use Monte Carlo techniques.”
Here is another: GCMs are used to run a number of stochastic simulations of the Earth’s climate system’s response to a change in the forcings. The outputs of these runs are averaged to cancel the stochastic variability of each run (i.e. the weather) and give an estimate of the forced response of the system (i.e. climate change). As I said, the whole ensemble of GCM runs is a Monte Carlo simulation.
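A cartoon of the idea with invented numbers (a sketch of the principle, not output from any real model):

```python
import numpy as np

rng = np.random.default_rng(0)

years = np.arange(60)
forced_trend = 0.012 * years  # hypothetical forced warming signal (degC)
n_runs = 50

# Each run = the same forced signal + independent random "weather".
# (Real GCMs generate the weather dynamically; Gaussian noise stands in here.)
runs = forced_trend + rng.normal(0.0, 0.15, size=(n_runs, years.size))

ensemble_mean = runs.mean(axis=0)

# A single run is dominated by weather noise; the ensemble mean is not.
print("single-run error:   ", np.abs(runs[0] - forced_trend).mean())
print("ensemble-mean error:", np.abs(ensemble_mean - forced_trend).mean())
```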
Leif Svalgaard says:
August 16, 2010 at 4:45 pm
“Glenn says:
August 16, 2010 at 3:53 pm
Even comparing individual years, anything less than 0.8C is, well, not quite 0.8C. Why do you think that unreasonable?
Because the usual practice is that when you write 0.8 it means anything between 0.75 and 0.85. If you meant a very precise number of 0.8, you would write 0.80 or 0.800 or whatever you think the error is.”
No, Leif, when I write that a temperature will be .8C more than a previous temperature, I do not mean .75C. In context to the subject, I would be right if the temp was .8C or more. When you add 0.122 to 0.6 you do not get 0.672 or 0.8.
“Your attitude here has been to show support for the language in the Guardian article.
what has attitude to do with a simple arithmetic? What is your attitude? I simply point out that they forecast from a 1946 base out to 2010, and that there are good indications that they will get it right. Simple as that. This does not automatically mean that they will be right for the right reason. Or do you fear that they might?”
The Guardian article won’t be “right” for any reason, Leif. “To 2010” excludes 2010 itself, using your own language, which by the way means the same as “by 2010”. And as I showed you, using your curious rounding argument, the “prediction” came true the year before the referenced article was written, by comparing yearly temperatures. That isn’t in any way “right”, or a prediction at all. But it appears you want the Guardian article to appear “right”, and you have used several curious phrases and claims to make it appear so: creating the appearance of the prediction as a comparison of two single-year global anomalies, claiming that 2010 is one of those years, and advocating an acceptable “rounding error”.
Look at the graph above: if the little black line had not stalled around 1999, the mean temp by 2010 might have met the predicted 0.8C. It did stall, and it will not meet the prediction, even if you use 2010 (looking like it will be a hot year, gee, I wonder why you keep insisting on using it) and it turns out much hotter than expected.
You can try to cheat with numbers, but La Nina will still get the last laugh.
Glenn says:
August 16, 2010 at 5:38 pm
No, Leif, when I write that a temperature will be .8C more than a previous temperature, I do not mean .75C.
As I said, “fuzzy math”…
Dikran Marsupial says:
The point is that GCMs don’t work by predicting the weather, but by simulating weather and taking the ensemble average to cancel the stochastic “weather noise”, leaving an estimate of the forced climate change.
Does it really leave you an estimate of the forced climate change, though? It is a pretty weak stand for the IPCC to take that the net natural forcings are ~0. I don’t think that is justified and I also believe that when you factor in the contamination in the temperature record, we can see that the models are actually much further off than they appear.
Weather isn’t exactly random anyway, unless you are speaking strictly in weather terms. When we are talking about climate, the world follows a predictable pattern (seasons, anyone?). The StDev is just large because of the myriad factors in play. Science would be better off trying to develop an intricate understanding of the weather. Characterizing the response of nature to the inputs and outputs of the system as random is, imho, essentially giving up on the meat of the problem. If you can’t predict the weather, you don’t understand natural forcings well enough for me to trust you.
The DMI temps, the hurricane predictions, the ‘BBQ summer’, and the failure to predict specific events like the Russian heat wave and the Pakistan floods all point to a ‘101’ level of misunderstanding of the problem. I am not attacking you with this post, my kangaroo friend, I am simply pointing out some of the reasons I think warrant skepticism. I am a much friendlier person to disagree with face-to-face.
Dave F says:
“Does it really leave you an estimate of the forced climate change, though?”
Whether it works or not is irrelevant to the point – climate prediction does not depend on accurately predicting weather; it works by simulating weather. I have no problem with anyone deciding they don’t trust the models, as long as they don’t mislead with incorrect arguments such as “we can’t predict the weather more than 72 hours in advance, so we can’t predict the climate 60 years in advance”. This is a non sequitur, as climate predictions are not made on the basis of predicting the weather.
“If you can’t predict the weather, you don’t understand natural forcings well enough for me to trust you.”
This is faulty reasoning. The behaviour of a double pendulum can be described in a few equations, but you still can’t predict its actual trajectory. This is the essence of chaos (as opposed to randomness): even if you understand natural forcings *perfectly*, you still can’t predict the weather more than a few days in advance, because the weather is also a chaotic system.
Glenn says:
August 16, 2010 at 5:38 pm
No, Leif, when I write that a temperature will be .8C more than a previous temperature, I do not mean .75C. In context to the subject, I would be right if the temp was .8C or more. When you add 0.122 to 0.6 you do not get 0.672 or 0.8.
Well, what you would do is not the issue; rather, what any scientist does is exactly what Leif described. This is elementary high school science; check significant figures and rounding rules.
…even if you understand natural forcings *perfectly*, you still can’t predict the weather more than a few days in advance, because the weather is also a chaotic system…
The weather is just a sum of responses to various energy conditions, though, is it not? Weathermen give very reasonable, and physically scientific, explanations for the day’s weather pattern. Anyhow, I think that it may be less chaotic than it appears if we were to study it further. I do notice that the weather generally follows the same patterns seasonally, but it seems that the standard deviation of the data is still large enough to make any signals teased out by averaging questionable.
Check the graph of the area I live in.
http://en.wikipedia.org/wiki/Cincinnati#Climate
The average high for the year is 17.8C (just to obey the rules Leif has set out 😉 ). Yet you can find a monthly average 14C less than that, or 12C more, depending on what month/season you look at. That is a pretty large variation when you consider that the monthly averages are each constructed from roughly 30 days of data spread over how many years? As far as I know, the StDev shows up best with more samples, so we can generally say that the StDev of the annual average is pretty large, right? So why would 0.8C over ~70 years start to become significant? I would say that is well within the range of weather variation, and that those variations are averaged out of the figures the anomalies are derived from. So if you want to attribute these things to something, the onus is on your theory to eliminate the smoothing as the cause first.
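To put rough numbers on that, here is a sketch using a sinusoidal stand-in for the seasonal cycle (fitted to the figures quoted above, not the actual Wikipedia table):

```python
import numpy as np

# A sinusoidal stand-in for the seasonal cycle described above -- these are
# NOT the actual Wikipedia values for Cincinnati, just a curve matching the
# quoted figures: annual mean high ~17.8C, dipping well below in winter and
# rising well above in summer.
months = np.arange(12)
monthly_high = 17.8 + 13.0 * np.sin(2 * np.pi * (months - 2.5) / 12)

print("annual mean high:", monthly_high.mean())  # 17.8C by construction
print("seasonal range:  ", np.ptp(monthly_high)) # ~25C peak to trough
print("seasonal std dev:", monthly_high.std())   # ~9C

# The question raised above: how is a 0.8C shift in the long-term mean
# separated from a seasonal swing some thirty times larger?
```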
People are worried about spreadsheet copy-and-paste precision to the third decimal point, while they argue for data which is off by 100%. Weird.
Dikran Marsupial
If you average together 10,000 numbers which are biased too high, you get an average which is ………. too high.
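The point in a few runnable lines (illustrative numbers only):

```python
import numpy as np

# Random noise cancels under averaging; a systematic bias does not.
# The noise level and bias below are invented for illustration.
rng = np.random.default_rng(1)

true_value = 0.0
noise = rng.normal(0.0, 0.5, size=10_000)  # random measurement error
bias = 0.4                                  # systematic error shared by every reading

readings = true_value + noise + bias
print("mean of 10,000 biased readings:", readings.mean())  # ~0.4, not 0.0
```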
Phil. says:
August 16, 2010 at 7:08 pm
“Well what you would do is not the issue, rather what any scientist does is exactly what Leif described. This is elementary High school science, check significant figures and rounding rules.”
It appears Leif would disagree, as he said: “It is standard practice in most science”.
But if you wish to claim that 0.122 + 0.6 = 0.672, be my guest.
Leif Svalgaard says:
August 16, 2010 at 6:00 pm
Glenn says:
August 16, 2010 at 5:38 pm
No, Leif, when I write that a temperature will be .8C more than a previous temperature, I do not mean .75C.
“As I said, “fuzzy math”…”
Say “hfh9hekjberjkb”, makes about as much sense, Svalgaard. Seems you are of the school where 0.122 + 0.6 = 0.672.
Glenn says:
August 16, 2010 at 9:27 pm
“As I said, “fuzzy math”…”
Say “hfh9hekjberjkb”, makes about as much sense, Svalgaard. Seems you are of the school where 0.122 + 0.6 = 0.672.
It seems you have left science behind and have run out of anything worthwhile to contribute.
Glenn says:
August 16, 2010 at 9:24 pm
When you add 0.122 to 0.6 you do not get 0.672 or 0.8.
But when you factor in the 1946 anomaly you get 0.600 - (-0.204) = 0.804.