The Guardian Fails Their O-Levels

By Steve Goddard

CRU Temperature Anomalies

Yesterday, the Guardian reported:

Meteorologists have developed remarkably effective techniques for predicting global climate changes caused by greenhouse gases. One paper, by Stott and Myles Allen of Oxford University, predicted in 1999, using temperature data from 1946 to 1996, that by 2010 global temperatures would rise by 0.8C from their second world war level. This is precisely what has happened.

Huh?

The temperature rise since WWII reported by CRU is 0.4C (not 0.8C), and it occurred prior to the date of the study. Climate models use thousands of empirically derived back-fit parameters. Given that, the only remarkable thing is that their prediction was still so far off the mark. Their forecast is the equivalent of my predicting today that Chelsea won 12-0 yesterday: off by a factor of two, and after the fact.
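The disagreement over 0.4C versus 0.8C turns on what counts as the "second world war level". A minimal sketch, using the CRU-style anomaly values quoted in the comment thread below (illustrative numbers, relative to the 1961-1990 baseline):

```python
# Illustrative only: anomaly values quoted in the comment thread,
# both relative to the 1961-1990 CRU baseline.
anomaly_1946 = -0.204   # quoted anomaly for 1946
anomaly_recent = 0.600  # quoted recent anomaly

# Rise measured from the 1946 anomaly itself:
rise_from_1946 = anomaly_recent - anomaly_1946
print(round(rise_from_1946, 3))   # 0.804 -- the ~0.8 C figure

# Rise measured from the baseline zero (the ~0.6 C figure):
rise_from_baseline = anomaly_recent
print(round(rise_from_baseline, 3))  # 0.6
```

Neither subtraction is "wrong"; they answer different questions, which is why the thread below argues over which reference level the 1999 paper intended.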

I recently attended a meeting of weather modelers, who told me that their models are effective for about 72 hours, not 60 years. GCMs are built on the same underlying models, plus additional parameters which may vary over time.

h/t to reader M White

Discover more from Watts Up With That?

214 Comments
Glenn
August 16, 2010 9:57 pm

Leif Svalgaard says:
August 16, 2010 at 9:40 pm
Glenn says:
August 16, 2010 at 9:24 pm
When you add 0.122 to 0.6 you do not get 0.672 or 0.8.
But when you add the 1946 anomaly you get 0.804 = 0.600 – (-0.204).
And when you add mayonnaise, mustard and ketchup you get McDonald’s secret sauce. So what?

Glenn
August 16, 2010 9:59 pm

Leif Svalgaard says:
August 16, 2010 at 9:38 pm
Glenn says:
August 16, 2010 at 9:27 pm
“As I said, “fuzzy math”…”
Saying "hfh9hekjberjkb" makes about as much sense, Svalgaard. It seems you are of the school where 0.122 + 0.6 = 0.672.
“It seems you have left science behind and have run out of anything worthwhile to contribute.”
Or you are looking in the mirror wondering what to say that won’t make you look so silly.

Spector
August 16, 2010 11:58 pm

All this heat and smoke over 0.4 deg C… 🙂 I suspect that many of the general public assume that Global Warming has caused a scientifically proven average temperature increase of at least 3 to 7 deg C, in order to account for all those record high extreme temperatures reported around the world and to threaten the sudden melting of the polar ice-caps. One could make a real comedy routine about someone proclaiming that a 0.6 deg C average temperature increase since 1880 meant the end of the world was at hand.

richard telford
August 17, 2010 12:15 am

“I recently attended a meeting of weather modelers, who told me that their models are effective for about 72 hours, not 60 years. GCMs use the same underlying models as weather modelers, plus more parameters which may vary over time.”
This is entirely true: only a fool, a fraud, or Piers Corbyn would claim to be able to forecast whether it will rain on your wedding next year. It is also completely irrelevant and profoundly misleading – when did you ever see a climate scientist predict the weather 60 years hence?
As I am sure has been explained to you many times, weather forecasting is an initial value problem: our inability to precisely measure all aspects of the weather at the start of the forecast introduces errors that propagate and grow large enough to rob the forecast of skill. Climate projections, by contrast, are a boundary value problem.
Consider whether it will snow at Christmas: a weather forecast made in August would be useless. A climate forecast, derived from our understanding of Earth’s orbit, would usefully predict a higher probability of snow at Christmas than in August.
There is a useful debate to be had about how well the models reproduce certain aspects of climate. By repeating the oft rebutted canard you demonstrate you are not remotely qualified to participate.
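The initial-value versus boundary-value distinction Telford draws can be illustrated with a toy chaotic system (a sketch only — the logistic map is not a climate model, and all parameters here are made up for illustration):

```python
# Toy illustration, not a climate model: the logistic map is chaotic, so
# individual trajectories (the "weather") diverge from tiny initial
# errors, while the long-run mean (the "climate") is set by the
# parameter r, not by the starting state.
def logistic_run(x0, r, steps):
    xs, x = [], x0
    for _ in range(steps):
        x = r * x * (1 - x)
        xs.append(x)
    return xs

r = 3.9  # chaotic regime

# Two "forecasts" whose initial states differ by one part in a billion:
a = logistic_run(0.400000000, r, 60)
b = logistic_run(0.400000001, r, 60)
diverged = max(abs(u - v) for u, v in zip(a[-20:], b[-20:]))
print(diverged > 0.1)  # True: point forecasts lose all skill

# But the long-run mean barely depends on where you start:
mean_a = sum(logistic_run(0.3, r, 100_000)) / 100_000
mean_b = sum(logistic_run(0.7, r, 100_000)) / 100_000
print(abs(mean_a - mean_b) < 0.02)  # True: the average is predictable
```

Individual trajectories lose forecast skill within a few dozen steps, yet the long-run statistics are insensitive to the starting state — the analogue of the weather/climate distinction.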

Dikran Marsupial
August 17, 2010 3:38 am

stevengoddard
“If you average together 10,000 numbers which are biased too high, you get an average which is ………. too high.”
But the error doesn't compound through averaging; it reduces, as the variance cancels and you are left with only the bias.
It is a standard rhetorical technique to evade admitting an error (in this case suggesting that the fact that weather can only be predicted 72 hours in advance means climate can't be predicted 60 years in advance) by moving the discussion onto other topics. Fine, if you can't admit you are wrong, that is your prerogative, but don't expect me to help you with the evasion.
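Marsupial's averaging point — random error shrinks with N while systematic bias survives — is easy to check numerically (a sketch with made-up numbers):

```python
import random

random.seed(42)  # reproducible sketch

true_value = 10.0
bias = 0.5       # every reading is systematically 0.5 too high
noise_sd = 2.0

# 10,000 noisy, biased readings:
readings = [true_value + bias + random.gauss(0.0, noise_sd)
            for _ in range(10_000)]
avg = sum(readings) / len(readings)

# The random noise averages away (roughly as 1/sqrt(N)), but the
# systematic bias survives intact in the mean:
print(abs(avg - (true_value + bias)) < 0.1)  # True: mean is ~10.5, not 10.0
```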

Dikran Marsupial
August 17, 2010 3:49 am

richard telford says:
“There is a useful debate to be had about how well the models reproduce certain aspects of climate. By repeating the oft rebutted canard you demonstrate you are not remotely qualified to participate.”
Well said, repeating these oft rebutted canards is bad enough, but the inability to admit it and the evasion suggests a lack of genuine scientific skepticism, which is arguably even worse. Scientists get stuff wrong all the time, and they know they do, but good ones are able to learn from it.

Smokey
August 17, 2010 5:15 am

Richard Telford says:
“Consider whether it will snow at Christmas: a weather forecast made in August would be useless. A climate forecast, derived from our understanding of Earth’s orbit, would usefully predict a higher probability of snow at Christmas than in August.”
That is your defense of GCMs??
Maybe you folks need a computer model to tell you there is a higher probability of snow in cold weather, but the rest of us have enough common sense to know that such a low bar is no defense of computer models.
This table shows the lack of accuracy of climate models. The total is 1 correct prediction, 27 incorrect, and 4 questionable.
The universe of commodity trading of, say, soybeans, is much smaller than the universe of factors that determine regional climates or future planetary temperatures. If a supercomputer could predict the climate, wouldn’t someone use a supercomputer to predict a relatively small commodities market, and convert computing power into immense wealth?
Of course they would. Do they? No, there are too many variables in the soybean market. But compared with climate variables, the soybean market’s variables are tiny.
It should be remembered that even the biggest, most sophisticated GCMs fail to predict the climate in any region with any accuracy. Not a single GCM predicted the cooling over the past ten years. Not one of them. The UN/IPCC admits as much, pointing out that the climate is a chaotic system.
What is worse, the warmist crowd portrays models as evidence. Models are not evidence. Models are tools — and not very accurate tools.
The empirical evidence shows that the current climate is well within its historical parameters. Nothing unusual is happening, despite the fervent wishes and beliefs of those infected with cognitive dissonance.

richard telford
August 17, 2010 6:02 am

Smokey says: August 17, 2010 at 5:15 am
Christmas
It’s not a defense, but a simple case to help those with limited understanding.
“Not a single GCM predicted the cooling over the past ten years.”
Two canards – we'll have enough for Duck Soup soon. First, there has not been cooling over the last ten years. Second, the models could only have predicted cooling if they had been initialised with the ocean and atmosphere in their 2000 configuration, as if they were weather forecasts run for a decade. Most models are not initialised in this way (a couple of recent ones have been), but instead have a long run-in phase. So the models cannot predict next year's weather, but you can look at the model runs and test whether long periods with little warming occur. Go on, have a look – you would be surprised.
I bet you won’t look at the model output – it would be most unlike a skeptic to risk anything that might change their beliefs.

Dikran Marsupial
August 17, 2010 6:22 am


Smokey
says:
“Not a single GCM predicted the cooling over the past ten years. Not one of them.”
(i) What cooling over the past ten years would that be? For example UAH shows a small (probably non-significant) warming over the past decade. The UAH dataset is generally the one showing the smallest warming trend, so I am not cherry picking.
(ii) The reason that the models didn't predict it is that it is caused by weather noise (in this case very probably by ENSO) and is not the forced response of the climate system. However, as Easterling and Wehner show, decadal or longer periods of little warming or even cooling are to be expected, even during a long-term warming trend, because the trend is small in comparison with the effects of short-term cyclic phenomena such as ENSO. Not only do such periods occur in the data, they also occur in the model output. So you are wrong: the models did predict that such periods will occur; they just can't predict exactly when, as those periods are caused not by the forced climate change but by internal variability (i.e. weather).
You have just demonstrated that, like Steven Goddard, you don't know the difference between weather and climate, and furthermore, you don't actually know what the models do predict and what they don't. Read Easterling and Wehner's paper; it makes the point quite straightforwardly.
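The Easterling and Wehner point can be reproduced with a toy series — a sketch with invented parameters, not their analysis: a steady warming trend plus autocorrelated "ENSO-like" noise still yields decade-scale fitted trends scattered widely, including flat or cooling ones.

```python
import random

random.seed(1)  # reproducible sketch

# Invented monthly series: 0.02 C/yr warming plus red (AR(1)) noise
# standing in for ENSO-like internal variability.
phi, innov_sd = 0.9, 0.09
months = 12 * 60
noise, x = [], 0.0
for _ in range(months):
    x = phi * x + random.gauss(0.0, innov_sd)
    noise.append(x)

years = [1950 + m / 12 for m in range(months)]
temps = [0.02 * (y - 1950) + n for y, n in zip(years, noise)]

def ols_slope(xs, ys):
    """Ordinary least-squares trend (units per year)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((u - mx) ** 2 for u in xs)
    sxy = sum((u - mx) * (v - my) for u, v in zip(xs, ys))
    return sxy / sxx

# The trend over the full 60 years recovers the imposed forcing...
full_trend = ols_slope(years, temps)

# ...but trends fitted to individual 10-year windows scatter widely:
decadal = [ols_slope(years[s:s + 120], temps[s:s + 120])
           for s in range(0, months - 120 + 1, 12)]

print(0.005 < full_trend < 0.035)          # True: close to 0.02 C/yr
print(max(decadal) - min(decadal) > 0.02)  # True: decade-scale noise
print(min(decadal))                        # often <= 0: a "cooling" decade
```

With these invented numbers the decade-to-decade spread in fitted trend is comparable to the imposed 0.02 C/yr forcing itself, which is exactly why a single flat (or warm) decade says little on its own.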

August 17, 2010 6:23 am

richard telford,
Thanks for your opinion. But in fact, the models are wrong.
And of course there was global cooling over the past decade. Your opinion does not trump that fact.
You simply do not understand the role of scientific skeptics. Skeptics do not have a belief system, like those promoting the CAGW belief do. Skeptics simply say, “Provide convincing, empirical, testable and replicable evidence that a tiny trace gas drives the Earth’s climate.” But no such evidence exists.
See, the burden is on those promoting the CO2=CAGW hypothesis. So far, they have failed to provide convincing evidence showing that human emitted CO2 causes any measurable changes in the planet’s temperature.
That leaves only the null hypothesis: natural climate variability fully explains current observations. Falsify that, if you can. If you do, you will be the first, and on the short list for the [now greatly diminished] Nobel prize.

August 17, 2010 6:28 am

Glenn says:
August 16, 2010 at 9:59 pm
Or you are looking in the mirror wondering what to say that won’t make you look so silly.
I have tried very hard to stick to the science and the numbers as I have understood them. And you give me that. Sigh.

August 17, 2010 6:35 am

Dikran Marsupial
It makes no sense to measure 2000-2010 temperature trends, because 2000 started in La Nina and 2010 is still seeing peak temperatures from the last El Nino.
Six months from now will provide a sensible comparison, because the trend will start and end with La Nina.

Dikran Marsupial
August 17, 2010 6:38 am

Smokey, the decade didn’t end in Nov 2009 ;o)
For *scientific* skepticism, the thing that should come first is skepticism of your own position, before you tackle the arguments of the opposition. If you did that, you wouldn't be trotting out old canards while not understanding the refutations.
Science is best performed the way a chess player plays chess: you aim to minimise the maximum gain available to your opponent, rather than merely making the move that gives you the greatest gain if your opponent doesn't spot a good reply. For instance, when I wanted to demonstrate that there has been no cooling over the last decade, rather than choosing the GISTEMP data (which would show the largest warming trend) I chose the UAH data (which shows the lowest). I did so to make it clear that I wasn't cherry picking (I don't have to on this occasion, as the data demonstrate you were wrong whichever dataset you choose), and so you couldn't make any argument about UHI or other such problems with station data. Go ahead, show some real skepticism, starting with your own arguments (try to prove them wrong before using them). That way you may make some gains, rather than marginalising yourself by demonstrating your ignorance and bias.
N.B. I haven't claimed here that the models are skillful; I have just pointed out Steven Goddard's canard.

George Lawson
August 17, 2010 7:09 am

I don’t think any of us disagree that warming is happening, so we shouldn’t be worrying about the odd tenth of a degree of accuracy in the extent to which it is happening. The real discussion should surely concentrate on whether we bloggers and the rest of humanity are causing it.

Smokey
August 17, 2010 7:10 am

richard and dikran, you’re going to have to do a lot better than simply expressing your opinions here.
The following decade or longer declining temperature charts falsify the grant-sniffing Easterling-Wehner link: models can not predict even the decade immediately ahead.
Prof Freeman Dyson explains:

I have studied the climate models and I know what they can do… They do a very poor job of describing the clouds, the dust, the chemistry and the biology of fields and farms and forests. They do not begin to describe the real world that we live in. The real world is muddy and messy and full of things that we do not yet understand. It is much easier for a scientist to sit in an air-conditioned building and run computer models, than to put on winter clothes and measure what is really happening outside in the swamps and the clouds. That is why the climate model experts end up believing their own models.

Models are very effective as a tool in supporting grant applications. But in predicting the climate? Not so much.
IPCC climate model vs reality.
Also, please don’t use WoodForTrees charts without displaying the parameters. Why? Because it’s mighty suspicious when someone posts a graph with no metadata. Here is a chart of declining UAH temps.
Here are four temp sets since 2002 — including the ARGO deep sea buoys.
Here’s a chart of GISS vs other benchmarks. Note that GISS shows rising temps — while everyone else shows declining temperatures. Who should we believe, an organization under James Hansen that constantly begs for more grant money based on a baseless scare? Or our lyin’ satellite eyes?
Raw monthly temps again. Is it time to panic yet?
Want more? OK, here is another chart of satellite temperatures. Note the decline. Same here.
Time to face reality: computer climate models are wrong. And the premier empirical test of the CAGW model “fingerprint” fails. No tropospheric “hot spot.” Conclusion: CAGW falsified.
The null hypothesis remains standing — while the CO2=CAGW conjecture is once again debunked.

stevengoddard
August 17, 2010 7:22 am

Dikran Marsupial
I am not familiar with, nor am I trotting out “old canards.”
Rather, I am explaining how GCMs work. A lot of people show up here and arrogantly assume that the authors are not familiar with the topics they write about.

richard telford
August 17, 2010 8:05 am

Smokey says:
August 17, 2010 at 7:10 am
You have either not read, or not understood Easterling and Wehner (2009). Which is it? They clearly demonstrate that the models predict periods with little or no warming. The models cannot predict when these periods will occur because of the way they are initialised.
You call yourself a skeptic and then have the gall to link to http://wattsupwiththat.files.wordpress.com/2009/03/akasofu_ipcc.jpg
Do you not think that a little even-handed skepticism is in order. Exactly what magical force drives the trend and oscillations? Climate change science should be based on physics not numerology.

richard telford
August 17, 2010 8:08 am

stevengoddard says:
August 17, 2010 at 7:22 am
Rather, I am explaining how GCMs work. A lot of people show up here and arrogantly assume that the authors are not familiar with the topics they write about.
————-
That many authors here don’t know what they write about would be the more charitable of the possible explanations for the ignorance so often celebrated.

barry
August 17, 2010 8:23 am

The WFT graph you linked shows 0.4C from WWII to 1996, which was the period of the study.

No it wasn’t.
The study was published in 1999 and used temperature data from 1946 to 1996 to predict a 0.8C rise by 2010. It says so right in your top post.
James Sexton was right. The temperature increase from 1946 to 2010 is ~0.7C. That is 0.1C off the prediction made in the study, as reported by the Guardian.

Dikran Marsupial
August 17, 2010 8:32 am

Smokey says:
“The following decade or longer declining temperature charts falsify the grant-sniffing Easterling-Wehner link:”
Actually, as Easterling and Wehner say that we should expect to see the occasional decade or two with little warming or even cooling, even in the presence of a long-term warming trend, those charts corroborate Easterling and Wehner's paper. Did you not even read the abstract?
BTW, the "grant-sniffing" is an ad hominem and impresses nobody, and FYI all academics are "grant sniffing", even the sceptics; getting grants is part of the job and it would be a career-limiting move not to pursue funding. As it happens, plenty of skeptical research gets funded as well.
“models can not predict even the decade immediately ahead.”
Nor do they attempt to do so, for the reasons set out in Easterling and Wehner. Over such a short period the observed trends are dominated by things like ENSO.
“Also, please don’t use WoodForTrees charts without displaying the parameters. Why? Because it’s mighty suspicious when someone posts a graph with no metadata. Here is a chart of declining UAH temps.”
Not very observant, are you: the legend on WoodForTrees plots provides the metadata. Also, it is very funny that you assert temperatures have been declining for the last decade and back it up with a plot that ends in Nov 2009, and then, when a plot of the last decade is given, you bluster (incorrectly) about a lack of metadata and offer a plot starting in 1998! If that is skepticism, sorry, I am not impressed.

Dikran Marsupial
August 17, 2010 8:39 am

Steve Goddard wrote
“I am not familiar with, nor am I trotting out “old canards.””
It is obvious that you are not familiar with the refutation, if you were you would know that it is indeed a canard.
“Rather, I am explaining how GCMs work.”
Incorrectly, if you think that the prediction horizon for weather forecasting has any bearing on climate prediction, for the reasons I and Richard Telford have given.
“A lot of people show up here and arrogantly assume that the authors are not familiar with the topics they write about.”
No assuming required: my opinion was formed a posteriori, after having seen your comment about not being able to predict the weather more than 72 hours in advance. Your argument was wrong not because it was made by you, but because of the nature of the statement; I have an open mind about anything else you write and will form an opinion based on the content. However, not all are as open minded, and your inability to accept and correct errors will make you an object of ridicule for many. As I have said before, I would like to see a strong debate on this issue from both sides, and your posts on this site are presenting an open goal for your opponents.

Smokey
August 17, 2010 9:01 am

Some folks above nominate themselves as the sole arbiters of who might be a scientific skeptic. I refer those people to Dr Karl Popper, who shows explicitly what is required regarding a hypothesis such as CO2=CAGW:

1. It is easy to obtain confirmations, or verifications, for nearly every theory — if we look for confirmations.
2. Confirmations should count only if they are the result of risky predictions; that is to say, if, unenlightened by the theory in question, we should have expected an event which was incompatible with the theory — an event which would have refuted the theory. Every “good” scientific theory is a prohibition: it forbids certain things to happen. The more a theory forbids, the better it is.
3. A theory which is not refutable by any conceivable event is non-scientific. Irrefutability is not a virtue of a theory (as people often think) but a vice.
4. Every genuine test of a theory is an attempt to falsify it, or to refute it. Testability is falsifiability; but there are degrees of testability: some theories are more testable, more exposed to refutation, than others; they take, as it were, greater risks.
5. Confirming evidence should not count except when it is the result of a genuine test of the theory; and this means that it can be presented as a serious but unsuccessful attempt to falsify the theory. (I now speak in such cases of “corroborating evidence.”)
6. Some genuinely testable theories, when found to be false, are still upheld by their admirers — for example by introducing ad hoc some auxiliary assumption, or by reinterpreting the theory ad hoc in such a way that it escapes refutation. Such a procedure is always possible, but it rescues the theory from refutation only at the price of destroying, or at least lowering, its scientific status. (I later described such a rescuing operation as a “conventionalist twist” or a “conventionalist stratagem.”)
One can sum up all this by saying that the criterion of the scientific status of a theory is its falsifiability, or refutability, or testability.

Skeptics are still waiting for testable experiments or observations showing that CO2 has a measurable, real world effect that causes the planet’s temperature rises.
The Easterling and Wehner paper is not falsifiable because it claims that warming — or cooling — is all predicted, no matter which happens: if the temperature rises, that was predicted; if the temperature falls, well, that was predicted too. That is not a testable hypothesis, nor does it make any risky predictions or forbid temperature rises or declines [#2 above]; therefore, it is not science.
Regarding the end dates of the charts I post, they are charts obtained from various sources. Most go right up to this year. If the ending date is the issue, then you win the argument. But if the declining temperature over most of the past decade is the issue, then you lose the argument.
The null hypothesis of natural climate variability is easily falsifiable: temperatures simply need to go outside of their past parameters to falsify the null hypothesis. But a hypothesis that says temperatures can either go up or go down is not testable or falsifiable; therefore, it is not science.

Dikran Marsupial
August 17, 2010 9:15 am

Smokey says:
“The Easterling and Wehner paper is not falsifiable because it claims that warming — or cooling — is all predicted, no matter which happens. That is not testable; therefore it is not science.”
Wrong, if we observed a long term warming of the climate with no cooling periods, that would falsify Easterling and Wehner as they predict that such periods will happen. If you really understood Popper, you would be applauding climate modellers as they are producing testable predictions of the consequences of their theories. The models are easily falsified, all you need is observations that lie outside the error bars of the model projections. Sadly if you insist on choosing very short periods, which are dominated by internal variability rather than climate per se, then the error bars will be large reflecting the uncertainty due to things like ENSO.
“Regarding the end dates of the charts I post, they are charts obtained from various sources. Most go right up to this year. If the ending date is the issue, then you win the argument. But if the declining temperature over most of the past decade is the issue, then you lose the argument.”
The start and end dates are indeed the issue, because ENSO allows you to cherry pick a period to suit any result you want a priori, e.g. by starting at the El Nino in 1998 and stopping in Nov 2009 before the current El Nino (and you have presented several examples). I, on the other hand, produced a chart of the last decade, which was the period you specified (not me, so you can't accuse me of cherry picking); it just so happens that at the moment that trend shows warming, but you didn't bother to check the data before making the argument. The difference between us, though, is that I know that short-term trends are essentially meaningless, which is why (unlike you) I don't try to base my arguments on them. Pick a period long enough to average over many ENSO cycles (e.g. 30 years) and cherry picking becomes much harder, which is why "skeptics" don't like to talk about such periods (genuine skeptics, on the other hand, are fine with them).

August 17, 2010 9:36 am

Dikran Marsupial
72 hours is the standard window of accuracy assumed by weather modelers. I’m sorry that you don’t want to believe it.

Noelene
August 17, 2010 9:55 am

Been reading some of the comments in the Guardian.
Sort of the same debate going on between two people there
Bioluminescence says
The authors made their calculations using data from 1946 to 1996, i.e. they calculated the average temperature during this period, then derived the anomalies. The anomalies derived from the two baseline periods – 1961-1990 and 1946-1996 – are different, therefore using anomalies based on 1961-1990 is incorrect in this case. You have to use the anomalies based on the 1946 to 1996 data.
So far for 2010, I get an increase of 0.76 since 1946. Has anyone else tried?
BombNo20 says
I found the paper here.
Can I sign off with an indicator that shows the sensitivity of start-point selection – something I should have picked up on from the start?
The data you linked to has:
1946 – 2010 ranging -0.204 to +0.533 = +0.737 rise.
and
1945 – 2010 ranging -0.007 to +0.533 = +0.540 rise.
This raises the question of how much value I can grant to even specific predictions. Given that this prediction, made 11 years in advance, is still subject to the vagaries of weather (good/bad from one year to the next), how long should I wait before deciding whether a prediction is ‘right’?
If “Your” anomaly data says +0.737 – how good is the prediction of +0.8C? I’d say pretty darned good.
If “My” anomaly data says +0.540 – how good is the prediction of +0.8C? I’d say it was not even close.
Much to ponder.
Bioluminescence
Thanks for the link – I’ll try and have a proper look at it soon.
Since I don’t know how they reached their conclusion it’s difficult to comment. Perhaps if they’d started with 1945 rather than 1946 their estimate would’ve been different – am I making any sense? I’m of course assuming they started in 1946 because their analysis is based on data from 1946 to 1996.
Maybe a statistician could enlighten us?
End
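The start-point sensitivity BombNo20 describes above is just single-year anomaly differencing; a sketch using the values quoted in the exchange (the two-year averaging at the end illustrates one common way to blunt that sensitivity, and is not anything from the Stott and Allen paper):

```python
# Anomaly values quoted in the exchange above (1961-1990 baseline):
anom = {1945: -0.007, 1946: -0.204, 2010: 0.533}

rise_from_1946 = anom[2010] - anom[1946]
rise_from_1945 = anom[2010] - anom[1945]
print(round(rise_from_1946, 3))  # 0.737: near the predicted 0.8 C
print(round(rise_from_1945, 3))  # 0.54: well short of it

# Differencing multi-year means rather than single years damps the
# year-to-year weather noise (hypothetical two-year start window):
start_mean = (anom[1945] + anom[1946]) / 2
rise_from_mean = anom[2010] - start_mean
print(round(rise_from_mean, 2))  # ~0.64
```

A one-year shift in the start point moves the answer by ~0.2 C, which is the whole point of the exchange: single-year anomalies carry weather noise comparable to the signal being judged.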