Epic Failure of the Canadian Climate Model

Guest Essay by: Ken Gregory

The Canadian Centre for Climate Modelling and Analysis, located at the University of Victoria in British Columbia, submitted five runs of its climate model CanESM2 for use in the fifth assessment report of the Intergovernmental Panel on Climate Change (IPCC). The climate model produces one of the most extreme warming projections of all the 30 models evaluated by the IPCC. (See Note 1.) The model badly fails to match the surface and atmospheric temperature observations, both globally and regionally, as presented in six graphs.

Global Trends

The graph below compares the near-surface global temperatures to the model runs.


Figure 1. Canadian climate model simulations of near-surface global temperatures and three datasets of observations.

The five thin lines are the climate model runs. The thick black line is the average (mean) of the five runs. The satellite data in red is the average of two analyses of lower troposphere temperature. The radiosonde weather balloon data is from the NOAA Earth System Research Laboratory. The surface data is the HadCRUT4 dataset. (See Note 2 for the data sources.) The best fit linear trend lines (not shown) of the model mean and all datasets are set to zero at 1979, which is the first year of the satellite data. We believe this is the best method for comparing model results to observations.
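For readers who want to reproduce this baselining, here is a minimal sketch in Python. It is our illustration only; the function and series names are invented, and the real data would come from the sources listed in Note 2.

```python
import numpy as np

def zero_trend_at_start(years, temps, start_year=1979):
    """Offset a series so its best-fit linear trend passes through
    zero at start_year (the first year of the satellite data)."""
    slope, intercept = np.polyfit(years, temps, 1)   # least-squares trend line
    return temps - (slope * start_year + intercept)  # shift the whole series

# Illustrative use with an invented series:
years = np.arange(1979, 2013)
series = 0.03 * (years - 1979) + 0.1 * np.random.randn(years.size)
aligned = zero_trend_at_start(years, series)
```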

Any computer model that is used to forecast future conditions should reproduce known historical data. A computer climate model that fails to produce a good match to historical temperature data cannot be expected to give useful projections.

Figure 1 shows that the computer model simulation after 1983 produces too much warming compared to the observations. The discrepancy between the model and the observations increases dramatically after 1998, as there has been no global near-surface warming during the last 16 years. With the model and observation trends set to zero in 1979, the discrepancy between the model mean of the near-surface global temperatures and the surface observations by 2012 was 0.73 °C. This discrepancy is almost as large as the 0.8 °C estimated global warming during the 20th century. The model temperature warming trend as determined by the best fit linear line from 1979 to 2012 through the model mean is 0.337 °C/decade, and the average trend of the three observational datasets is 0.149 °C/decade. Therefore, the model temperature warming rate is 226% of the observations (0.337/0.149 = 226%).
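The ratio itself is simple arithmetic, reproduced below with the trend values just quoted (the trends are the least-squares slopes of the series in Figure 1):

```python
model_trend = 0.337  # °C/decade, model mean trend, 1979 to 2012
obs_trend = 0.149    # °C/decade, average of the three observational datasets

print(f"model/observations = {model_trend / obs_trend:.0%}")  # prints 226%
```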

The large errors are primarily due to incorrect assumptions about water vapor and cloud changes. The climate model assumes that water vapor, the most important greenhouse gas, would increase in the upper atmosphere in response to the small warming effect from CO2 emissions. A percentage change in water vapor has over five times the effect on temperatures as the same percentage change of CO2. Contrary to the model assumptions, radiosonde humidity data show declining water vapor in the upper atmosphere as shown in this graph.

The individual runs of the model are produced by slightly changing the initial conditions of several parameters. These tiny changes cause large changes of the annual temperatures between runs due to the chaotic weather processes simulated in the model. The mean of the five runs cancels out most of the weather noise because these short term temperature changes are random.
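A toy calculation shows why the averaging works; the trend and noise level below are invented, and only the noise reduction of roughly 1/sqrt(5) matters:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1979, 2013)
forced = 0.03 * (years - 1979)                               # common forced trend, °C
runs = forced + 0.15 * rng.standard_normal((5, years.size))  # five noisy runs
ensemble_mean = runs.mean(axis=0)                            # noise std shrinks ~1/sqrt(5)
```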

The climate model assumes that almost all of the temperature changes are due to anthropogenic greenhouse gas emissions and includes only insignificant natural causes of climate change. The simulated weather noise includes the effects of ENSO (El Niño and La Niña), but does not include multi-decadal temperature changes due to natural ocean oscillations or solar-induced natural climate change other than the small changes in the total solar irradiance. The historical record includes the effects of large volcanoes that can last for three years. The projections assume no large future volcanic eruptions. There have been no significant volcanic eruptions since 1992 that could have affected temperatures, as shown in this graph of volcanic aerosols.

The model fails to match the temperature record because it overestimates the effects of increasing greenhouse gases and does not include most natural causes of climate change.

Figure 2 compares the mid-troposphere global temperatures at the 400 millibar pressure level, about 7 km altitude, to the model. (millibar = mbar = mb. 1 mbar = 1 hPa.)


Figure 2. Canadian climate model simulations of mid-troposphere global temperatures and two datasets of observations. The balloon data is at the 400 mbar pressure level, about 7 km altitude.

The discrepancies in 2012 between the model mean of the global mid-troposphere temperatures and the satellite and balloon observations were 1.26 °C and 1.04 °C, respectively. The model temperature trend is 650% of the satellite trend.

The satellites measure the temperature of a thick layer of the atmosphere. The satellite temperature weighting function describes the relative contribution that each atmospheric layer makes to the total satellite signal. We compared the balloon trends weighted by the lower troposphere satellite temperature weighting functions to the near-surface observed trends for global and tropical data. Similar comparisons were made for the mid-troposphere. The weighted thick layer balloon trends for the lower and mid-troposphere were similar to the near-surface and 400 mbar balloon trends, respectively. We conclude that the satellite lower-troposphere and mid-troposphere temperature trends are approximately representative of the near-surface and 400 mbar temperature trends, respectively. (See Note 3 for further information.)
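The weighted comparison is just a weighted average of the level trends. The sketch below shows the form of the calculation; the level trends and weights are placeholders, not the actual RSS weighting functions or balloon values:

```python
import numpy as np

pressure_mb = np.array([1000, 850, 700, 500, 400, 300])          # pressure levels, mbar
balloon_trends = np.array([0.14, 0.13, 0.12, 0.10, 0.09, 0.09])  # °C/decade, invented
lt_weights = np.array([0.30, 0.25, 0.20, 0.15, 0.07, 0.03])      # hypothetical LT weights

thick_layer_trend = np.sum(lt_weights * balloon_trends) / np.sum(lt_weights)
# If thick_layer_trend is close to balloon_trends[0] (the 1000 mbar trend),
# the LT satellite trend is a fair proxy for the near-surface trend.
```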

Tropical Trends

Figure 3 compares the near-surface temperatures in the tropics from 20 degrees North to 20 degrees South latitude to the climate model simulations. The model temperature trend is 300% of the average of the three observational trends. The discrepancy of model to near-surface observation trends in the tropics is much greater than for the global average.


Figure 3. Canadian climate model simulations of near-surface tropical temperatures and three datasets of observations.

Figure 4 compares the modeled warming trend in the tropical mid-troposphere to the observations. The increasing tropical mid-troposphere water vapor in the model makes the modeled warming trend at the 400 mbar pressure level 55% greater than the modeled surface trend.

The discrepancy in 2012 between the model mean of the mid-troposphere tropical temperatures and the balloon observations was 1.33 °C. The model temperature trend is 560% of the average of the two observational trends.


Figure 4. Canadian climate model simulations of mid-troposphere 400 mbar tropical temperatures and two datasets of observations.

The discrepancy is even greater at the 300 mbar pressure level, which is at about 9 km altitude.

Figure 5 compares the model to the balloon temperatures at the 300 mbar pressure level in the tropics. The tropical warming trend at 300 mbar in the model is exactly twice the model surface trend. In contrast, the temperature trend of the balloon data at 300 mbar is an insignificant 3% greater than the surface station trend. The discrepancy in 2012 between the model mean of the mid-troposphere tropical temperatures at the 300 mbar level and the balloon observations was 1.69 °C.

The model temperature trends (1979 to 2012) of the tropical atmosphere at 7 km and 9 km altitude are an astonishing 470% and 490% of the radiosonde balloon trends. These are huge errors in the history match!


Figure 5. Canadian climate model simulations of mid-troposphere 300 mbar tropical temperatures and weather balloon observations.

Southern Trends

In the far south, the near-surface modeled temperature trend is in the opposite direction from the observations. Figure 6 compares the model near-surface temperatures from 50 degrees South to 75 degrees South latitude to the observations. The modeled temperatures from 1979 are increasing at 0.35 °C/decade while the surface temperatures are decreasing at 0.07 °C/decade.

Temperatures over most of Antarctica (except the Antarctic Peninsula) have been falling over the last 30 years. The Antarctic sea ice extent is currently 1 million square km greater than the 1979 to 2000 mean. The sea ice extent has been greater than the 1979 to 2000 mean for all of 2012 and 2013 despite rising CO2 levels in the atmosphere.


Figure 6. Canadian climate model simulations of near-surface southern temperatures (50 S to 75 S latitude) and two datasets of observations.

Summary of Trend Errors

The table below summarizes the model trend to observation trend ratios.

Model Trend to Observation Trend Ratios (1979 to 2012)

                    Model/Satellite   Model/Balloon   Model/Surface
Global Surface           254%             209%            220%
Global 400 mb            650%             315%              —
Tropics Surface          304%             364%            249%
Tropics 400 mb           690%             467%              —
Tropics 300 mb             —              486%              —
South Surface           3550%              —             -474%

The table shows that the largest discrepancies between the model and observations are in the tropical mid-troposphere. The model error increases with altitude and is greatest at the 300 mbar pressure level at about 9 km altitude. The ratio of the modeled tropical mid-troposphere 300 mbar warming rate to the surface warming rate is 200%, and is a fingerprint of the theoretical water vapor feedback. This enhanced warming rate over the tropics, named the “hot spot”, is responsible for 2/3 of the warming in the models. The fact that the observations show no tropical mid-troposphere hot spot means that there is no positive water vapor feedback, so the projected warming rates are grossly exaggerated.

Model results with large history match errors should not be used for formulating public policy. A model without a good history match is useless and there can be no confidence in its projections. The lack of a history match in the Canadian model output shows that the modeling team has ignored a fundamental requirement of computer modeling.

A global map of the near-surface air temperature from the model for April 2013 is here.

Anti-Information

Patrick Michaels and Paul “Chip” Knappenberger compared the model output to actual 20th century temperature data to determine what portion of the actual data can be explained by the model. They wrote, “One standard method used to determine the utility of a model is to compare the “residuals”, or the differences between what is predicted and what is observed, to the original data. Specifically, if the variability of the residuals is less than that of the raw data, then the model has explained a portion of the behavior of the raw data.”

In an article here, they wrote “The differences between the predictions and the observed temperatures were significantly greater (by a factor of two) than what one would get just applying random numbers.” They explained that a series of random numbers contain no information. The Canadian climate model produces results that are much worse than no information, which the authors call “anti-information”.
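Their test is easy to state in code. This is a minimal sketch of the residual-variance comparison they describe, with placeholder arrays; it is not their actual implementation:

```python
import numpy as np

def model_explains_data(observed, modeled):
    """True if the residuals vary less than the raw data, i.e. the
    model has explained some portion of the data's behavior."""
    residuals = observed - modeled
    return np.var(residuals) < np.var(observed)

# Per Michaels and Knappenberger, the Canadian model's residuals vary
# about twice as much as the raw data, so this test returns False:
# the model output is anti-information.
```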

The Canadian Model Contributes to Fuel Poverty

The results from the Canadian climate model were used in a U.S. Global Change Research Program report provided to the US Environmental Protection Agency to justify regulating CO2. The authors of that report were told that the Canadian climate model produces only anti-information. They confirmed this fact, but published their report unchanged. The Canadian government has indicated it will follow the lead of the U.S.A. in regulating CO2 emissions.

The Canadian climate model is also used by the IPCC to justify predictions of extreme anthropogenic warming despite the fact that the model bears no resemblance to reality. As does the climate model, the IPCC ignores most natural causes of climate change and misattributes natural climate change to greenhouse gas emissions. Here is a list of 123 peer-reviewed papers published from 2008 to 2012 on the solar influence on climate that were ignored by the IPCC in the fifth assessment report.

Climate alarmism based on climate models that don’t work has so far cost the world $1.6 trillion in a misguided and ineffective effort to reduce greenhouse gas emissions. These efforts have caused electricity prices to increase dramatically in Europe, causing fuel poverty and putting poor people at risk. High fuel costs and cold winter weather are blamed for 30,000 excess deaths in Britain last year. Europe’s energy costs have increased by 17% for consumers and 21% for industry in the last four years. The Canadian climate model’s failures have contributed to this misery.

Canadian politicians and taxpayers need to ask why we continue to fund climate models that can’t replicate the historical record and produce no useful information.

Ken Gregory, P.Eng.

Friends of Science

Ten years of providing independent climate science information

Notes:

1. The Canadian Earth System Model CanESM2 combines the CanCM4 model and the Canadian Terrestrial Ecosystem Model, which models the land-atmosphere carbon exchange. Table 9.5 of the IPCC Fifth Assessment Report Climate Change 2013 shows that the CanESM2 transient climate sensitivity is 2.4 °C (for doubled CO2). The 90% certainty range of transient climate sensitivity across the 30 models is 1.2 °C to 2.4 °C.

2. The Canadian climate model CanESM2 monthly data was obtained from the KNMI Climate Explorer here. The satellite data was obtained from the University of Alabama in Huntsville here (LT) and here (MT), and from Remote Sensing Systems here (LT) and here (MT). The radiosonde weather balloon data was obtained from the NOAA Earth System Research Laboratory here. The global surface data is from the HadCRUT4 dataset prepared by the U.K. Met Office Hadley Centre and the Climatic Research Unit of the University of East Anglia, here. The HadCRUT4 tropical data (20 N to 20 S) was obtained from KNMI Climate Explorer. An Excel spreadsheet containing all the data, calculations and graphs is here.

3. The average global weather balloon trend of all pressure levels from 1000 mbar to 300 mbar weighted by the LT satellite temperature weighting functions is only 3% greater than the balloon trend at 1000 mbar, confirming that the LT satellite trend is representative of the near-surface temperature trend. The weighted average tropical weather balloon trends are 3% greater than the average of the 1000 mbar balloon trend and the surface station trend. The temperature weighting functions for land and oceans were obtained from the Remote Sensing Systems website. The average global and tropical balloon trends of all pressure levels weighted by the mid-troposphere (MT) satellite temperature weighting functions are 13% and 4% greater than the balloon trends at 400 mbar. If the MT satellite trends were adjusted by these factors to correspond to the 400 mbar pressure level, the global and tropical satellite trend adjustments would be -0.017 °C/decade and -0.004 °C/decade, respectively. Since the MT satellite trends are already less than the 400 mbar balloon trends, no adjustments were applied.
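As a worked example of the adjustment logic in this note (our reading only; the satellite trend below is hypothetical, chosen to reproduce the quoted global adjustment):

```python
mt_satellite_trend = 0.148   # °C/decade, hypothetical global MT satellite trend
weighted_over_400mb = 1.13   # global weighted-to-400 mbar factor from this note

equiv_400mb = mt_satellite_trend / weighted_over_400mb  # trend rescaled to 400 mbar
adjustment = equiv_400mb - mt_satellite_trend
print(f"{adjustment:+.3f} °C/decade")  # prints -0.017 with these inputs
```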

=============================

PDF version of this article is here

Eliza
October 24, 2013 8:20 am

Try comparing to radiosonde and satellite that will look even more ridiculous not even a Fail maybe a fraud LOL

Eliza
October 24, 2013 8:26 am

A bit OT (delete or put elsewhere if you wish) but Solar 24 picking up a bit
http://www.swpc.noaa.gov/SolarCycle/sunspot.gif I was wondering if all the low’s SNN (but not very very low) add up to be considerable flux over time even though the cycle per se is “low” that may support Leif’s assertions re solar activity climate to some extent anyway. It (the idea) may be complete nonsense as well LOL

DontGetOutMuch
October 24, 2013 8:28 am

Clearly, reality is flawed. Why have there been no adjustments?

bk
October 24, 2013 8:30 am

Solid well written article. Like the fuel poverty part. We have to keep hammering away at the alarmist side. They are trying to stand their ground and repeat their cult mantras of “Climate Change is Real” and “End the Denial” but the public is starting to notice that the alarmist arguments are just a religious chant sounding like la la la la.

October 24, 2013 8:31 am

Thanks, Ken. Good article.
It is not surprising, but it is sad that policy is being based on unrealistic models, all over the world.
The underlying cause is not in the models, or even in post-modern climatology of the IPCC. It is more basic, in the advance of socialism.

Sean Peake
October 24, 2013 8:35 am

I think CGI did the coding

October 24, 2013 8:40 am

“Any computer model that is used to forecast future conditions should reproduce known historical data. A computer climate model that fails to produce a good match to historical temperature data cannot be expected to give useful projections.”
well that’s patently false.
Some simple examples come to mind, like the distance to empty model in my cars navigation system. Its always wrong but useful
but today I’ll give you a life an death one from operational analysis.
In order to make decisions about our future defenses the DoD routinely uses models that are not verified against the past and can never be verified against the past. The models are even used in theatre to evaluate strategic and tactical decisions.
Example: the design of the F-22. During the design of that system the model Tac Brawler was used to inform decisions about the number of missiles the aircraft would carry; 4 6 or 8.
To make the decision the modelling departments had to simulate future air battles.
1. the planes and weapons being tested were all imaginary projections
2. there was no historical data to verify against.There is very little real data on
air combat between aircraft in the modern era.
Despite this utter lack of historical data to verify against the output of the models was useful
in informing decisions.
Where you do have historical data to verify against you can even use bad fits to inform decisions. For example: In modelling how many bombs its takes to cripple a runway you
would construct a model. The model may predict 3 bombs and your historical data may tell you
that the model always underestimates the total required. So the models hindcast is low, say by one bomb. Going forward say you want to model the effect of a bomb with an increased number of cluster uniits. You use your existing model. It predicts 2 bombs for this future weapon. Is that useful? Sure it is. If I am attack planning i use that model ( an example would be bug splat ) the model predicts 2, and as a planner I decide that 3 should be the planning number. Why? because my model that fails to hindcast is characteristically low. To be safe Im not going to count on 2 bombs doing the job. I’ll need to plan for 3. meanwhile the modelling guys continue to refine their model. They dont consider their model falsified. They consider it as having limited utility. Important ultility ( it gives you a ‘floor” estimate) but utility nonetheless.

C.M. Carmichael
October 24, 2013 8:49 am

As an alumnus of UVic it saddens me to see such obvious propaganda being displayed, anyone with any pride would keep this model’s output as their biggest secret. Tell people that the dog ate your model output, or better yet tell them that occasionally water from the deep ocean grabs model outputs and whisks them away to the deep ocean, really deep, so don’t bother looking. BC in general and Victoria in particular has always been a “Hippie” which means “green” haven. Canada has its only federal Green party representative there, and she is a real nutter.

David s
October 24, 2013 8:54 am

I guess that means if the model differs from reality then reality must be wrong. Or possibly the modelers have become detached from reality.

October 24, 2013 8:55 am

Paging Arch Warmist Weaver?
This is symptomatic of the Canadian bureaucracy’s approach to CAGW.
Complete and total F.U.D.
Whether through incompetence, cowardice, malice or orders from above they have betrayed the citizens of Canada.
Utter and expensive nonsense, such as “Environment Canada’s Science”, billions of dollars spent on studies of the effect of AGW, while never proving for themselves that there is any such beast.
Climate Change mitigation policies are all the rage, yet there exists zero science confirming the need for these policies, even government admits this as they fail to document the science they claim their policy is based on.
This whole scheme bears the handprints of a group of people centred around the federal Liberal Party of Canada.

Betapug
October 24, 2013 9:07 am

With research product like, “The Impact of a Human Induced Shift in North Pacific Circulation on North American Surface Climate, (J.C. Fyfe, 2008) it is no wonder that CCCMA has also managed to produce the first elected Provincial Green Party MLA in Canada, “Nobel Prize winner” Dr. Andrew J. Weaver.
As with many academics these days, Dr. Weaver seems to keep fingers in several pies at once. He is also a principal of Solterra Solutions Ltd. a private corporation providing “Climate, Forensic and Educational Services” including promotion of Weaver’s books on Global Warming.
http://www.solterrasolutions.com/index.php?page=2

Fred Souder
October 24, 2013 9:14 am

I like using graphs from this site in my classroom, but the Y axes are mislabeled. Would it be difficult to change them to “temperature anomalies” or “deviations”?
Steven Mosher:
Usually your examples have some merit and represent a valid – or at least arguable – point of view, but your examples of models that don’t need to reflect past data are unrelated to climate models and do not strengthen your argument.

Doug Proctor
October 24, 2013 9:18 am

I’m a Canadian whose taxes pay for this, but this isn’t my complaint/question. The question is, why would ANY SCIENTIST submit for consideration models that fail so badly to match observation?
There has to be some belief that complex reality disguises itself ONLY IN THE SHORT TERM, a period of time of no real concern. It would be like saying things fall down, not up, now, yes, but tomorrow they will go up because my model says the sky sucks more than the earth.
The only other reason I can imagine is that the scientists involved have some OTHER temperature profile they are matching. Does the climate war disconnect go this far, that the warmists and the skeptics are using two different sets of “observations”?

Theo Goodwin
October 24, 2013 9:19 am

“Despite this utter lack of historical data to verify against the output of the models was useful
in informing decisions.”
Of course it was useful. The opinions of various decision makers were useful. But that fact doesn’t make the opinions of various decision makers into science or something like science. And we are talking about science here because we are using words like ‘verify’ and the crucially important ‘predict’.
If there is no prediction there is no science. And the issue here, as always, is whether the models meet the standards of science. If they do not meet the standards of science then they cannot be offered as products of science. Just ask the IPCC if they are willing to withdraw the claim that the models are scientific. Ask yourself the same question and give us your answer.
As a postmodernist, you are unwilling to set forth standards for science. You are unwilling to set forth standards for scientific prediction. Regarding prediction, you are totally willing to speak with the vulgar. For example, if I say that Jupiter’s surface will become purple polka dots against a white background at exactly 3 pm today, EDT, you will accept that as a scientific prediction, just a highly unlikely one. Nonsense. All scientific predictions must make reference to some set of well confirmed physical hypotheses which can be used to explain the predicted phenomenon. See Kepler’s three laws and Newton’s deduction of them for crystal clear examples.
Models and statistical analysis are useful in decision making but they do not reach the level of science. They do not do prediction. They do analysis. A consultant can legitimately offer his models and his time-series analysis as tools that can improve a corporation’s planning for the future. That consultant carries with him a great deal of tacit knowledge that he has acquired through experience. That tacit knowledge makes the consultant a very useful resource for decision makers. But the models, the statistical analysis, and the tacit knowledge do not rise to the level of science. They cannot magically become well confirmed physical hypotheses that can be used to predict and to explain the phenomena of interest.
If the Canadian model discussed in Gregory’s article above is offered by its makers as a scientific tool, and it should be because all climate modelers treat their models as substitutes for the scientific theory that they do not possess, then it is an abject failure either (Gregory) because it cannot reproduce historical data or (me) it cannot meet the standards for scientific prediction. In either case, the IPCC and all its friends should “fess up” and admit that modeling is not science.

Jquip
October 24, 2013 9:20 am

Mosher: “In order to make decisions about our future defenses the DoD routinely uses models that are not verified against the past and can never be verified against the past. The models are even used in theatre to evaluate strategic and tactical decisions.”
Over history they’ve used sheep knuckle bones, half-flattened twigs of wood, astrologers, and animal sacrifices. Not saying they were ‘right,’ but they had utility. And since that, then we know that you’re totally down with lynching a man and reading his entrails because:
“They dont consider their model falsified. They consider it as having limited utility. Important ultility ( it gives you a ‘floor” estimate) but utility nonetheless.”
So I’ll tell you what. I’ll give you your point on utility, if you’ll start predicting the climate using palmistry.

Dr. Lurtz
October 24, 2013 9:21 am

The Canadian model is a success. It proves that CO2 is NOT the driving “force” that caused “Global Warming” [Global Temperature Stagnation] from 1979 until 2012.

JimS
October 24, 2013 9:23 am

Even though he is a zoologist, David Suzuki is an example of our Canadian scientists. Need I say more?

Rob
October 24, 2013 9:27 am

That’s as bad as bad can get. The entire theory of “Global Warming” must
be reevaluated by science(never mind the trillion dollar AGW Industry).

DirkH
October 24, 2013 9:31 am

Canada disappoints
-gavin.

Theo Goodwin
October 24, 2013 9:32 am

C.M. Carmichael says:
October 24, 2013 at 8:49 am
“BC in general and Victoria in particular has always been a “Hippie” which means “green” haven. Canada has its only federal Green party representative there, and she is a real nutter.”
That part of the world attracts them. In Eugene OR if you are discovered eating the wrong food an “intervention” is required by the entire community of your friends.

Editor
October 24, 2013 9:34 am

Steven Mosher says:
October 24, 2013 at 8:40 am

“Any computer model that is used to forecast future conditions should reproduce known historical data. A computer climate model that fails to produce a good match to historical temperature data cannot be expected to give useful projections.”

well that’s patently false.
Some simple examples come to mind, like the distance to empty model in my cars navigation system. Its always wrong but useful
but today I’ll give you a life an death one from operational analysis.
In order to make decisions about our future defenses the DoD routinely uses models that are not verified against the past and can never be verified against the past. The models are even used in theatre to evaluate strategic and tactical decisions.
Example: the design of the F-22. During the design of that system the model Tac Brawler was used to inform decisions about the number of missiles the aircraft would carry; 4 6 or 8.
To make the decision the modelling departments had to simulate future air battles.
1. the planes and weapons being tested were all imaginary projections
2. there was no historical data to verify against.There is very little real data on
air combat between aircraft in the modern era.
Despite this utter lack of historical data to verify against the output of the models was useful
in informing decisions.

Steven, in your example there is no historical data … which is not the situation that Ken describes in the part you quoted.
What he is talking about is a situation where there is loads of historical data, but the computer model does a crappy job matching it. In your example, it would be a situation where someone decided to use a model for the design of the F-22, despite the fact that the model gave wildly wrong answers regarding historical air combat …
And yes, to be pedantic, you may be able to get bits and scraps of useful information from even the lousiest model … but that’s not what Ken’s talking about either.
Bottom line?
Ken is right. In general, if you trust a model to predict the future when it has shown that it gives wildly wrong answers about the past, you’re a fool.
w.

Stonehenge
October 24, 2013 9:34 am

As a BC resident, I can add to C.M. Carmichael’s comment. A former leader in the U. Vic. Climate modelling group, Andrew Weaver, is now the sole Green Party BC provincial member of the legislative assembly and is known locally as a CAGW activist.

DirkH
October 24, 2013 9:36 am

Steven Mosher says:
October 24, 2013 at 8:40 am
“but today I’ll give you a life an death one from operational analysis.
In order to make decisions about our future defenses the DoD routinely uses models that are not verified against the past and can never be verified against the past. The models are even used in theatre to evaluate strategic and tactical decisions.
Example: the design of the F-22. During the design of that system the model Tac Brawler was used to inform decisions about the number of missiles the aircraft would carry; 4 6 or 8.
To make the decision the modelling departments had to simulate future air battles.
1. the planes and weapons being tested were all imaginary projections
2. there was no historical data to verify against.There is very little real data on
air combat between aircraft in the modern era.
Despite this utter lack of historical data to verify against the output of the models was useful
in informing decisions. ”
We’ll take your word for it, Mosher. Well, I’m kidding. I think you’re making a totally preposterous claim that you could never back up with anything. You’ve lost it. Really? The model decided what, 6 rockets? And? Now Obama kills all kinds of people with hellfire-equipped drones. And when an Oniks 800-P is in proximity he moves away his supercarriers. What exactly did you want to tell us? Are you out of your mind?

AlecM
October 24, 2013 9:39 am

There is no back radiation, a failure to understand the difference between a Radiation Field and the real heat flux which for a plane surface is the negative of the vector difference of opposing RFs.
This takes out the 333 W/m^2. Next remove the incorrect assumption of Kirchhoff’s Law of Radiation at ToA. What you then get is a no feedback result.
However, that is wrong because CO2 is the working fluid of the control system that corrects almost perfectly for any change in its concentration hence no present warming as the other warming effects, solar and polluted clouds, return to near zero.
However, beware of the future because the oceans are hiding the enormous cooling as cloud area increases because of low solar magnetic field.

Doug Proctor
October 24, 2013 9:44 am

Mosher –
Inappropriate comparison, but “useful”: climate models so far suggest that the reverse is true relative to the bomb effectiveness on reducing runway usefulness, as >95% of models have predicted temperatures higher than we have experienced for 15 years.
So, using Mosher-style thinking: we give the models their due, and say they always OVERESTIMATE the temperature rise. Now we have a ceiling for the actual temperature rise. Reality will be less than that, so look to, say, not 3 degrees but 2 (reversing your bomb numbers) of warming by 2100. Going forward, we would then refine our models, knowing the number will come down, not go up, in all likelihood.
So, by your argument, the U of Vic model is very useful: it tells us we should prepare for a temp rise of 2C or under. Which is to say, an outcome of limited harm that is more appropriately handled through mitigation and adaptation rather than radical socio-economic-political policies.
However – to go back to the inappropriateness of your comparison – in the case of climate, unlike recent air combat, we do have a lot of data. Because of that difference, it is appropriate that the model of current temperature variations reflect observation from at least the beginning of the “problem” we are projecting, i.e. post-1988.
But I understand why the “disconnect” of model with observation does not bother you. The whole thing with current CAGW work is that the present is supposed to be atmospherically special. A-CO2 has caused new rules to apply. We cannot use the past or even the present to predict the near future; unfortunately we don’t know what the new rules are, except they are different from the past. “Climate science” is one that is determining its organizing rules as it goes forward, except that its general principles – which are special, recall, and do not bear comparison with the past principles, especially geologically based ones – say that the future will be really hot.
Okay, a science in diapers, growing. Not really the “settled” level, but, okay, like a car trip off into a strange country …..which wouldn’t be bad if we weren’t a) forced to go along, b) forced to pay for the ride, and c) told that those without a clue as to where we are going are going to be holding the steering wheel and aren’t sure there are any brakes in the car.

AlecM
October 24, 2013 9:49 am

Doug: there is no CO2 effect because it’s the working fluid of the control system that stabilises lower atmosphere temperature. The real AGW has been from polluted clouds – Sagan’s physics is wrong.
In short, Climate Alchemy has got virtually all its basic physics wrong, including misinterpreting Tyndall! The difference between reality and the models is stark proof of this.

Kev-in-Uk
October 24, 2013 9:57 am

Jeez, Mosher – are you for real? I know you like to defend models per se – but I think you are way off base on this one, as your comment shows. Moreover, are you really, seriously trying to defend the policy making decisions being made that are gonna affect all our lives (and, as the warmista are fond of saying, our grandkids lives, etc) based on completely useless predictive models? Granted, this canadian model may not be mainstream used – but the simple fact that it has seen the light of day and been published (as science?) is bad enough!
I don’t give a fig for your example – I think you are being silly to try and compare the two issues.
As for the post, I think it’s very good – my first thoughts on the model/reality graphs were – ‘why the feck did somebody not notice the divergence much earlier?’ I mean, it is clear that there was disagreement between model prediction and reality back in the 80’s. Surely, even if the model was being constantly updated and tweaked, somebody there must have thought to query why the hindcasting part was so wrong too? What were these people hoping for? – that the model would suddenly ‘come good’ and temps would rocket!!??

October 24, 2013 10:01 am

Are Computer Climate Models Like The ObamaCare Exchanges?
http://www.talk-polywell.org/bb/viewtopic.php?f=8&t=4823
The problem is Chaos.

Jquip
October 24, 2013 10:03 am

DirkH: “Well, I’m kidding. I think you’re making a totally preposterous claim that you could never back up with anything.”
Gotta defend him on this one. The claim is completely true when speaking about the industrial side of the military. eg. New planes, new battleships, etc. The models they use in such circumstances are not engineering issues but bluster used by stakeholders of ‘infallible machine tactic x’ to justify the use of ‘infallible machine tactic x.’ They’re also famously wrong, to a fault. At least in those cases that we have been able to test. (eg. Have actual utility == can make and take bets on it. And win better than 1/2 the time.)
Which include: Knights, Phalanxes, Firearms, Machine guns, Armored Cars, Tanks, Battleships, and so forth. Most notably, as to the sensibility of these things in the US military, is the F-22 boondoggle and the A-10. One still can’t get off the ground. And try as they might with models that justify the F-22, they still can’t figure out how to get the provably indispensable A-10 to stay on the ground.

policycritic
October 24, 2013 10:10 am

@Steven Mosher

Example: the design of the F-22. During the design of that system the model Tac Brawler was used to inform decisions about the number of missiles the aircraft would carry; 4 6 or 8.
To make the decision the modelling departments had to simulate future air battles.
1. the planes and weapons being tested were all imaginary projections
2. there was no historical data to verify against. There is very little real data on
air combat between aircraft in the modern era.

Despite this utter lack of historical data to verify against the output of the models was useful in informing decisions.

C’mon. The Pentagon’s air combat flight simulator Tac Brawler was invented/created in the 1980s. It was based on countless (thousands of) hours of data from experienced pilots working with the scientific model makers. It’s a vast and complicated simulation system, running in FORTRAN on a number of different systems, with the graphic interface system end of it being one tiny portion. It was 1/2 million lines of code. To claim an “utter lack of historical data” is false in the extreme. You ought to read up on it.
Furthermore, the Stealth came before it, in the 1970s. The Stealth had a trailer full of Super Crays miniaturized to around 15 cm X 9 cm X 26 cm in the cockpit, which they did using an early form of quantum computing with three laser heads reading holographic drives. By taking the coordinates of where the Stealth was at the moment with the surrounding threats identified, it could make decisions for the pilots in real time.

Box of Rocks
October 24, 2013 10:18 am

Steven Mosher :
How does your model handle the runway being repaired – after each engagement or not being damaged at all?
Believe me, a couple of my Seabees can do wonders to models….

October 24, 2013 10:46 am

Dr. Lurtz says: October 24, 2013 at 9:21 am
“The Canadian model is a success. It proves that CO2 is NOT the driving “force” that caused “Global Warming” [Global Temperature Stagnation] from 1979 until 2012.”
Good point.
It might also indicate that there are other deficiencies in the model. If CO2 sensitivity cannot be adjusted to make the model agree with reality, then CO2 sensitivity is not the only significant deficiency. Likewise, if there are no other significant model deficiencies, then you should be able to determine climate sensitivity to CO2 by adjusting the CO2 component until the model agrees with reality (after all, it is claimed we understand the physics of CO2 in the atmosphere). Although we talk about CO2 because that is the money molecule, perhaps we should be talking more about other molecules (H2O for example) as well.

SCheesman
October 24, 2013 10:48 am

We just want it to be warmer.

Eric
October 24, 2013 10:52 am

Mosher
I think you are also overlooking a VERY large discrepancy in your comparison, specifically in regards to your bomb comparison. The military has 100’s of years of observing how weapons explode and the types of damage they do, therefore they can use this data to make a fairly accurate model that predicts the type of damage to expect when dropping a certain size bomb on a runway and how many it may take to destroy said runway. If they ran a simulated bombing run from the Iraq war, to test the system, and it came back stating that the bombs didn’t explode per se but instead became a pot of petunias and a large sperm whale on their way to the ground…I don’t think they would use this model to predict outcomes of future bombing runs…do you?

October 24, 2013 10:58 am

Ken Gregory said in his guest essay at WUWT,
“. . . The [Canadian] climate model produces one of the most extreme warming projections of all the 30 models evaluated by the IPCC. . . .”

– – – – – – – – –
Ken Gregory,
Well argued and the graphics are great. Thank you.
Canadian modeling extremists? What comes to mind is this paraphrase of the famous Gilbert and Sullivan opera lyric. And apologies to the memories of Gilbert and Sullivan.

They are the very model of a modern Major-Extremist,
They’ve information adjusted, cherry-picked, and alarmist,

The rest of the G&S lyrics in that passage are also promising for adaptation to the theme of Canadian Modeling Extremists. : )
John

KNR
October 24, 2013 11:00 am

The first rule of climate ‘science’, that if the models and reality differ in value it’s reality which is wrong, takes care of this issue.

October 24, 2013 11:02 am

Does Steven Mosher still believe in the Carbon dioxide nonsense?
He still does not get it that it is a scam?
http://blogs.24.com/henryp/2011/08/11/the-greenhouse-effect-and-the-principle-of-re-radiation-11-aug-2011/

October 24, 2013 11:10 am

You forget the ultimate argument of Andrew Weaver and the Establishment: Even if we are wrong we are right, because reducing fossil-fuel use is “just the right thing to do.” The fact that we require the necessary taxes to guarantee our indexed pensions is irrelevant.

October 24, 2013 11:19 am

There is a reason why we call BC coastal areas the “Left Coast”. Suzuki is a “micro” biologist representative of left coast thinking in so many ways, not the least of which is “do as I say, not as I do.”

Crispin in Waterloo
October 24, 2013 11:19 am

@Mosher
Willis got it in before me. Completely inappropriate comparison – one where there is a set of historical data and where there is not. And the military example is not strictly a model, it is more like a computer game. If the programme is not modeling something real, is it a model (read ‘toy version’)? It runs on a bunch of intelligently chosen assumptions but none make it a model of reality and it doesn’t have to be.

Jaye Bass
October 24, 2013 11:21 am

Sounds to me like Mosher has jumped the shark in the extreme.

October 24, 2013 11:23 am

Mosher,
Another military analogy? What is the fixation with climate science skeptics often using military analogies? Jeesh, it is getting a little creepy of late in the skeptical communities.
Analogy is to science as Walt Disney is to Feynman.
Analogy is poetry at best, at worst it is mere rhetorical device.
John

Mike Smith
October 24, 2013 11:27 am

Yeah, the measured temps in Canada are a tad lower than expected. That’s because the extra heat is hiding in the deep tundra.
Any day now (10/30?) it’s gonna jump out and go boo.

October 24, 2013 11:28 am

Stonehenge says:
October 24, 2013 at 9:34 am
As a BC resident, I can add to C.M. Carmichael’s comment. A former leader in the U. Vic. Climate modelling group, Andrew Weaver, is now the sole Green Party BC provincial member of the legislative assembly and is known locally as a CAGW activist.
——
Out of curiosity I looked up Weaver’s current status as a professor in the School of Earth and Ocean Sciences at the University of Victoria, British Columbia, to see if he had given that up to become an MLA. It occurred to me he might have left after having done such an abysmal job with his science, but no. I wonder how one can simultaneously be a full time professor and be an MLA – which is supposed to be a full time job also, representing his constituents (and not just on climate and environmental issues). Wikipedia and the Green Party page fail to enlighten. The U. Vic. faculty website also has no mention of whether he is on leave.

Political Junkie
October 24, 2013 11:36 am

As noted by some above, Canadian climate change rock stars Suzuki and Weaver (a good American comparison would be Gore and Mann) are having a hell of a week!
The SUN TV is really skewering Suzuki for his gargantuan carbon footprint, four homes, five kids and demonstrated spectacular ignorance of climate change science. On Australian TV this self-promoting “guru” did not recognize even one of these acronyms – NASA GISS, UAH, RSS or HADCrut! Really! Not one.
The current article of course is about Weaver’s work – these models were built under his guidance at the University of Victoria.

Solomon Green
October 24, 2013 11:37 am

In his example of modelling the number of bombs necessary to put a runway out of action Steven Mosher uses historic data which tells him that his model always underestimated the number of bombs required and hence, as a planner he would invariably allow for an extra bomb.
Taking his analogy into climate forecasting, historic data has revealed that all the current and past climate models, when back tested, have overestimated temperature increases by a significant factor. Therefore, in order to allow for this overestimate temperature bias, Mr. Mosher should always reduce the temperature scenarios obtained from these models by a significant factor should he wish to use them for making long term predictions. To use his analogy, instead of giving him a “floor” they give him a “ceiling”.
Reverting to his bomb analogy he says “To be safe I’m not going to count on 2 bombs doing the job. I’ll need to plan for 3. meanwhile the modelling guys continue to refine their model”. If they are any good the modelling guys will continue to refine their model until it can correctly hindcast.
Unfortunately I have seen no sign that climate modellers are refining their models to provide accurate hindcasts. Rather they appear to find it easier to refine, or filter, the historic data in order to match their models.

Crispin in Waterloo
October 24, 2013 11:41 am


“The individual runs of the model are produced by slightly changing the initial conditions of several parameters. These tiny changes cause large changes of the annual temperatures between runs due to the chaotic weather processes simulated in the model. The mean of the five runs cancels out most of the weather noise because these short term temperature changes are random.”
I have a problem with the idea that these temperature changes are ‘random’. Random means assuming there are causeless effects. Weather prediction is definitely based on deterministic outcomes from an initial state. The better described the initial state, the more accurate the prediction and the farther into the future the prediction can be made with meaningful accuracy.
Because the weather systems are chaotic (which many people confuse with ‘random’) it is very difficult to predict future temperatures for more than a few days. The fact that the model runs are clustered along a line does not mean they cancel each other out and produce a reasonably central trend. It means the model is simplistic; simple, if you will. Chaotic means the result is highly dependent on the initial conditions, not that the output is not deterministic and not that it includes randomness. Their model outputs are not chaotic enough.
Chaotic systems look random to the untrained eye, but that is just ignorance. If the conditions that initiate the deterministic calculations are known precisely, it no longer looks random, it is just highly sensitive to those initial conditions, right down to 12 digit precision rounding errors.
The similar tracks for many years of the model runs indicate that the programme is not set to the level of chaos (internal sensitivity) that real climate systems have. During at least some runs, the current historical temperature trend should have been produced. It is not in evidence at all! Were the model to have included sensitivities sufficient to have produced something similar to actual temperatures, other runs would be as wildly different in other directions. That would show the true nature of the weather system and its inherent unpredictability.
This can be countered with the observation that if enough initial parameters and their influences were known, the result would consistently reproduce actual temperatures because it would consider solar and magnetic and GCR influences properly. Fair enough, but looking at the cluster of results being so consistent and consistently wrong, they are either making huge mistakes about what influences the climate, or huge misallocation of forcing effects, or huge simplifications that prevent the model output from occasionally reproducing reality.
Whatever the defect(s), your conclusion would be the same of course: any models producing consistently incorrect predictions are not considering all influences, or are not considering them correctly or in proportion to their actual impact.
Thanks for the effort made.

Jquip
October 24, 2013 11:42 am

John Whitman: “What is the fixation with climate science skeptics often using military analogies?”
Skeptics tend to be capitalists => Capitalists tend to be conservative => Conservatives tend to be pro-military => The Military uses models *even worse than* Climatology => Ergo, Skeptics are pro-‘worse than’-models => Therefore, shut up.
Quod erat Climatum.

October 24, 2013 11:47 am

@Steven Mosher says: October 24, 2013 at 8:40 am
=================================================
All very well, Mr. Mosher, but the models you reference were all being used for a clear purpose. Climate models, however, are not, as the IPCC has so clearly stated.
http://www.cfact.org/2013/02/06/global-warming-was-never-about-climate-change/
IPCC official Ottmar Edenhofer, speaking in November 2010, advised that: “…one has to free oneself from the illusion that international climate policy is environmental policy. Instead, climate change policy is about how we redistribute de facto the world’s wealth…”
You are not comparing like with like. The Canadian modellers may think that they are pursuing science, but the reality is is that they are just another group of useful idiots.
More from the above link…
Opening remarks offered by Maurice Strong, who organized the first U.N. Earth Climate Summit (1992) in Rio de Janeiro, Brazil, revealed the real goal: “We may get to the point where the only way of saving the world will be for industrialized civilization to collapse. Isn’t it our responsibility to bring this about?”
Former U.S. Senator Timothy Wirth (D-CO), then representing the Clinton-Gore administration as U.S Undersecretary of State for global issues, addressing the same Rio Climate Summit audience, agreed: “We have got to ride the global warming issue. Even if the theory of global warming is wrong, we will be doing the right thing in terms of economic policy and environmental policy.” (Wirth now heads the UN Foundation which lobbies for hundreds of billions of U.S. taxpayer dollars to help underdeveloped countries fight climate change.)
Also speaking at the Rio conference, Deputy Assistant of State Richard Benedick, who then headed the policy divisions of the U.S. State Department said: “A global warming treaty [Kyoto] must be implemented even if there is no scientific evidence to back the [enhanced] greenhouse effect.”
In 1998, former Canadian Minister of the Environment Christine Stewart told editors and reporters of the Calgary Herald: “No matter if the science of global warming is all phony…climate change [provides] the greatest opportunity to bring about justice and equality in the world.”
In 1996, former Soviet Union President Mikhail Gorbachev emphasized the importance of using climate alarmism to advance socialist Marxist objectives: “The threat of environmental crisis will be the international disaster key to unlock the New World Order.”

C.M. Carmichael
October 24, 2013 11:57 am

If Weaver isn’t visible or audible in Lotusville ( Victoria ), find out where the most fashionable activists are being arrested for anti-pipeline hijinx and I think you will find Dr. Weaver.

October 24, 2013 12:01 pm

This is a good start. Now repeat this analysis for ALL of the models that contribute to CMIP5, one at a time. Note well that the treatment of the mean in the graph above IS a legitimate usage of statistics as the five model runs are drawn from an actual statistical ensemble of outcomes of THIS model given a Monte Carlo (random) perturbation of the initial conditions within the general range of the expected errors in the inputs.
One thing that they (sadly) did not do is a formal computation of sigma (from the data) relative to the mean that would permit us to apply an actual hypothesis test to the model result with the null hypothesis “this model is correct”. If the model is correct, then the probability of getting “reality” given the model is — eyeballing the data only, sadly, easily less than 0.01, and I wouldn’t be surprised if the failure is 3\sigma by the end, less than 0.001. We would be justified in rejecting this model as it isn’t even close to reality — it fails well beyond the usual 95/5% confidence level (which I don’t think much of anyway) — out there at the 99% or 99.9% level. If the model is correct, we are forced to conclude that the current neutral temperature behavior that has persisted for the last 15+ years is literally a freak of nature, so unlikely that it would have only occurred in 1 future time evolution in a 1000 started from the approximately correct initial conditions.
Of course this is almost certainly untrue. A much more sensible thing to do is invert the order of importance — assume that nature did what nature is most likely to do, so that the systematic deviation from this in even a small ensemble of samples is rather unlikely given a correct model.
The big question then is, why are the results from this model included in the spaghetti graph of figure 1.4 in the SPM or anywhere else in the report unless it were to point out its failure? All it does is shift the “mean model performance” up (as if this has some predictive value when it is in fact meaningless) and define an improbably high upper bound for the range of possible future temperatures.
So the correct thing to do is not only repeat the analysis above for every model in CMIP5 or used in AR5, but then to reject the models that fail an elementary hypothesis test, allowing for the fact that in 30 or so tries the criteria for failure needs to be a lot more stringent, and re-publish all of the figures and numbers with the failed models removed.
This would almost instantly remove all cause for alarm — completely eliminate any possibility of a “catastrophe”, for starters, for example — and replace it with a healthy “concern” that global warming is at least moderately likely to continue erratically for the rest of the century, if the climate does not perversely decide to cool instead in spite of increased CO_2. It would also force climate scientists to look carefully at the features of the most successful models (which are obviously going to be the ones predicting the least warming) to begin to get an empirical idea of how much climate sensitivity has been overestimated. Yes, even AR5 is dropping climate sensitivity, but not nearly enough. Indeed, it is hard to see what would be enough. It’s currently difficult to positively ascertain whether or not net feedback from increased CO_2 is zero or even negative, so that total climate sensitivity could be close to zero and still be consistent with the data.
rgb

Crispin in Waterloo
October 24, 2013 12:17 pm

Well said Robert. It is not as if there is only one model to choose from. The problem has been there was only one alarming ECS value. Now that it is going the way of the dodo, we are left with the un-alarming magnitude of lesser fowls.

CaligulaJones
October 24, 2013 12:18 pm

And to think I just re-read this from John Brignall’s “Numberwatch”:
The law of computer models
The results from computer models tend towards the desires and expectations of the modellers.
Corollary
The larger the model, the closer the convergence.

PeterB in Indianapolis
October 24, 2013 12:18 pm

Leave it to Mosher to basically make the comment (to boil it down to brass tacks) “A crappy model gives more useful information than no model”.
As the author of this piece has shown, “anti-information” is far worse than no information, so in THIS PARTICULAR CASE, no information would, in fact, be better.
Certainly, in Mosher’s example of military planning, there are indeed cases where hindcasting the model would make no sense whatsoever, but we aren’t talking military planning and strategy here, we are talking climate, and we have a pretty good idea of what the climate has been for the past 150 years, and we know that it RELATES TO how the climate will be 150 years from now.
I am willing to buy the idea that aerial warfare 150 years ago looks nothing like aerial warfare now, which also looks nothing like aerial warfare 150 years from now. I am NOT willing to buy the idea that climate 150 years ago, climate now, and climate 150 years from now bear no relationship to each other.

October 24, 2013 12:24 pm

Jquip on October 24, 2013 at 11:42 am said,

John Whitman said: “What is the fixation with climate science skeptics often using military analogies?”

Skeptics tend to be capitalists => Capitalists tend to be conservative => Conservatives tend to be pro-military => The Military uses models *even worse than* Climatology => Ergo, Skeptics are pro-’worse than’-models => Therefore, shut up.
Quod erat Climatum.

– – – – – – – –
Jquip,
Hey, nice to get a comment.
Did your parody evolve from a paraphrase of a Lewandowsky & Cook pseudo-science paper? It seems similar to their intellectual toxic waste.
Actually, that was pretty well done. I enjoyed it. Thanks.
John

TomRude
October 24, 2013 12:28 pm

Guess why the Canadian Model star scientist has turned politician and op-ed writer in the Gleube & Mail?

RokShox
October 24, 2013 12:31 pm

“The climate model assumes that water vapor, the most important greenhouse gas, would increase in the upper atmosphere in response to the small warming effect from CO2 emissions.”
Why is such an assumption made at all? Why isn’t the water vapor content a quantity that is implicitly solved for in the governing Navier-Stokes equations?
Do they use a constitutive relationship which ties CO2 to water vapor? That would just be a giant fudge knob!

KTWO
October 24, 2013 12:34 pm

I suggest studying how to model using the Battle of Britain. There is plenty of historical data about the designs of British and German aircraft, the constraints, and how engagements worked out. Game makers have. And done it well.
Models of F-22 combat that had not yet happened did not show what would happen. But they were useful; they framed the arguments, quashed some speculations, refuted some scenarios, and guided analysts.
The reader has already spotted one big difference between modeling aircraft and modeling climate. The numbers.
The speeds of the planes, their range, rate of climb, etc. are known. But we don’t know or agree about the numbers for climate; even the temperatures for 1935 in Oklahoma seem to be subject to change.

Box of Rocks
October 24, 2013 12:41 pm

KTWO
How does one model in the human element of aerial combat? Factors like eyesight and reaction times?
Just b/c a plane can do x mechanically does not mean that x gets done, b/c of human factors.

October 24, 2013 12:44 pm

Every time a scientist quotes equations in response to doubts, as if a mechanical isolated reaction represents the entire atmosphere with all the thousands of other influences, they are closing their eyes. You don’t need equations, theories, absorption spectra etc. to read a damned thermometer. If it is not rising, then adjust your bloody theories; don’t claim the heat is hiding somewhere but will turn up if we wait long enough, and don’t keep spending even more to stop something which. has. already. stopped. (if it ever really even started, that is).

Go Canucks!!
October 24, 2013 12:57 pm

Dr. Weaver, along with his computer models, consulted with the provincial and municipal governments and advised them that SLR will be around 1 m.
The engineering study that was based on the computer models recommends that the Richmond (greater Vancouver) dykes be upgraded to the tune of $100,000,000.00.
Real dollars for a problem that may not exist.

Peter Stroud
October 24, 2013 12:58 pm

The UK Prime Minister has just announced that the government will rein back on green energy taxes. He should use the conclusions from this paper, to counter the shrill arguments of his Liberal Democrat partners. But I doubt that he will.

October 24, 2013 1:11 pm

This model does particularly badly over the last few decades. Others do better. You can see a comparison of all runs of all CMIP5 models to global land/ocean temps (NCDC, GISTemp, HadCRUT4) here: http://i81.photobucket.com/albums/j237/hausfath/globalmodelobscomps1880-2100_zps38674af4.png
One thing to note is that surface temperatures are not the only metric of interest; precipitation is also useful to evaluate. That said, there does need to be more critical evaluation of models based on the physics they include and the accuracy of their hindcasts/forecasts. Simply lumping all models together into a multi-model mean in the name of “model democracy” is shortsighted.
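As a toy illustration of the alternative, one could weight each model by hindcast skill instead of averaging equally (the model names, trend values and the inverse-error scheme below are my own illustration, not anything AR5 does):

# Toy inverse-error weighting of hindcast trends (degC/decade); hypothetical numbers.
models = {"model_A": 0.34, "model_B": 0.21, "model_C": 0.17}
obs = 0.15

# Weight = 1 / squared hindcast error (small epsilon avoids division by zero).
weights = {name: 1.0 / ((t - obs) ** 2 + 1e-9) for name, t in models.items()}
total = sum(weights.values())
skill_weighted_mean = sum(weights[name] * models[name] for name in models) / total

print(f"democratic mean:     {sum(models.values()) / len(models):.3f}")
print(f"skill-weighted mean: {skill_weighted_mean:.3f}")  # pulled toward the better models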

October 24, 2013 1:17 pm

What’s all the fuss about?
It is just another climate computer model with pre-determined results, as required by the Canadian climate establishment for the continuation of funding.
The real question is: Are there any climate models without this bias?

October 24, 2013 1:21 pm

RokShox says: October 24, 2013 at 12:31 pm

“The climate model assumes that water vapor, the most important greenhouse gas, would increase in the upper atmosphere in response to the small warming effect from CO2 emissions.”
Why is such an assumption made at all? Why isn’t the water vapor content a quantity that is implicitly solved for in the governing Navier-Stokes equations?

The amount of water vapor in the lower atmosphere is determined by precipitation systems and the Clausius–Clapeyron relation. Air in clouds and immediately next to the ocean surface is at or near 100% relative humidity, so as temperatures increase, the absolute humidity there also increases. The average absolute humidity between the clouds and the ocean surface also increases with increasing temperatures.
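The temperature scaling involved is easy to check with the standard Magnus approximation to the Clausius–Clapeyron relation (a textbook formula, sketched here only to show the roughly 7%-per-degree rise in saturation vapour pressure):

import math

def e_sat(t_celsius):
    # Magnus approximation: saturation vapour pressure over water, in hPa.
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

# At constant 100% relative humidity, absolute water vapour content scales
# with e_sat, so each degree of warming adds roughly 6-7%.
print(round(e_sat(14.0), 2), "hPa at 14 C")
print(round(e_sat(15.0), 2), "hPa at 15 C")
print("fractional increase per degree:", round(e_sat(15.0) / e_sat(14.0) - 1, 3))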
The humidity in the upper atmosphere, above most clouds, is controlled by other processes that are not resolved by climate models. Susan Solomon writes about the lower stratosphere, and it also applies to the upper troposphere:

Current global climate models suggest that the water vapor feedback to global warming due to carbon dioxide increases is weak but these models do not fully resolve the tropopause or the cold point, nor do they completely represent the QBO [Quasi Biennial Oscillation], deep convective transport and its linkages to SSTs, or the impact of aerosol heating on water input to the stratosphere.

Solomon, a lead IPCC author, is saying the models are crap. The model relies on various assumed parameters to control upper atmosphere water vapor, rather than directly assuming a water vapor response there.
Upper atmosphere water vapor is important because, as reported in a previous guest post,
http://wattsupwiththat.com/2013/03/06/nasa-satellite-data-shows-a-decline-in-water-vapor/
“A water vapor change in the 300-200 mb layer has 81 times the effect on OLR than the same change in the 1013-850 mb near-surface layer,” as displayed in my graph here:
http://www.friendsofscience.org/assets/documents/FOS%20Essay/OLR_PWV_bar.jpg

TomRude
October 24, 2013 1:30 pm

Unprecedented cherry picking from GRL…
http://onlinelibrary.wiley.com/doi/10.1002/2013GL057188/abstract
Abstract
[1] Arctic air temperatures have increased in recent decades, along with documented reductions in sea ice, glacier size, and snowcover. However, the extent to which recent Arctic warming has been anomalous with respect to long-term natural climate variability remains uncertain. Here we use 145 radiocarbon dates on rooted tundra plants revealed by receding cold-based ice caps in the Eastern Canadian Arctic to show that 5000 years of regional summertime cooling has been reversed, with average summer temperatures of the last ~100 years now higher than during any century in more than 44,000 years, including peak warmth of the early Holocene when high latitude summer insolation was 9% greater than present. Reconstructed changes in snow line elevation suggest that summers cooled ~2.7 °C over the past 5000 years, approximately twice the response predicted by CMIP5 climate models. Our results indicate that anthropogenic increases in greenhouse gases have led to unprecedented regional warmth.
==
Another hockey stick cut and paste?

October 24, 2013 1:36 pm

Go Canucks!! says: October 24, 2013 at 12:57 pm

Dr. Weaver, along with his computer models, … recommends that the Richmond (greater Vancouver) dykes must be upgraded to the tune of, $100,000,000.00.
Real dollars for a problem that may not exist.

The staff of the Canadian Centre for Climate Modelling and Analysis is listed here:
http://www.ec.gc.ca/ccmac-cccma/default.asp?lang=En&n=EF40AED9-1
Dr. Weaver is not listed. Did he recently leave the centre?
The citizens of British Columbia are subject to a high carbon tax to avoid the dangers of sea level rise. Here is my graph of sea level rise on the B.C. coast.
http://www.friendsofscience.org/assets/documents/FOS%20Essay/Sea_Level_Canada_West.jpg
The graph shows the average monthly sea level of 10 tide gauge stations on the B.C. coast. The black line is the linear best fit to the data. Over the period 1973 to 2011 the average sea level has declined at 0.5 mm/year.

Editor
October 24, 2013 1:38 pm

Steven Mosher – You say your car’s miles-to-empty model is always wrong but useful. That’s because the amount by which it is wrong is small compared with the total and with the accuracy that you actually need, i.e., it is “information” not “anti-information”. Similarly, my watch is always wrong, but it is useful, as is our local weather forecast. In your “bomb” example, you don’t use the TacBrawler (“TB”) model, you use TB+1. In all these examples, there actually has been historical data telling us how useful they are. So, instead of the climate model in question, CanESM, should we use, say, CanESM-0.2pd or CanESM/3? (Doug Proctor and Solomon Green make similar suggestions.) The timescales shown are probably too short to be sure, but eyeballing the charts suggests that the model will be pretty useless regardless of how it is used. For climate models, some sort of match to periods such as 1940-70 and 2000+ is needed. Because today’s pattern seems to be a repeat of warming periods in the further past, such as the MWP, Roman and Minoan, the models also need to be able to reproduce something of those periods, and the cooler periods between them, too. I am pretty sure that not one single climate model in IPCC use today can do that.

October 24, 2013 1:39 pm

RokShox says:
October 24, 2013 at 12:31 pm
“The climate model assumes that water vapor, the most important greenhouse gas, would increase in the upper atmosphere in response to the small warming effect from CO2 emissions.”
___________
Not so — according to Andrew Weaver, “Every one knows – that water vapour in the atmosphere will condense, form clouds and will make rain – thus it is treated as a CONSTANT in (our) climate models.” This statement was given following his presentation at UVic to Chemistry faculty, students and members of the Chemical Institute of Canada. It was in response to my question, “… why did you only talk about CO2 and not once mention water vapour?” I have to say that his flippant retort was unprofessional – to say the least.

RokShox
October 24, 2013 1:42 pm

Thanks Ken.
“The model assumes various parameters to control upper atmosphere water vapor”
Amazing that such a key physical effect – water vapor feedback – is not treated as an implicit consequence of the underlying physics, but is treated as a parameterization. Parameters = knobs.

Pamela Gray
October 24, 2013 1:44 pm

Am I just dense? It appears to me that the tweaking done to tune model output to observations clearly demonstrates that the wrong dials were tweaked, since the “don’t touch the dials anymore” projection phase no longer matches the current observations. This seems a simple conclusion, readily made, yet no climate researcher has come forth to state the obvious. So either I’m dense, or the researchers are. It can’t be both.

Alan Robertson
October 24, 2013 1:56 pm

Ken Gregory says:
October 24, 2013 at 1:36 pm
The citizens of British Columbia are subject to a high carbon tax to avoid the dangers of sea level rise. Here is my graph of sea level rise on the B.C. coast.
http://www.friendsofscience.org/assets/documents/FOS%20Essay/Sea_Level_Canada_West.jpg
The graph shows the average monthly sea level of 10 tide gauge stations on the B.C. coast.
Over the period 1973 to 2011 the average sea level has declined at 0.5 mm/year.
_____________________
The high tax, she works, eh?

Richards in Vancouver
October 24, 2013 1:59 pm

Ah, Mosh, reality strikes!
During the Falklands war the Brits wanted to close Stanley airport. If memory serves, they used five Vulcan bombers, two of them for radar suppression and three to deliver old WW2 iron bombs to crater the runway.
The bombs went off just fine. But only one hit the runway. One crater. Quickly repaired, too. All the rest were mis-aimed, digging beautiful craters off to the side.
That’s reality, Mosh. Put it in your model and stick it….well, never mind.
BTW: That was an extraordinary operation, well worth googling. It had everything going for it: imagination, daring, skill, great planning. Problem is, it failed.

Pamela Gray
October 24, 2013 2:12 pm

A miss is as good as a mile in climate science. Billions are being spent on a 1 degree change in average global temperature. Maybe even trillions. Mosh, you must admit that at the very minimum that same money COULD have been spent on better health care and cleaner water in most if not all places on Earth that it was needed. And the improvement could have been sustained.

October 24, 2013 2:15 pm

Pamela Gray says: October 24, 2013 at 1:44 pm

So either I’m dense, or the researchers are. It can’t be both.

Yup! I have always liked reading your many comments on the site. You are not dense. It must be the modelers!

Jquip
October 24, 2013 2:42 pm

Pamela Gray: “So either I’m dense, or the researchers are. It can’t be both.”
You have an implicit commitment to the idea that science has a connection to empiricism. Drop that idea and do the same reasoning.
“… that same money COULD have been spent on better health care and cleaner water in most if not all places on Earth that it was needed.”
Sure, but that would result in the exponential growth of exhalers. As this would cause the planet to combust, the proper action is no humanitarian action.

H.R.
October 24, 2013 2:48 pm

“The climate model produces one of the most extreme warming projections of all the 30 models evaluated by the IPCC.”
It’s cold up there. My guess is that most sensible Canadians have had their fingers crossed hoping the model is right. Looks like they’re out of luck, though.

clipe
October 24, 2013 2:53 pm

Yet last month, twelve Canadian climate scientists, economists and policy experts condemned the government in their alarmist open letter to Natural Resources Minister Joe Oliver writing that, if Canada wants to avoid dangerous climate change it
“will require significantly reducing our reliance on fossil fuels and making a transition to cleaner energy.”
Many of the signatories to the letter worked with the Canadian and other governments to create the IPCC in the first place and for years have boosted the incorrect claim that we are facing “dangerous climate change”. Of course, we always experience climate change, but now, because of the deceptions, these scientists urge preparation for warming even though we more probably face cooling.

http://drtimball.com/2013/canada-leads-the-world-in-climate-deception/

October 24, 2013 3:31 pm

H.R. says: October 24, 2013 at 2:48 pm

It’s cold up there. My guess is that most sensible Canadians have had their fingers crossed hoping the model is right.

I can find very few ‘sensible Canadians’ regarding climate change. The vast majority supports carbon taxes, windmills and policies that inhibit wealth creation from petroleum development. In Alberta, all political parties support CO2 reduction schemes. Our oil companies give huge financial donations to green NGOs that want to destroy their business, but none to us.

Robert of Ottawa
October 24, 2013 3:49 pm

As a Canadian, I am proud of this failure!

Pamela Gray
October 24, 2013 3:50 pm

Oil companies want the green contracts. Then they can call themselves Energy companies.

Robert of Ottawa
October 24, 2013 3:53 pm

clipe says October 24, 2013 at 2:53 pm
Yet last month, twelve Canadian climate scientists, economists and policy experts condemned the government in their alarmist open letter to Natural Resources Minister Joe Oliver writing that, if Canada wants to avoid dangerous climate change it
“will require significantly reducing our reliance on fossil fuels and making a transition to cleaner energy.”

Fortunately Canada’s PM is a very level-headed fellow:
He writes that it’s based on “tentative and contradictory scientific evidence” and it focuses on carbon dioxide, which is “essential to life.”
He says Kyoto requires that Canada make significant cuts in emissions, while countries like Russia, India and China face less of a burden.
Under Kyoto, Canada was required to reduce emissions by six per cent by 2012, while economies in transition, like Russia, were allowed to choose different base years. As developing nations, China and India were exempted from binding targets for the first round of reductions.

“Kyoto is essentially a socialist scheme to suck money out of wealth-producing nations,” Harper’s letter reads.

http://www.cbc.ca/news/canada/harper-s-letter-dismisses-kyoto-as-socialist-scheme-1.693166

Rich Lambert
October 24, 2013 4:18 pm

Many have touched on this: contrary to the best efforts to show that carbon dioxide is a climate driver, the models show that it isn’t.

clipe
October 24, 2013 4:20 pm

Ouch!

They display their failures on maps. Pick any map or period and it shows how a coin toss would achieve better or at least comparable results. Here is their caption for the maps.
” The upper panel shows the seasonal air temperature or precipitation anomaly forecasts. The forecast are presented in 3 categories: below normal, near normal and above normal. The lower panel illustrates the skill (percent correct) associated to the forecast.”
The maps are of temperature and precipitation for 12, 6 and 1-3 months.
http://drtimball.com/2013/wrong-prediction-wrong-science-unless-its-government-climate-science/
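For reference, the coin-toss baseline in that quote is easy to quantify: with three equally likely forecast categories, random guessing is “correct” about a third of the time, so any published percent-correct skill should be read against that floor. A quick simulation (illustrative only):

import random

random.seed(1)
categories = ("below normal", "near normal", "above normal")
trials = 100_000

# Guess at random against outcomes that are themselves random and equally
# likely; the expected "percent correct" is 1/3.
hits = sum(random.choice(categories) == random.choice(categories)
           for _ in range(trials))
print(f"random-guess percent correct: {100 * hits / trials:.1f}%")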

Kev-in-Uk
October 24, 2013 4:28 pm

Pamela Gray says:
October 24, 2013 at 3:50 pm
Oil companies want the green contracts. Then they can call themselves Energy companies
As an ex-oilfield geologist from many years ago – even way back then, if I’ve said this once, I’ve said it a million times – the oil companies are absolutely loving the green agenda – yes, even the anti-arctic drilling, etc. – ALL of this makes their ‘product’ more expensive and more wanted/needed by the oil-hungry world. I wouldn’t be surprised to find a large percentage of NGO ‘green’ funding coming from oil companies via various anonymous donations. A single cent on a barrel of oil means companies are worth millions more – billions, if they have very large ‘reserves’. To paraphrase another mega-giant corporation (McD’s): ‘They are simply loving it!’

DirkH
October 24, 2013 4:34 pm

Zeke Hausfather says:
October 24, 2013 at 1:11 pm
“This model does particularly bad over the last few decades. Others do better.”
That’s GREAT, Zeke. You know that in hindsight? FANTASTIC! That’s like, hmm, predicting the past.
Whatcha think are the odds that the models that did better CONTINUE to do better?

Alan Robertson
October 24, 2013 4:39 pm

Robert of Ottawa says:
October 24, 2013 at 3:49 pm
As a Canadian, I am proud of this failure!
__________________
Do y’all pronounce it- hoseur?

October 24, 2013 4:39 pm

It should be no surprise the U-Vic model is higher than reality. U-Vic is on Vancouver Island, the home of BC Bud, our most famous “green” product. Smoke ’em if you got ’em.
Green party you say? BC has the original green party, still going strong:
http://www.marijuanaparty.ca/index.en.php3

DirkH
October 24, 2013 4:47 pm

Do the different models constitute different CO2AGW theories, Zeke? In that case, which one is the one that the IPCC stands for when claiming their 95% certainty?
Do the different CO2AGW theories have names? Say, Gavin’s Third Theory? Hansen’s General Warming Theory?
Are there papers about the different theories? Or are they just random permutations of the 300 parameters, whipped up on the whim of a postdoc? Rhetorical question.

October 24, 2013 4:48 pm

if these model runs were omitted from the AR5, how would the multi-model mean change? Presumably it would lie closer to reality….?

October 24, 2013 4:49 pm

Kev-in-Uk says:
October 24, 2013 at 4:28 pm
the oil companies are absolutely loving the green agenda
=============
the EPA has pretty much outlawed coal, the only competition to oil and gas, giving the petroleum industry a monopoly going forward. No matter how expensive oil gets or how cheap coal gets, you won’t have a choice. Cheap gas is simply the teaser, so you won’t complain about coal getting shut out. Once coal is gone, look for gas to skyrocket to match oil.

Alan Robertson
October 24, 2013 5:14 pm

man in a barrel says:
October 24, 2013 at 4:48 pm
if these model runs were omitted from the AR5, how would the multi-model mean change? Presumably it would lie closer to reality….?
______________________
Closer to reality than what? Why would you assume that? Meaningless data points remain so.

JimF
October 24, 2013 5:55 pm

Chirp, chirp, sing the crickets waiting for Mosher’s reply to the complete destruction of his latest silly foray into the global warming controversy. Some Canadian here ought to print this out and send it to all Canadian newspapers, and to their elected officials. Can anyone fail to be thunderstruck at the continuing stupidity of whatever taxpayer-supported group is publishing this dreck?

mark fraser
October 24, 2013 6:02 pm

Already sent it to Bill Good, the panting fanboy of Andrew Weaver, recently departed from UVic for a term as GREEN MLA in the provincial gummint. Not that it’ll make any difference. He’ll be on the phone getting a rebuttal.

High Treason
October 24, 2013 6:06 pm

How gullible are people to believe the correlation between global warming and CO2? The models are so far out of step with reality, and becoming more so, yet the certainty level has gone up. Anyone who cannot smell a rat here is a fool.

TomRude
October 24, 2013 6:10 pm

@Go Canucks,
UBC, SFU and UVic activist scientists are called upon by their pals green councillors or bureaucrats to come over and deliver an alarmist message to local governments who are obviously clueless and ready to buy anything.
Many of these University departments are bankrolled by Tides and co. I would also bet that some in the construction industry may be involved in pushing for this potential $ windfall, ready to build anything as long as someone pays. It would be very interesting to follow closely contract attributions both off and on campuses and how much a discount may be offered to Universities new buildings in relation with the number of municipal contracts linked to “climate change” mitigation…
Local newspapers that are surviving mostly on real estate ads are very willing propagandists of the worst kind: http://www.nsnews.com/north-vancouver-city-plans-for-climate-change-floods-1.668838
“Although the city approved an ongoing strategy to reduce greenhouse gas emissions in 2005, the focus is now on dealing with existing climate change, said Caroline Jackson, section manager of environmental sustainability at the City of North Vancouver.” And “According to provincial guidelines, the city should also be planning for a halfmetre rise in the sea level by 2050, which would flood coastal and creek areas, she said.” And “To adapt to climate change, the city needs to improve its emergency response system, upgrade its sewer and drainage systems and ensure that vulnerable populations are properly planned for, said Deborah Harford, a panel speaker and executive director of the SFU Climate Change Team.”
Perfect loop closed, so taxpayers are on the hook… but Harford or Jackson will be long gone when their alarmist crap is debunked… Funny how these SFU goons never show Mayors and Councils the graph Ken Gregory posted: http://www.friendsofscience.org/assets/documents/FOS%20Essay/Sea_Level_Canada_West.jpg

October 24, 2013 6:15 pm

As a Canadian who lives near Victoria and who has lived in Victoria for many years, I take a little extra joy when I say, “Ha ha!”

Political Junkie
October 24, 2013 6:18 pm

JimF
Your assumption that most Canadian newspapers would care is incorrect.
CBC, CTV, the Globe and the Star will not touch this. Taking them to the Press Council doesn’t work – been there, done that. Although I can’t prove it, my assumption is that they have a policy much like the one revealed in the BBC’s 28gate scandal.
On the other hand, SUN TV and the Post will cover the story.

Jquip
October 24, 2013 6:25 pm

Junkie: “Your assumption that most Canadian newspapers would care is incorrect.”
Not a clue about Canadian speech things. Am I correct to assume that the outlets are required by law to report facts? And that government functionaries decide what facts are?

October 24, 2013 6:35 pm

Robert Brown says:
October 24, 2013 at 12:01 pm
This is a good start. Now repeat this analysis for ALL of the models that contribute to CMIP5, one at a time.
**************
Actually this was done in my latest paper:
Scafetta, N. 2013. Discussion on climate oscillations: CMIP5 general circulation models versus a semi-empirical harmonic model based on astronomical cycles. Earth-Science Reviews 126, 321-357.
http://www.sciencedirect.com/science/article/pii/S0012825213001402
Here figures 4-11 contain 48 panels where all 48 CMIP5 GCMs and their 162 simulations were extensively shown and analyzed. In particular the CanESM2 model simulations are shown in Figure 4F and later more extensively in Figure 17.
All CMIP5 GCMs present more or less the same problems. The disparity vs. the data since 2000 is just one of the aspects. The CanESM2 model performs better than the average. However, all models miss the complex harmonic component of the climate by a significant factor.

Political Junkie
October 24, 2013 7:09 pm

Jquip,
There is no law that controls the veracity of what newspapers can report. On the topic of climate change Canadian papers regularly print absolute drivel. When challenged, they don’t have to substantiate their sources.
Theoretically, the various provincial press councils act as impartial arbiters of the “truth.”
However, they are voluntary industry organizations paid for by the newspapers.
When one tries to challenge an untruthful scare story through the press council they simply say that they don’t have the resources to arbitrate a “scientific” debate. Their ridiculous definition of “scientific” makes it impossible for them to compare IPCC forecasts to what actually happened, for example.
Therefore, the demonstrably false “it’s worse than we thought” story stands.

r murphy
October 24, 2013 7:14 pm

On behalf of all Canadians I sincerely apologize. Be assured that the influence of Weaver, Suzuki, and perhaps our biggest embarrassment, Greenfleece, is on the wane. There is still a lot of ‘green’ up here but not the type that makes one go insane.

Jquip
October 24, 2013 7:20 pm

Junkie: “When one tries to challenge an untruthful scare story through the press council they simply say that they don’t have the resources to arbitrate a “scientific” debate.”
So just like the US then: “If it bleeds, it leads.” Thanks for the response.

r murphy
October 24, 2013 7:30 pm

PS: at least Mosh isn’t Canadian. Not sure what model produced him, but obviously it could use a little tweaking.

Reg Nelson
October 24, 2013 7:34 pm

A broken clock is correct twice a day. A climate model is not — not twice a day, not twice a decade, not twice an ever. And there is a reason for that: these models aren’t meant to model reality. These models are meant to model an alternative reality — a scary, doomsday, Chicken Little, imaginary reality. The sole purpose is to fuel political propaganda.
But the good news is the tide is turning, as evidenced in Australia and Germany. Taxpayers are no longer happy paying for billion dollar “low/no utility” broken climate clocks.

Paul Vaughan
October 24, 2013 7:39 pm

“3550%”

TimO
October 24, 2013 7:46 pm

Great thing is that when they are that far off, no one can say their models just need a ‘little tweak’…

Eric Gisin
October 24, 2013 7:57 pm

Why didn’t you provide any links to the CCCma web site? The latest global model there is CanESM2, not CanESM: http://www.cccma.ec.gc.ca/diagnostics/cgcm4/cgcm4.shtml
There is a web page at CCCma listing dozens of people claimed to be Nobel 2007 winners: http://www.ec.gc.ca/sc-cs/default.asp?lang=En&xml=32AAA89D-4BDE-4A76-B1A0-4277E5123F77

October 24, 2013 8:20 pm

Eric Gisin says: October 24, 2013 at 7:57 pm

The latest global model there is CanESM2, not CanESM

The first sentence of the essay identifies the model as CanESM2. Also, see note 1. Yes, I should have put CanESM2 rather than CanESM in the graphs’ titles.

October 24, 2013 9:29 pm

I grew up on the Canadian prairie where there are four distinct seasons. Almost Winter, Winter, Still Winter, and Mosquitoes. Alas my American friends, what you mistake for a poor quality climate model is simply our optimism on display.

jorgekafkazar
October 24, 2013 10:38 pm

Garbage in, garbage out, eh?

GeeJam
October 24, 2013 11:12 pm

Inspired by Doug Proctor’s analogy @ October 24, 2013 at 9:44 am
Doug’s disjointed but profound last paragraph reads . . . .
“. . . . which wouldn’t be bad if we weren’t a) forced to go along, b) forced to pay for the ride, and c) told that those without a clue as to where we are going are going to be holding the steering wheel and aren’t sure there are any brakes in the car.”
Re-written thus . . . .
Using tax-payers money, we have taken delivery of a brand new luxurious coach.
We will be driving the coach on our ‘Magical Green Mystery Tour’ in order to help save the planet.
Everyone is compelled to board our coach – all of you are forced to go along – no ifs or buts.
There is no need to bring your overcoats as we guarantee your journey will get hot.
Although we will be heading in the right direction, we do not know where we will be going – yet.
Everyone will trust our judgement based on a series of wiggly road maps that confirm our route.
You will also need to pay extra for your seats at above inflation rates.
Those gullible enough will have the best seats.
For passengers who bring their own canister of fuel – we will pay you a subsidy.
Those who question whether we are doing the ‘right thing’ will be forced to stand in the aisle.
Although it may sound unrealistic, our target is 12,500 miles without stopping.
And, after all, we are not able to stop due to our coach not having any brakes.
PS. Our driver is registered blind.
It will be a very bumpy ride.

Bob Highland
October 25, 2013 12:01 am

Back in the old days, before the land temperature records were subjected to bouts of data diddling to conceal the awful truth, one could scrutinise the graphs back to 1850 and clearly see the alternate warmer/cooler regimes in roughly 30-year cycles that even the dullest brain could imagine was a manifestation of natural cycles.
At that time, the obvious path for students of the new discipline of climate science would have been to study these cycles with sufficient rigour to ensure that all of the physical principles that underlie them were thoroughly understood as a basis for future weather prediction and modelling.
But the true climate change deniers first had their way, doing what is unforgiveable (or even fraudulent) in real science, by retrospectively changing old data until it served their ends, by portraying the Earth as a place of Gaian perfection with only modest diversions from the supposed “average” temperature. Then, having perverted their discipline so soon after its inception by cleaving to a fixation with CO2 as the bad guy responsible for “unprecedented” late-20th-century warming based on Green activist preoccupations, with its incipient guilt-trips plus some flimsy science based on underworked 19th century radiative speculations, they quickly painted themselves into a corner.
One might think that modelling was the way out, with its inherent flexibility to keep torturing the data and algorithms in different ways with the biggest supercomputers available until they squeal out the desired result, with a QED for both accurate prediction and hindcasting.
That they have ALL failed to do so, in so uniformly spectacular a way, would lead one to suppose that either:
a) They are incompetent modellers, as any accountant/marketing executive/politician/consultant with half a brain can make an Excel spreadsheet prove anything he wants. OR
b) The basic premise of the models, which all place CO2 as the dominant driver of current and future weather patterns, is just plain wrong.
I’m going with b), because there is clearly no lack of imagination in finding new fudge factors to explain the divergence between models and observation. They’ve used aerosols that only operate for selective time periods, “missing” heat that has magically disappeared into the abyssal oceans without showing itself on the way down, melting Arctic ice that makes places colder because it was so hot. You name it, and they’ve already found a way to insult us with its irrelevance. They’ve even claimed lately that “a 16-year pause in warming is not inconsistent with our models.” Oh yeah? Why don’t you show us the results of one of those model runs, then? Is it that they reveal some other inconvenient truths?
In most professions, when you get things wrong by such a large margin you do the decent thing and resign before you get fired. But these guys simply apply for a bigger computer, so that they can get things wrong faster and to an additional three significant digits of precision.

Brian H
October 25, 2013 1:02 am

Since the brief glorious moment in 1998’s El Nino when the models resembled reality, they have been higher than kites. I wonder what digital drugs they’re being fed.

October 25, 2013 1:19 am

Good work as usual from Friends of Science, Ken.
Regarding alleged oil company collaboration in the Global Warming Scam:
I suggest that Shell and BP did collaborate in the CAGW scam from early days, and Exxon did not.
It appears that Exxon later caved in due to green propaganda and intense market pressure, especially in Europe.
Many of the members of Friends of Science are retired oil company scientists.
I am also an old oil man, but although I admire Friends of Science, I am not a member.
I strongly oppose CAGW alarmism and green energy fraud because it is irrational, immoral and destructive to humanity AND the environment.
It is truly regrettable, and even reprehensible that energy companies have capitulated to global warming hysteria and are sponsoring the very people that seek their destruction.
We wrote this in 2002 and have been proven correct to date:
2002
[PEGG, reprinted at their request by several other professional journals, the Globe and Mail and la Presse in translation]
http://www.apegga.org/Members/Publications/peggs/WEB11_02/kyoto_pt.htm
On global warming:
“Climate science does not support the theory of catastrophic human-made global warming – the alleged warming crisis does not exist.”
On green energy:
“The ultimate agenda of pro-Kyoto advocates is to eliminate fossil fuels, but this would result in a catastrophic shortfall in global energy supply – the wasteful, inefficient energy solutions proposed by Kyoto advocates simply cannot replace fossil fuels.”
If the large energy companies want to regain the moral high ground, they should adopt these two statements as their policies on climate and energy.
Failure to do so will perpetuate the status quo – where the big energy companies, who did not originate the global warming scam but acquiesced to it, will unfairly receive most of the blame when it unravels.
And unravel it will, as natural global cooling resumes in the near future, and Excess Winter Mortality figures tragically climb in certain countries – a strong probability, in my opinion.
In our modern complex world, fossil fuel energy is what keeps most of us in Northern climes from freezing and starving to death.
Foolish politicians, particularly in Western Europe, have badly compromised their energy systems with nonsensical grid-connect wind and solar power schemes. This could end badly, and it probably will.
Anyone who does not want to be tarred with the responsibility for this probable imminent increase in human suffering needs to act quickly and decisively.
Regards to all, Allan
[PEGG is what group? Mod]

October 25, 2013 1:47 am

Another of those pesky Climate Models that has not had an Eureka moment. None of them has had such.
And then the IPCC expects an aggregation of incorrect models to magically supply a correct answer.
Must be powerful stuff they’re smoking/snorting.

CodeTech
October 25, 2013 4:47 am

Hang on a second…
Ever since the current Prime Minister took office, the “global warming/climate change” alarm bells have stopped ringing. What “Canada” has indicated is that if the US is on board, we will have to be also. That’s not the same as supporting it. And Alberta has exactly ONE political party, so claiming they’re all “on board” isn’t quite accurate. Hopefully that will be remedied next election anyway, since the “Progressive Conservatives” have completely dropped the “Conservative” part and are now operating wholly as “Progressives”.
Also, I haven’t read all the comments, but Mosher is, as usual, wrong. The F-22 and other flight models are based on over a century of historical data, much of which cost lives to acquire. Claiming that modelling an airframe in various configurations and loadouts performing various maneuvers is even all that difficult is an outright lie. We know how to model physical processes that are understood.
The FACT is that modelling climate will always be voodoo, for the simple fact that we don’t understand even half of the processes involved. This is proven by the abject failure of these and all of the other models. At some point the people doing this stuff will HAVE to come to the realization that their hypothesis about CO2 causing warming has been disproved, utterly. Maybe, when that happens, the honest few can get to work studying CLIMATE instead of voodoo.
CO2 does not drive climate.

Olaf Koenders
October 25, 2013 5:12 am

Thanks, Mike Jonas, for mentioning that models should reliably backcast to the 1940s or earlier, which they can’t and WON’T, because it’s largely impossible and ruins their hyp(e)othesis utterly.
Correct me if I’m wrong but it’s obvious they don’t track PDO, AMO or ENSO very well, if at all. Those 3 are likely predictable and very influential on the global climate, even regionally.
If all they do is give CO2 too much weighting in the code, they’re doomed to fail every time. Solar cycles are predictable on differing timescales, but the number of sunspots in the shorter 9 – 11 year cycle is difficult to predict, as NASA got its first predictions of cycle 24 spectacularly wrong and had to reassess several times.
I’m not saying we should scrap models altogether. With PROPER science, rather than these half-baked political dogmas and the criminal and greedy elements that reside within them as well as the ones they create, some decades in the future will come software that comes close, until Nature decides to throw a curve ball. That’ll always happen.
But making policy decisions using any model before it’s proven at least 95% effective is foolish – and doing so based on the results of CanESM2 would be diabolically retarded.

Olaf Koenders
October 25, 2013 5:19 am

I forgot to say that it seems like the Canadian Grabbermint and EPA got the tool to sell their message. But I would like to see how they’d sell it now that the word’s out on the 17 – 20 year pause.
How would the scamming adjusters at GISS, BoM and NOAA explain their adjustments to data all over the world DURING the pause? Serious jail time would send a clear message to anyone trying to defraud science ever again.

Wayne
October 25, 2013 5:43 am

Mosher: I think you’re confounding two concepts, one of which is worth defending and the other of which is not.
1. A poor result as stepping stone to a good result. It’s true that many of the commentators here would condemn the Wright Brothers’ initial experiments. You’d hear things like “That doesn’t look like a bird!”, “Man has never flown and never will”, and “Look how many times they’ve failed to fly. You’d think they’d get a clue!”.
But the point, of course, is that a poor result can still give some useful information and by examining why the result is poor, you can improve.
2. A poor result is more useful than no result. Thus your bomb crater example.
#2, however, reminds me of college, where I got bored of the unimaginative example problems in Differential Equations (or was it Partial Differential Equations?) and convinced the professor that we should try a real-world example. So we estimated several reasonable values for cooking a potato, worked the equations, and… the result was that it should take 6 hours to bake a potato.
The answer was ridiculous. It wasn’t “better than nothing”. It wasn’t useful as “an upper bound”. It was useless. Somewhere, our reasonable approximations for the conditions of baking a potato were unreasonable. If we’d looked long and hard enough, we might have found out where, and it would have been a valuable lesson.
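That potato estimate is easy to reconstruct: the characteristic heat-conduction time for a sphere scales as R^2/alpha, so modest errors in the assumed radius or diffusivity swing the answer from plausible to absurd (the numbers below are rough textbook figures, not the ones we actually used):

# Characteristic heat-conduction time for a sphere: t ~ R**2 / alpha.
alpha = 1.4e-7  # m^2/s, approximate thermal diffusivity of potato

for radius_cm in (3.0, 4.0, 5.0):  # assumed potato "radius"
    radius_m = radius_cm / 100.0
    hours = radius_m ** 2 / alpha / 3600.0
    print(f"R = {radius_cm} cm  ->  t ~ {hours:.1f} h")
# Doubling the assumed radius quadruples the time: the 6-hour absurdity is
# only a slightly-wrong input away.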
People may pretend that their model falls into class #2, but they’re fooling themselves. Perhaps their boss in the Air Force won’t accept the answer that “Lieutenant Jones is an expert at runways and bombs and he estimates it will take 10 bombs to disable this runway”, so they come up with their poor model, and then adjust its results until it agrees with Lieutenant Jones, and then present it to their boss as, “Our most sophisticated model indicates that it will take 10 bombs +/- 1 bomb to disable this runway.”
The Canadian climate model is much more useless than your suggested bomb model (that’s always low by 1 bomb). It’s at least multiplicatively wrong, and perhaps exponentially. It’s not useful as an upper bound — which it’s been used as — and it radically distorts the “CIs” of the model ensemble, so that reality barely falls into the CI.
A true class #2 model is something like gravity ignoring air friction. Not the climate model under discussion.

Jim G
October 25, 2013 7:09 am

Does anyone know where one might find some ACTUAL historical global temperature charts that are NOT based upon departures from normal or “anomalies”? Scaling graphs in tenths of degrees is in itself very misleading to the great unwashed masses, as it exaggerates what the sam hill is going on in the real world. I googled to no avail to find some actual temperature data in graphical form, or any other form for that matter.

Scott Basinger
October 25, 2013 7:23 am

Sounds like a model that goes backwards on its key metric. Time to defund this one, snip from the pack, and focus on the models that have shown more predictive skill.

Theo Goodwin
October 25, 2013 7:48 am

Bob Highland says:
October 25, 2013 at 12:01 am
Very well said. Regarding most comments, I am very encouraged that just about everyone understands the importance and use of historical data.

October 25, 2013 8:02 am

All models are wrong to some degree. I am sure that complex GCM software is no different from any other large computer code. There will always remain plenty of hidden bugs; it is just that they haven’t been found yet. There is often a disconnect between the chief scientist and the programmers.
I once worked on an MHD plasma physics code which simulates magnetohydrodynamics and the transport of energy and impurities in magnetically confined plasmas for fusion research. The code is around 1.5 million lines of Fortran. At that time there was only one physicist programmer who really understood the full structure. That one key person kept most of the details to himself because that way he had a job for life. I expect it is just the same situation for all the big climate models. Just one or two programmers in the background really know the code, while the climate scientists who give talks about how “robust” their results are and about the “consensus” do zero or very little actual coding. One other tendency of large software projects is code inertia. Once a large block of code gets written it tends to get layered over with upgrades rather than being thrown away and rewritten from scratch.
Every model is just an over-simplification of reality. This is particularly true for GCMs in the way they treat water vapor and cloud feedbacks. Obviously these two processes are intimately connected with each other and both involve positive and negative feedback. No GCM can accurately simulate cloud formation because the micro-physics of cloud formation is still poorly understood. Ken has shown above how the models do not describe water vapor in the upper troposphere – which is where the greenhouse effect is strongest. They also cannot describe how cloud cover changes in response to warming.

October 25, 2013 8:21 am

Jim G says: October 25, 2013 at 7:09 am

Does anyone know where one might find some ACTUAL historical global temperature charts that are NOT based upon departures from normal or “anomalies”?

This page says: https://www2.ucar.edu/climate/faq/what-average-global-temperature-now
“Between 1961 and 1990, the annual average temperature for the globe was around 57.2°F (14.0°C), according to the World Meteorological Organization.”
The HadCRUT4 temperature anomaly for 1961 – 1990 is zero (actually -0.00051 C).
The anomaly for the first 8 months of 2013 is 0.468 C. That gives an average 2013 temperature of about 14.47 C.
Look at this amazing graph of the actual modeled temperatures:
http://curryja.files.wordpress.com/2013/10/figure.jpg
from http://judithcurry.com/2013/10/02/spinning-the-climate-model-observation-comparison-part-ii/
The year 2000 modeled temperatures of the climate model ensemble range from 12.5 C to 15.7 C.
That is a huge range! Judith Curry writes, “how [can] these models produce anything sensible given the temperature dependence of the saturation vapor pressure over water, the freezing temperature of water, and the dependence of feedbacks on temperature parameter space.”
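In code, the anomaly-to-absolute conversion above is just baseline addition (using the numbers quoted in this comment):

wmo_baseline_1961_1990 = 14.0  # degC, WMO figure quoted above
hadcrut4_anomaly_2013 = 0.468  # degC, first 8 months of 2013

print(round(wmo_baseline_1961_1990 + hadcrut4_anomaly_2013, 2))  # -> 14.47 degC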

Pamela Gray
October 25, 2013 8:31 am

Clive, I see this from a different angle. The models describe the modelers’ idea of global warming in response to increased anthropogenic CO2 quite well. In fact I would say that the modelers, as a group, are quite proud of the projections. I strongly doubt they were/are interested at all in natural climate variability. I propose this is why they do not seek to reorient these models towards natural climate variability. It would, by default, destroy the alarmism as soon as word got out that the models had been changed to place more emphasis on natural swings. It would also instantly dry up the money they receive from the IPCC to provide the next round of projections.
In reality there is absolutely no money or rewards in modeling natural variability. No grants. No media coverage, no journal articles, and no chance at a Nobel Peace or Science Prize.
So to extend your last sentence, the models cannot describe how cloud cover or water vapor changes because the modelers do not want to describe these things, not because they are poorly understood.

Alistair Ahs
October 25, 2013 8:38 am

@clivebest – You make a lot of assumptions. Why don’t you go looking for some evidence, instead? There have been papers written on how climate models are written, for example – http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=5337646
The main article makes a common mistake about the way in which climate models work. It says:
“The large errors are primarily due to incorrect assumptions about water vapor and cloud changes. The climate model assumes that water vapor, the most important greenhouse gas, would increase in the upper atmosphere in response to the small warming effect from CO2 emissions.”
The model does not assume this at all. This is an emergent property of the equations that are written into the model code.
These equations run from the relatively simple Navier-Stokes equations of fluid dynamics to more complicated equations used to approximate the behaviour of convective storms, or clouds in the boundary layer. These equations are written in different ways in the different climate models, and somehow the interactions between the equations produce models with a high climate sensitivity, or with a low climate sensitivity.
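For orientation, the momentum equation at the core of those fluid-dynamics solvers can be written (in its incompressible textbook form, shown here only as a reference point, not as the exact equation set any particular GCM integrates):

\rho \left( \frac{\partial \mathbf{u}}{\partial t} + \mathbf{u} \cdot \nabla \mathbf{u} \right) = -\nabla p + \mu \nabla^{2} \mathbf{u} + \rho \mathbf{g}

Convection, cloud microphysics and the rest enter as parameterized source terms alongside terms like these, and it is the interaction between the resolved dynamics and those parameterizations that makes climate sensitivity an emergent property rather than an input.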
There is some interest among climate scientists in trying to devise a way to discriminate between “better” and “worse” climate models by comparing them to recent climate, but my main point is that the sensitivity of a model is not predictable in advance by the scientists who are writing the model code, because it is too complicated for that.
They have to run the model and see how it comes out.

October 25, 2013 8:48 am

Alistair Ahs says:
“They have to run the model and see how it comes out.”
As the article makes clear, the model is wrong.
I think über-warmist Phil Jones’ chart shows reality. Well before CO2 began to rise significantly, the planet went through the same cycles.
It is all natural variability. That is the null hypothesis. If you can falsify that, you will be the first. If you can’t, though, the default assumption must be natural variability. Everything else is evidence-free conjecture.

October 25, 2013 10:13 am

@Alistair Ahs.
I try not to make any assumptions. The use of software engineering techniques and module testing just reduces the probability of bugs per line of code. GCM models consist of millions of lines of historic and new code. No matter how sophisticated the procedures there will still be bugs present. Any large piece of software needs continuous updates to fix bugs.
The point about emergent properties of models arising from the underlying physical equations is of course correct. Some equations are better known than others. MHD equations are more complex than hydrodynamics, and neither describes turbulence. As far as I know, cloud models are still based only on large grids, whereas real cloud formation is very complex.
All models are idealized simulations which can be used to interpret and understand observations. This is surely what is happening in hindcasts, where aerosol forcing appears to me to be fine-tuned to fit the data. However, using the same models to predict future observations is clearly far less certain. This inherent uncertainty does not seem to make it through to policy makers.

October 25, 2013 10:52 am

[PEGG is what group? Mod]
PEGG is the Journal of APEGA (formerly APEGGA, the Association of Professional Engineers, Geologists and Geophysicists of Alberta). http://www.apegga.org/
In 2002 I was asked by APEGGA to write an article as one-side of a debate with the Pembina Institute on the science of global warming. The debate is available at
http://www.apegga.org/Members/Publications/peggs/WEB11_02/kyoto_pt.htm
I enlisted the participation of Dr. Sallie Baliunas, Harvard U Astrophysicist, and Dr. Tim Patterson, Carleton U Paleoclimatologist.
In our rebuttal we wrote eight statements, all of which have since been proven correct in those countries that fully embraced human-made global warming mania.
Our eight statements were directed against the now-defunct Kyoto Protocol, and are as follows:
1. Climate science does not support the theory of catastrophic human-made global warming – the alleged warming crisis does not exist.
2. Kyoto focuses primarily on reducing CO2, a relatively harmless gas, and does nothing to control real air pollution like NOx, SO2, and particulates, or serious pollutants in water and soil.
3. Kyoto wastes enormous resources that are urgently needed to solve real environmental and social problems that exist today. For example, the money spent on Kyoto in one year would provide clean drinking water and sanitation for all the people of the developing world in perpetuity.
4. Kyoto will destroy hundreds of thousands of jobs and damage the Canadian economy – the U.S., Canada’s biggest trading partner, will not ratify Kyoto, and developing countries are exempt.
5. Kyoto will actually hurt the global environment – it will cause energy-intensive industries to move to exempted developing countries that do not control even the worst forms of pollution.
6. Kyoto’s CO2 credit trading scheme punishes the most energy efficient countries and rewards the most wasteful. Due to the strange rules of Kyoto, Canada will pay the former Soviet Union billions of dollars per year for CO2 credits.
7. Kyoto will be ineffective – even assuming the overstated pro-Kyoto science is correct, Kyoto will reduce projected warming insignificantly, and it would take as many as 40 such treaties to stop alleged global warming.
8. The ultimate agenda of pro-Kyoto advocates is to eliminate fossil fuels, but this would result in a catastrophic shortfall in global energy supply – the wasteful, inefficient energy solutions proposed by Kyoto advocates simply cannot replace fossil fuels.
*******************
My country Canada was foolish enough to sign the Kyoto Protocol but then was wise enough to largely ignore it. Those Canadian provinces that did adopt the policies of global warming mania, like Ontario, are now paying a very heavy price for their foolishness.
In the aforementioned debate, the Pembina Institute rejected our position through an appeal to the authority of the IPCC, which had NO successful predictive track record at that time and STILL has NO successful predictive record.
I suggest that in science, one’s predictive record is perhaps the ONLY objective measure of one’s competence, and as such, the IPCC has completely failed.

Slartibartfast
October 25, 2013 12:43 pm

Any simulation whose predictions are verifiable using data should be validated to the extent possible. Every missile simulation starts out being a best guess as to aerodynamics, propulsion and guidance. But guess what happens? The missile is flight tested. Discrepancies between the simulation and performance are then resolved. Errors in models are corrected.
This is the part of the simulation-prediction game that Mr. Mosher seems to be very unfamiliar with. Even to the point of discomfort.

herkimer
October 25, 2013 4:46 pm

PAMELA GRAY
You said “In reality there is absolutely no money or rewards in modeling natural variability. No grants. No media coverage, no journal articles, and no chance at a Nobel Peace or Science Prize.”
Well said. As recently reported on the GWPF web page, “The world is spending $1 billion per day to tackle global warming.” With all this free money going around, who would want to stop it? Certainly not those getting all this free money. So they will avoid reporting the truth or publishing correct model outputs, as that would turn off the money tap. Meanwhile the globe will go its own way as it always has done, and most likely cooling will set in for the next many decades regardless of what money is collected or spent.

October 25, 2013 8:33 pm

BrianH writes, “I wonder what digital drugs they’re being fed.”
Possibly digitalias?

Josef Rejano
October 25, 2013 10:15 pm

I wonder how this model does pre-1970? If a model is able to predict historical data over several decades, it might still be useful even if it fails to predict the last one or two. Although a 3 sigma deviation is definitely bad, if the model successfully predicts the past few decades it might suggest that there is something in the recent measurements that it isn’t properly taking into account, and not necessarily that the model is complete garbage.
“The best fit linear trend lines (not shown) of the model mean and all datasets are set to zero at 1979, which is the first year of the satellite data. We believe this is the best method for comparing model results to observations.”
Forgive me if this is obvious, but I’m not entirely sure if this is well justified. Is this the standard way that the base lines of these models are lined up?
I just stumbled across this article, and I think it’s an interesting analysis. But, since a lot of the commentary here seems to be on the anti-AGW side of things it just makes me feel a little skeptical. Has anyone else had any doubts about this, and then convinced themselves that those doubts were not well founded? Because that kind of scrutiny is what I’m really interested in, and yet seems so hard to find on either side of the argument.

Editor
October 26, 2013 3:13 am

The one thing that strikes me about all these studies is that AGW has changed from a science into a belief, because any evidence that AGW is not happening is totally disregarded, or some illogical explanation that defies the laws of physics is trotted out (the missing heat has disappeared into the ocean depths!).
I find this frightening, because science is capable of change, beliefs are not!

Sean
October 26, 2013 10:41 am

Maybe the missing heat is on the moon. Did Weaver check there?

October 26, 2013 11:30 am

Google ‘conenssti energy’ to discover what has driven average global temperature since 1610. Follow a link in that paper to a paper that gives an equation that calculates average global temperatures with 90% accuracy since before 1900 using only one external forcing. Carbon dioxide change has no significant influence. The average global temperature trend is down.

October 26, 2013 11:51 am

Josef Rejano says: October 25, 2013 at 10:15 pm

“The best fit linear trend lines (not shown) of the model mean and all datasets are set to zero at 1979, which is the first year of the satellite data. We believe this is the best method for comparing model results to observations.”
Forgive me if this is obvious, but I’m not entirely sure if this is well justified. Is this the standard way that the base lines of these models are lined up?

Why don’t you think this method of comparing model results to observations is well justified? If you criticize my method, why don’t you give reasons that another method is better?
The absolute values of temperatures in both models and observations are very uncertain; only the relative changes in temperature are meaningful, so we have to use anomalies. I am comparing the rate of warming, that is, the temperature trends, between the observations and the models. A different choice of comparison would change the constant added to the curves but would have no effect on the trend comparisons. It would just change the vertical position of the model curves relative to the observations in the graphs.
I chose the method suggested by Dr. Christy and Dr. Spencer in the post “STILL Epic Fail: 73 Climate Models vs. Measurements” here:
http://www.drroyspencer.com/2013/06/still-epic-fail-73-climate-models-vs-measurements-running-5-year-means/
Dr. Spencer states:

In this case, the models and observations have been plotted so that their respective 1979-2012 trend lines all intersect in 1979, which we believe is the most meaningful way to simultaneously plot the models’ results for comparison to the observations.

An alternative method would be to set the average values of the model and the observations over a short period to the same value. The satellite data start in 1979, so we could have set the average of the 1979 to 1984 values to zero in the graphs. Using a longer period would have made the match of the surface and radiosonde observations to the models much worse in the period before 1979. The short 6-year period 1979 to 1984 might, by chance, be on the high or low side of the longer-term trend of either the model or the observations, creating a biased comparison. In conclusion, our method of making the intercepts of the trends all match at 1979 is the best method of trend comparison for presentation in the graphs.
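To make the method concrete, here is a minimal Python sketch of offsetting each series so that its own 1979-2012 best-fit line passes through zero in 1979. The input series are synthetic placeholders built with roughly the trends quoted in the article, not the actual CanESM2 or HadCRUT4 data:

```python
# Minimal sketch of the trend-alignment method: each series is shifted so
# that its own 1979-2012 ordinary-least-squares trend line equals zero in
# 1979.  The inputs below are synthetic placeholders, not the real data.
import numpy as np

years = np.arange(1979, 2013)  # 1979..2012 inclusive

def align_to_1979(temps):
    """Shift a series so its best-fit linear trend line is zero in 1979."""
    slope, intercept = np.polyfit(years, temps, 1)  # OLS fit
    return temps - (slope * 1979 + intercept)       # subtract trend value at 1979

rng = np.random.default_rng(1)
# Placeholder series with roughly the quoted trends (deg C per year) plus noise.
model_mean = 0.0337 * (years - 1979) + rng.normal(0.0, 0.1, years.size)
hadcrut4   = 0.0149 * (years - 1979) + rng.normal(0.0, 0.1, years.size)

model_aligned = align_to_1979(model_mean)
obs_aligned   = align_to_1979(hadcrut4)
# Plotted together, both trend lines now intersect at zero in 1979, so any
# divergence between the curves reflects trend, not an arbitrary offset.
```

Applying the same alignment to each model run and each observational dataset gives the kind of comparison shown in the graphs above: the vertical offsets are removed, and only the differing warming rates remain.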

stewgreen
October 27, 2013 4:48 am

OK good, the Canadian models are particularly bad, but can we get excited about one area in a big world? Could a dramagreen simply cherry-pick other places to find where the models worked? In the whole world there must be one or two – what do you think?

richardscourtney
October 27, 2013 6:32 am

stewgreen:
At October 27, 2013 at 4:48 am you ask

OK good, the Canadian models are particularly bad, but can we get excited about one area in a big world? Could a dramagreen simply cherry-pick other places to find where the models worked? In the whole world there must be one or two – what do you think?

I think you have provided a clear example of the Texas Sharpshooter fallacy:
https://yourlogicalfallacyis.com/the-texas-sharpshooter
The IPCC AR5 provides a spaghetti graph of 95 computer model projections. All except three of the projections are obviously wrong and, therefore, it is tempting to keep those three and to reject the 92 obviously wrong ones.
That temptation is an error. All of the models adopt similar modelling principles but use different assumptions, parametrisations (i.e. guesses), etc. And, as the sketch below this comment illustrates, three or more of the 95 models could be expected to have matched the past by pure chance. So there is no reason to suppose that any of the models can project the future.
The correct action is either
(a) to examine each of all the models because that may provide information about the models
or
(b) to reject all of the models because it is decided that they are flawed in principle so an alternative modelling method is required.
Richard
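As a back-of-envelope illustration of the "matched by pure chance" point, here is a short Python sketch. The per-model probability p of accidentally tracking the past is an assumed illustrative figure, not something derived from the AR5 ensemble:

```python
# Back-of-envelope check of the "pure chance" argument above.  The value of
# p is an assumed illustrative figure, not derived from the AR5 models.
from math import comb

n = 95      # number of projections in the AR5 spaghetti graph
p = 0.03    # assumed chance that any one model matches the past by luck

expected = n * p                                   # expected chance matches
# P(at least 3 matches) = 1 - P(0) - P(1) - P(2), binomial distribution
p_at_least_3 = 1 - sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(3))

print(f"expected chance matches: {expected:.1f}")     # ~2.9
print(f"P(3 or more by luck):    {p_at_least_3:.2f}") # ~0.55
```

With these assumed numbers, roughly three chance matches are expected, and getting at least three by luck alone is better than even odds, which is why the few surviving models are not, by themselves, evidence of skill.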

buggs
October 28, 2013 12:34 pm

Late to the party but I am confused. At what point did we stop using 30 year rolling averages? Wasn’t 30 years the magical number? If so, shouldn’t we have a starting point somewhere around 1983?
Or is the starting point rather more deliberately chosen for a particular characteristic, say the low point following the cooling from the mid-1940s until now? If we can arbitrarily pick a starting point to show “data”, I would like to see them show everything based on a start point around 1925. Wouldn’t that make the overall picture a little bit different?

richardscourtney
October 28, 2013 1:09 pm

buggs:
At October 28, 2013 at 12:34 pm you ask

Late to the party but I am confused. At what point did we stop using 30 year rolling averages? Wasn’t 30 years the magical number? If so, shouldn’t we have a starting point somewhere around 1983?

Yes, you are “confused” and your confusion was deliberately invoked by warmunists in promotion of the AGW-scare.
There is and was no justification for using “30 year rolling averages”, and “30 years” was falsely asserted to be a “magical number”, so there is no justifiable reason to suggest “a starting point somewhere around 1983”. I explain this as follows.
As part of the International Geophysical Year (IGY), in 1958 it was decided that 30 years would be the standard period for comparison of climatological data. The period of 30 years was a purely arbitrary choice, selected on the basis that, at the time, it was thought sufficient data existed to compile global records for only the previous 30 years.
A 30 year standard period is NOT a time for determining a climatological datum. It is a time for obtaining an average against which a climatological datum can be compared. So, for example, HadCRU and GISS each provide a climatological datum of mean global temperature for a single year and present it as a difference (i.e. an anomaly) from the average mean global temperature of a 30 year period. But they use different 30 year periods to obtain the difference.
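To make the distinction concrete, here is a minimal Python sketch of how a yearly datum is expressed as an anomaly against a 30 year base period. The temperature series is a hypothetical stand-in, not real data; the two base periods shown are the ones actually used, 1961-1990 by HadCRUT and 1951-1980 by GISS:

```python
# Minimal sketch: a yearly datum reported as an anomaly from the mean of a
# 30 year base period.  The temperature series is a hypothetical stand-in;
# only the convention is being illustrated.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1951, 2013)
# Hypothetical absolute global mean temperatures in deg C.
temps = 14.0 + 0.01 * (years - 1951) + rng.normal(0.0, 0.1, years.size)

def anomaly(year, base_start, base_end):
    """Express the datum for `year` relative to the base-period mean."""
    base_mean = temps[(years >= base_start) & (years <= base_end)].mean()
    return temps[years == year][0] - base_mean

# The same 2012 datum against two different 30 year normals: the datum is
# unchanged, only the constant it is compared against differs.
print(anomaly(2012, 1961, 1990))   # HadCRUT-style base period
print(anomaly(2012, 1951, 1980))   # GISS-style base period
```

The two printed anomalies differ by a constant, which is the point: the choice of base period moves the zero line but says nothing about the trend.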
The arbitrary choice of 30 years is unfortunate for several reasons. For example, it is not a multiple of the ~11 year solar cycle, the ~22 year Hale cycle, the ~60 year AMO, etc. But in 1958 it was thought that 30 years was the longest available period against which to make comparisons, so 30 years was adopted as the standard period.
And that is the ONLY true relevance of 30 years for climate data.
Indeed, the IPCC did not use 30 years as a basis for anything except as the ‘climate normal’ for comparison purposes. Hence, for example, in its 1994 Report the IPCC used 4 year periods to determine if there were changes in hurricane frequency.
The satellite data for global temperature began to be recorded in 1979. And after nearly 20 years it showed little change in global temperature over time. So alarmists started to promote the false idea that data had to be compiled over 30 years to be meaningful and that, therefore – they said – the satellite data should be ignored. But the satellite data had two effects: (a) it constrained the amount of global warming which e.g. HadCRU and GISS could report since 1979, and (b) as 30 years and more of satellite data were obtained, the alarmists lost the ability to refute the validity of the satellite data.
Richard