Epic Failure of the Canadian Climate Model

Guest Essay by: Ken Gregory

The Canadian Centre for Climate Modelling and Analysis, located at the University of Victoria in British Columbia, submitted five runs of its climate model CanESM2 for use in the fifth assessment report of the Intergovernmental Panel on Climate Change (IPCC). The climate model produces one of the most extreme warming projections of all 30 models evaluated by the IPCC. (See Note 1.) The model badly fails to match the surface and atmosphere temperature observations, both globally and regionally, as presented in six graphs.

Global Trends

The graph below compares the near-surface global temperatures to the model runs.


Figure 1. Canadian climate model simulations of near-surface global temperatures and three datasets of observations.

The five thin lines are the climate model runs. The thick black line is the average (mean) of the five runs. The satellite data in red is the average of two analyses of lower troposphere temperature. The radiosonde weather balloon data is from the NOAA Earth System Research Laboratory. The surface data is the HadCRUT4 dataset. (See Note 2 for the data sources.) The best-fit linear trend lines (not shown) of the model mean and all datasets are set to zero at 1979, which is the first year of the satellite data. We believe this is the best method for comparing model results to observations.

Any computer model that is used to forecast future conditions should reproduce known historical data. A climate model that fails to produce a good match to historical temperature data cannot be expected to give useful projections.

Figure 1 shows that the computer model simulation after 1983 produces too much warming compared to the observations. The discrepancy between the model and the observations increases dramatically after 1998, as there has been no global near-surface warming during the last 16 years. With the model and observation trends set to zero in 1979, the discrepancy between the model mean of the near-surface global temperatures and the surface observations by 2012 was 0.73 °C. This discrepancy is almost as large as the 0.8 °C estimated global warming during the 20th century. The model warming trend as determined by the best-fit linear line from 1979 to 2012 through the model mean is 0.337 °C/decade, while the average trend of the three observational datasets is 0.149 °C/decade. Therefore, the model warming rate is 226% of the observed rate (0.337/0.149 = 226%).
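The trend comparison used throughout this article can be sketched in a few lines. This is a minimal illustration using noiseless stand-in series built from the trends quoted above, not the actual datasets (those sources are listed in Note 2):

```python
import numpy as np

def decadal_trend(years, temps):
    """Best-fit linear trend in deg C per decade."""
    slope = np.polyfit(years, temps, 1)[0]  # deg C per year
    return slope * 10.0

def zero_at_base_year(years, temps, base_year=1979):
    """Shift a series so its best-fit line passes through zero at base_year."""
    slope, intercept = np.polyfit(years, temps, 1)
    return temps - (slope * base_year + intercept)

# Noiseless stand-ins built from the trends quoted in the text (illustrative only)
years = np.arange(1979, 2013)
model_mean = 0.0337 * (years - 1979)  # 0.337 deg C/decade
obs = 0.0149 * (years - 1979)         # 0.149 deg C/decade

ratio = decadal_trend(years, model_mean) / decadal_trend(years, obs)
print(f"model/obs trend ratio: {ratio:.0%}")  # -> 226%
```

With the real model-mean and observational series substituted in, the same two functions reproduce the trend-matched comparison shown in Figure 1.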

The large errors are primarily due to incorrect assumptions about water vapor and cloud changes. The climate model assumes that water vapor, the most important greenhouse gas, would increase in the upper atmosphere in response to the small warming effect from CO2 emissions. A percentage change in water vapor has over five times the effect on temperatures as the same percentage change of CO2. Contrary to the model assumptions, radiosonde humidity data show declining water vapor in the upper atmosphere as shown in this graph.

The individual runs of the model are produced by slightly changing the initial conditions of several parameters. These tiny changes cause large changes of the annual temperatures between runs due to the chaotic weather processes simulated in the model. The mean of the five runs cancels out most of the weather noise because these short term temperature changes are random.
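That noise cancellation can be sketched with synthetic runs. Everything here (signal slope, noise level, random seed) is invented for illustration and is not the actual CanESM2 output:

```python
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(1979, 2013)
forced_signal = 0.0337 * (years - 1979)  # warming shared by all runs

# Five runs = shared forced signal + independent "weather noise"
runs = [forced_signal + rng.normal(0.0, 0.15, years.size) for _ in range(5)]
ensemble_mean = np.mean(runs, axis=0)

# Averaging n independent runs shrinks the noise by roughly 1/sqrt(n)
noise_single = np.std(runs[0] - forced_signal)
noise_mean = np.std(ensemble_mean - forced_signal)
print(noise_single > noise_mean)  # the mean is much less noisy than any one run
```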

The climate model assumes that almost all of the temperature changes are due to anthropogenic greenhouse gas emissions and includes only insignificant natural causes of climate change. The simulated weather noise includes the effects of ENSO (El Niño and La Niña), but does not include multi-decadal temperature changes due to natural ocean oscillations or solar-induced natural climate change other than the small changes in the total solar irradiance. The historical record includes the effects of large volcanoes, which can last for three years. The projections assume no large future volcanoes. There have been no significant volcanic eruptions since 1992 that could have affected temperatures, as shown in this graph of volcanic aerosols.

The model fails to match the temperature record because it overestimates the effects of increasing greenhouse gases and does not include most natural causes of climate change.

Figure 2 compares the mid-troposphere global temperatures at the 400 millibar pressure level, about 7 km altitude, to the model. (millibar = mbar = mb. 1 mbar = 1 hPa.)


Figure 2. Canadian climate model simulations of mid-troposphere global temperatures and two datasets of observations. The balloon data is at the 400 mbar pressure level, about 7 km altitude.

The discrepancies in 2012 between the model mean of the global mid-troposphere temperatures and the satellite and balloon observations were 1.26 °C and 1.04 °C, respectively. The model temperature trend is 650% of the satellite trend.

The satellites measure the temperature of a thick layer of the atmosphere. The satellite temperature weighting function describes the relative contribution that each atmospheric layer makes to the total satellite signal. We compared the balloon trends weighted by the lower troposphere satellite temperature weighting functions to the near-surface observed trends for global and tropical data. Similar comparisons were made for the mid-troposphere. The weighted thick-layer balloon trends for the lower and mid-troposphere were similar to the near-surface and 400 mbar balloon trends, respectively. We conclude that the satellite lower-troposphere and mid-troposphere temperature trends are approximately representative of the near-surface and 400 mbar temperature trends, respectively. (See Note 3 for further information.)

Tropical Trends

Figure 3 compares the near-surface temperatures in the tropics, from 20 degrees North to 20 degrees South latitude, to the climate model simulations. The model temperature trend is 300% of the average of the three observational trends. The discrepancy between model and near-surface observation trends in the tropics is much greater than for the global average.


Figure 3. Canadian climate model simulations of near-surface tropical temperatures and three datasets of observations.

Figure 4 compares the warming trend in the tropical mid-troposphere to the observations. The increasing tropical mid-troposphere water vapor in the model makes the warming trend at the 400 mbar pressure level 55% greater than the surface trend.

The discrepancy in 2012 between the model mean of the mid-troposphere tropical temperatures and the balloon observations was 1.33 °C. The model temperature trend is 560% of the average of the two observational trends.


Figure 4. Canadian climate model simulations of mid-troposphere 400 mbar tropical temperatures and two datasets of observations.

The discrepancy is even greater at the 300 mbar pressure level, which is at about 9 km altitude.

Figure 5 compares the model to the balloon temperatures at the 300 mbar pressure level in the tropics. The tropical warming trend at 300 mbar in the model is exactly twice the model surface trend. In contrast, the temperature trend of the balloon data at 300 mbar is an insignificant 3% greater than the surface station trend. The discrepancy in 2012 between the model mean of the mid-troposphere tropical temperatures at the 300 mbar level and the balloon observations was 1.69 °C.

The temperature trends (1979 to 2012) of the tropical atmosphere at 7 km and 9 km altitude are an astonishing 470% and 490% of the radiosonde balloon trends. These are huge errors in the history match!


Figure 5. Canadian climate model simulations of mid-troposphere 300 mbar tropical temperatures and weather balloon observations.

South Trends

In the far south, the near-surface modeled temperature trend is in the opposite direction from the observations. Figure 6 compares the model near-surface temperatures from 50 degrees South to 75 degrees South latitude to the observations. The modeled temperatures from 1979 are increasing at 0.35 °C/decade while the surface temperatures are decreasing at 0.07 °C/decade.

Temperatures over most of Antarctica (except the Antarctic Peninsula) have been falling over the last 30 years. The Antarctic sea ice extent is currently 1 million square km greater than the 1979 to 2000 mean. The sea ice extent has been greater than the 1979 to 2000 mean for all of 2012 and 2013 despite rising CO2 levels in the atmosphere.


Figure 6. Canadian climate model simulations of near-surface southern temperatures (50 S to 75 S latitude) and two datasets of observations.

Summary of Trend Errors

The table below summarizes the model trend to observation trend ratios.

Model Trend to Observation Trend Ratios (1979 to 2012)

                   Model/Satellite   Model/Balloon   Model/Surface
Global Surface          254%             209%            220%
Global 400 mb           650%             315%             --
Tropics Surface         304%             364%            249%
Tropics 400 mb          690%             467%             --
Tropics 300 mb           --              486%             --
South Surface          3550%              --            -474%

The table shows that the largest discrepancies between the model and observations are in the tropical mid-troposphere. The model error increases with altitude and is greatest at the 300 mbar pressure level at about 9 km altitude. The ratio of the modeled tropical mid-troposphere 300 mbar warming rate to the surface warming rate is 200%, and is a fingerprint of the theoretical water vapor feedback. This enhanced warming rate over the tropics, named the “hot spot”, is responsible for 2/3 of the warming in the models. The fact that the observations show no tropical mid-troposphere hot spot means that there is no positive water vapor feedback, so the projected warming rates are grossly exaggerated.

Model results with large history-match errors should not be used for formulating public policy. A model without a good history match is useless, and there can be no confidence in its projections. The lack of a history match in the Canadian model output shows that the modeling team has ignored a fundamental requirement of computer modeling.

A global map of the near-surface air temperature from the model for April 2013 is here.

Anti-Information

Patrick Michaels and Paul “Chip” Knappenberger compared the model output to actual 20th century temperature data to determine what portion of the actual data can be explained by the model. They wrote, “One standard method used to determine the utility of a model is to compare the “residuals”, or the differences between what is predicted and what is observed, to the original data.  Specifically, if the variability of the residuals is less than that of the raw data, then the model has explained a portion of the behavior of the raw data.”

In an article here, they wrote “The differences between the predictions and the observed temperatures were significantly greater (by a factor of two) than what one would get just applying random numbers.” They explained that a series of random numbers contain no information. The Canadian climate model produces results that are much worse than no information, which the authors call “anti-information”.
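The residual test they describe is simple to state in code. The series below are invented for illustration and are not the actual model output or observations:

```python
import numpy as np

def explains_variance(observed, predicted):
    """True if the residuals vary less than the raw data, i.e. the
    model has explained at least some of the data's behaviour."""
    residuals = observed - predicted
    return np.var(residuals) < np.var(observed)

rng = np.random.default_rng(0)
years = np.arange(1900, 2001)
observed = 0.008 * (years - 1900) + rng.normal(0.0, 0.1, years.size)

tracking_model = 0.008 * (years - 1900)  # warms at the observed rate
runaway_model = 0.030 * (years - 1900)   # warms far too fast

print(explains_variance(observed, tracking_model))  # True
print(explains_variance(observed, runaway_model))   # False
```

A model that warms far faster than the data leaves residuals with more variance than the raw record itself, which is the sense in which its output is "anti-information".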

The Canadian Model Contributes to Fuel Poverty

The results from the Canadian climate model were used in a U.S. Global Change Research Program report provided to the US Environmental Protection Agency to justify regulating CO2. The authors of that report were told that the Canadian climate model produces only anti-information. They confirmed this fact, but published their report unchanged. The Canadian government has indicated it will follow the lead of the U.S.A. in regulating CO2 emissions.

The Canadian climate model is also used by the IPCC to justify predictions of extreme anthropogenic warming despite the fact that the model bears no resemblance to reality. As does the climate model, the IPCC ignores most natural causes of climate change and misattributes natural climate change to greenhouse gas emissions. Here is a list of 123 peer-reviewed papers published from 2008 to 2012 on the solar influence on climate that were ignored by the IPCC in the fifth assessment report.

Climate alarmism based on climate models that don’t work has so far cost the world $1.6 trillion in a misguided and ineffective effort to reduce greenhouse gas emissions. These efforts have caused electricity prices to increase dramatically in Europe, causing fuel poverty and putting poor people at risk. High fuel costs and cold winter weather are blamed for 30,000 excess deaths in Britain last year. Europe’s energy costs have increased by 17% for consumers and 21% for industry in the last four years. The Canadian climate model’s failures have contributed to this misery.

Canadian politicians and taxpayers need to ask why we continue to fund climate models that can’t replicate the historical record and produce no useful information.

Ken Gregory, P.Eng.

Friends of Science

Ten years of providing independent

climate science information

Notes:

1. The Canadian Earth System Model CanESM2 combines the CanCM4 model and the Canadian Terrestrial Ecosystem Model, which models the land-atmosphere carbon exchange. Table 9.5 of the IPCC Fifth Assessment Report Climate Change 2013 shows that the CanESM2 transient climate sensitivity is 2.4 °C (for doubled CO2). Across the 30 models, the 90% certainty range of the transient climate sensitivity is 1.2 °C to 2.4 °C.

2. The Canadian climate model CanESM2 monthly data was obtained from the KNMI Climate Explorer here. The satellite data was obtained from the University of Alabama in Huntsville here (LT) and here (MT), and from Remote Sensing Systems here (LT) and here (MT). The radiosonde weather balloon data was obtained from the NOAA Earth System Research Laboratory here. The global surface data is from the HadCRUT4 dataset prepared by the U.K. Met Office Hadley Centre and the Climate Research Unit of the University of East Anglia, here. The HadCRUT4 tropical data (20 N to 20 S) was obtained from KNMI Climate Explorer. An Excel spreadsheet containing all the data, calculations and graphs is here.

3. The average global weather balloon trend over all pressure levels from 1000 mbar to 300 mbar, weighted by the LT satellite temperature weighting functions, is only 3% greater than the balloon trend at 1000 mbar, confirming that the LT satellite trend is representative of the near-surface temperature trend. The weighted average tropical weather balloon trends are 3% greater than the average of the 1000 mbar balloon trend and the surface station trend. The temperature weighting functions for land and oceans were obtained from the Remote Sensing Systems website. The average global and tropical balloon trends of all pressure levels weighted by the mid-troposphere (MT) satellite temperature weighting functions are 13% and 4% greater than the balloon trends at 400 mbar. If the MT satellite trends were adjusted by these factors to correspond to the 400 mbar pressure level, the global and tropical satellite trend adjustments would be -0.017 °C/decade and -0.004 °C/decade, respectively. Since the MT satellite trends are already less than the 400 mbar balloon trends, no adjustments were applied.
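The layer-weighting comparison in this note amounts to a weighted average of per-level trends. A minimal sketch follows, with hypothetical level trends and weights standing in for the actual Remote Sensing Systems weighting functions:

```python
import numpy as np

# Balloon trends (deg C/decade) at each pressure level, 1000 -> 300 mbar
# (hypothetical values for illustration)
levels_mbar = np.array([1000, 850, 700, 500, 400, 300])
balloon_trends = np.array([0.14, 0.15, 0.15, 0.14, 0.13, 0.13])

# Hypothetical LT weighting function: relative contribution of each layer
lt_weights = np.array([0.30, 0.25, 0.20, 0.15, 0.07, 0.03])

# Thick-layer trend as the satellite would see it
weighted_trend = np.average(balloon_trends, weights=lt_weights)
surface_trend = balloon_trends[0]  # 1000 mbar as the near-surface proxy
print(weighted_trend / surface_trend)  # close to 1: the thick layer tracks the surface
```

When the weighted thick-layer trend lands within a few percent of the 1000 mbar trend, as in the note's actual calculation, the satellite LT trend can stand in for the near-surface trend.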

=============================

PDF version of this article is  here

Eliza

Try comparing to radiosonde and satellite that will look even more ridiculous not even a Fail maybe a fraud LOL

Eliza

A bit OT (delete or put elsewhere if you wish) but Solar 24 picking up a bit
http://www.swpc.noaa.gov/SolarCycle/sunspot.gif I was wondering if all the low’s SNN (but not very very low) add up to be considerable flux over time even though the cycle per se is “low” that may support Leif’s assertions re solar activity climate to some extent anyway. It (the idea) may be complete nonsense as well LOL

DontGetOutMuch

Clearly, reality is flawed. Why have there been no adjustments?

bk

Solid well written article. Like the fuel poverty part. We have to keep hammering away at the alarmist side. They are trying to stand their ground and repeat their cult mantras of “Climate Change is Real” and “End the Denial” but the public is starting to notice that the alarmist arguments are just a religious chant sounding like la la la la.

Thanks, Ken. Good article.
It is not surprising, but it is sad that policy is being based on unrealistic models, all over the world.
The underlying cause is not in the models, or even in post-modern climatology of the IPCC. It is more basic, in the advance of socialism.

Sean Peake

I think CGI did the coding

Steven Mosher

“Any computer model that is used to forecast future conditions should reproduce known historical data. A computer climate models that fails to produce a good match to historical temperature data cannot be expected to give useful projections.”
well that’s patently false.
Some simple examples come to mind, like the distance to empty model in my cars navigation system. Its always wrong but useful
but today I’ll give you a life an death one from operational analysis.
In order to make decisions about our future defenses the DoD routinely uses models that are not verified against the past and can never be verified against the past. The models are even used in theatre to evaluate strategic and tactical decisions.
Example: the design of the F-22. During the design of that system the model Tac Brawler was used to inform decisions about the number of missiles the aircraft would carry; 4 6 or 8.
To make the decision the modelling departments had to simulate future air battles.
1. the planes and weapons being tested were all imaginary projections
2. there was no historical data to verify against.There is very little real data on
air combat between aircraft in the modern era.
Despite this utter lack of historical data to verify against the output of the models was useful
in informing decisions.
Where you do have historical data to verify against you can even use bad fits to inform decisions. For example: In modelling how many bombs its takes to cripple a runway you
would construct a model. The model may predict 3 bombs and your historical data may tell you
that the model always underestimates the total required. So the models hindcast is low, say by one bomb. Going forward say you want to model the effect of a bomb with an increased number of cluster uniits. You use your existing model. It predicts 2 bombs for this future weapon. Is that useful? Sure it is. If I am attack planning i use that model ( an example would be bug splat ) the model predicts 2, and as a planner I decide that 3 should be the planning number. Why? because my model that fails to hindcast is characteristically low. To be safe Im not going to count on 2 bombs doing the job. I’ll need to plan for 3. meanwhile the modelling guys continue to refine their model. They dont consider their model falsified. They consider it as having limited utility. Important ultility ( it gives you a ‘floor” estimate) but utility nonetheless.

C.M. Carmichael

As an Alumni of UVic it saddens me to see such obvious propaganda being displayed, anyone with any pride would keep this model’s output as their biggest secret. Tell people that the dog ate your model output, or better yet tell them that occasionally water from the deep ocean grabs model outputs and wisks them away to the deep ocean, really deep, so don’t bother looking. BC in general and Victoria in particular has always been a “Hippie” which means “green” haven. Canada has its only federal Green party representative there, and she is a real nutter.

David s

I guess that means if the model differs from reality then reality must be wrong. Or possibly the modelers have become detached from reality.

john robertson

Paging Arch Warmists weaver?
This is symptomatic of the canadian bureaucracies approach to CAGW.
Complete and total F.U.D.
Whether through incompetence,cowardice, malice or orders from above they have betrayed the citizens of Canada.
Utter and expensive nonsense, such as “Environment Canada’s Science”, billions of dollars spent on studies of the effect of AGW, while never proving for themselves that there is any such beast.
Climate Change mitigation policies are all the rage, yet there exists zero science confirming the need for these policies, even government admits this as they fail to document the science they claim their policy is based on.
This whole scheme bears the handprints of a group of people centred around the federal Liberal Party of Canada.

Betapug

With research product like, “The Impact of a Human Induced Shift in North Pacific Circulation on North American Surface Climate, (J.C. Fyfe, 2008) it is no wonder that CCCMA has also managed to produce the first elected Provincial Green Party MLA in Canada, “Nobel Prize winner” Dr. Andrew J. Weaver.
As with many academics these days, Dr. Weaver seems to keep fingers in several pies at once. He is also a principal of Solterra Solutions Ltd. a private corporation providing “Climate, Forensic and Educational Services” including promotion of Weaver’s books on Global Warming.
http://www.solterrasolutions.com/index.php?page=2

Fred Souder

I like using graphs from this site in my classroom, but the Y axes are mislabeled. Would it be difficult to change them to “temperature anomalies” or “deviations”?
Steven Mosher:
Usually your examples have some merit and represent a valid – or at least arguable – point of view, but your examples of models that don’t need to reflect past data are unrelated to climate models and do not strengthen your argument.

Doug Proctor

I’m a Canadian whose taxes pay for this, but this isn’t my complaint/question. The question is, why would ANY SCIENTIST submit for consideration models that fail so badly to match observation?
There has to be some belief that complex reality disguises itself ONLY IN THE SHORT TERM, a period of time of no real concern. It would be like saying things fall down, not up, now, yes, but tomorrow they will go up because my model says the sky sucks more than the earth.
The only other reason I can imagine is that the scientists involved have some OTHER temperature profile they are matching. Does the climate war disconnect go this far, that the warmists and the skeptics are using two different sets of “observations”?

Theo Goodwin

“Despite this utter lack of historical data to verify against the output of the models was useful
in informing decisions.”
Of course it was useful. The opinions of various decision makers were useful. But that fact doesn’t make the opinions of various decision makers into science or something like science. And we are talking about science here because we are using words like ‘verify’ and the crucially important ‘predict’.
If there is no prediction there is no science. And the issue here, as always, is whether the models meet the standards of science. If they do not meet the standards of science then they cannot be offered as products of science. Just ask the IPCC if they are willing to withdraw the claim that the models are scientific. Ask yourself the same question and give us your answer.
As a postmodernist, you are unwilling to set forth standards for science. You are unwilling to set forth standards for scientific prediction. Regarding prediction, you are totally willing to speak with the vulgar. For example, if I say that Jupiter’s surface will become purple polka dots against a white background at exactly 3 pm today, EDT, you will accept that as a scientific prediction just a highly unlikely prediction. Nonsense. All scientific predictions must make reference to some set of well confirmed physical hypotheses which can be used to explain the predicted phenomenon. See Kepler’s three laws and Newton’s deduction of them for crystal clear examples.
Models and statistical analysis are useful in decision making but they do not reach the level of science. They do not do prediction. They do analysis. A consultant can legitimately offer his models and his time-series analysis as tools that can improve a corporation’s planning for the future. That consultant carries with him a great deal of tacit knowledge that he has acquired through experience. That tacit knowledge makes the consultant a very useful resource for decision makers. But the models, the statistical analysis, and the tacit knowledge do not rise to the level of science. They cannot magically become well confirmed physical hypotheses that can be used to predict and to explain the phenomena of interest.
If the Canadian model discussed in Gregory’s article above is offered by its makers as a scientific tool, and it should be because all climate modelers treat their models as substitutes for the scientific theory that they do not possess, then it is an abject failure either (Gregory) because it cannot reproduce historical data or (me) it cannot meet the standards for scientific prediction. In either case, the IPCC and all its friends should “fess up” and admit that modeling is not science.

Jquip

Mosher: “In order to make decisions about our future defenses the DoD routinely uses models that are not verified against the past and can never be verified against the past. The models are even used in theatre to evaluate strategic and tactical decisions.”
Over history they’ve used sheep knuckle bones, half-flattened twigs of wood, astrologers, and animal sacrifices. Not saying they were ‘right,’ but they had utility. And since that, then we know that you’re totally down with lynching a man and reading his entrails because:
“They dont consider their model falsified. They consider it as having limited utility. Important ultility ( it gives you a ‘floor” estimate) but utility nonetheless.”
So I’ll tell you what. I’ll give you your point on utility, if you’ll start predicting the climate using palmistry.

jlurtz

The Canadian model is a success. It proves that CO2 is NOT the driving “force” that caused “Global Warming” [Global Temperature Stagnation] from 1979 until 2012.

JimS

Even though he is a zoologist, David Suzuki is an example of our Canadian scientists. Need I say more?

Rob

That’s as bad as bad can get. The entire theory of “Global Warming” must
be reevaluated by science(never mind the trillion dollar AGW Industry).

DirkH

Canada disappoints
-gavin.

Theo Goodwin

C.M. Carmichael says:
October 24, 2013 at 8:49 am
“BC in general and Victoria in particular has always been a “Hippie” which means “green” haven. Canada has its only federal Green party representative there, and she is a real nutter.”
That part of the world attracts them. In Eugene OR if you are discovered eating the wrong food an “intervention” is required by the entire community of your friends.

Willis Eschenbach

Steven Mosher says:
October 24, 2013 at 8:40 am

“Any computer model that is used to forecast future conditions should reproduce known historical data. A computer climate models that fails to produce a good match to historical temperature data cannot be expected to give useful projections.”

well that’s patently false.
Some simple examples come to mind, like the distance to empty model in my cars navigation system. Its always wrong but useful
but today I’ll give you a life an death one from operational analysis.
In order to make decisions about our future defenses the DoD routinely uses models that are not verified against the past and can never be verified against the past. The models are even used in theatre to evaluate strategic and tactical decisions.
Example: the design of the F-22. During the design of that system the model Tac Brawler was used to inform decisions about the number of missiles the aircraft would carry; 4 6 or 8.
To make the decision the modelling departments had to simulate future air battles.
1. the planes and weapons being tested were all imaginary projections
2. there was no historical data to verify against.There is very little real data on
air combat between aircraft in the modern era.
Despite this utter lack of historical data to verify against the output of the models was useful
in informing decisions.

Steven, in your example there is no historical data … which is not the situation that Ken describes in the part you quoted.
What he is talking about is a situation where there is loads of historical data, but the computer model does a crappy job matching it. In your example, it would be a situation where someone decided to use a model for the design of the F-22, despite the fact that the model gave wildly wrong answers regarding historical air combat …
And yes, to be pedantic, you may be able get bits and scraps of useful information from even the lousiest model … but that’s not what Ken’s talking about either.
Bottom line?
Ken is right. In general, if you trust a model to predict the future when it has shown that it gives wildly wrong answers about the past, you’re a fool.
w.

Stonehenge

As a BC resident, I can add to C.M. Carmichael’s comment. A former leader in the U. Vic. Climate modelling group, Andrew Weaver, is now the sole Green Party BC provincial member of the legislative assembly and is known locally as a CAGW activist.

DirkH

Steven Mosher says:
October 24, 2013 at 8:40 am
“but today I’ll give you a life an death one from operational analysis.
In order to make decisions about our future defenses the DoD routinely uses models that are not verified against the past and can never be verified against the past. The models are even used in theatre to evaluate strategic and tactical decisions.
Example: the design of the F-22. During the design of that system the model Tac Brawler was used to inform decisions about the number of missiles the aircraft would carry; 4 6 or 8.
To make the decision the modelling departments had to simulate future air battles.
1. the planes and weapons being tested were all imaginary projections
2. there was no historical data to verify against.There is very little real data on
air combat between aircraft in the modern era.
Despite this utter lack of historical data to verify against the output of the models was useful
in informing decisions. ”
We’ll take your word for it, Mosher. Well, I’m kidding. I think you’re making a totally preposterous claim that you could never back up with anything. You’ve lost it. Really? The model decided what, 6 rockets? And? Now Obama kills all kinds of people with hellfire-equipped drones. And when an Oniks 800-P is in proximity he moves away his supercarriers. What exactly did you want to tell us? Are you out of your mind?

AlecM

There is no back radiation, a failure to understand the difference between a Radiation Field and the real heat flux which for a plane surface is the negative of the vector difference of opposing RFs.
This takes out the 333 W/m^2. Next remove the incorrect assumption of Kirchhoff’s Law of Radiation at ToA. What you then get is a no feedback result.
However, that is wrong because CO2 is the working fluid of the control system that corrects almost perfectly for any change in its concentration hence no present warming as the other warming effects, solar and polluted clouds, return to near zero.
However, beware of the future because the oceans are hiding the enormous cooling as cloud area increases because of low solar magnetic field.

Doug Proctor

Mosher –
Inappropriate comparison, but “useful”: climate models so far suggest that the reverse is true relative to the bomb effectiveness on reducing runway usefulness, as >95% models have predicted temperatures higher than we have experienced for 15 years.
So, using a Mosher-style thinking: we give the models their due, and say they always OVERESTIMATE the temperature rise. Now we have a ceiling for the actual temperature rise. Reality will be less than that, so that look to, say, not 3 degrees but 2 (reversing your bomb numbers) of warming by 2100. Going forward, we would then refine our models, knowing the number will come down, not go up, in all likelihood.
So, by your argument, the U of Vic model is very useful: it tells us we should prepare for a temp rise of 2C or under. Which is to say, an outcome of limited harm that is more appropriately handled through mitigation and adaptation rather than radical socio-economic-political policies.
However – to go back to the inappropriateness of your comparison – in the case of climate, unlike recent air combat, we do have a lot of data. Because of that difference, it is appropriate that the model of current temperature variations reflect observations from at least the beginning of the “problem” we are projecting, i.e. post-1988.
But I understand why the “disconnect” of model with observation does not bother you. The whole thing with current CAGW work is that the present is supposed to be atmospherically special. A-CO2 has caused new rules to apply. We cannot use the past or even the present to predict the near future; unfortunately we don’t know what the new rules are, except that they are different from the past. “Climate science” is a science that is determining its organizing rules as it goes forward, except that its general principles – which are special, recall, and do not bear comparison with past principles, especially geologically based ones – say that the future will be really hot.
Okay, a science in diapers, growing. Not really the “settled” level, but, okay, like a car trip off into a strange country …..which wouldn’t be bad if we weren’t a) forced to go along, b) forced to pay for the ride, and c) told that those without a clue as to where we are going are going to be holding the steering wheel and aren’t sure there are any brakes in the car.

AlecM

Doug: there is no CO2 effect because it’s the working fluid of the control system that stabilises lower atmosphere temperature. The real AGW has been from polluted clouds – Sagan’s physics is wrong.
In short, Climate Alchemy has got virtually all its basic physics wrong, including misinterpreting Tyndall! The difference between reality and the models is stark proof of this.

Kev-in-Uk

Jeez, Mosher – are you for real? I know you like to defend models per se – but I think you are way off base on this one, as your comment shows. Moreover, are you really, seriously trying to defend the policy-making decisions that are gonna affect all our lives (and, as the warmista are fond of saying, our grandkids’ lives, etc.) based on completely useless predictive models? Granted, this Canadian model may not be in mainstream use – but the simple fact that it has seen the light of day and been published (as science?) is bad enough!
I don’t give a fig for your example – I think you are being silly to try and compare the two issues.
As for the post, I think it’s very good – my first thoughts on the model/reality graphs were – ‘why the feck did somebody not notice the divergence much earlier?’ I mean, it is clear that there was disagreement between model prediction and reality back in the 80’s. Surely, even if the model was being constantly updated and tweaked, somebody there must have thought to query why the hindcasting part was so wrong too? What were these people hoping for? – that the model would suddenly ‘come good’ and temps would rocket!!??

Are Computer Climate Models Like The ObamaCare Exchanges?
http://www.talk-polywell.org/bb/viewtopic.php?f=8&t=4823
The problem is Chaos.

Jquip

DirkH: “Well, I’m kidding. I think you’re making a totally preposterous claim that you could never back up with anything.”
Gotta defend him on this one. The claim is completely true when speaking about the industrial side of the military, e.g. new planes, new battleships, etc. The models they use in such circumstances are not engineering issues but bluster used by stakeholders of ‘infallible machine tactic x’ to justify the use of ‘infallible machine tactic x.’ They’re also famously wrong, to a fault. At least in those cases that we have been able to test. (e.g. have actual utility == can make and take bets on it. And win better than 1/2 the time.)
Which include: Knights, Phalanxes, Firearms, Machine guns, Armored Cars, Tanks, Battleships, and so forth. Most notably, as to the sensibility of these things in the US military, is the F-22 boondoggle and the A-10. One still can’t get off the ground. And try as they might with models that justify the F-22, they still can’t figure out how to get the provably indispensable A-10 to stay on the ground.

policycritic

@Steven Mosher

Example: the design of the F-22. During the design of that system the model Tac Brawler was used to inform decisions about the number of missiles the aircraft would carry: 4, 6, or 8.
To make the decision the modelling departments had to simulate future air battles.
1. the planes and weapons being tested were all imaginary projections
2. there was no historical data to verify against. There is very little real data on air combat between aircraft in the modern era.

Despite this utter lack of historical data to verify against, the output of the models was useful in informing decisions.

C’mon. The Pentagon’s air combat flight simulator Tac Brawler was invented/created in the 1980s. It was based on countless (thousands of) hours of data from experienced pilots working with the scientific model makers. It’s a vast and complicated simulation system, running in FORTRAN on a number of different systems, with the graphic interface system end of it being one tiny portion. It was 1/2 million lines of code. To claim an “utter lack of historical data” is false in the extreme. You ought to read up on it.
Furthermore, the Stealth came before it, in the 1970s. The Stealth had a trailer full of Super Crays miniaturized to around 15 cm X 9 cm X 26 cm in the cockpit, which they did using an early form of quantum computing with three laser heads reading holographic drives. By taking the coordinates of where the Stealth was at the moment with the surrounding threats identified, it could make decisions for the pilots in real time.

Box of Rocks

Steven Mosher :
How does your model handle the runway being repaired after each engagement, or not being damaged at all?
Believe me, a couple of my Seabees can do wonders to models….

PMHinSC

Dr. Lurtz says: October 24, 2013 at 9:21 am
“The Canadian model is a success. It proves that CO2 is NOT the driving “force” that caused “Global Warming” [Global Temperature Stagnation] from 1979 until 2012.”
Good point.
It might also indicate that there are other deficiencies in the model. If CO2 sensitivity cannot be adjusted to make the model agree with reality, then CO2 sensitivity is not the only significant deficiency. Likewise, if there are no other significant model deficiencies, then you should be able to determine climate sensitivity to CO2 by adjusting the CO2 component until the model agrees with reality (after all, it is claimed we understand the physics of CO2 in the atmosphere). Although we talk about CO2 because that is the money molecule, perhaps we should be talking more about other molecules (H2O, for example) as well.
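The single-knob logic in the comment above can be sketched numerically. The snippet below is a toy illustration only — every number in it (the synthetic “observed” trend, the CO2 ramp, the forcing form) is hypothetical, not CanESM2 output: if sensitivity to CO2 were the one free parameter, a least-squares fit of a simple logarithmic forcing response against observations would pin it down.

```python
# Toy sketch: infer a single "sensitivity" parameter by least squares.
# All numbers are hypothetical, for illustration only.
import numpy as np

years = np.arange(1979, 2013)

# Hypothetical "observed" anomalies: ~0.149 C/decade trend plus noise
rng = np.random.default_rng(0)
obs = 0.0149 * (years - 1979) + rng.normal(0, 0.05, years.size)

# Toy forcing model: response = sensitivity * log2(CO2 / CO2_initial)
co2 = 337.0 + 1.7 * (years - 1979)   # rough assumed ppm ramp
forcing = np.log2(co2 / co2[0])

# Least-squares estimate of the one free parameter (no intercept)
sensitivity = np.dot(forcing, obs) / np.dot(forcing, forcing)
print(f"fitted sensitivity: {sensitivity:.2f} C per CO2 doubling")
```

The point of the commenter stands or falls on the premise: the fit only recovers a meaningful sensitivity if CO2 response is the sole significant error in the model.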

SCheesman

We just want it to be warmer.

Eric

Mosher
I think you are also overlooking a VERY large discrepancy in your comparison, specifically in regards to your bomb comparison. The military has hundreds of years of observations of how weapons explode and the types of damage they do, so they can use this data to make a fairly accurate model that predicts the type of damage to expect when dropping a certain size bomb on a runway and how many it may take to destroy said runway. If they ran a simulated bombing run from the Iraq war, to test the system, and it came back stating that the bombs didn’t explode per se but instead became a pot of petunias and a large sperm whale on their way to the ground… I don’t think they would use this model to predict outcomes of future bombing runs… do you?

Ken Gregory said in his guest essay at WUWT,
“. . . The [Canadian] climate model produces one of the most extreme warming projections of all the 30 models evaluated by the IPCC. . . .”

– – – – – – – – –
Ken Gregory,
Well argued and the graphics are great. Thank you.
Canadian modeling extremists? What comes to mind is this paraphrase of the famous Gilbert and Sullivan opera lyric. And apologies to the memories of Gilbert and Sullivan.

They are the very model of a modern Major-Extremist,
They’ve information adjusted, cherry-picked, and alarmist,

The rest of the G&S lyrics in that passage are also promising for adaptation to the theme of Canadian Modeling Extremists. : )
John

KNR

The first rule of climate ‘science’, “if the models and reality differ in value, it’s reality which is wrong”, takes care of this issue.

Does Steven Mosher still believe in the Carbon dioxide nonsense?
He still does not get it that it is a scam?
http://blogs.24.com/henryp/2011/08/11/the-greenhouse-effect-and-the-principle-of-re-radiation-11-aug-2011/

You forget the ultimate argument of Andrew Weaver and the Establishment: Even if we are wrong we are right, because reducing fossil-fuel use is “just the right thing to do.” The fact that we require the necessary taxes to guarantee our indexed pensions is irrelevant.

There is a reason why we call BC coastal areas the “Left Coast”. Suzuki is a “micro” biologist representative of left coast thinking in so many ways, not the least of which is “do as I say, not as I do.”

Crispin in Waterloo

@Mosher
Willis got it in before me. Completely inappropriate comparison – one where there is a set of historical data and one where there is not. And the military example is not strictly a model; it is more like a computer game. If the programme is not modeling something real, is it a model (read ‘toy version’)? It runs on a bunch of intelligently chosen assumptions, but none make it a model of reality, and it doesn’t have to be.

Jaye Bass

Sounds to me like Mosher has jumped the shark in the extreme.

Mosher,
Another military analogy? What is the fixation with climate science skeptics often using military analogies? Jeesh, it is getting a little creepy of late in the skeptical communities.
Analogy is to science as Walt Disney is to Feynman.
Analogy is poetry at best, at worst it is mere rhetorical device.
John

Mike Smith

Yeah, the measured temps in Canada are a tad lower than expected. That’s because the extra heat is hiding in the deep tundra.
Any day now (10/30?) it’s gonna jump out and go boo.

vigilantfish

Stonehenge says:
October 24, 2013 at 9:34 am
As a BC resident, I can add to C.M. Carmichael’s comment. A former leader in the U. Vic. Climate modelling group, Andrew Weaver, is now the sole Green Party BC provincial member of the legislative assembly and is known locally as a CAGW activist.
——
Out of curiosity I looked up Weaver’s current status as a professor in the School of Earth and Ocean Sciences at the University of Victoria, British Columbia, to see if he had given that up to become an MLA. It occurred to me he might have left after having done such an abysmal job with his science, but no. I wonder how one can simultaneously be a full-time professor and an MLA – which is supposed to be a full-time job also, representing his constituents (and not just on climate and environmental issues). Wikipedia and the Green Party page fail to enlighten. The U. Vic. faculty website also has no mention of whether he is on leave.

Political Junkie

As noted by some above, Canadian climate change rock stars Suzuki and Weaver (a good American comparison would be Gore and Mann) are having a hell of a week!
Sun TV is really skewering Suzuki for his gargantuan carbon footprint, four homes, five kids and demonstrated spectacular ignorance of climate change science. On Australian TV this self-promoting “guru” did not recognize even one of these acronyms: NASA GISS, UAH, RSS or HadCRUT! Really! Not one.
The current article of course is about Weaver’s work – these models were built under his guidance at the University of Victoria.

Solomon Green

In his example of modelling the number of bombs necessary to put a runway out of action Steven Mosher uses historic data which tells him that his model always underestimated the number of bombs required and hence, as a planner he would invariably allow for an extra bomb.
Taking his analogy into climate forecasting, historic data has revealed that all the current and past climate models, when back-tested, have overestimated temperature increases by a significant factor. Therefore, in order to allow for this temperature overestimation bias, Mr. Mosher should always reduce the temperature scenarios obtained from these models by a significant factor should he wish to use them for making long term predictions. To use his analogy, instead of giving him a “floor” they give him a “ceiling”.
Reverting to his bomb analogy he says “To be safe I’m not going to count on 2 bombs doing the job. I’ll need to plan for 3. meanwhile the modelling guys continue to refine their model”. If they are any good the modelling guys will continue to refine their model until it can correctly hindcast.
Unfortunately I have seen no sign that climate modellers are refining their models to provide accurate hindcasts. Rather they appear to find it easier to refine, or filter, the historic data in order to match their models.
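Solomon Green’s “ceiling” adjustment amounts to simple arithmetic. A minimal sketch, using the trend figures from the essay (0.337 vs 0.149 °C/decade) and a purely hypothetical 2100 projection figure — not an actual CanESM2 number:

```python
# "Ceiling" adjustment sketch: scale a model projection by the observed
# ratio of trends. Trend figures are from the essay; the 2100 projection
# is hypothetical, for illustration only.
model_trend_per_decade = 0.337   # CanESM2 model-mean trend, 1979-2012 (C/decade)
obs_trend_per_decade = 0.149     # average of three observational datasets

bias_factor = obs_trend_per_decade / model_trend_per_decade  # ~0.44

model_projection_2100 = 3.0      # hypothetical model warming by 2100, C
scaled_projection = model_projection_2100 * bias_factor
print(f"bias factor: {bias_factor:.2f}, scaled projection: {scaled_projection:.2f} C")
```

This is exactly the commenter's point in reverse of Mosher's bomb example: a consistently biased model still carries information, provided the bias is applied as a correction rather than ignored.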

Crispin in Waterloo

@Ken
“The individual runs of the model are produced by slightly changing the initial conditions of several parameters. These tiny changes cause large changes of the annual temperatures between runs due to the chaotic weather processes simulated in the model. The mean of the five runs cancels out most of the weather noise because these short term temperature changes are random.”
I have a problem with the idea that these temperature changes are ‘random’. Random means assuming there are causeless effects. Weather prediction is definitely based on deterministic outcomes from an initial state. The better described the initial state, the more accurate the prediction and the farther into the future the prediction can be made with meaningful accuracy.
Because the weather systems are chaotic (which many people confuse with ‘random’) it is very difficult to predict future temperatures for more than a few days. The fact that the model runs are clustered along a line does not mean they cancel each other out and produce a reasonably central trend. It means the model is simplistic; simple, if you will. Chaotic means the result is highly dependent on the initial conditions, not that the output is not deterministic, and not that it includes randomness. Their model outputs are not chaotic enough.
Chaotic systems look random to the untrained eye, but that is just ignorance. If the conditions that initiate the deterministic calculations are known precisely, it no longer looks random, it is just highly sensitive to those initial conditions, right down to 12 digit precision rounding errors.
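Crispin’s point about deterministic chaos and 12-digit sensitivity can be demonstrated with the standard textbook toy, the logistic map (nothing to do with the CanESM2 code itself): two runs differing only in the twelfth decimal place of the starting value track each other at first and then diverge completely, despite the rule being fully deterministic.

```python
# Deterministic chaos sketch: the logistic map x -> r*x*(1-x) is fully
# deterministic, yet trajectories starting ~1e-12 apart eventually diverge.
def logistic(x, r=3.99, steps=60):
    out = [x]
    for _ in range(steps):
        x = r * x * (1.0 - x)
        out.append(x)
    return out

a = logistic(0.400000000000)
b = logistic(0.400000000001)   # differs in the 12th decimal place

# Early on the two runs are indistinguishable; later they decorrelate
print(abs(a[5] - b[5]))    # still tiny
print(abs(a[60] - b[60]))  # typically order-one
```

Sensitivity to initial conditions is not randomness: rerunning either trajectory from the same start reproduces it exactly.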
The similar tracks for many years of the model runs indicate that the programme is not set to the level of chaos (internal sensitivity) that real climate systems have. During at least some runs, the current historical temperature trend should have been produced. It is not in evidence at all! Were the model to have included sensitivities sufficient to have produced something similar to actual temperatures, other runs would be as wildly different in other directions. That would show the true nature of the weather system and its inherent unpredictability.
This can be countered with the observation that if enough initial parameters and their influences were known, the result would consistently reproduce actual temperatures because it would consider solar and magnetic and GCR influences properly. Fair enough, but looking at the cluster of results being so consistent, and consistently wrong, the modellers are either making huge mistakes about what influences the climate, hugely misallocating forcing effects, or making huge simplifications that prevent the model output from occasionally reproducing reality.
Whatever the defect(s), your conclusion would be the same of course: any model producing consistently incorrect predictions is either not considering influences, not considering them correctly, or not weighting them in proportion to their actual impact.
Thanks for the effort made.

Jquip

John Whitman: “What is the fixation with climate science skeptics often using military analogies?”
Skeptics tend to be capitalists => Capitalists tend to be conservative => Conservatives tend to be pro-military => The Military uses models *even worse than* Climatology => Ergo, Skeptics are pro-‘worse than’-models => Therefore, shut up.
Quod erat Climatum.

@Steven Mosher says: October 24, 2013 at 8:40 am
=================================================
All very well, Mr. Mosher, but the models you reference were all being used for a clear purpose. Climate models, however, are not, as the IPCC has so clearly stated.
http://www.cfact.org/2013/02/06/global-warming-was-never-about-climate-change/
IPCC official Ottmar Edenhofer, speaking in November 2010, advised that: “…one has to free oneself from the illusion that international climate policy is environmental policy. Instead, climate change policy is about how we redistribute de facto the world’s wealth…”
You are not comparing like with like. The Canadian modellers may think that they are pursuing science, but the reality is that they are just another group of useful idiots.
More from the above link…
Opening remarks offered by Maurice Strong, who organized the first U.N. Earth Climate Summit (1992) in Rio de Janeiro, Brazil, revealed the real goal: “We may get to the point where the only way of saving the world will be for industrialized civilization to collapse. Isn’t it our responsibility to bring this about?”
Former U.S. Senator Timothy Wirth (D-CO), then representing the Clinton-Gore administration as U.S Undersecretary of State for global issues, addressing the same Rio Climate Summit audience, agreed: “We have got to ride the global warming issue. Even if the theory of global warming is wrong, we will be doing the right thing in terms of economic policy and environmental policy.” (Wirth now heads the UN Foundation which lobbies for hundreds of billions of U.S. taxpayer dollars to help underdeveloped countries fight climate change.)
Also speaking at the Rio conference, Deputy Assistant of State Richard Benedick, who then headed the policy divisions of the U.S. State Department said: “A global warming treaty [Kyoto] must be implemented even if there is no scientific evidence to back the [enhanced] greenhouse effect.”
In 1998, former Canadian Minister of the Environment Christine Stewart told editors and reporters of the Calgary Herald: “No matter if the science of global warming is all phony…climate change [provides] the greatest opportunity to bring about justice and equality in the world.”
In 1996, former Soviet Union President Mikhail Gorbachev emphasized the importance of using climate alarmism to advance socialist Marxist objectives: “The threat of environmental crisis will be the international disaster key to unlock the New World Order.”