Guest Essay by: Ken Gregory
The Canadian Centre for Climate Modelling and Analysis, located at the University of Victoria in British Columbia, submitted five runs of its climate model CanESM2 for use in the fifth assessment report of the Intergovernmental Panel on Climate Change (IPCC). The model produces one of the most extreme warming projections of all 30 models evaluated by the IPCC. (See Note 1.) It badly fails to match the surface and atmosphere temperature observations, both globally and regionally, as presented in six graphs.
Global Trends
The graph below compares the near-surface global temperatures to the model runs.
Figure 1. Canadian climate model simulations of near-surface global temperatures and three datasets of observations.
The five thin lines are the climate model runs. The thick black line is the average (mean) of the five runs. The satellite data in red is the average of two analyses of lower troposphere temperature. The radiosonde weather balloon data is from the NOAA Earth System Research Laboratory. The surface data is the HadCRUT4 dataset. (See Note 2 for the data sources.) The best fit linear trend lines (not shown) of the model mean and all datasets are set to zero at 1979, which is the first year of the satellite data. We believe this is the best method for comparing model results to observations.
Any computer model that is used to forecast future conditions should reproduce known historical data. A computer climate model that fails to produce a good match to historical temperature data cannot be expected to give useful projections.
Figure 1 shows that the computer model simulation after 1983 produces too much warming compared to the observations. The discrepancy between the model and the observations increases dramatically after 1998, as there has been no global near-surface warming during the last 16 years. With the model and observation trends set to zero in 1979, the discrepancy between the model mean of the near-surface global temperatures and the surface observations by 2012 was 0.73 °C. This discrepancy is almost as large as the 0.8 °C estimated global warming during the 20th century. The model warming trend, as determined by the best fit linear line from 1979 to 2012 through the model mean, is 0.337 °C/decade, while the average trend of the three observational datasets is 0.149 °C/decade. Therefore, the model warming rate is 226% of the observed rate (0.337/0.149 = 226%).
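For readers who want to reproduce this kind of comparison, here is a minimal sketch in Python (not the author's Excel spreadsheet) of the trend-alignment method described above: fit a least-squares line to each annual series, offset the series so its trend line passes through zero in 1979, then compare decadal trend rates. The annual series below are synthetic placeholders, not the actual model or observational data.

```python
import numpy as np

def zero_trend_at_1979(years, temps):
    """Offset a series so its best-fit linear trend line passes through zero in 1979."""
    slope, intercept = np.polyfit(years, temps, 1)   # deg C per year, deg C
    return temps - (slope * 1979.0 + intercept), slope

# Synthetic stand-ins for the model mean and one observational dataset (1979-2012).
years = np.arange(1979, 2013)
rng = np.random.default_rng(1)
model_mean = 0.0337 * (years - 1979) + rng.normal(0, 0.05, years.size)
observed   = 0.0149 * (years - 1979) + rng.normal(0, 0.05, years.size)

model_aligned, model_slope = zero_trend_at_1979(years, model_mean)
obs_aligned,   obs_slope   = zero_trend_at_1979(years, observed)

print(f"model trend:      {model_slope * 10:.3f} C/decade")
print(f"observed trend:   {obs_slope * 10:.3f} C/decade")
print(f"trend ratio:      {model_slope / obs_slope:.0%}")          # ~226% in the article
print(f"2012 discrepancy: {model_aligned[-1] - obs_aligned[-1]:.2f} C")
```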
The large errors are primarily due to incorrect assumptions about water vapor and cloud changes. The climate model assumes that water vapor, the most important greenhouse gas, would increase in the upper atmosphere in response to the small warming effect from CO2 emissions. A percentage change in water vapor has over five times the effect on temperatures as the same percentage change of CO2. Contrary to the model assumptions, radiosonde humidity data show declining water vapor in the upper atmosphere as shown in this graph.
The individual runs of the model are produced by slightly changing the initial conditions of several parameters. These tiny changes cause large differences in the annual temperatures between runs due to the chaotic weather processes simulated in the model. The mean of the five runs cancels out most of the weather noise because these short-term temperature changes are random.
The climate model assumes that almost all of the temperature changes are due to anthropogenic greenhouse gas emissions and includes only insignificant natural causes of climate change. The simulated weather noise includes the effects of ENSO (El Niño and La Niña), but does not include multi-decadal temperature changes due to natural ocean oscillations or solar-induced natural climate change other than the small changes in the total solar irradiance. The historical record includes the effects of large volcanoes, which can last for three years. The projections assume no large future volcanoes. There have been no significant volcanic eruptions since 1992 that could have affected temperatures, as shown in this graph of volcanic aerosols.
The model fails to match the temperature record because it overestimates the effects of increasing greenhouse gases and does not include most natural causes of climate change.
Figure 2 compares the mid-troposphere global temperatures at the 400 millibar pressure level, about 7 km altitude, to the model. (millibar = mbar = mb. 1 mbar = 1 hPa.)
Figure 2. Canadian climate model simulations of mid-troposphere global temperatures and two datasets of observations. The balloon data is at the 400 mbar pressure level, about 7 km altitude.
The discrepancies in 2012 between the model mean of the global mid-troposphere temperatures and the satellite and balloon observations were 1.26 °C and 1.04 °C, respectively. The model temperature trend is 650% of the satellite trend.
The satellites measure the temperature of a thick layer of the atmosphere. The satellite temperature weighting function describes the relative contributions that each atmospheric layer makes to the total satellite signal. We compared the balloon trends weighted by the lower troposphere satellite temperature weighting functions to the near-surface observed trends for global and tropical data. Similar comparisons were made for the mid-troposphere. The weighted thick layer balloon trends for the lower and mid-troposphere were similar to the near-surface and 400 mbar balloon trends, respectively. We conclude that the satellite lower-troposphere and mid-troposphere temperature trends are approximately representative of the near-surface and 400 mbar temperature trends, respectively. (See Note 3 for further information.)
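A minimal sketch of that thick-layer comparison is shown below, assuming a hypothetical set of pressure levels, weighting-function values, and per-level balloon trends; these numbers are placeholders, not the actual RSS weighting functions or NOAA ESRL trends cited in Note 3.

```python
import numpy as np

# Hypothetical pressure levels (mb), LT weighting-function values, and radiosonde
# temperature trends (C/decade) -- illustrative placeholders only.
pressure_mb    = np.array([1000, 850, 700, 500, 400, 300])
lt_weights     = np.array([0.22, 0.26, 0.24, 0.16, 0.08, 0.04])
balloon_trends = np.array([0.14, 0.15, 0.15, 0.14, 0.13, 0.12])

# Thick-layer trend: per-level trends averaged with the satellite weighting function.
weighted_trend = np.sum(lt_weights * balloon_trends) / np.sum(lt_weights)
surface_trend  = balloon_trends[0]   # 1000 mb taken as the near-surface proxy

print(f"weighted thick-layer trend: {weighted_trend:.3f} C/decade")
print(f"near-surface trend:         {surface_trend:.3f} C/decade")
print(f"difference:                 {weighted_trend / surface_trend - 1:+.1%}")
```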
Tropical Trends
Figure 3 compares the near-surface temperatures in the tropics, from 20 degrees North to 20 degrees South latitude, to the climate model simulations. The model temperature trend is 300% of the average of the three observational trends. The discrepancy between model and near-surface observation trends in the tropics is much greater than for the global average.
Figure 3. Canadian climate model simulations of near-surface tropical temperatures and three datasets of observations.
Figure 4 compares the warming trend in the tropical mid-troposphere to the observations. The increasing tropical mid-troposphere water vapor in the model makes the warming trend at the 400 mbar pressure level 55% greater than the surface trend.
The discrepancy in 2012 between the model mean of the mid-troposphere tropical temperatures and the balloon observations was 1.33 °C. The model temperature trend is 560% of the average of the two observational trends.
Figure 4. Canadian climate model simulations of mid-troposphere 400 mbar tropical temperatures and two datasets of observations.
The discrepancy is even greater at the 300 mbar pressure level, which is at about 9 km altitude.
Figure 5 compares the model to the balloon temperatures at the 300 mbar pressure level in the tropics. The tropical warming trend at 300 mbar in the model is exactly twice the model surface trend. In contrast, the temperature trend of the balloon data at 300 mbar is an insignificant 3% greater than the surface station trend. The discrepancy in 2012 between the model mean of the mid-troposphere tropical temperatures at the 300 mbar level and the balloon observations was 1.69 °C.
The modeled temperature trends (1979 to 2012) of the tropical atmosphere at 7 km and 9 km altitude are an astonishing 470% and 490% of the radiosonde balloon trends. These are huge errors in the history match!
Figure 5. Canadian climate model simulations of mid-troposphere 300 mbar tropical temperatures and weather balloon observations.
South Trends
In the far south, the near-surface modeled temperature trend is in the opposite direction from the observations. Figure 6 compares the model near-surface temperatures from 50 degrees South to 75 degrees South latitude to the observations. The modeled temperatures from 1979 are increasing at 0.35 °C/decade while the surface temperatures are decreasing at 0.07 °C/decade.
Temperatures over most of Antarctica (except the Antarctic Peninsula) have been falling over the last 30 years. The Antarctic sea ice extent is currently 1 million square km greater than the 1979 to 2000 mean. The sea ice extent has been greater than the 1979 to 2000 mean for all of 2012 and 2013 despite rising CO2 levels in the atmosphere.
Figure 6. Canadian climate model simulations of near-surface southern temperatures (50 S to 75 S latitude) and two datasets of observations.
Summary of Trend Errors
The table below summarizes the model trend to observation trend ratios.
Model Trend to Observation Trend Ratios (1979 to 2012)

| | Model/Satellite | Model/Balloon | Model/Surface |
|---|---|---|---|
| Global Surface | 254% | 209% | 220% |
| Global 400 mb | 650% | 315% | |
| Tropics Surface | 304% | 364% | 249% |
| Tropics 400 mb | 690% | 467% | |
| Tropics 300 mb | | 486% | |
| South Surface | 3550% | | -474% |
The table shows that the largest discrepancies between the model and observations are in the tropical mid-troposphere. The model error increases with altitude and is greatest at the 300 mbar pressure level at about 9 km altitude. The ratio of the modeled tropical mid-troposphere 300 mbar warming rate to the surface warming rate is 200%, and is a fingerprint of the theoretical water vapor feedback. This enhanced warming rate over the tropics, named the “hot spot”, is responsible for 2/3 of the warming in the models. The fact that the observations show no tropical mid-troposphere hot spot means that there is no positive water vapor feedback, so the projected warming rates are grossly exaggerated.
Model results with large history match errors should not be used for formulating public policy. A model without a good history match is useless and there can be no confidence in its projections. The lack of a history match in the Canadian model output shows that the modeling team has ignored a fundamental requirement of computer modeling.
A global map of the near-surface air temperature from the model for April 2013 is here.
Anti-Information
Patrick Michaels and Paul “Chip” Knappenberger compared the model output to actual 20th century temperature data to determine what portion of the actual data can be explained by the model. They wrote, “One standard method used to determine the utility of a model is to compare the “residuals”, or the differences between what is predicted and what is observed, to the original data. Specifically, if the variability of the residuals is less than that of the raw data, then the model has explained a portion of the behavior of the raw data.”
In an article here, they wrote “The differences between the predictions and the observed temperatures were significantly greater (by a factor of two) than what one would get just applying random numbers.” They explained that a series of random numbers contain no information. The Canadian climate model produces results that are much worse than no information, which the authors call “anti-information”.
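As a rough illustration of that test, here is a sketch that compares the variance of the residuals to the variance of the raw data; the series used are synthetic stand-ins, not the actual 20th-century observations or CanESM2 output.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: a noisy "observed" record and a smooth "model" projection.
observed = np.cumsum(rng.normal(0.005, 0.1, 100))   # random-walk-like observations
modeled  = np.linspace(0.0, 1.5, 100)               # steadily warming model output

residuals = observed - modeled

print(f"variance of raw data:  {observed.var():.3f}")
print(f"variance of residuals: {residuals.var():.3f}")
if residuals.var() < observed.var():
    print("model explains some portion of the observed behaviour")
else:
    print("residuals are noisier than the data: anti-information")
```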
The Canadian Model Contributes to Fuel Poverty
The results from the Canadian climate model were used in a U.S. Global Change Research Program report provided to the US Environmental Protection Agency to justify regulating CO2. The authors of that report were told that the Canadian climate model produces only anti-information. They confirmed this fact, but published their report unchanged. The Canadian government has indicated it will follow the lead of the U.S.A. in regulating CO2 emissions.
The Canadian climate model is also used by the IPCC to justify predictions of extreme anthropogenic warming despite the fact that the model bears no resemblance to reality. As does the climate model, the IPCC ignores most natural causes of climate change and misattributes natural climate change to greenhouse gas emissions. Here is a list of 123 peer-reviewed papers published from 2008 to 2012 on the solar influence on climate that were ignored by the IPCC in the fifth assessment report.
Climate alarmism based on climate models that don’t work has so far cost the world $1.6 trillion in a misguided and ineffective effort to reduce greenhouse gas emissions. These efforts have caused electricity prices to increase dramatically in Europe, creating fuel poverty and putting poor people at risk. High fuel costs and cold winter weather are blamed for 30,000 excess deaths in Britain last year. Europe’s energy costs have increased by 17% for consumers and 21% for industry in the last four years. The Canadian climate model’s failures have contributed to this misery.
Canadian politicians and taxpayers need to ask why we continue to fund climate models that can’t replicate the historical record and produce no useful information.
Ken Gregory, P.Eng.
Ten years of providing independent
climate science information
Notes:
1. The Canadian Earth System Model CanESM2 combines the CanCM4 model and the Canadian Terrestrial Ecosystem Model, which models the land-atmosphere carbon exchange. Table 9.5 of the IPCC Fifth Assessment Report Climate Change 2013 shows that the CanESM2 transient climate sensitivity is 2.4 °C (for double CO2). The 90% certainty range of the transient climate sensitivity across the 30 models is 1.2 °C to 2.4 °C.
2. The Canadian climate model CanESM monthly data was obtained from the KNMI Climate Explorer here. The satellite data was obtained from the University of Alabama in Huntsville here (LT) and here (MT), and from Remote Sensing Systems here (LT) and here (MT). The radiosonde weather balloon data was obtained from the NOAA Earth System Research Laboratory here. The global surface data is from the HadCRUT4 dataset prepared by the U.K. Met Office Hadley Centre and the Climate Research Unit of the University of East Anglia, here. The HadCRUT4 tropical data (20 N to 20 S) was obtained from KNMI Climate Explorer. An Excel spreadsheet containing all the data, calculations and graphs is here.
3. The average global weather balloon trend over all pressure levels from 1000 mbar to 300 mbar, weighted by the LT satellite temperature weighting functions, is only 3% greater than the balloon trend at 1000 mbar, confirming that the LT satellite trends are representative of the near-surface temperature trend. The weighted average tropical weather balloon trends are 3% greater than the average of the 1000 mbar balloon trend and the surface station trend. The temperature weighting functions for land and oceans were obtained from the Remote Sensing Systems website. The average global and tropical balloon trends over all pressure levels weighted by the mid-troposphere (MT) satellite temperature weighting functions are 13% and 4% greater than the balloon trends at 400 mbar. If the MT satellite trends were adjusted by these factors to correspond to the 400 mbar pressure level, the global and tropical satellite trend adjustments would be -0.017 °C/decade and -0.004 °C/decade, respectively. Since the MT satellite trends are already less than the 400 mbar balloon trends, no adjustments were applied.
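A sketch of the adjustment arithmetic described in this note follows; the satellite trend values fed in are illustrative assumptions (not the actual MT trends) chosen only to show how adjustments of roughly -0.017 and -0.004 °C/decade arise from 13% and 4% weighting excesses.

```python
def adjust_to_400mb(satellite_trend, weighting_excess):
    """Estimate the adjustment needed to map a thick-layer MT satellite trend
    onto the 400 mb level, given the fraction by which the weighted balloon
    trend exceeds the 400 mb balloon trend (e.g. 0.13 for 13%)."""
    adjusted = satellite_trend / (1.0 + weighting_excess)
    return adjusted - satellite_trend   # negative: the thick layer overstates 400 mb

# Illustrative (assumed) MT satellite trends in C/decade:
print(round(adjust_to_400mb(0.15, 0.13), 3))   # roughly -0.017 C/decade (global)
print(round(adjust_to_400mb(0.10, 0.04), 3))   # roughly -0.004 C/decade (tropics)
```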
=============================
PDF version of this article is here
Try comparing to radiosonde and satellite, that will look even more ridiculous; not even a Fail, maybe a fraud LOL
A bit OT (delete or put elsewhere if you wish) but Solar 24 picking up a bit
http://www.swpc.noaa.gov/SolarCycle/sunspot.gif I was wondering if all the low SSN values (but not very, very low) add up to considerable flux over time even though the cycle per se is “low”; that may support Leif’s assertions re solar activity and climate to some extent anyway. It (the idea) may be complete nonsense as well LOL
Clearly, reality is flawed. Why have there been no adjustments?
Solid well written article. Like the fuel poverty part. We have to keep hammering away at the alarmist side. They are trying to stand their ground and repeat their cult mantras of “Climate Change is Real” and “End the Denial” but the public is starting to notice that the alarmist arguments are just a religious chant sounding like la la la la.
Thanks, Ken. Good article.
It is not surprising, but it is sad that policy is being based on unrealistic models, all over the world.
The underlying cause is not in the models, or even in post-modern climatology of the IPCC. It is more basic, in the advance of socialism.
I think CGI did the coding
Why do the models not take reality into account?
http://www.woodfortrees.org/plot/hadcrut4gl/from:1987/to:2014/plot/hadcrut4gl/from:2002/to:2014/trend/plot/hadcrut3gl/from:1987/to:2014/plot/hadcrut3gl/from:2002/to:2014/trend/plot/rss/from:1987/to:2014/plot/rss/from:2002/to:2014/trend/plot/hadsst2gl/from:1987/to:2014/plot/hadsst2gl/from:2002/to:2014/trend/plot/hadcrut4gl/from:1987/to:2002/trend/plot/hadcrut3gl/from:1987/to:2002/trend/plot/hadsst2gl/from:1987/to:2002/trend/plot/rss/from:1987/to:2002/trend
it is globally cooling?
“Any computer model that is used to forecast future conditions should reproduce known historical data. A computer climate model that fails to produce a good match to historical temperature data cannot be expected to give useful projections.”
well that’s patently false.
Some simple examples come to mind, like the distance-to-empty model in my car’s navigation system. It’s always wrong but useful.
but today I’ll give you a life and death one from operational analysis.
In order to make decisions about our future defenses the DoD routinely uses models that are not verified against the past and can never be verified against the past. The models are even used in theatre to evaluate strategic and tactical decisions.
Example: the design of the F-22. During the design of that system the model Tac Brawler was used to inform decisions about the number of missiles the aircraft would carry; 4, 6 or 8.
To make the decision the modelling departments had to simulate future air battles.
1. the planes and weapons being tested were all imaginary projections
2. there was no historical data to verify against. There is very little real data on
air combat between aircraft in the modern era.
Despite this utter lack of historical data to verify against, the output of the models was useful in informing decisions.
Where you do have historical data to verify against, you can even use bad fits to inform decisions. For example: in modelling how many bombs it takes to cripple a runway you would construct a model. The model may predict 3 bombs and your historical data may tell you that the model always underestimates the total required. So the model’s hindcast is low, say by one bomb. Going forward, say you want to model the effect of a bomb with an increased number of cluster units. You use your existing model. It predicts 2 bombs for this future weapon. Is that useful? Sure it is. If I am attack planning I use that model (an example would be bug splat); the model predicts 2, and as a planner I decide that 3 should be the planning number. Why? Because my model that fails to hindcast is characteristically low. To be safe I’m not going to count on 2 bombs doing the job. I’ll need to plan for 3. Meanwhile the modelling guys continue to refine their model. They don’t consider their model falsified. They consider it as having limited utility. Important utility (it gives you a “floor” estimate), but utility nonetheless.
As an alumnus of UVic it saddens me to see such obvious propaganda being displayed; anyone with any pride would keep this model’s output as their biggest secret. Tell people that the dog ate your model output, or better yet tell them that occasionally water from the deep ocean grabs model outputs and whisks them away to the deep ocean, really deep, so don’t bother looking. BC in general and Victoria in particular has always been a “Hippie” which means “green” haven. Canada has its only federal Green party representative there, and she is a real nutter.
I guess that means if the model differs from reality then reality must be wrong. Or possibly the modelers have become detached from reality.
Paging Arch Warmists weaver?
This is symptomatic of the Canadian bureaucracy’s approach to CAGW.
Complete and total F.U.D.
Whether through incompetence, cowardice, malice or orders from above, they have betrayed the citizens of Canada.
Utter and expensive nonsense, such as “Environment Canada’s Science”, billions of dollars spent on studies of the effect of AGW, while never proving for themselves that there is any such beast.
Climate Change mitigation policies are all the rage, yet there exists zero science confirming the need for these policies, even government admits this as they fail to document the science they claim their policy is based on.
This whole scheme bears the handprints of a group of people centred around the federal Liberal Party of Canada.
With research product like, “The Impact of a Human Induced Shift in North Pacific Circulation on North American Surface Climate, (J.C. Fyfe, 2008) it is no wonder that CCCMA has also managed to produce the first elected Provincial Green Party MLA in Canada, “Nobel Prize winner” Dr. Andrew J. Weaver.
As with many academics these days, Dr. Weaver seems to keep fingers in several pies at once. He is also a principal of Solterra Solutions Ltd. a private corporation providing “Climate, Forensic and Educational Services” including promotion of Weaver’s books on Global Warming.
http://www.solterrasolutions.com/index.php?page=2
I like using graphs from this site in my classroom, but the Y axes are mislabeled. Would it be difficult to change them to “temperature anomalies” or “deviations”?
Steven Mosher:
Usually your examples have some merit and represent a valid – or at least arguable – point of view, but your examples of models that don’t need to reflect past data are unrelated to climate models and do not strengthen your argument.
I’m a Canadian whose taxes pay for this, but this isn’t my complaint/question. The question is, why would ANY SCIENTIST submit for consideration models that fail so badly to match observation?
There has to be some belief that complex reality disguises itself ONLY IN THE SHORT TERM, a period of time of no real concern. It would be like saying things fall down, not up, now, yes, but tomorrow they will go up because my model says the sky sucks more than the earth.
The only other reason I can imagine is that the scientists involved have some OTHER temperature profile they are matching. Does the climate war disconnect go this far, that the warmists and the skeptics are using two different sets of “observations”?
“Despite this utter lack of historical data to verify against, the output of the models was useful in informing decisions.”
Of course it was useful. The opinions of various decision makers were useful. But that fact doesn’t make the opinions of various decision makers into science or something like science. And we are talking about science here because we are using words like ‘verify’ and the crucially important ‘predict’.
If there is no prediction there is no science. And the issue here, as always, is whether the models meet the standards of science. If they do not meet the standards of science then they cannot be offered as products of science. Just ask the IPCC if they are willing to withdraw the claim that the models are scientific. Ask yourself the same question and give us your answer.
As a postmodernist, you are unwilling to set forth standards for science. You are unwilling to set forth standards for scientific prediction. Regarding prediction, you are totally willing to speak with the vulgar. For example, if I say that Jupiter’s surface will become purple polka dots against a white background at exactly 3 pm today, EDT, you will accept that as a scientific prediction, just a highly unlikely one. Nonsense. All scientific predictions must make reference to some set of well confirmed physical hypotheses which can be used to explain the predicted phenomenon. See Kepler’s three laws and Newton’s deduction of them for crystal clear examples.
Models and statistical analysis are useful in decision making but they do not reach the level of science. They do not do prediction. They do analysis. A consultant can legitimately offer his models and his time-series analysis as tools that can improve a corporation’s planning for the future. That consultant carries with him a great deal of tacit knowledge that he has acquired through experience. That tacit knowledge makes the consultant a very useful resource for decision makers. But the models, the statistical analysis, and the tacit knowledge do not rise to the level of science. They cannot magically become well confirmed physical hypotheses that can be used to predict and to explain the phenomena of interest.
If the Canadian model discussed in Gregory’s article above is offered by its makers as a scientific tool, and it should be because all climate modelers treat their models as substitutes for the scientific theory that they do not possess, then it is an abject failure either (Gregory) because it cannot reproduce historical data or (me) it cannot meet the standards for scientific prediction. In either case, the IPCC and all its friends should “fess up” and admit that modeling is not science.
Mosher: “In order to make decisions about our future defenses the DoD routinely uses models that are not verified against the past and can never be verified against the past. The models are even used in theatre to evaluate strategic and tactical decisions.”
Over history they’ve used sheep knuckle bones, half-flattened twigs of wood, astrologers, and animal sacrifices. Not saying they were ‘right,’ but they had utility. And since that is so, we know that you’re totally down with lynching a man and reading his entrails, because:
“They don’t consider their model falsified. They consider it as having limited utility. Important utility (it gives you a ‘floor’ estimate), but utility nonetheless.”
So I’ll tell you what. I’ll give you your point on utility, if you’ll start predicting the climate using palmistry.
The Canadian model is a success. It proves that CO2 is NOT the driving “force” that caused “Global Warming” [Global Temperature Stagnation] from 1979 until 2012.
Even though he is a zoologist, David Suzuki is an example of our Canadian scientists. Need I say more?
That’s as bad as bad can get. The entire theory of “Global Warming” must
be reevaluated by science (never mind the trillion dollar AGW Industry).
Canada disappoints
-gavin.
C.M. Carmichael says:
October 24, 2013 at 8:49 am
“BC in general and Victoria in particular has always been a “Hippie” which means “green” haven. Canada has its only federal Green party representative there, and she is a real nutter.”
That part of the world attracts them. In Eugene OR if you are discovered eating the wrong food an “intervention” is required by the entire community of your friends.
Steven Mosher says:
October 24, 2013 at 8:40 am
Steven, in your example there is no historical data … which is not the situation that Ken describes in the part you quoted.
What he is talking about is a situation where there is loads of historical data, but the computer model does a crappy job matching it. In your example, it would be a situation where someone decided to use a model for the design of the F-22, despite the fact that the model gave wildly wrong answers regarding historical air combat …
And yes, to be pedantic, you may be able to get bits and scraps of useful information from even the lousiest model … but that’s not what Ken’s talking about either.
Bottom line?
Ken is right. In general, if you trust a model to predict the future when it has shown that it gives wildly wrong answers about the past, you’re a fool.
w.
As a BC resident, I can add to C.M. Carmichael’s comment. A former leader in the U. Vic. Climate modelling group, Andrew Weaver, is now the sole Green Party BC provincial member of the legislative assembly and is known locally as a CAGW activist.
Steven Mosher says:
October 24, 2013 at 8:40 am
“but today I’ll give you a life and death one from operational analysis.
In order to make decisions about our future defenses the DoD routinely uses models that are not verified against the past and can never be verified against the past. The models are even used in theatre to evaluate strategic and tactical decisions.
Example: the design of the F-22. During the design of that system the model Tac Brawler was used to inform decisions about the number of missiles the aircraft would carry; 4 6 or 8.
To make the decision the modelling departments had to simulate future air battles.
1. the planes and weapons being tested were all imaginary projections
2. there was no historical data to verify against. There is very little real data on
air combat between aircraft in the modern era.
Despite this utter lack of historical data to verify against, the output of the models was useful in informing decisions.”
We’ll take your word for it, Mosher. Well, I’m kidding. I think you’re making a totally preposterous claim that you could never back up with anything. You’ve lost it. Really? The model decided what, 6 rockets? And? Now Obama kills all kinds of people with hellfire-equipped drones. And when an Oniks 800-P is in proximity he moves away his supercarriers. What exactly did you want to tell us? Are you out of your mind?
There is no back radiation; this reflects a failure to understand the difference between a Radiation Field and the real heat flux, which for a plane surface is the negative of the vector difference of opposing RFs.
This takes out the 333 W/m^2. Next remove the incorrect assumption of Kirchhoff’s Law of Radiation at ToA. What you then get is a no feedback result.
However, that is wrong because CO2 is the working fluid of the control system that corrects almost perfectly for any change in its concentration hence no present warming as the other warming effects, solar and polluted clouds, return to near zero.
However, beware of the future because the oceans are hiding the enormous cooling as cloud area increases because of low solar magnetic field.