Guest Essay by: Ken Gregory
The Canadian Centre for Climate Modelling and Analysis, located at the University of Victoria in British Columbia, submitted five runs of its climate model CanESM2 for use in the fifth assessment report of the Intergovernmental Panel on Climate Change (IPCC). The model produces one of the most extreme warming projections of the 30 models evaluated by the IPCC. (See Note 1.) It badly fails to match the surface and atmosphere temperature observations, both globally and regionally, as presented in six graphs below.
Global Trends
The graph below compares the near-surface global temperatures to the model runs.
Figure 1. Canadian climate model simulations of near-surface global temperatures and three datasets of observations.
The five thin lines are the climate model runs. The thick black line is the average (mean) of the five runs. The satellite data in red is the average of two analyses of lower troposphere temperature. The radiosonde weather balloon data is from the NOAA Earth System Research Laboratory. The surface data is the HadCRUT4 dataset. (See Note 2 for the data sources.) The best-fit linear trend lines (not shown) of the model mean and all datasets are set to zero at 1979, which is the first year of the satellite data. We believe this is the best method for comparing model results to observations.
Any computer model that is used to forecast future conditions should reproduce known historical data. A computer climate model that fails to produce a good match to historical temperature data cannot be expected to give useful projections.
Figure 1 shows that the computer model simulation produces too much warming compared to the observations after 1983. The discrepancy between the model and the observations increases dramatically after 1998, as there has been no global near-surface warming during the last 16 years. With the model and observation trends set to zero in 1979, the discrepancy between the model mean of the near-surface global temperatures and the surface observations by 2012 was 0.73 °C. This discrepancy is almost as large as the estimated 0.8 °C of global warming during the 20th century. The model warming trend, as determined by the best-fit linear line through the model mean from 1979 to 2012, is 0.337 °C/decade, while the average trend of the three observational datasets is 0.149 °C/decade. The model warming rate is therefore 226% of the observed rate (0.337/0.149 = 226%).
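For readers who want to check the arithmetic, the short Python sketch below reproduces the method: fit a least-squares line to each series, shift each series so its trend line passes through zero in 1979, and take the ratio of trends. The input arrays are synthetic stand-ins seeded to roughly mimic the reported trends, not the actual CanESM2 or observational data.

```python
import numpy as np

# Hypothetical annual-mean temperature anomalies, 1979-2012 (34 values each).
# Real inputs would be the CanESM2 ensemble mean and the HadCRUT4, satellite,
# and balloon series; the synthetic data here only mimic the reported trends.
years = np.arange(1979, 2013)
rng = np.random.default_rng(0)
model_mean = 0.0337 * (years - 1979) + rng.normal(0.0, 0.10, years.size)
observed   = 0.0149 * (years - 1979) + rng.normal(0.0, 0.10, years.size)

def zero_at_1979(years, series):
    """Shift a series so its best-fit trend line passes through zero in 1979."""
    slope, intercept = np.polyfit(years, series, 1)
    return series - (slope * 1979 + intercept)

def trend_per_decade(years, series):
    """Least-squares linear trend, converted to deg C per decade."""
    slope, _ = np.polyfit(years, series, 1)
    return slope * 10.0

model_aligned = zero_at_1979(years, model_mean)
obs_aligned = zero_at_1979(years, observed)

m = trend_per_decade(years, model_aligned)  # essay value: 0.337 C/decade
o = trend_per_decade(years, obs_aligned)    # essay value: 0.149 C/decade
print(f"model {m:.3f} vs observed {o:.3f} C/decade -> ratio {m / o:.0%}")
```

Note that shifting a series by a constant does not change its trend; the zeroing at 1979 only aligns the curves visually so the divergence is measured from a common starting point.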
The large errors are primarily due to incorrect assumptions about water vapor and cloud changes. The climate model assumes that water vapor, the most important greenhouse gas, increases in the upper atmosphere in response to the small warming effect from CO2 emissions. A given percentage change in water vapor has over five times the effect on temperatures as the same percentage change in CO2. Contrary to the model assumptions, radiosonde humidity data show declining water vapor in the upper atmosphere, as shown in this graph.
The individual runs of the model are produced by slightly changing the initial conditions of several parameters. These tiny changes cause large differences in annual temperatures between runs due to the chaotic weather processes simulated in the model. The mean of the five runs cancels out most of the weather noise because these short-term temperature changes are random.
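A toy calculation illustrates why the ensemble mean suppresses weather noise: averaging N independent runs reduces random year-to-year variability by roughly the square root of N. The runs below are synthetic, not actual CanESM2 output.

```python
import numpy as np

# Five synthetic runs: a common forced trend plus independent "weather" noise.
rng = np.random.default_rng(1)
n_runs, n_years = 5, 34
forced = 0.03 * np.arange(n_years)                        # common forced signal
runs = forced + rng.normal(0.0, 0.15, (n_runs, n_years))  # plus random noise

ensemble_mean = runs.mean(axis=0)
print(f"single-run noise:    {(runs[0] - forced).std():.3f}")
print(f"ensemble-mean noise: {(ensemble_mean - forced).std():.3f}")
# Expect roughly a sqrt(5) ~ 2.2x reduction in the random component.
```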
The climate model attributes almost all temperature changes to anthropogenic greenhouse gas emissions and includes only insignificant natural causes of climate change. The simulated weather noise includes the effects of ENSO (El Niño and La Niña), but not multi-decadal temperature changes due to natural ocean oscillations, nor solar-induced climate change other than the small changes in total solar irradiance. The historical record includes the effects of large volcanoes, which can last for three years; the projections assume no large future volcanoes. There have been no significant volcanic eruptions since 1992 that could have affected temperatures, as shown in this graph of volcanic aerosols.
The model fails to match the temperature record because it overestimates the effects of increasing greenhouse gases and does not include most natural causes of climate change.
Figure 2 compares the mid-troposphere global temperatures at the 400 millibar pressure level, about 7 km altitude, to the model. (millibar = mbar = mb. 1 mbar = 1 hPa.)
Figure 2. Canadian climate model simulations of mid-troposphere global temperatures and two datasets of observations. The balloon data is at the 400 mbar pressure level, about 7 km altitude.
The discrepancies in 2012 between the model mean of the global mid-troposphere temperatures and the satellite and balloon observations were 1.26 °C and 1.04 °C, respectively. The model temperature trend is 650% of the satellite trend.
The satellites measure the temperature of a thick layer of the atmosphere. The satellite temperature weighting function describes the relative contribution that each atmospheric layer makes to the total satellite signal. We compared the balloon trends, weighted by the lower-troposphere satellite temperature weighting functions, to the near-surface observed trends for global and tropical data. Similar comparisons were made for the mid-troposphere. The weighted thick-layer balloon trends for the lower and mid-troposphere were similar to the near-surface and 400 mbar balloon trends, respectively. We conclude that the satellite lower-troposphere and mid-troposphere temperature trends are approximately representative of the near-surface and 400 mbar temperature trends, respectively. (See Note 3 for further information.)
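The weighting calculation itself is just a normalized weighted average of per-level trends. The sketch below illustrates the idea with placeholder numbers: the per-level trends and the weighting function values are hypothetical, and the real weighting functions come from the Remote Sensing Systems website.

```python
import numpy as np

# Hypothetical per-level balloon temperature trends, deg C/decade, at standard
# radiosonde pressure levels (hPa). Illustrative values only, not real data.
levels_hpa = np.array([1000, 850, 700, 500, 400, 300])
balloon_trends = np.array([0.14, 0.13, 0.12, 0.11, 0.10, 0.10])

# Assumed relative contribution of each layer to the satellite LT signal;
# the real weighting function would be sampled from the RSS tables.
lt_weights = np.array([0.10, 0.30, 0.30, 0.20, 0.08, 0.02])

# Thick-layer trend the satellite would "see": normalized weighted mean.
thick_layer_trend = np.sum(lt_weights * balloon_trends) / np.sum(lt_weights)
surface_trend = balloon_trends[levels_hpa == 1000][0]
print(f"weighted thick-layer trend: {thick_layer_trend:.3f} C/decade")
print(f"ratio to 1000 hPa trend:    {thick_layer_trend / surface_trend:.1%}")
```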
Tropical Trends
Figure 3 compares the near-surface temperatures in the tropics, from 20 degrees North to 20 degrees South latitude, to the climate model simulations. The model temperature trend is 300% of the average of the three observational trends. The discrepancy between model and near-surface observation trends in the tropics is much greater than for the global average.
Figure 3. Canadian climate model simulations of near-surface tropical temperatures and three datasets of observations.
Figure 4 compares the warming trend in the tropical mid-troposphere to the observations. The increasing tropical mid-troposphere water vapor in the model makes the warming trend at the 400 mbar pressure level 55% greater than the surface trend.
The discrepancy in 2012 between the model mean of the mid-troposphere tropical temperatures and the balloon observations was 1.33 °C. The model temperature trend is 560% of the average of the two observational trends.
Figure 4. Canadian climate model simulations of mid-troposphere 400 mbar tropical temperatures and two datasets of observations.
The discrepancy is even greater at the 300 mbar pressure level, which is at about 9 km altitude.
Figure 5 compares the model to the balloon temperatures at the 300 mbar pressure level in the tropics. The tropical warming trend at 300 mbar in the model is exactly twice the model surface trend. In contrast, the temperature trend of the balloon data at 300 mbar is an insignificant 3% greater than the surface station trend. The discrepancy in 2012 between the model mean of the mid-troposphere tropical temperatures at the 300 mbar level and the balloon observations was 1.69 °C.
The temperature trends (1979 to 2012) of the tropical atmosphere at 7 km and 9 km altitude are an astonishing 470% and 490% of the radiosonde balloon trends. These are huge errors in the history match!
Figure 5. Canadian climate model simulations of mid-troposphere 300 mbar tropical temperatures and weather balloon observations.
Southern Trends
In the far south, the near-surface modeled temperature trend is in the opposite direction from the observations. Figure 6 compares the model near-surface temperatures from 50 degrees South to 75 degrees South latitude to the observations. The modeled temperatures from 1979 are increasing at 0.35 °C/decade while the surface temperatures are decreasing at 0.07 °C/decade.
Temperatures over most of Antarctica (except the Antarctic Peninsula) have been falling over the last 30 years. The Antarctic sea ice extent is currently 1 million square km greater than the 1979 to 2000 mean. The sea ice extent has been greater than the 1979 to 2000 mean for all of 2012 and 2013 despite rising CO2 levels in the atmosphere.
Figure 6. Canadian climate model simulations of near-surface southern temperatures (50 S to 75 S latitude) and two datasets of observations.
Summary of Trend Errors
The table below summarizes the model trend to observation trend ratios.
| Model Trend to Observation Trend Ratios (1979 to 2012) | Model/Satellite | Model/Balloon | Model/Surface |
|---|---|---|---|
| Global Surface | 254% | 209% | 220% |
| Global 400 mb | 650% | 315% | |
| Tropics Surface | 304% | 364% | 249% |
| Tropics 400 mb | 690% | 467% | |
| Tropics 300 mb | | 486% | |
| South Surface | 3550% | | -474% |
The table shows that the largest discrepancies between the model and observations are in the tropical mid-troposphere. The model error increases with altitude and is greatest at the 300 mbar pressure level, at about 9 km altitude. The ratio of the modeled tropical mid-troposphere 300 mbar warming rate to the surface warming rate is 200%, which is a fingerprint of the theoretical water vapor feedback. This enhanced warming rate over the tropics, named the “hot spot”, is responsible for 2/3 of the warming in the models. The fact that the observations show no tropical mid-troposphere hot spot means that there is no positive water vapor feedback, so the projected warming rates are grossly exaggerated.
Model results with large history match errors should not be used for formulating public policy. A model without a good history match is useless, and there can be no confidence in its projections. The lack of a history match in the Canadian model output shows that the modeling team has ignored a fundamental requirement of computer modeling.
A global map of the near-surface air temperature from the model for April 2013 is here.
Anti-Information
Patrick Michaels and Paul “Chip” Knappenberger compared the model output to actual 20th century temperature data to determine what portion of the actual data can be explained by the model. They wrote, “One standard method used to determine the utility of a model is to compare the “residuals”, or the differences between what is predicted and what is observed, to the original data. Specifically, if the variability of the residuals is less than that of the raw data, then the model has explained a portion of the behavior of the raw data.”
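In code, the test they describe reduces to a single comparison of variances. Here is a minimal sketch under that definition, using made-up numbers rather than their actual data:

```python
import numpy as np

def model_explains_data(observed, modeled):
    """Residual test: the model conveys information only if its residuals
    (observed minus modeled) vary less than the raw observations do."""
    residuals = np.asarray(observed) - np.asarray(modeled)
    return residuals.var() < np.asarray(observed).var()

# Toy example: observations warming ~0.15 C/decade with a small wiggle,
# versus a model warming more than twice as fast (~0.34 C/decade).
t = np.arange(34)                                    # years since 1979
observed = 0.015 * t + 0.05 * np.sin(np.pi * t / 4)
modeled = 0.034 * t
print(model_explains_data(observed, modeled))        # False: "anti-information"
```

With these made-up numbers the residuals vary more than the raw data, so the function returns False, which is the situation Michaels and Knappenberger label “anti-information”.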
In an article here, they wrote “The differences between the predictions and the observed temperatures were significantly greater (by a factor of two) than what one would get just applying random numbers.” They explained that a series of random numbers contain no information. The Canadian climate model produces results that are much worse than no information, which the authors call “anti-information”.
The Canadian Model Contributes to Fuel Poverty
The results from the Canadian climate model were used in a U.S. Global Change Research Program report provided to the US Environmental Protection Agency to justify regulating CO2. The authors of that report were told that the Canadian climate model produces only anti-information. They confirmed this fact, but published their report unchanged. The Canadian government has indicated it will follow the lead of the U.S.A. in regulating CO2 emissions.
The Canadian climate model is also used by the IPCC to justify predictions of extreme anthropogenic warming despite the fact that the model bears no resemblance to reality. As does the climate model, the IPCC ignores most natural causes of climate change and misattributes natural climate change to greenhouse gas emissions. Here is a list of 123 peer-reviewed papers published from 2008 to 2012 on the solar influence on climate that were ignored by the IPCC in the fifth assessment report.
Climate alarmism based on climate models that don’t work has so far cost the world $1.6 trillion in a misguided and ineffective effort to reduce greenhouse gas emissions. These efforts have caused electricity prices to increase dramatically in Europe, causing fuel poverty and putting poor people at risk. High fuel costs and cold winter weather are blamed for 30,000 excess deaths in Britain last year. Europe’s energy costs have increased by 17% for consumers and 21% for industry in the last four years. The Canadian climate model failures have contributed to this misery.
Canadian politicians and taxpayers need to ask why we continue to fund climate models that can’t replicate the historical record and produce no useful information.
Ken Gregory, P.Eng.
Ten years of providing independent
climate science information
Notes:
1. The Canadian Earth System Model CanESM2 combines the CanCM4 model and the Canadian Terrestrial Ecosystem Model, which models the land-atmosphere carbon exchange. Table 9.5 of the IPCC Fifth Assessment Report, Climate Change 2013, shows that the CanESM2 transient climate sensitivity is 2.4 °C (for a doubling of CO2). The 90% certainty range of transient climate sensitivity across the 30 models is 1.2 °C to 2.4 °C, which places CanESM2 at the very top of the range.
2. The Canadian climate model CanESM2 monthly data was obtained from the KNMI Climate Explorer here. The satellite data was obtained from the University of Alabama in Huntsville here (LT) and here (MT), and from Remote Sensing Systems here (LT) and here (MT). The radiosonde weather balloon data was obtained from the NOAA Earth System Research Laboratory here. The global surface data is from the HadCRUT4 dataset prepared by the U.K. Met Office Hadley Centre and the Climate Research Unit of the University of East Anglia, here. The HadCRUT4 tropical data (20 N to 20 S) was obtained from KNMI Climate Explorer. An Excel spreadsheet containing all the data, calculations and graphs is here.
3. The average global weather balloon trend over all pressure levels from 1000 mbar to 300 mbar, weighted by the LT satellite temperature weighting functions, is only 3% greater than the balloon trend at 1000 mbar, confirming that the LT satellite trend is representative of the near-surface temperature trend. The weighted average tropical weather balloon trend is 3% greater than the average of the 1000 mbar balloon trend and the surface station trend. The temperature weighting functions for land and oceans were obtained from the Remote Sensing Systems website. The average global and tropical balloon trends over all pressure levels, weighted by the mid-troposphere (MT) satellite temperature weighting functions, are 13% and 4% greater than the balloon trends at 400 mbar, respectively. If the MT satellite trends were adjusted by these factors to correspond to the 400 mbar pressure level, the global and tropical satellite trend adjustments would be -0.017 °C/decade and -0.004 °C/decade, respectively. Since the MT satellite trends are already less than the 400 mbar balloon trends, no adjustments were applied.
=============================
PDF version of this article is here
Robert Brown says:
October 24, 2013 at 12:01 pm
This is a good start. Now repeat this analysis for ALL of the models that contribute to CMIP5, one at a time.
**************
Actually this was done in my latest paper:
Scafetta, N. 2013. Discussion on climate oscillations: CMIP5 general circulation models versus a semi-empirical harmonic model based on astronomical cycles. Earth-Science Reviews 126, 321-357.
http://www.sciencedirect.com/science/article/pii/S0012825213001402
Here figures 4-11 contain 48 panels where all 48 CMIP5 GCMs and their 162 simulations were extensively shown and analyzed. In particular, the CanESM2 model simulations are shown in Figure 4F and later more extensively in Figure 17.
All CMIP5 GCMs present more or less the same problems. The disparity vs. the data since 2000 is just one of the aspects. The CanESM2 model performs better than the average. However, all models miss the complex harmonic component of the climate by a significant factor.
Jquip,
There is no law that controls the veracity of what newspapers can report. On the topic of climate change Canadian papers regularly print absolute drivel. When challenged, they don’t have to substantiate their sources.
Theoretically, the various provincial press councils act as an impartial arbiter of the “truth.”
However, they are voluntary industry organizations paid for by the newspapers.
When one tries to challenge an untruthful scare story through the press council they simply say that they don’t have the resources to arbitrate a “scientific” debate. Their ridiculous definition of “scientific” makes it impossible for them to compare IPCC forecasts to what actually happened, for example.
Therefore, the demonstrably false “it’s worse than we thought ” story stands.
On behalf of all Canadians I sincerely apologize. Be assured that the influence of Weaver, Suzuki, and perhaps our biggest embarrassment, Greenfleece, is on the wane. There is still a lot of ‘green’ up here but not the type that makes one go insane.
Junkie: “When one tries to challenge an untruthful scare story through the press council they simply say that they don’t have the resources to arbitrate a “scientific” debate.”
So just like the US then: “If it bleeds, it leads.” Thanks for the response.
PS at least Mosh isn’t Canadian. Not sure what model produced him but obviously it could use a little tweaking.
A broken clock is correct twice a day. A climate model is not — not twice a day, not twice a decade, not twice an ever. And there is reason for that; these models aren’t meant to model reality. These models are meant to model an alternative reality — a scary, doomsday, Chicken Little, imaginary reality. The sole purpose is to fuel political propaganda.
But the good news is the tide is turning, as evidenced in Australia and Germany. Taxpayers are no longer happy paying for billion dollar “low/no utility” broken climate clocks.
“3550%”
Great thing is that when they are that far off, no one can say their models just need a ‘little tweak’…
Why didn’t you provide any links to the CCCma web site? The latest global model there is CanESM2, not CanESM: http://www.cccma.ec.gc.ca/diagnostics/cgcm4/cgcm4.shtml
There is a web page at CCCma listing dozens of people claimed to be Nobel 2007 winners: http://www.ec.gc.ca/sc-cs/default.asp?lang=En&xml=32AAA89D-4BDE-4A76-B1A0-4277E5123F77
Eric Gisin says: October 24, 2013 at 7:57 pm
The first sentence of the essay identifies the model as CanESM2. Also, see note 1. Yes, I should have put CanESM2 rather than CanESM in the graphs’ titles.
I grew up on the Canadian prairie where there are four distinct seasons. Almost Winter, Winter, Still Winter, and Mosquitoes. Alas my American friends, what you mistake for a poor quality climate model is simply our optimism on display.
Garbage in, garbage out, eh?
Inspired by Doug Proctor’s analogy, October 24, 2013 at 9:44 am
Doug’s disjointed but profound last paragraph reads . . . .
“. . . . which wouldn’t be bad if we weren’t a) forced to go along, b) forced to pay for the ride, and c) told that those without a clue as to where we are going are going to be holding the steering wheel and aren’t sure there are any brakes in the car.”
Re-written thus . . . .
Using tax-payers money, we have taken delivery of a brand new luxurious coach.
We will be driving the coach on our ‘Magical Green Mystery Tour’ in order to help save the planet.
Everyone is compelled to board our coach – all of you are forced to go along – no if’s or buts.
There is no need to bring your overcoats as we guarantee your journey will get hot.
Although we will be heading in the right direction, we do not know where we will be going – yet.
Everyone will trust our judgement based on a series of wiggly road maps that confirm our route.
You will also need to pay extra for your seats at above inflation rates.
Those gullible enough will have the best seats.
For passengers who bring their own canister of fuel – we will pay you a subsidy.
Those who question whether we are doing the ‘right thing’ will be forced to stand in the aisle.
Although it may sound unrealistic, our target is 12,500 miles without stopping.
And, after all, we are not able to stop due to our coach not having any brakes.
PS. Our driver is registered blind.
It will be a very bumpy ride.
Back in the old days, before the land temperature records were subjected to bouts of data diddling to conceal the awful truth, one could scrutinise the graphs back to 1850 and clearly see the alternate warmer/cooler regimes in roughly 30-year cycles that even the dullest brain could imagine was a manifestation of natural cycles.
At that time, the obvious path for students of the new discipline of climate science would have been to study these cycles with sufficient rigour to ensure that all of the physical principles that underlie them were thoroughly understood as a basis for future weather prediction and modelling.
But the true climate change deniers first had their way, doing what is unforgiveable (or even fraudulent) in real science, by retrospectively changing old data until it served their ends, by portraying the Earth as a place of Gaian perfection with only modest diversions from the supposed “average” temperature. Then, having perverted their discipline so soon after its inception by cleaving to a fixation with CO2 as the bad guy responsible for “unprecedented” late-20th-century warming based on Green activist preoccupations, with its incipient guilt-trips plus some flimsy science based on underworked 19th century radiative speculations, they quickly painted themselves into a corner.
One might think that modelling was the way out, with its inherent flexibility to keep torturing the data and algorithms in different ways with the biggest supercomputers available until they squeal out the desired result, with a QED for both accurate prediction and hindcasting.
That they have ALL failed to do so, in so uniformly spectacular a way, would lead one to suppose that either:
a) They are incompetent modellers, as any accountant/marketing executive/politician/consultant with half a brain can make an Excel spreadsheet prove anything he wants. OR
b) The basic premise of the models, which all place CO2 as the dominant driver of current and future weather patterns, is just plain wrong.
I’m going with b), because there is clearly no lack of imagination in finding new fudge factors to explain the divergence between models and observation. They’ve used aerosols that only operate for selective time periods, “missing” heat that has magically disappeared into the abyssal oceans without showing itself on the way down, melting Arctic ice that makes places colder because it was so hot. You name it, and they’ve already found a way to insult us with its irrelevance. They’ve even claimed lately that “a 16-year pause in warming is not inconsistent with our models.” Oh yeah? Why don’t you show us the results of one of those model runs, then? Is it that they reveal some other inconvenient truths?
In most professions, when you get things wrong by such a large margin you do the decent thing and resign before you get fired. But these guys simply apply for a bigger computer, so that they can get things wrong faster and to an additional three significant digits of precision.
Since the brief glorious moment in 1998’s El Nino when the models resembled reality, they have been higher than kites. I wonder what digital drugs they’re being fed.
Good work as usual from Friends of Science, Ken.
Regarding alleged oil company collaboration in the Global Warming Scam:
I suggest that Shell and BP did collaborate in the CAGW scam from early days, and Exxon did not.
It appears that Exxon later caved in due to green propaganda and intense market pressure, especially in Europe.
Many of the members of Friends of Science are retired oil company scientists.
I am also an old oil man, but although I admire Friends of Science, I am not a member.
I strongly oppose CAGW alarmism and green energy fraud because it is irrational, immoral and destructive to humanity AND the environment.
It is truly regrettable, and even reprehensible that energy companies have capitulated to global warming hysteria and are sponsoring the very people that seek their destruction.
We wrote this in 2002 and have been proven correct to date:
2002
[PEGG, reprinted at their request by several other professional journals, the Globe and Mail and la Presse in translation]
http://www.apegga.org/Members/Publications/peggs/WEB11_02/kyoto_pt.htm
On global warming:
“Climate science does not support the theory of catastrophic human-made global warming – the alleged warming crisis does not exist.”
On green energy:
“The ultimate agenda of pro-Kyoto advocates is to eliminate fossil fuels, but this would result in a catastrophic shortfall in global energy supply – the wasteful, inefficient energy solutions proposed by Kyoto advocates simply cannot replace fossil fuels.”
If the large energy companies want to regain the moral high ground, they should adopt these two statements as their policies on climate and energy.
Failure to do so will perpetuate the status quo – where the big energy companies, who did not originate the global warming scam but acquiesced to it, will unfairly receive most of the blame when it unravels.
And unravel it will, as natural global cooling resumes in the near future, and Excess Winter Mortality figures tragically climb in certain countries – a strong probability, in my opinion.
In our modern complex world, fossil fuel energy is what keeps most of us in Northern climes from freezing and starving to death.
Foolish politicians, particularly in Western Europe, have badly compromised their energy systems with nonsensical grid-connect wind and solar power schemes. This could end badly, and it probably will.
Anyone who does not want to be tarred with the responsibility for this probable imminent increase in human suffering needs to act quickly and decisively.
Regards to all, Allan
[PEGG is what group? Mod]
Another of those pesky Climate Models that has not had a Eureka moment. None of them has.
And then the IPCC expects an aggregation of incorrect models to magically supply a correct answer.
Must be powerful stuff they’re smoking/snorting.
Hang on a second…
Ever since the current Prime Minister took office, the whole “global warming/climate change” alarm bells stopped ringing. What “Canada” has indicated is that if the US is on board, we will have to be also. That’s not the same as supporting it. And Alberta has exactly ONE political party, so claiming they’re all “on board” isn’t quite accurate. Hopefully that will be remedied next election anyway, since the “Progressive Conservatives” have completely dropped their “Conservative” part and are now operating wholly as “Progressives”.
Also, I haven’t read all the comments, but Mosher is, as usual, wrong. The F22 and other flight models are based on over a century of historical data, much of which involved loss of life to acquire. Claiming that modelling an airframe in various configurations and loadouts performing various maneuvers is even all that difficult is an outright lie. We know how to model physical processes that are understood.
The FACT is that modelling climate will always be voodoo, for the simple fact that we don’t understand even half of the processes involved. This is proven by the abject failure of these and all of the other models. At some point the people doing this stuff will HAVE to come to the realization that their hypothesis about CO2 causing warming has been disproved, utterly. Maybe, when that happens, the honest few can get to work studying CLIMATE instead of voodoo.
CO2 does not drive climate.
Thanks Mike Jonas for mentioning that models should reliably backcast to the 1940’s or earlier, which they can’t and WON’T, because it’s largely impossible and ruins their hyp(e)othesis utterly.
Correct me if I’m wrong but it’s obvious they don’t track PDO, AMO or ENSO very well, if at all. Those 3 are likely predictable and very influential on the global climate, even regionally.
If all they do is give CO2 too much weighting in the code, they’re doomed to fail every time. Solar cycles are predictable on differing timescales, but the number of sunspots in the shorter 9 – 11 year cycle is difficult to predict, as NASA got their first predictions of cycle 24 spectacularly wrong and had to reassess several times.
I’m not saying we should scrap models altogether. With PROPER science, rather than these half-baked political dogmas and the criminal and greedy elements that reside within them as well as the ones they create, some decades in the future will come software that comes close, until Nature decides to throw a curve ball. That’ll always happen.
But making policy decisions using any model before it is proven at least 95% effective is foolish – and doing so with CanESM2, given the results above, would be diabolical.
I forgot to say that it seems like the Canadian Grabbermint and EPA got the tool to sell their message. But I would like to see how they’d sell their message now that the word’s out on the 17 – 20 year pause.
How would the scamming adjusters at GISS, BoM and NOAA explain their adjustments to data all over the world DURING the pause? Serious jail time would send a clear message to anyone trying to defraud science ever again.
Mosher: I think you’re confounding two concepts, one of which is worth defending and the other of which is not.
1. A poor result as stepping stone to a good result. It’s true that many of the commentators here would condemn the Wright Brothers’ initial experiments. You’d hear things like “That doesn’t look like a bird!”, “Man has never flown and never will”, and “Look how many times they’ve failed to fly. You’d think they’d get a clue!”.
But the point, of course, is that a poor result can still give some useful information and by examining why the result is poor, you can improve.
2. A poor result is more useful than no result. Thus your bomb crater example.
#2, however, reminds me of college where I got bored of the unimaginative example problems in Differential Equations (or was it Partial Differential Equations?), and convinced the professor that we should try a real-world example. So we estimated several reasonable values for cooking a potato and worked the equations and… and the result is that it should take 6 hours to bake a potato.
The answer was ridiculous. It wasn’t “better than nothing”. It wasn’t useful as “an upper bound”. It was useless. Somewhere, our reasonable approximations for the conditions of baking a potato were unreasonable. If we’d looked long and hard enough, we may have found out where, and it would have been a valuable lesson.
People may pretend that their model falls into class #2, but they’re fooling themselves. Perhaps their boss in the Air Force won’t accept the answer that “Lieutenant Jones is an expert at runways and bombs and he estimates it will take 10 bombs to disable this runway”, so they come up with their poor model, and then adjust its results until it agrees with Lieutenant Jones, and then present it to their boss as, “Our most sophisticated model indicates that it will take 10 bombs +/- 1 bomb to disable this runway.”
The Canadian climate model is much more useless than your suggested bomb model (that’s always low by 1 bomb). It’s at least multiplicatively wrong, and perhaps exponentially. It’s not useful as an upper bound — which it’s been used as — and it radically distorts the “CI’s” of the model ensemble, so that reality barely falls into the CI.
A true class #2 model is something like gravity ignoring air friction. Not the climate model under discussion.
Does anyone know where one might find some ACTUAL historical global temperature charts that are NOT based upon departures from normal or “anomalies”? Scaling graphs in tenths of degrees is in itself very misleading to the great unwashed masses, as it exaggerates what the sam hill is going on in the real world. I googled to no avail to find some actual temperature data in graphical form, or any other form for that matter.
Sounds like a model that goes backwards on its key metric. Time to defund this one, snip from the pack, and focus on the models that have shown more predictive skill.
Bob Highland says:
October 25, 2013 at 12:01 am
Very well said. Regarding most comments, I am very encouraged that just about everyone understands the importance and use of historical data.
All models are wrong to some degree. I am sure that complex GCM software is no different to any other large computer code: there will always remain plenty of hidden bugs; it is just that they haven’t been found yet. There is often a disconnect between the chief scientist and the programmers.
I once worked on an MHD plasma physics code which simulates magnetohydrodynamics and transport of energy and impurities in magnetically confined plasmas for fusion research. The code is around 1.5 million lines of Fortran. At that time there was only one physicist-programmer who really understood the full structure. That one key person kept most of the details to himself because that way he had a job for life. I expect it is just the same situation for all the big climate models. Just one or two programmers in the background really know the code, while the climate scientists who give talks about how “robust” their results are and about the “consensus” do zero or very little actual coding. One other tendency of large software projects is code inertia: once a large block of code gets written, it tends to get layered over with upgrades rather than being thrown away and rewritten from scratch.
Every model is just an over-simplification of reality. This is particularly true for GCMs in the way they treat water vapor and cloud feedbacks. Obviously these two processes are intimately connected with each other and both involve positive and negative feedback. No GCM can accurately simulate cloud formation because the micro-physics of cloud formation is still poorly understood. Ken has shown above how the models do not describe water vapor in the upper troposphere – which is where the greenhouse effect is strongest. They also cannot describe how cloud cover changes in response to warming.