By Larry Kummer, from the Fabius Maximus website
Summary: Public policy about climate change has become politicized and gridlocked after 26 years of large-scale advocacy. We cannot even prepare for a repeat of past extreme weather. We can whine and bicker about whom to blame. Or we can find ways to restart the debate. Here is the next in a series about the latter path, for anyone interested in walking it. Climate scientists can take an easy and potentially powerful step to build public confidence: re-run the climate models from the first three IPCC reports with actual data (from their future) and see how well they predicted global temperatures.
“Trust can trump Uncertainty.”
— Presentation by Leonard A Smith (Prof of Statistics, LSE), 6 February 2014.
The most important graph from the IPCC’s AR5
Figure 1.4 from p131 of AR5: the observed global surface temperature anomaly relative to 1961–1990 in °C compared with the range of projections from the previous IPCC assessments.
Why the most important graph doesn’t convince the public
Last week I posted What climate scientists did wrong and why the massive climate change campaign has failed. After 26 years, one of the largest and longest campaigns to influence public policy has failed to gain the support of Americans, with climate change ranking near the bottom of people’s concerns. It described the obvious reason: they failed to meet the public’s expectations for behavior of scientists warning about a global threat (i.e., a basic public relations mistake).
Let’s discuss what scientists can do to restart the debate. Let’s start with the big step: show that climate models have successfully predicted future global temperatures with reasonable accuracy.
This spaghetti graph — probably the most-cited data from the IPCC’s reports — illustrates one reason for the lack of sufficient public support in America. It shows the forecasts of models run for previous IPCC reports vs. actual subsequent temperatures, with the forecasts run under various scenarios of emissions and their baselines updated. First, Edward Tufte (author of The Visual Display of Quantitative Information) probably would laugh at this — too much packed into one graph, the equivalent of a PowerPoint slide with 15 bullet points.
But there’s a more important weakness. We want to know how well the models work. That is, how well each would have forecast temperatures if run with the correct scenario (i.e., actual future emissions, since we’re uninterested here in predicting emissions, just temperatures).
The big step: prove climate models have made successful predictions
“A genuine expert can always foretell a thing that is 500 years away easier than he can a thing that’s only 500 seconds off.”
— From Mark Twain’s A Connecticut Yankee in King Arthur’s Court.
A massive body of research describes how to validate climate models (see below), most stating that they must use “hindcasts” (predicting the past) because we do not know the temperature of future decades. Few sensible people trust hindcasts, with their ability to be (even inadvertently) tuned to work (that’s why scientists use double-blind testing for drugs where possible).
But now we know the future — the future of models run in past IPCC reports — and can test their predictive ability.
Karl Popper believed that predictions were the gold standard for testing scientific theories. The public also believes this. Countless films and TV shows focus on the moment in which scientists test their theory to see if the result matches their prediction. Climate scientists can run such tests today for global surface temperatures. This could be evidence on a scale greater than anything else they’ve done.
Testing the climate models used by the IPCC
“Probably {scientists’} most deeply held values concern predictions: they should be accurate; quantitative predictions are preferable to qualitative ones; whatever the margin of permissible error, it should be consistently satisfied in a given field; and so on.”
— Thomas Kuhn in The Structure of Scientific Revolutions (1962).
The IPCC’s scientists run projections. AR5 describes these as “the simulated response of the climate system to a scenario of future emission or concentration of greenhouse gases and aerosols … distinguished from climate predictions by their dependence on the emission/concentration/radiative forcing scenario used…”. The models don’t predict CO2 emissions, which are an input to the models.
So they should run the models as they were when originally run for the IPCC in the First Assessment Report (FAR, 1990), in the Second (SAR, 1995), and the Third (TAR, 2001). Run them using actual emissions as inputs and with no changes of the algorithms, baselines, etc. How accurately will the models’ output match the actual global average surface temperatures?
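If such re-runs were done, scoring them could start with something as simple as the mean bias and RMSE of the model output against the observed record. A minimal sketch of that scoring step (all numbers below are illustrative placeholders, not actual FAR/SAR/TAR output or instrument data):

```python
# Hypothetical sketch: scoring an archived model run against observations.
# The anomaly values are illustrative placeholders only.
import math

def score_forecast(predicted, observed):
    """Return (mean bias, RMSE) of a forecast vs. observations, in deg C."""
    errors = [p - o for p, o in zip(predicted, observed)]
    bias = sum(errors) / len(errors)
    rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
    return bias, rmse

# Placeholder annual anomalies (deg C) for a 5-year window.
model_run = [0.30, 0.35, 0.41, 0.46, 0.52]   # model forced with actual emissions
observed  = [0.28, 0.31, 0.33, 0.40, 0.42]   # observed global mean anomaly

bias, rmse = score_forecast(model_run, observed)
print(f"bias = {bias:+.3f} C, rmse = {rmse:.3f} C")
```

A systematically positive bias would mean the model ran hot; the RMSE captures overall accuracy. Real evaluations would of course account for internal variability (as Curry notes below), but this is the basic bookkeeping.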
Of course, the results would not be a simple pass/fail. Such a test would provide the basis for more sophisticated tests. Judith Curry (Prof Atmospheric Science, GA Inst Tech) explains here:
Comparing the model temperature anomalies with observed temperature anomalies, particularly over relatively short periods, is complicated by the acknowledgement that climate models do not simulate the timing of ENSO and other modes of natural internal variability; further the underlying trends might be different. Hence, it is difficult to make an objective choice for matching up the observations and model simulations. Different strategies have been tried… matching the models and observations in different ways can give different spins on the comparison.
On the other hand, we now have respectably long histories since publication of the early IPCC reports: 25, 20, and 15 years. These are not short periods, even for climate change. Models that cannot successfully predict over such periods require more trust than many people have when it comes to spending trillions of dollars — or even making drastic revisions to our economic system (as Naomi Klein and the Pope advocate).
Conclusion
Re-run the models. Post the results. More recent models presumably will do better, but firm knowledge about performance of the older models will give us useful information for the public policy debate. No matter what the results.
As the Romans might have said when faced with a problem like climate change: “Fiat scientia, ruat caelum.” (Let science be done though the heavens may fall.)
“In an age of spreading pseudoscience and anti-rationalism, it behooves those of us who believe in the good of science and engineering to be above reproach whenever possible.”
— P. J. Roach, Computing in Science and Engineering, Sept-Oct 2004 — Gated.
Other posts in this series
These posts sum up my 330 posts about climate change.
- How we broke the climate change debates. Lessons learned for the future.
- A new response to climate change that can help the GOP win in 2016.
- The big step climate scientists can make to restart the climate change debate – & win.
For More Information
(a) Please like us on Facebook, follow us on Twitter, and post your comments — because we value your participation. For more information see The keys to understanding climate change and My posts about climate change. Also see these about models…
- About models, increasingly often the lens through which we see the world.
- Will a return of rising temperatures validate the IPCC’s climate models?
- We must rely on forecasts by computer models. Are they reliable?
- A frontier of climate science: the model-temperature divergence.
- Do models accurately predict climate change?
(b) I learned much, and got several of these quotes, from two 2014 presentations by Leonard A. Smith (Prof of Statistics, LSE): the abridged version “The User Made Me Do It” and the full version “Distinguishing Uncertainty, Diversity and Insight“. Also see “Uncertainty in science and its role in climate policy“, Leonard A. Smith and Nicholas Stern, Phil Trans A, 31 October 2011.
(c) Introductions to climate modeling
These provide an introduction to the subject, and a deeper review of this frontier in climate science.
- “A Model World” by Jon Turney in Aeon, 16 December 2013.
- “Climate Modeling 101: What are climate models and why are they important?” by the National Academy of Science.
- An introduction to climate models by the World Meteorological Society.
- “The Physics of Climate Modeling” by Gavin A. Schmidt in Physics Today, January 2007.
Judith Curry (Prof Atmospheric Science, GA Inst Tech) reviews the literature about the uses and limitation of climate models…
- What can we learn from climate models?
- Philosophical reflections on climate model projections.
- Spinning the climate model – observation comparison — Part I.
- Spinning the climate model – observation comparison: Part II.
(d) Selections from the large literature about validation of climate models
- “How Well Do Coupled Models Simulate Today’s Climate?“, BAMS, March 2008 — Comparing models with the present, but defining “present” as the past (1979-1999).
- “Should we believe model predictions of future climate change?”, Reto Knutti, Philosophical Transactions A, December 2008.
- “Should we assess climate model predictions in light of severe tests?”, Joel Katzav, Eos, 7 June 2011.
- “Reliability of multi-model and structurally different single-model ensembles“, Tokuta Yokohata et al, Climate Dynamics, August 2012. Uses the rank histogram approach.
- “The Elusive Basis of Inferential Robustness“, James Justus, Philosophy of Science, December 2012. A creative look at a commonly given reason to trust GCMs.
- “Test of a decadal climate forecast“, Myles R. Allen et al, Nature Geoscience, April 2013 — Gated. Test of one model’s forecasts over subsequent 10 years. Doesn’t state what emissions data used for validation (scenario or actual). The forecast was significantly below consensus, and so quite accurate. Which is why we hear about it.
- “Overestimated global warming over the past 20 years” by John C. Fyfe et al, Nature Climate Change, Sept 2013.
- “Can we trust climate models?” J. C. Hargreaves and J. D. Annan, Wiley Interdisciplinary Reviews: Climate Change, July/August 2013.
- “The Robustness of the Climate Modeling Paradigm“, Alexander Bakker, Ph.D. thesis, VU University (2015).
- “Uncertainties, Plurality, and Robustness in Climate Research and Modeling: On the Reliability of Climate Prognoses“, Anna Leuschner, Journal for General Philosophy of Science, in press. Typical cheerleading; proof by bold assertion.
“Why should we re-run our models if you only want to find something wrong with them?”
Because that’s exactly what real science does!!!
Bullseye in one! +1.
+100 Oh the stupid, how it burns!
Well, seems somebody tried: http://www.nature.com/articles/srep09957
Francisco,
Thanks for the pointer to that article! I’ll read with interest and add it to the references.
I’d really appreciate to know what you think. I’ve read it but would like to have another opinion (possibly more educated, which is not difficult)
Ooops!! http://wattsupwiththat.com/2015/04/21/study-global-warming-actually-more-moderate-than-worst-case-ipcc-models/
Francisco,
It’s well over my pay grade. I’ve asked a few climate scientists for comments. Also, as a counterpoint to the WUWT post, see the review at RealClimate:
http://www.realclimate.org/index.php/archives/2015/05/global-warming-and-unforced-variability-clarifications-on-recent-duke-study/
The Nature website also gives links to other websites discussing this article:
http://www.nature.com/articles/srep09957/metrics
In a nutshell: The Nature paper has calculated that there is greater than 95% chance that CMIP5 models using RCP4.5 and 8.0 are wrong, and greater than 90% chance that models using RCP6.0 are wrong, based on Global Mean Temperature data from GISTEMP. This is much lower than 99%, therefore the CMIP5 models have not been proven wrong.
This paper has a CAGW apologist tone to it, essentially saying that we need to continue to assume CAGW is occurring because we are not yet 99% sure that it isn’t. Excuse me while I go puke.
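For what it’s worth, the kind of consistency check being argued over here — asking what fraction of a model ensemble warms faster than observations — can be sketched like this (the trends below are made-up placeholders, not the paper’s data or its actual method):

```python
# Illustrative sketch (not the Nature paper's statistics): ask whether an
# observed trend falls outside a chosen percentile of a model ensemble.

def percentile_rank(ensemble, value):
    """Fraction of ensemble members whose trend exceeds the observed value."""
    return sum(1 for t in ensemble if t > value) / len(ensemble)

# Placeholder decadal trends (deg C / decade) from 20 hypothetical runs.
ensemble_trends = [0.18, 0.21, 0.19, 0.25, 0.22, 0.20, 0.17, 0.24,
                   0.23, 0.19, 0.26, 0.21, 0.20, 0.22, 0.18, 0.25,
                   0.23, 0.21, 0.19, 0.24]
observed_trend = 0.11  # placeholder observed trend

frac_above = percentile_rank(ensemble_trends, observed_trend)
print(f"{frac_above:.0%} of runs warm faster than observed")
# If, say, more than 95% of runs warm faster than observed, the ensemble
# looks inconsistent with observations at that confidence level.
```

Whether 90%, 95%, or 99% is the right threshold for declaring the models “wrong” is exactly the dispute in the comments above.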
Average Joe,
It would be very interesting to see the corresponding figures if the GMT were based on UAH or RSS rather than GISTEMP.
It could be my imagination, but I could swear I saw the words “predict”, “predicted”, “forecasts”, “predictions”, “predicting”…more times than I had the patience to count, and also saw, separately, the word “projections” at least once, indicating that the writer evidently understands that these are indeed distinct concepts.
Golly!
Menicholas,
I don’t understand your point. I quote & use the IPCC’s definitions of projection vs. prediction. It’s essential to the post.
I am rather busty right now, but will post you links to discussions here which should shed some light on the reason for my comment.
I am slightly confused at this point, and am wondering sir…do you count yourself as on the skeptical side of the Great Global Warming Non-Debate, on the side of what skeptics refer to as Warmistas, or perhaps a Lukewarmer, or what?
Just curious. You of course are not obligated to satisfy my curiosity on this. It is the title of this post that makes me think it is written from a warmista perspective, because why on Earth would anyone want to reset this debate otherwise?
I would point out that I find the title at least somewhat misleading in another regard, that being that “climate scientists” are engaging in debate on any of the issues before us.
In fact, to the chagrin of the skeptical community, they refuse to engage in any organize public discussions of any of the relevant issues, and are pretty much monolithic in this regard.
So, just what debate does the title refer to?
Forgive my rather blush-worthy typo…I am busy right now.
Menicholas,
“do you count yourself…”
That is a powerful question! You can find the answer in the links given in the For More Information section above. But I suggest that it is not relevant to this discussion, here and now.
Think of this essay as a tool, a rock thrown into a pond. Who threw it, or why, doesn’t matter. The object and its effects are independent of the thrower. Just as it no longer matters how the public policy debate about climate change has become gridlocked, or who’s responsible.
This a proposal to restart the debate. Climate scientists have to see it as in their interest to do so (if they are confident in their models, they can run this test and “win”). If not, the public can ask for this test, one that might end the incessant bickering that substitutes for a debate.
Note that Naked Capitalism (a popular liberal-left website) included this on their daily links. That’s a sign of broad appeal necessary for any proposal that has even a tiny chance of success.
I’m working on additional steps. I hope some of those reading this also will push this proposal. I don’t see anything else on the horizon that might affect the policy debate — except perhaps extreme weather (e.g., two large hurricanes hitting East coast cities, magnified in people’s minds by alarmists — allowing bills to be pushed through Congress).
Menicholas: Since Fabius Maximus will not answer your inquiry regarding its position on CAGW, I will. FM is a warmist. The refusal to answer your question is a key indicator of FM’s wish to be seen as “objective” and to disguise any real or imagined agenda. This is the typical stance of a biased observer seeking to cloak themselves in the righteous adornment of objectivity.
kelleydr:
The following two paragraphs are written in the disambiguated language that is developed at http://wmbriggs.com/post/7923/ . Terms that are polysemic (have more than one meaning) in the literature of global warming climatology unless disambiguated are placed in quotes.
I note that FM is an equivocator and that he draws a conclusion from at least one equivocation thus being guilty of application of the equivocation fallacy. In this way FM draws the false conclusion that projections (which he sometimes calls “predictions”) can be validated when they can only be evaluated. If FM were to rewrite his article in the disambiguated language that is referenced in the first paragraph he would find that all of his “models” are modèles, that they make projections but not predictions and that these projections are susceptible to evaluation but not validation.
Models are built under the scientific method of investigation but Modèles are built under a pseudoscientific method of investigation. A consequence from FM’s use of an ambiguous language in making his argument is for a pseudoscience to be dressed up to look like a science.
Several years ago, the chair of Earth Sciences at Georgia Tech asked me to prepare the manuscript for an article to be published in her blog on the topic of “The Principles of Reasoning: Logic and Climatology.” In the ensuing study I observed frequent applications of the equivocation fallacy in the literature of global warming climatology. Applications of this fallacy were frequently made by skeptics as well as warmists.
Menicholas,
Your comment suggests that you are too sharp to be deceived by nonsense (making stuff up) from the likes of kelleydr, but that comment does illustrate the dysfunctional nature of the public debate about climate, with partisans defending their tribes — uninterested in truth or logic.
My views (as shown in my post) are described here. I’ve been attacked by “skeptics” (a weird label, but suitable for this mad tribal war).
More relevant here, I’ve been denounced by Leftists like Brad DeLong (Prof Economics, Berkeley) for defending Roger Pielke Jr. (who was guilty of repeating well-established findings in the peer-reviewed literature). I was attacked — quite speciously (e.g., by Politifact) — for showing that the PBL survey of climate scientists (the best such done to date) showed that only a minority (a large minority) supported the key finding of AR4 & AR5 at the 95% confidence level (i.e., more than half of warming since 1950 caused by anthropogenic greenhouse gases).
As conducted today, I believe the public policy debate does not serve the interests of America, but rewards only the political interests of Left and Right. One way to resolve this is finding tests that both sides believe fair, so we can move beyond the name-calling and make sound decisions.
Another path forward would be for one side to adopt policies that a majority of Americans can support. I doubt the Left will do so. But climate change can help the GOP win in 2016.
Terry,
“that projections (which he sometimes calls “predictions”)”
I am discussing the IPCC reports, and so use their definitions for projection and prediction.
“I note that FM is an equivocator and that he draws a conclusion from at least one equivocation thus being guilty of application of the equivocation fallacy.”
Wow. Q.E.D.
Editor of the Fabius Maximus website:
You can’t count on the IPCC to help you to avoid inadvertent applications of the equivocation fallacy. You and your colleagues at FM have to do this yourselves. To do this you must employ a disambiguated language in making global warming arguments. (Equivocation alert: in the literature of global warming climatology “warming” is among the polysemic terms that are used in making arguments.)
“Why should we re-run our models” – Well, somebody kept some old copies of the reports – the models didn’t predict the “pause”. Newer models have been “tweaked” to reduce the scarily high growth, but not by much, so really aren’t much better.
Long story short: the models are crap, and any policy decisions based on them are likewise misdirected crap.
Or shorter still: the shit has hit the fan.
Look, all scientists’ models have some smooth exponential curve as their climate prediction outcomes. That’s simply not how the climate works.
To even vaguely model climate correctly, you have to have some kind of Fourier series modelling, with periodicities representing natural climate cycles and amplitudes presumably modelled to try and fit to natural data.
So, that would include the following:
1. QBO – in the 1 – 3 yr periodicity range.
2. El NIno/La Nina cycles – in the 5 – 8 yr range.
3. Solar Cycles 11/22 yr range.
4. Lunar cycles – 18.6yr cycle.
5. Oceanic Oscillations – in the 30 – 75 year range.
6. Etc etc etc.
Of course, those are just certain input parameters and they do not reflect how they all integrate together, which presumably must be reflected by the effects on cloud formation, storminess.
How do you put in stochastic variables like major volcanic eruptions, earthquakes etc? Are they really stochastic or do they too have fuzzy periodicities??
If you look at the sorts of projections Landscheidt made, he never had sinusoidal curves or exponential curves – he had curves which reflected multiple variables and multiple periodicities.
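A toy version of the multi-cycle idea above — summing sinusoids at roughly the listed periods plus a linear trend — might look like this (the periods and amplitudes are arbitrary placeholders, not a fitted model of anything):

```python
# Toy sketch of the multi-periodicity idea: a temperature-like series built
# from sinusoids at the cycle lengths listed above (QBO, ENSO, solar, lunar,
# oceanic), plus a linear trend. Amplitudes are arbitrary placeholders.
import math

CYCLES = [  # (period in years, amplitude in deg C) -- illustrative only
    (2.3, 0.02),   # QBO-like
    (6.0, 0.10),   # ENSO-like
    (11.0, 0.05),  # solar-like
    (18.6, 0.03),  # lunar nodal
    (60.0, 0.15),  # oceanic oscillation-like
]

def toy_climate(year, trend_per_year=0.005):
    """Sum of sinusoids plus a linear trend, anchored at year 1900."""
    t = year - 1900
    signal = trend_per_year * t
    for period, amp in CYCLES:
        signal += amp * math.sin(2 * math.pi * t / period)
    return signal

series = [toy_climate(y) for y in range(1900, 2001, 10)]
```

Even this toy shows the point being made: the superposed cycles produce plateaus and accelerations that a smooth exponential cannot, though fitting such a model to data without overfitting is its own hard problem.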
If you want to say that models have a useful role, they must reflect natural processes and mirror real temperature evolutions.
What they must NOT do is obsess about carbon dioxide. They must not assume that strong stability is not built into the system (because it clearly is, be that in interglacials or within ice ages) and they must understand how to overcome those mechanisms to drive changes between glacials and inter-glacials.
Perhaps the biggest imponderable now is whether we are still in the stability of an interglacial, or the rapid warming which precedes entry into the next ice age.
I don’t know the answer to that and I wonder, quite frankly, if anyone does.
Until people admit just what they don’t know and just how valid the assumptions made to develop models are, no-one is going to trust modellers again.
They’ve wasted £100bn in a generation and if they worked in financial services, their corporations would be paying £1trn in fines, their sector would be decimated by unemployment and the scandal would be on the front pages of every newspaper for 3 yrs minimum.
Next time, if there is a next time, the scientists serve their funders, not the other way around………
Climate Scientists will, no doubt, argue that the original models are ‘old science’ and we are better informed now.
To which we should reply: “But at the time, you told us that this was settled science…?”
That also lays the groundwork for distrust in their current ‘projections’. You are correct: until they can show some level of precision in past ‘projections’, why should we accept anything they say currently?
This is easy: take the new science back to 1998 and rerun the models… they still fail, both forward and backward. We have the data since 1998, so does it fit? Or did the world begin in 1979, perfect and calm up till that time? “It’s worse than that, Jim. AGW is dead, Jim, it’s dead.”
They have already re-run their models. That’s why they aren’t saying anything…
I’m not sure about ‘aren’t saying anything’, but clearly the results are much less useful for CAGW alarmism than they would like. Lack of a certain argument is often proof that the argument does not work, which suggests (I love this word) the original theory was, to some extent, not producing good predictions.
The guys at the CAGW department would like to find a model which predicts the recent pause, but which rapidly goes exponential in the future, and which could be called reasonably sound.
I have been somewhat worried on dark nights — what if the cagwists are right and the West Side Highway will be under water in 2008… 2018… 2028? With police cars and different birds, trees, and tape on the windows. But no, it is 13 years in the future and there is no way Hansen was right. But it could be Hansen can’t be interviewed for a new prediction in 2028. You know, science advances one funeral at a time. (And when I make silly mistakes, like replacing ‘be’ with ‘the’, forgive me. English is my second language and while I type at superb speed, I’m getting old.)
How well do the models hindcast more than a few decades?
As the author noted, hindcasts have very little persuasive power, for the very good reason that ANYBODY can produce a model that fits the observations when the required record of observations is sitting in front of them.
I’m pessimistic about hindcasts working, because knowing the climate structure (like the variance of variables) and knowing the state of the system (like how warm Atlantic surface water is at a given time) are different problems, and the latter is needed to make precise predictions with CO2 and aerosols measured a posteriori. And the latter is an impossible, weather-related problem.
It should also be a requirement that every individual, real-world scientific principle involved in climate can be identified in the model. The scientific equations HAVE to be visible. Otherwise we have just a complex set of polynomials attempting to mimic a graph and a climate.
Dr. David Evans is providing the equations and some nice visuals of the climate models.
Just wish that I paid more attention during math class. Never too late to learn. 🙂
…that are attempting to mimic a highly adjusted surface record.
To do this correctly the models must be run against the satellite record for the troposphere, versus the model prediction for the same. Nothing else is cogent to CAGW theory.
Dr. David Evans is reintroducing his Solar Model over at joannenova.com.au.
He is reviewing the current GCM now in detail. After that he will reintroduce his theory.
Should be interesting.
First, I asked Stephen Belcher, the head of the Met Office Hadley Centre, whether the recent extended winter was related to global warming. Shaking his famous “ghost stick”, and fingering his trademark necklace of sharks’ teeth and mammoth bones, the loin-clothed Belcher blew smoke into a conch, and replied,
“Here come de heap big warmy. Bigtime warmy warmy. Is big big hot. Plenty big warm burny hot. Hot! Hot hot! But now not hot. Not hot now. De hot come go, come go. Now Is Coldy Coldy. Is ice. Hot den cold. Frreeeezy ice til hot again. Den de rain. It faaaalllll. Make pasty.”
(from “When it comes to climate change, we have to trust our scientists, because they know lots of big scary words”, Sean Thomas, Telegraph blogs, June 19th, 2013)
Brilliant.
To the day of this very early morning in NE Oregon, your re-quote of Sean remains the best there is in climate change comments. So good and I am insanely jealous of that wordsmith. Way better than any of my much drier, imaginatively poorer, remarks. The old New York Times political cartoons are dust under that man’s feet.
Wallowa?
Wow – there’s another skeptic in Oregon? Don’t tell Charlie Hales or Kate Brown. They’ll hunt us down.
Of course, if you’re on the east side of the state, you might be okay – I’m stationed right outside of Portlandia – greenies, wiccans, and ‘keep Portland weird’ bumper stickers. It’s enough to drive you nuts.
Very close! Graduated HS there in 73 as a wet-eared 16 year old bookish bespeckled redheaded leprechaun.
Replying to Joel: Well, being a trans-Cascadian Oregonian (Eugene and Frenchglen), I can report that there are plenty of skeptics in Eugene, but not in the same proportion as in Harney County. Those on the west side have to keep their heads down to avoid being hassled by the proponents of the Cultural Revolution. On the east side we don’t have to worry. Of course, on both sides of the Cascades, we’re all much better armed than said proponents.
ROFL !
Ain’t that the truth. “Blowing smoke” LOL
I’m with Pamela on this one. Sometimes one comes across such a perfect expression of irony and wit that it is utterly impossible to improve upon it. This definitely qualifies. Simply sublime.
A model has a learning fase in the past, a testing fase in the past, and after that you can use it for predictions. If you use data from the testing fase in order to tune your model, you can not test it. So although it is difficult you should not use recent data. If you like to model and make predictions, you should first study the subject of forecasting in general.
“Fase”?
Face. Oh course.
Which brings my mind our good specialist (no pun or sarc) Brandon Shollenberger, who failed to read Danish at dmi.dk couple of days ago. Read harder! You ain’t trying hard enough if hotpotatoes can outspell you!
phase?
Without teeth, it’d be ‘ghati’.
Phat city!
(Not a Filipino)
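The discipline described a few comments up — learn on one period, test on a held-out period, and only then forecast — can be sketched with a trivial placeholder “model” (a least-squares line fitted to synthetic data; nothing here is a real climate model):

```python
# Sketch of the train / test / forecast workflow: fit on an early window,
# validate on a held-out later window, and only then trust forecasts.
# The data and the "model" (an OLS line) are illustrative placeholders.

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# Synthetic "temperatures": a 0.01 C/yr trend plus alternating noise.
years = list(range(1980, 2000))
temps = [0.01 * (y - 1980) + (0.05 if y % 4 == 0 else -0.05) for y in years]

train_x, test_x = years[:15], years[15:]
train_y, test_y = temps[:15], temps[15:]

slope, intercept = fit_line(train_x, train_y)           # learn on the past
test_err = [abs(slope * x + intercept - y)              # validate on data
            for x, y in zip(test_x, test_y)]            # never seen in fitting
print(f"mean holdout error: {sum(test_err)/len(test_err):.3f} C")
```

The key point is that the holdout data never influence the fit — the same reason tuning a model on recent observations disqualifies those observations as a test, as the comment says.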
Does anyone think that the models are intended to produce a reasonable prognostication of future conditions? Look at how the modellers censor the results before release. We don’t get to see every run, and AFAIK some runs are curtailed when they go wild. How do they know what wild is? Why do they then lump all the results together in one graph? Is there no realization that some run consistently hot compared to observation? Why isn’t, say, the Canadian model ditched for poor performance? Because of reluctance to offend a modelling group? Or because this is a political enterprise intended to provide ammunition for a political agenda?
I find the assumption that those who run the models want accurate results tends to conflict with reality. And yes, I think the author knows that too and that the value of this challenge is that it will be ignored and we’ll all know what to make of that.
“intended to produce a reasonable prognostication of future conditions”
No, they are intended to stampede the unthinking masses into buying the CAGW fraud.
What level of significance needs to be reached in testing 99%, 95%, 90%…..?
Sundance,
I believe 95% is the generally accepted minimum for significance in both science and public policy. There are, however, some recent papers suggesting that is too low for some applications.
The more money you want to spend, the higher your confidence level needs to be.
For the changes they are demanding, we need at least 97% confidence.
Piffle!
As the author states:
“Models that cannot successfully predict over such periods require more trust than many people have when it comes to spending trillions of dollars — or even making drastic revisions to our economic system…”
Confidence schmofidence!
They cannot predict, so we simply have to have more TRUST when it comes to spending our TRILLIONS, and such trivialities as MAJOR REVISIONS TO OUR ECONOMIC SYSTEM!
Am I hallucinating?
Are these people tripping, or just way too high?
Sorry to shout, but you literally cannot make this crap up… but they did!
Seriously, I think it would be best if these nutjobs just take the billions they have already raped from our national coffers and just go away!
Follow Mike Tyson’s lead, and just fade into Bolivian.
Which they may already be doing.
Depends… I’ve worked with as low as 51%.. basically if you have to make a decision you take the significance the data gives you.
You all forget that this is applied science..
This test has already been done; the models have been run against the actuals for twenty years and have failed. We know they don’t work. They were run using the then actual data. What’s the point of fiddling them again? They still won’t work because they are missing equations and algorithms they need because these are not known. As with any other computer program if you can’t accurately and completely specify the problem to be solved you won’t get the right answer. I fail to understand why so many seem to think these models are magic – they’re only computer programs. If you input guesses you’ll get guesswork out.
Chris,
Please provide a citation for “models have been run against the actuals for 20 years”. That is, a comparison of the temperatures the models predict, when input with the actual history of emissions, against observed temperatures.
Tom Yoko,
About Hansen’s 1988 climate forecast.
Thanks to you and Bill for mentioning this. Hansen’s forecast shows the importance of evaluating climate models by running the models today with input of actual emissions. None of Hansen’s three scenarios accurately matched greenhouse gas emissions.
Scenario “B” yielded a more-or-less accurate forecast until the pause; “C” matched actual temps during the pause. But these are accidents, since they result from forecasts using emissions assumptions different than actuals — and so are of little use in model evaluation.
For details see this 2006 article by Roger Pielke Jr (Prof Environmental Studies, U CO-Boulder).
http://cstpr.colorado.edu/prometheus/archives/climate_change/000836evaluating_jim_hanse.html
Also, thanks for the link to this 2014 update, with especially useful info on the actual CO2 growth rate, far above Hansen’s highest assumption:
http://www.c3headlines.com/2014/10/nasa-hansen-climate-model-vs-reality-2014-co2-global-warming.html
There’s another lesson here. Look at the Skeptical Science entry about this one issue: 3 well-written & illustrated pages. This is typical of alarmists’ websites, showing their lavish support. Most (all?) skeptics’ websites, run by volunteers and funded by a small flow of donations & ad income, look like chalk sidewalk drawings by comparison.
http://www.skepticalscience.com/Hansen-1988-prediction-basic.htm
The IPCC/Hansen climate model predictions did have a number of scenarios, of which at least one was basically bang-on what happened with actual emissions.
In the IPCC FAR, the “business as usual” scenario was very close in terms of CO2 levels. In the IPCC TAR and AR4, scenario A1B was very close in terms of CO2. In AR5, it appears RCPs 4.5 and 6.0 are close enough for now. In Hansen’s 1988 predictions, Scenario B is exactly on the same track as actual CO2 levels.
In all of these reports, methane was over-estimated, but this doesn’t result in much difference in terms of final forcing or temperature results.
Bill,
Thanks for the additional color on this. I too have seen mentions of these things, but no rigorous demonstration of an original model run using actual emissions vs. actual temperatures over the past 15+ years.
Can you point us to anything like this?
Absolute rubbish. Hansen’s predictions have been laughably horrible. See the graph below
http://c3headlines.typepad.com/.a/6a010536b58035970c01b8d0761e32970c-pi
http://www.c3headlines.com/2014/10/nasa-hansen-climate-model-vs-reality-2014-co2-global-warming.html
Believe me, if Hansen’s predictions were anything remotely similar to what has actually happened, it would be front-page news in every paper in the world 24/7 for months.
Here is my compilation of the IPCC/Hansen climate model predictions in which CO2 emissions/levels most closely matched the actual CO2 emissions/levels, starting from when the predictions were made, versus the average of the UAH and RSS temperature records. (I’m not using the adjusted fake temperature series from the NCDC – renamed in the last month to the National Centers for Environmental Information, or ADJUSTERS for short.)
http://s11.postimg.org/i2o555moz/RSS_UAH_vs_IPCC_Predictions_Aug_2015.png
Bill Illis,
No wonder Mann has been so ornery. Planet Earth is making a fool of him.
DB,
“making”?
Menicholas,
Good point. Maybe ‘exposing Mann for what he is’ would be more accurate.
“Fake, but accurate.”
Robert Redford as “Dan Rather” in the film “Truth” (coming to a theater near you in October!)
I would bet good money that some of these simulations have already been run but not reported.
Peter,
I don’t like to guess in my posts — but I agree. This is an obvious test. My guess (emphasis on guess) is that it would be front-page news if the models from any of the first three assessment reports accurately predicted global atmospheric temperatures given the actual emission history.
Make certain they include the REAL V.W. emissions!
Why on Earth would they want to?
1) They have won in the popular press. Read Nat. Geo. or Sci. Am.
2) They have won in the Main Stream Media. The 97% consensus is given as fact.
3) They have won in the scientific literature. Try publishing a skeptic paper in Nature Climate Change or Science (AAAS)
4) They have won in the funding arena. M. Mann is said to have garnered over $10 million. The researcher behind the recent RICO-20 stunt has pulled in millions, as well. This type of funding is simply unheard of in any other area of science.
5) They have won in the policy arena. The destruction of the US electric grid and the war on fossil fuels proceeds apace. (the greenies love it)
6) They have won across the government. “I hope there are no climate change den**rs in the Department of Interior,” – Sally Jewell, secretary of the Department of the Interior.
7) They have won in public opinion. Skeptics are often harassed to the point where careers and livelihoods are threatened.
In what way have they not won, and why should they care?
They do not appear to have persuaded mother Earth. She seems to be suggesting that they are wrong.
As time goes by and the divergence between model predictions and reality widens (as will be the case should the ‘pause’ continue, notwithstanding a temporary 2015/6 El Nino blip), their position will become increasingly untenable and may collapse like a house of cards.
This is why we no longer hear about Global Warming but now Climate Change, and why even Climate Change is being muddled with weather weirding/the proliferation of extreme weather events.
richard v,
I agree. That’s imo an under-appreciated aspect of the public policy debate about climate change.
The alarmists “own” the high ground. They dominate in journalism, academia, the major science agencies, etc. By the Third Assessment Report in 2001 they were ready to push for massive public policy changes. But the climate increasingly failed them: first the pause in atmospheric warming, then the pause in many (or most) forms of extreme weather (e.g., landfalling major hurricanes in America).
But Mother Nature is fickle. One or two major events, magnified in the public mind by the massive alarmist machinery, and everything could change. A severe tornado season plus a big tropical storm hitting a major city, and the debate might change with great speed.
Twenty or thirty years from now historians will decide if the current models were correct, but it might not matter. It’s like the 1970s joke about the end of a Soviet invasion of western Europe. Two Red Army generals are in Paris toasting their victory. One asks the other, “Who won the air war?”
I recommend moving fast to resolve this debate during the “pause” — this pause in the debate, when cooler minds can be heard.
They haven’t even convinced the general public that CAGW is a real problem that needs fixing.
In what way have they not won? They have not won (yet) in totally silencing the critics and dissenters, which is why sources like wattsupwiththat are so valuable.
Most importantly, the CAGW crowd have so far failed in their political objective of attaining a global, legally binding treaty that would require developed countries to actually try to reduce their GHG emissions by 70% by 2050, while paying the developing countries blackmail in the form of (at least) $100 billion a year in the Green Climate Fund. Politicians love to look “green”, but ultimately people vote for jobs and higher standards of living.
After the failure of the Paris Conference of the Parties in December, the tactics internationally will change. They will try either to get groups of larger countries to form “Climate Clubs” to force others into stringent emissions cuts by imposing damaging trade sanctions, or they will try a “revolution from below”: a grassroots campaign aimed at recruiting the unions, churches, municipalities, and environmental non-governmental organizations to intimidate and “shame” their opponents. There is a long war to be fought.
Tony,
When we say people “won”, we usually mean by comparison with their stated goals. There have been no substantial public policy measures enacted in the US for mitigation of or adaptation to climate change. The reason for this failure is that climate change consistently ranks at the bottom of surveys asking the public about their policy priorities.
That’s failure.
A) 300+ power plants closed or in the process of shutting down. Another 300 plants slated for closure. Worse, these plants are not getting mothballed, the important parts are getting destroyed to comply with regulations. This will absolutely preclude the possibility of plant restarts once the disaster of this policy becomes apparent.
B) I have lost track of the 100s of Billions poured into “renewable” energy schemes.
C) The Ethanol Mandate:
1) Compel its use (taxpayer pays)
2) Subsidize its production (taxpayer pays)
3) Tariff and trade barriers on imports. (taxpayer pays)
Are you kidding me?
Are You Flipping Kidding Me?
TonyL,
Attributing all of these things to climate change policy is incorrect.
(1) Hundreds of power plants are shutting down for a wide range of reasons. Several generations of plants are obsolete due to age and new technology. Others have become uneconomic due to increased regulations on air pollution and massive changes in energy prices. These matters are complex.
(2) Since the early 1970s (especially after the 1973 Arab oil embargo) a major goal of US public policy has been to develop alternative energy sources — both to reduce pollution and diversify our energy sources. The National Renewable Energy Laboratory was created in 1974.
(3) The Ethanol mandates were created by the Energy Policy Act of 2005 and the Energy Independence and Security Act of 2007. They were designed to further several public policy goals, including reducing air pollution and fighting climate change — but providing “energy independence and security” was the most important (as the title suggests).
One hundred percent agree with TonyL. The EPA has been writing draconian regulations which are forcing the shutdown of coal-fired power plants by requiring reduced CO2 emissions and unrealistically low Hg levels. Coal would easily compete with natural gas in price if the regulations in place in 1990 were still the ones in place today. The California EPA (Air Resources Board) has instituted regulations and a CO2 tax on gasoline which result in gas costing $1 a gallon or more above most other states. Electricity prices in CA have sky-rocketed due to the mandated renewables policy. The state is in the process of spending $100+ billion to build a useless high-speed/low-speed rail between LA and SF, financed by the CO2 tax. Almost $35k of every $100k+ Tesla car is paid for by taxpayers in order to drive up sales.
The rapid increase in the cost of energy drives up the cost of everything, including food. To say that the current Progressive climate policies have had little to no effect is either ignorance or a lie. These policies need to be reversed. The Green/Progressive agenda has almost accomplished its goal; they are just having a problem putting the last nail or two in the coffin.
If the plants were merely being moth balled for economic reasons, there would be no need to disable them as well.
They don’t seem to be able to close the sale.
And they have the Church in their corner, as well.
There’s one obvious reply: Volkswagen.
Who could have predicted this a couple of weeks ago? Things can change remarkably fast.
I agree that the fight against scientific corruption is incredibly hard and may appear hopeless. But it will be won eventually, though possibly not in my lifetime.
Many commentators have noted that the EU’s green policies (based on junk science) have directly led to this scandal, which has damaged the environment and probably killed at least tens of thousands of people. This is a perfect proof of what sceptics have been saying for years.
A few years ago Mann himself admitted that the sceptics were winning (though of course he attributed it to massive funding from the fossil fuel companies – if only….)
In the end the truth always wins.
In the end, we are all dead.
Who wants to wait for the end?
Not I.
TonyL September 25, 2015 at 5:25 am
That, unfortunately, is an excellent summary.
Just run the models 100 years backwards!
What will they tell us about climate history?
Juergen,
Models have been extensively hindcast-tested, as shown in the citations I give. However, hindcasting is only the first stage of model verification, and by itself is generally considered insufficient: models are almost inevitably tuned to the past, either consciously or unconsciously, by their developers.
For similar reasons drugs are tested in double-blind trials.
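The overfitting risk the Editor describes can be illustrated with a toy example. This is synthetic data and a deliberately over-flexible model, not a climate model: a polynomial "tuned" to the past portion of a noisy series hindcasts that past well but forecasts the held-out portion badly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "anomaly" series: gentle trend plus noise. Purely
# illustrative -- not real climate data.
t = np.arange(60, dtype=float)
y = 0.01 * t + 0.1 * np.sin(t / 5.0) + rng.normal(0.0, 0.05, t.size)

past, future = slice(0, 40), slice(40, 60)

# "Tune" an over-flexible model to the past, analogous to fitting
# fudge factors until the hindcast matches history.
x = t / 40.0  # rescale for numerical conditioning
coeffs = np.polyfit(x[past], y[past], deg=9)
fit = np.polyval(coeffs, x)

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

# The hindcast error is small; the out-of-sample forecast error is not.
print("hindcast RMSE:", rmse(fit[past], y[past]))
print("forecast RMSE:", rmse(fit[future], y[future]))
```

The point is not the specific numbers but the asymmetry: agreement with data used in tuning says little about skill on data the model has never seen, which is why an out-of-sample test (actual emissions in, actual temperatures compared) is the one that matters.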
Yes!
The models have the property “Great Skill” in forecasting. Furthermore, the “Great Skill” is a symmetrical property. This means that the models can predict the past with as great accuracy and precision as they can predict the future.
“Making predictions is hard, especially about the future” – Yogi Berra.
The fudge factors are gathered from only 10 years of data. Somewhere in the literature I have seen backward calculations, which show deviations similar to those we observe now…
Notice the dated march of FAR, SAR, TAR, AR4, and AR5 displayed on the spaghetti graph (what letters do we have left for the next batch of -ARSe acronyms?). Exactly how many restarts and do-overs do climate scientists get before voters get rid of every fund-granting climate-alarmist politician on the face of the Earth?
nobody will listen until the eighth report comes out . . .
There is an ole engineering design “saying” that goes …. “If it doesn’t work on paper … then you don’t have any chance whatsoever of it working when you put it to practice”.
Climate modeling computer programs DO NOT work on paper.
“Faith, hope and parity”-based expectations of accurate “re-run” results are delusional thinking.
Samuel,
Perhaps you are right. But we have a logjam because many people disagree with you. Both sides yelling at each other will not change that. Hence the need for a test both sides will consider fair.
“Hence the need for a test both sides will consider fair.”
Good luck with that!
That comment gave me a big smile after a hard day. As if this were just an argument over a matter of science. This is a political battle, with the State and all its many minions reaching for ever more power. Even the dictators of old never thought of taxing and controlling the very air you breathe!
We are seeing the US Empire build a police state, and it will not cease in its efforts to control you simply because you can show the facts are against them.
Editor – FMw.
I know I am right. Like the legal maxim that “Ignorance of the Law is no excuse”, it matters not a whit whether it is Judicial Law or Scientific Law.
“Hence the need for a test both sides will consider fair.”
Scientific facts and religious beliefs are incompatible, or oxymoronic if you choose, therefore there is no possibility that a “test” could be created that both sides would consider fair.
“because many people disagree with you.”
You got that right, ….. but very, very few have ever provided common sense thinking, logical reasoning and/or intelligent deductions along with supporting facts or evidence that proved me wrong.
I am strictly science orientated without any personal non-science emotional biases attached.
Update to this post
Roger Pielke Jr (Prof Environmental Studies, U CO-Boulder) proposed such a test in “Climate predictions and observations“, Nature Geoscience, April 2008. Excerpt:
Editor of the Fabius Maximus website:
You say
It is not possible to provide any test that “both sides will consider fair”.
This is true whatever Pielke Jr or anybody else suggests as being such a test.
No such test is possible because if it were then it would not be needed: Kiehl’s work would be sufficient to refute all except at most one unidentified climate model.
I again explain the matter.
None of the models – not one of them – could match the change in mean global temperature over the past century if it did not utilise a unique value of assumed cooling from aerosols. So, inputting actual values of the cooling effect (such as the determination by Penner et al.
http://www.pnas.org/content/early/2011/07/25/1018526108.full.pdf?with-ds=yes )
would make every climate model provide a mismatch of the global warming it hindcasts and the observed global warming for the twentieth century.
This mismatch would occur because all the global climate models and energy balance models are known to provide indications which are based on:
1. the assumed degree of forcings resulting from human activity that produce warming, and
2. the assumed degree of anthropogenic aerosol cooling input to each model as a ‘fiddle factor’ to obtain agreement between past average global temperature and the model’s indications of average global temperature.
Nearly two decades ago I published a peer-reviewed paper that showed the UK’s Hadley Centre general circulation model (GCM) could not model climate and only obtained agreement between past average global temperature and the model’s indications of average global temperature by forcing the agreement with an input of assumed anthropogenic aerosol cooling.
The input of assumed anthropogenic aerosol cooling is needed because the model ‘ran hot’; i.e. it showed an amount and a rate of global warming which were greater than observed over the twentieth century. This failure of the model was compensated by the input of assumed anthropogenic aerosol cooling.
And my paper demonstrated that the assumption of aerosol effects being responsible for the model’s failure was incorrect.
(ref. Courtney RS, “An assessment of validation experiments conducted on computer models of global climate using the general circulation model of the UK’s Hadley Centre”, Energy & Environment, Volume 10, Number 5, pp. 491-502, September 1999).
More recently, in 2007, Kiehl published a paper that assessed 9 GCMs and two energy balance models.
(ref. Kiehl JT, “Twentieth century climate model response and climate sensitivity”, GRL, vol. 34, L22710, doi:10.1029/2007GL031383, 2007).
Kiehl found the same as my paper except that each model he assessed used a different aerosol ‘fix’ from every other model. This is because they all ‘run hot’ but they each ‘run hot’ to a different degree.
He says in his paper:
And, importantly, Kiehl’s paper says:
And the “magnitude of applied anthropogenic total forcing” is fixed in each model by the input value of aerosol forcing.
Kiehl’s Figure 2 can be seen here.
Please note that the Figure is for 9 GCMs and 2 energy balance models, and its title is:
It shows that
(a) each model uses a different value for “Total anthropogenic forcing” that is in the range 0.80 W/m^2 to 2.02 W/m^2
but
(b) each model is forced to agree with the rate of past warming by using a different value for “Aerosol forcing” that is in the range -1.42 W/m^2 to -0.60 W/m^2.
In other words the models use values of “Total anthropogenic forcing” that differ by a factor of more than 2.5 and they are ‘adjusted’ by using values of assumed “Aerosol forcing” that differ by a factor of 2.4.
So, each climate model emulates a different climate system. Hence, at most only one of them emulates the climate system of the real Earth because there is only one Earth. And the fact that they each ‘run hot’ unless fiddled by use of a completely arbitrary ‘aerosol cooling’ strongly suggests that none of them emulates the climate system of the real Earth.
Richard
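The factor-of-2.5 and factor-of-2.4 spreads quoted above from Kiehl’s Figure 2 can be checked in a few lines. The input numbers are taken from the comment above, not from an independent reading of the paper:

```python
# Ranges quoted above from Kiehl (2007), in W/m^2.
total_forcing = (0.80, 2.02)       # total anthropogenic forcing across models
aerosol_forcing = (-1.42, -0.60)   # aerosol 'fiddle factor' across models

total_spread = total_forcing[1] / total_forcing[0]
aerosol_spread = aerosol_forcing[0] / aerosol_forcing[1]  # ratio of magnitudes

print(f"total forcing spread:   {total_spread:.2f}x")   # just over 2.5
print(f"aerosol forcing spread: {aerosol_spread:.2f}x")  # roughly 2.4
```

In other words, the quoted ranges do support the stated spreads: models disagreeing on total forcing by more than a factor of 2.5 can still match the same historical warming if each offsets with its own aerosol cooling.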
For logical thinkers the issue of the “fairness” is resolved by placement of this issue in a logical context. This context is provided when “prediction” designates a kind of proposition; a “prediction” then has a probability of being true and thus the value that is assigned to this probability by a model can be tested by comparison to the value of the corresponding relative frequency in a sample drawn randomly from the underlying population and not used in the construction of the model. This process leads to the validation or falsification of the model. The IPCC climate models are insusceptible to validation or falsification because the context that they provide is not logical. In the illogical context that they do supply, IPCC-style “evaluation” is possible but fails to resolve the issue of the fairness.
Terry Oldberg:
You may try to “resolve” the issue of “fairness” to your satisfaction but such a resolution would be meaningless. I am writing to explain why this is in the probably forlorn hope that you will understand.
Logical thinkers know that “fairness” means whatever its user intends it to mean when s/he uses it. That is why “fairness” is only really useful to sophists and to children in school playgrounds. Also, it is why – as I said – “It is not possible to provide any test that “both sides will consider fair”.”.
And – as I explained – “No such test is possible because if it were then it would not be needed: Kiehle’s work would be sufficient to refute all except at most one unidentified climate model.”.
These matters will be obvious to you in the unlikely event that you learn the fundamental principles of logic.
Richard
richardscourtney:
Actually, the fundamental principles of logic are an area of my expertise. I have written and lectured on this topic professionally. Among my written works is the one at http://judithcurry.com/2011/02/15/the-principles-of-reasoning-part-iii-logic-and-climatology/ . Among the audiences that I have addressed on this topic are meetings of the American Nuclear Society, American Chemical Society, American Society for Quality, American Institute of Chemical Engineers and Stanford University. The last time I checked my tutorial on the fundamental principles of logic (http://www.knowledgetothemax.com) was receiving more than 10 hits per day.
Terry Oldberg:
OK. For sake of demonstration, I will assume that you do have some understanding of logic and ask you to show it.
Please explain what you understand to be a definition of “fairness” that would enable the proposed “test that both sides will consider fair”.
And while you are about it, at long last please say what you mean by the word “event”.
Richard
richardscourtney:
Unlike yourself, I would take a logical approach to finding a solution to the problem of the fairness. Logic features statements called “propositions.” I would define the word “prediction” such that it was a kind of a proposition. In logic, a proposition has a probability of being true. Every prediction of a model would have a probability of being true plus a value for this probability.
A science has a theoretical side and an empirical side. Probabilities lie on the theoretical side. The empirical counterpart of a probability is a relative frequency.
Relative frequencies are defined by the counts called “frequencies” in a sample that is drawn from a study’s statistical population. When the sample is selected randomly and unused in the construction of the model its relative frequency values provide for a test of the probability values that are asserted by the study’s model. If it passes this test the model is said to be “validated.” Otherwise it is said to be “falsified.”
Your approach defines “prediction” such that it is not an example of a proposition. In this way you divorce the problem of the fairness from logic. The model can neither be validated nor falsified. However, it can be “evaluated.” Evaluation is a logically nonsensical concept that was invented by the IPCC after Vincent Gray pointed out to IPCC management that its claim to basing its assessments on validated models was false. Though evaluation is logically nonsensical it is the approach that you join the IPCC in favoring.
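A minimal sketch of the validation procedure described in this exchange, with all numbers invented for illustration: a model asserts a probability for an event, and that value is compared against the relative frequency in a held-out sample, allowing for sampling error.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical model claim: the event occurs with probability 0.7.
predicted_p = 0.7

# Held-out sample of outcomes (True = event occurred), drawn randomly
# and not used to build the model. Here the true, unknown rate is 0.55.
outcomes = rng.random(500) < 0.55

# Empirical counterpart of the probability: the relative frequency.
rel_freq = outcomes.mean()

# Crude consistency check, allowing ~2 standard errors of sampling noise.
stderr = np.sqrt(rel_freq * (1.0 - rel_freq) / outcomes.size)
validated = abs(predicted_p - rel_freq) < 2.0 * stderr

print(f"relative frequency = {rel_freq:.3f}, validated = {validated}")
```

With these invented numbers the claimed probability falls well outside the sampling noise, so the model is falsified rather than validated; a claim near the observed frequency would pass. The two-standard-error tolerance is an illustrative choice, not part of the procedure as stated above.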
Terry Oldberg:
OK. I understand that reply: it demonstrates
1. You are unable to provide the requested definitions of what you mean by “fairness” and an “event”.
2. You don’t know or understand anything that constitutes logic (i.e. reasoning conducted or assessed according to strict principles of validity).
3. You think verbosity constitutes cogency. But it does not (as you would know if you were capable of logical reasoning).
Richard
richardscourtney:
That’s an ad hominem argument. Does resort to an obviously fallacious argument signal that you are out of ammunition? If so, the decent thing for you to do is capitulate.
Terry Oldberg:
I strongly commend that you undertake a course in basic logic.
You will then learn that I have NOT made “an ad hominem argument”.
I merely pointed out that your irrational bloviation demonstrated your total ignorance of logical principles: it does demonstrate that, and I stated how it demonstrates that.
Richard
richardscourtney:
According to dictionary.com: “An ad hominem argument is one that relies on personal attacks rather than reason or substance.” Let us examine your argument that
“I merely pointed out that your irrational bloviation demonstrated your total ignorance of logical principles: it does demonstrate that, and I stated how it demonstrates that.”
with an eye toward whether it relies on a personal attack rather than reason or substance.
Not being of the form of a syllogism, this argument cannot rely on reason or substance. Do you provide a point-by-point refutation of my “bloviation”? No. Do you prove my “total ignorance of logical principles”? No. You have reason to believe, actually, that I am knowledgeable enough about logical principles to deliver tutorials about them to audiences of erudite people. Rather than a justified attack on my bad ideas, yours was an unjustified attack on my person.
Oldberg:
Yes, that dictionary definition of ad hominem is correct.
I did NOT make an ad hom. argument. I listed YOUR demonstrations of YOUR complete ignorance of logical principles. For example, do you deny that you have failed to state what you mean by the words “fairness” and “event” on which you have chosen to pontificate? Pointing out that you have failed to provide those requested definitions is NOT a “personal attack”: it is a statement of fact that your assertions are gibberish because they have no “substance” of any kind when they rely on undefined words.
And your additional bloviation to which I am replying provides additional demonstration of your ignorance of how to argue logically.
Your boorish behaviour does you no good and I suggest you stop it.
Richard
richardscourtney:
As I understand it, you assert that my complete ignorance of logical principles is proved by my failure to respond to your demand for me to provide my personal definitions for two words. It seems to me that this assertion is illogical for there is not a logical way in which the premise that person A failed to respond to person B’s demand for A’s personal definitions of words can yield the conclusion that B is completely ignorant of logical principles. If you can provide proof to the contrary please provide same.
Terry Oldberg:
OK. You are now demonstrating that you are an idiot.
I listed (indeed, I numbered) three different examples of your ignorance of logical principles that you provided.
And your nonsense about one of the examples is silly.
A basic principle of logic is that a person making an argument is required to define the terms he/she is using when requested. No amount of sophistry can hide the fact of your ignorance of that principle without which logical argument is not possible. And no amount of your idiocy can conceal the fact that you have failed to define what you mean by “fairness” and “event”.
Richard
richardscourtney:
In response to my post of Sept. 30 at 11:19 pm you fail to respond to my request for a proof of the contention that “…my complete ignorance of logical principles is proved by my failure to respond to your demand for me to provide my personal definitions for two words.” Is this because you are unable to prove it? If not, please post the proof.
Terry Oldberg:
Having demonstrated your complete ignorance of logical principles and your idiocy, you now claim you cannot read by writing to me
I here wrote to you saying
Your reply showed you cannot demonstrate ANY understanding of logic and I responded to that by here listing the “proof” of your total ignorance of logical principles which you had provided, and I later here explained one of the listed examples because you claimed you are too thick to understand it.
I have had enough of your boorish behaviour and I will ignore any more of it.
Richard
When the models were backtested, did they accurately predict the Medieval Warm Period and the Little Ice Age?
They are even worse. The models must be run against the satellite record, for that alone determines whether any warming is relevant to CAGW theory.
CMIP5 model predictions of the “hotspot”
Thanks for highlighting the importance of scientific verification and validation.
Evidence
In his May 13, 2015 sworn testimony to Congress, John Christy evaluated 35-year predictions of the latest "improved" CMIP5 models from 1979 to 2015. Their predictions "only" show a 400% error for the "signature" anthropogenic "hotspot" of tropical tropospheric temperatures against objective satellite temperature measurements.
Methodology
Evaluations by forecasting experts show that climate modelers violate most of the methodological principles of scientific forecasting. See, e.g., "Research on forecasting for the manmade global warming alarm", testimony to the U.S. House Committee on Science, Space, and Technology by Armstrong, J. S., Green, K. C., & Soon, W., Energy and Environment (2011).
Bias
Climate modelers further fail to account for very large "Type B" systematic errors in their models (aka the "lemming factor"). See the Guide to the Expression of Uncertainty in Measurement, JCGM 100:2008.
Bingo!
Two basic errors in all their simulation models are the assumed sensitivity of global temperature to the atmospheric concentration of CO2 and the assumed contribution of anthropogenic emissions to the rise in atmospheric concentrations of CO2. Actual data do not agree.
Looking at the adjusted temperature record, where NOAA and others cool the past and warm the present, is very interesting. Where before the models were tweaked by adjusting the inputs or other parameters (correctly or not) to force the models to match the actual temperature record, they now appear to be tweaking the temperature record to match the models.
Folks ever see this (author)? Nice conclusion….
“If this is the case, then we should expect that in the two decades following the phase catastrophe, the world’s mean temperature should be noticeably cooler, i.e. the cooling should start in the late 2010s.”
When I did a masters, our numerical methods course (today it would be called modelling) taught us not to extrapolate numerical solutions to differential equations. Such solutions were only to be used for interpolation. A few years ago I met someone who had worked with numerical solutions of equations that modeled nuclear explosions. He said that they had the same rule. What has changed that the climate modelers think they can extrapolate based on their models?
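The interpolation-versus-extrapolation rule is easy to demonstrate with a toy example unrelated to any climate model (my illustration, not the commenter's): fit a polynomial to a smooth function on [0, 1], then evaluate it inside and just outside that interval.

```python
import numpy as np

# Sample a smooth function on [0, 1] and fit a degree-9 polynomial to it.
x = np.linspace(0.0, 1.0, 20)
y = np.sin(2 * np.pi * x)
coeffs = np.polyfit(x, y, deg=9)

# Interpolation: inside the fitted range, the error is tiny.
err_in = abs(np.polyval(coeffs, 0.5) - np.sin(2 * np.pi * 0.5))

# Extrapolation: at x = 2, outside the fitted range, the fit diverges badly.
err_out = abs(np.polyval(coeffs, 2.0) - np.sin(2 * np.pi * 2.0))

print(err_in, err_out)  # err_out is many orders of magnitude larger than err_in
```

The fitted curve is an excellent stand-in for the data where it was constrained by data, and wildly wrong where it was not; that asymmetry is the whole point of the old interpolation-only rule.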
What has changed?
The climate modelers are not doing science. They are doing propaganda in service to an ideology.
Larry
What are YOUR credentials? If you are NOT a climate scientist with 20+ years of experience, or a math genius with impeccable statistics background, you are unqualified to discuss this subject. That’s what qualified engineers, scientists and mathematicians who are AGW skeptics have been hearing for the past 20 years. Why should anyone on either side listen to you?
How can you look at the spaghetti plot in your article and think it even remotely matches the “pause” of the past 18+ years? A pause which no one in the AGW crowd predicted or even intimated might happen. All the AGW proponents expected the Earth to be 0.4C warmer today. They were all WRONG!
It’s the Sun and natural cycles (ENSO, PDO, AMO, etc), not CO2, stupid! When observed data contradicts the models, the models are WRONG. Scrap them and start again! Better yet, eliminate the CO2 function and see how well the models work. You might be surprised!
Bill
William,
First, I don’t understand the relevance of your points to the simple test I proposed.
Second, I consulted with several climate scientists when writing this.
Third, that’s quite the appeal to authority. It’s especially odd given your pronouncement that “it’s the sun, stupid.” It sounds like you believe yourself to be the Pope of Science.
“Second, I consulted with several climate scientists when writing this (simple test).”
And that was probably your 2nd mistake. The 1st one being in thinking that such a “test” could be created.
The earth’s climate system is a dynamic system consisting of dozens and dozens of interactive variables that are constantly changing from hour-to-hour, …. day-to-day, …. week-to-week, …. month-to-month ….. and year-to-year …. and is therefore never ever repeatable from one (1) year to the next …. or one (1) century to the next.
The only thing that is truly “cyclic” in the natural world is the “changing of the equinoxes”. Everything else occurs “randomly” due to the interactivity of said variables …. and is therefore best described as an “emergent phenomenon”.
My question: How does one model a dissipative process backward? That is, how is the entropy handled when going against the arrow of time? Reversibility is a property only of non-dissipative problem sets. If anyone ever runs the models backward, they are doing non-physical nonsense, or they are not modeling a sufficient analog of the real climate!
Here’s the answer: they are non-physical nonsense both ways!