Climate scientists can restart the climate change debate & win: test the models!

By Larry Kummer, from the Fabius Maximus website

Summary: Public policy about climate change has become politicized and gridlocked after 26 years of large-scale advocacy. We cannot even prepare for a repeat of past extreme weather. We can whine and bicker about whom to blame. Or we can find ways to restart the debate. Here is the next in a series about the latter path, for anyone interested in walking it. Climate scientists can take an easy and potentially powerful step to build public confidence: re-run the climate models from the first three IPCC reports with actual data (from their future) and see how well they predicted global temperatures.

“Trust can trump Uncertainty.”

Presentation by Leonard A Smith (Prof of Statistics, LSE), 6 February 2014.

The most important graph from the IPCC’s AR5

Figure 1.4 from page 131 of AR5: the observed global surface temperature anomaly relative to 1961–1990 in °C, compared with the range of projections from the previous IPCC assessments.

Why the most important graph doesn’t convince the public

Last week I posted What climate scientists did wrong and why the massive climate change campaign has failed. After 26 years, one of the largest and longest campaigns to influence public policy has failed to gain the support of Americans, with climate change ranking near the bottom of people’s concerns. It described the obvious reason: scientists failed to meet the public’s expectations for the behavior of those warning about a global threat (i.e., a basic public relations mistake).

Let’s discuss what scientists can do to restart the debate. Let’s start with the big step: show that climate models have successfully predicted future global temperatures with reasonable accuracy.

This spaghetti graph — probably the most-cited data from the IPCC’s reports — illustrates one reason for the lack of sufficient public support in America. It shows the forecasts of models run in previous IPCC reports vs. actual subsequent temperatures, with the forecasts run under various scenarios of emissions and their baselines updated. First, Edward Tufte (author of The Visual Display of Quantitative Information) probably would laugh at this graph — too much packed into one chart, the equivalent of a PowerPoint slide with 15 bullet points.

But there’s a more important weakness. We want to know how well the models work. That is, how well would each have forecast temperatures if run with the correct scenario (i.e., actual future emissions, since here we are interested in predicting temperatures, not emissions)?

The big step: prove climate models have made successful predictions

“A genuine expert can always foretell a thing that is 500 years away easier than he can a thing that’s only 500 seconds off.”

— From Mark Twain’s A Connecticut Yankee in King Arthur’s Court.

A massive body of research describes how to validate climate models (see below), most stating that they must use “hindcasts” (predicting the past) because we do not know the temperature of future decades. Few sensible people trust hindcasts, with their ability to be (even inadvertently) tuned to work (that’s why scientists use double-blind testing for drugs where possible).

But now we know the future — the future of models run in past IPCC reports — and can test their predictive ability.

Karl Popper believed that predictions were the gold standard for testing scientific theories. The public also believes this. Countless films and TV shows focus on the moment in which scientists test their theory to see if the result matches their prediction. Climate scientists can run such tests today for global surface temperatures. This could be evidence on a scale greater than anything else they’ve done.

Testing the climate models used by the IPCC

“Probably {scientists’} most deeply held values concern predictions: they should be accurate; quantitative predictions are preferable to qualitative ones; whatever the margin of permissible error, it should be consistently satisfied in a given field; and so on.”

— Thomas Kuhn in The Structure of Scientific Revolutions (1962).

The IPCC’s scientists run projections. AR5 describes these as “the simulated response of the climate system to a scenario of future emission or concentration of greenhouse gases and aerosols … distinguished from climate predictions by their dependence on the emission/concentration/radiative forcing scenario used…”. The models don’t predict CO2 emissions, which are an input to the models.

So they should run the models as they were when originally run for the IPCC in the First Assessment Report (FAR, 1990), in the Second (SAR, 1995), and the Third (TAR, 2001). Run them using actual emissions as inputs and with no changes of the algorithms, baselines, etc. How accurately will the models’ output match the actual global average surface temperatures?
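To make “accuracy” concrete: once a model from FAR, SAR, or TAR has been re-run with actual emissions, scoring it against the observed record reduces to comparing two time series. A minimal sketch in Python — every number below is an illustrative placeholder, not real model output or measured temperatures:

```python
# Sketch of scoring a re-run model projection against observations.
# All numbers are illustrative placeholders, not real FAR/SAR/TAR output
# or measured global temperatures.

def score_projection(projected, observed):
    """Return (mean bias, RMSE) of projected vs. observed anomalies in degC."""
    if len(projected) != len(observed):
        raise ValueError("series must cover the same years")
    errors = [p - o for p, o in zip(projected, observed)]
    bias = sum(errors) / len(errors)
    rmse = (sum(e * e for e in errors) / len(errors)) ** 0.5
    return bias, rmse

# Hypothetical annual anomalies over a five-year verification window:
projected = [0.30, 0.35, 0.41, 0.46, 0.52]  # model re-run with actual emissions
observed = [0.28, 0.30, 0.33, 0.34, 0.36]   # measured surface anomaly
bias, rmse = score_projection(projected, observed)
print(f"bias = {bias:+.3f} degC, RMSE = {rmse:.3f} degC")
```

Bias shows whether a model ran systematically hot or cold; RMSE summarizes overall accuracy. A real comparison would also have to handle baseline alignment and natural variability, the complications Curry describes below.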

Of course, the results would not be a simple pass/fail. Such a test would provide the basis for more sophisticated tests. Judith Curry (Prof Atmospheric Science, GA Inst Tech) explains here:

Comparing the model temperature anomalies with observed temperature anomalies, particularly over relatively short periods, is complicated by the acknowledgement that climate models do not simulate the timing of ENSO and other modes of natural internal variability; further the underlying trends might be different. Hence, it is difficult to make an objective choice for matching up the observations and model simulations. Different strategies have been tried… matching the models and observations in different ways can give different spins on the comparison.

On the other hand, we now have respectably long histories since publication of the early IPCC reports: 25, 20, and 15 years. These are not short periods, even for climate change. Models that cannot successfully predict over such periods require more trust than many people have when it comes to spending trillions of dollars — or even making drastic revisions to our economic system (as Naomi Klein and the Pope advocate).

Conclusion

Re-run the models. Post the results. More recent models presumably will do better, but firm knowledge about performance of the older models will give us useful information for the public policy debate. No matter what the results.

As the Romans might have said when faced with a problem like climate change: “Fiat scientia, ruat caelum.” (Let science be done though the heavens may fall.)

“In an age of spreading pseudoscience and anti-rationalism, it behooves those of us who believe in the good of science and engineering to be above reproach whenever possible.”

— P. J. Roach, Computing in Science and Engineering, Sept–Oct 2004 (gated).

Other posts in this series

These posts sum up my 330 posts about climate change.

  1. How we broke the climate change debates. Lessons learned for the future.
  2. A new response to climate change that can help the GOP win in 2016.
  3. The big step climate scientists can make to restart the climate change debate – & win.

For More Information

(a) Please like us on Facebook, follow us on Twitter, and post your comments — because we value your participation. For more information see The keys to understanding climate change and My posts about climate change. Also see these about models…

(b) I learned much, and got several of these quotes, from two 2014 presentations by Leonard A. Smith (Prof of Statistics, LSE): the abridged version “The User Made Me Do It” and the full version “Distinguishing Uncertainty, Diversity and Insight”. Also see “Uncertainty in science and its role in climate policy”, Leonard A. Smith and Nicholas Stern, Phil Trans A, 31 October 2011.

(c) Introductions to climate modeling

These provide an introduction to the subject, and a deeper review of this frontier in climate science.

Judith Curry (Prof Atmospheric Science, GA Inst Tech) reviews the literature about the uses and limitation of climate models…

  1. What can we learn from climate models?
  2. Philosophical reflections on climate model projections.
  3. Spinning the climate model – observation comparison — Part I.
  4. Spinning the climate model – observation comparison: Part II.

(d) Selections from the large literature about validation of climate models

201 thoughts on “Climate scientists can restart the climate change debate & win: test the models!”

    • It could be my imagination, but I could swear I saw the words “predict”, “predicted”, “forecasts”, “predictions”, “predicting”…more times than I had the patience to count, and also saw, separately, the word “projections” at least once, indicating that the writer evidently understands that these are indeed distinct concepts.
      Golly!

      • I am rather busy right now, but will post links to discussions here which should shed some light on the reason for my comment.
        I am slightly confused at this point, and am wondering sir…do you count yourself as on the skeptical side of the Great Global Warming Non-Debate, on the side of what skeptics refer to as Warmistas, or perhaps a Lukewarmer, or what?
        Just curious. You of course are not obligated to satisfy my curiosity on this. It is the title of this post that makes me think it is written from a warmista perspective, because why on Earth would anyone want to reset this debate otherwise?
        I would point out that I find the title at least somewhat misleading in another regard, that being that “climate scientists” are engaging in debate on any of the issues before us.
        In fact, to the chagrin of the skeptical community, they refuse to engage in any organized public discussions of any of the relevant issues, and are pretty much monolithic in this regard.
        So, just what debate does the title refer to?

      • Menicholas,
        “do you count yourself…”
        That is a powerful question! You can find the answer in the links given in the For More Information section above. But I suggest that it is not relevant to this discussion, here and now.
        Think of this essay as a tool, a rock thrown into a pond. Who threw it, or why, doesn’t matter. The object and its effects are independent of the thrower. Just as it no longer matters how the public policy debate about climate change became gridlocked, or who’s responsible.
        This is a proposal to restart the debate. Climate scientists have to see it as in their interest to do so (if they are confident in their models, they can run this test and “win”). If not, the public can ask for this test, one that might end the incessant bickering that substitutes for a debate.
        Note that Naked Capitalism (a popular liberal-left website) included this on their daily links. That’s a sign of broad appeal necessary for any proposal that has even a tiny chance of success.
        I’m working on additional steps. I hope some of those reading this also will push this proposal. I don’t see anything else on the horizon that might affect the policy debate — except perhaps extreme weather (e.g., two large hurricanes hitting East coast cities, magnified in people’s minds by alarmists — allowing bills to be pushed through Congress).

      • Menicholas: Since Fabius Maximus will not answer your inquiry regarding its position on CAGW, I will. FM is a warmist. The refusal to answer your question is a key indicator of FM’s wish to be seen as “objective” and to disguise any real or imagined agenda. This is the typical stance of a biased observer seeking to cloak themselves in the righteous adornment of objectivity.

        • kelleydr:
          The following two paragraphs are written in the disambiguated language that is developed at http://wmbriggs.com/post/7923/ . Terms that are polysemic (have more than one meaning) in the literature of global warming climatology unless disambiguated are placed in quotes.
          I note that FM is an equivocator and that he draws a conclusion from at least one equivocation thus being guilty of application of the equivocation fallacy. In this way FM draws the false conclusion that projections (which he sometimes calls “predictions”) can be validated when they can only be evaluated. If FM were to rewrite his article in the disambiguated language that is referenced in the first paragraph he would find that all of his “models” are modèles, that they make projections but not predictions and that these projections are susceptible to evaluation but not validation.
          Models are built under the scientific method of investigation but Modèles are built under a pseudoscientific method of investigation. A consequence from FM’s use of an ambiguous language in making his argument is for a pseudoscience to be dressed up to look like a science.
          Several years ago, the chair of Earth Sciences at Georgia Tech asked me to prepare the manuscript for an article to be published in her blog on the topic of “The Principles of Reasoning: Logic and Climatology.” In the ensuing study I observed frequent applications of the equivocation fallacy in the literature of global warming climatology. Applications of this fallacy were frequently made by skeptics as well as warmists.

      • Menicholas,
        Your comment suggests that you are too sharp to be deceived by nonsense (making stuff up) from the likes of kelleydr, but that comment does illustrate the dysfunctional nature of the public debate about climate, with partisans defending their tribes — uninterested in truth or logic.
        My views (as shown in my post) are described here. I’ve been attacked by “skeptics” (a weird label, but suitable for this mad tribal war).
        More relevant here, I’ve been denounced by Leftists like Brad DeLong (Prof Economics, Berkeley) for defending Roger Pielke Jr. (who was guilty of repeating well-established findings in the peer-reviewed literature). I was attacked — quite speciously (e.g., by Politifact) — for showing that the PBL survey of climate scientists (the best such done to date) showed that only a minority (a large minority) supported the key finding of AR4 & AR5 at the 95% confidence level (i.e., more than half of warming since 1950 caused by anthropogenic greenhouse gases).
        As conducted today, I believe the public policy debate does not serve the interests of America, but rewards only the political interests of Left and Right. One way to resolve this is finding tests that both sides believe fair, so we can move beyond the name-calling and make sound decisions.
        Another path forward would be for one side to adopt policies that a majority of Americans can support. I doubt the Left will do so. But climate change can help the GOP win in 2016.

      • Terry,
        “that projections (which he sometimes calls “predictions”)”
        I am discussing the IPCC reports, and so use their definitions for projection and prediction.
        “I note that FM is an equivocator and that he draws a conclusion from at least one equivocation thus being guilty of application of the equivocation fallacy.”
        Wow. Q.E.D.

        • Editor of the Fabius Maximus website:
          You can’t count on the IPCC to help you to avoid inadvertent applications of the equivocation fallacy. You and your colleagues at FM have to do this yourselves. To do this you must employ a disambiguated language in making global warming arguments. (Equivocation alert: in the literature of global warming climatology “warming” is among the polysemic terms that are used in making arguments.)

    • “Why should we re-run our models” – Well, somebody kept some old copies of the reports – the models didn’t predict the “pause”. Newer models have been “tweaked” to reduce the scarily high growth, but not by much, so really aren’t much better.
      Long story short: the models are crap, and any policy decisions based on them are likewise misdirected crap.
      Or shorter still: the shit has hit the fan.

  1. Look, all scientists’ models have some smooth exponential curve as their climate prediction outcomes. That’s simply not how the climate works.
    To even vaguely model climate correctly, you have to have some kind of Fourier series modelling, with periodicities representing natural climate cycles and amplitudes presumably modelled to try and fit to natural data.
    So, that would include the following:
    1. QBO – in the 1 – 3 yr periodicity range.
    2. El Niño/La Niña cycles – in the 5 – 8 yr range.
    3. Solar cycles – in the 11/22 yr range.
    4. Lunar cycles – 18.6yr cycle.
    5. Oceanic Oscillations – in the 30 – 75 year range.
    6. Etc etc etc.
    Of course, those are just certain input parameters, and they do not reflect how they all integrate together, which presumably must be reflected in the effects on cloud formation and storminess.
    How do you put in stochastic variables like major volcanic eruptions, earthquakes etc? Are they really stochastic or do they too have fuzzy periodicities??
    If you look at the sorts of projections Landscheidt made, he never had sinusoidal curves or exponential curves – he had curves which reflected multiple variables and multiple periodicities.
    If you want to say that models have a useful role, they must reflect natural processes and mirror real temperature evolutions.
    What they must NOT do is obsess about carbon dioxide. They must not assume that strong stability is not built into the system (because it clearly is, be that in interglacials or within ice ages) and they must understand how to overcome those mechanisms to drive changes between glacials and inter-glacials.
    Perhaps the biggest imponderable now is whether we are still in the stability of an interglacial, or the rapid warming which precedes entry into the next ice age.
    I don’t know the answer to that and I wonder, quite frankly, if anyone does.
    Until people admit just what they don’t know and just how valid the assumptions made to develop models are, no-one is going to trust modellers again.
    They’ve wasted £100bn in a generation and if they worked in financial services, their corporations would be paying £1trn in fines, their sector would be decimated by unemployment and the scandal would be on the front pages of every newspaper for 3 yrs minimum.
    Next time, if there is a next time, the scientists serve their funders, not the other way around………
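The multi-periodicity idea in the comment above can be sketched as an ordinary least-squares fit of sinusoids at fixed, assumed periods. The periods and the synthetic data below are illustrative only, not a validated decomposition of the real climate:

```python
import numpy as np

# Sketch of the Fourier-series idea: least-squares fit of an offset,
# a linear trend, and one sine/cosine pair per assumed period. The periods
# and the synthetic data are illustrative, not a real climate decomposition.

PERIODS = [2.3, 6.0, 11.0, 18.6, 60.0]  # QBO, ENSO, solar, lunar, oceanic (assumed)

def fit_cycles(years, temps, periods=PERIODS):
    """Fit offset + trend + sin/cos pair per period; return (coeffs, fitted)."""
    cols = [np.ones_like(years), years]
    for p in periods:
        w = 2 * np.pi / p
        cols += [np.sin(w * years), np.cos(w * years)]
    design = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(design, temps, rcond=None)
    return coeffs, design @ coeffs

# Synthetic demo series: slow trend + 60-year cycle + noise.
rng = np.random.default_rng(0)
years = np.arange(1900.0, 2000.0)
temps = (0.005 * (years - 1900)
         + 0.1 * np.sin(2 * np.pi * years / 60)
         + 0.02 * rng.standard_normal(years.size))
coeffs, fitted = fit_cycles(years, temps)
print("residual std: %.3f degC" % float((temps - fitted).std()))
```

Note that fitting amplitudes at fixed periods is the easy part; the commenter’s harder points — interactions among the cycles and stochastic inputs like volcanic eruptions — are exactly what such a linear sketch leaves out.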

  2. Climate Scientists will, no doubt, argue that the original models are ‘old science’ and we are better informed now.
    To which we should reply: “But at the time, you told us that this was settled science…?”

    • That also lays the groundwork for distrust in their current ‘projections’. You are correct: until they can show some level of precision in past ‘projections’, why should we accept anything they say currently?

    • This is easy: take the new science back to 1998 and rerun the models… they still fail, both forward and backward. We have the data since 1998 — so does it fit? Or start from the time the world began in 1979, when all was perfect and calm up till then. “It’s worse than that, Jim. AGW is dead, Jim, it’s dead.”

      • They have already re-run their models. That’s why they aren’t saying anything…

        I’m not sure about “aren’t saying anything”, but clearly the results are much less useful for CAGW alarmism than they would like. The lack of a certain argument is often proof that the argument does not work, which suggests (I love this word) the original theory was, to some extent, not producing good predictions.
        The guys at the CAGW department would like to find a model which predicts the recent pause, but which rapidly goes exponential in the future, and which could be called reasonably sound.
        I have been somewhat worried on dark nights – what if the cagwists are right and the West Side Highway will be under water in 2008 2018 2028. With police cars and different birds, trees and tape on the windows. But no, it is 13 years in the future and there is no way Hansen was right. But it could be Hansen can’t be interviewed for a new prediction in 2028. You know, science advances one funeral at a time.
        (And when I make silly mistakes, like replacing ‘be’ with ‘the’, forgive me. English is my second language and while I type at superb speed, I’m getting old.)

    • As the author noted, hindcasts have very little persuasive power, for the very good reason that ANYBODY can produce a model that fits the observations when the required record of observations is sitting in front of them.
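This point is easy to demonstrate: a sufficiently flexible model can match the observed past almost exactly and still have no predictive skill. A toy sketch with synthetic data (a modest trend plus noise) and a deliberately over-flexible degree-9 polynomial:

```python
import numpy as np

# Toy demonstration: an over-flexible model (degree-9 polynomial) fits the
# "past" era of a synthetic trend-plus-noise series closely, yet fails on
# the "future" era it never saw. All data here are synthetic.

rng = np.random.default_rng(1)
t = np.arange(30.0)
series = 0.01 * t + 0.05 * rng.standard_normal(t.size)  # trend + noise

past, future = t[:20], t[20:]
obs_past, obs_future = series[:20], series[20:]

# "Hindcast": fit the past era with far too many free parameters.
poly = np.polynomial.Polynomial.fit(past, obs_past, deg=9)
hindcast_err = float(np.abs(poly(past) - obs_past).mean())
# "Forecast": evaluate the same fit on the held-out future era.
forecast_err = float(np.abs(poly(future) - obs_future).mean())
print(f"in-sample error {hindcast_err:.3f}, out-of-sample error {forecast_err:.3f}")
```

The in-sample (hindcast) error is small; the out-of-sample (forecast) error is far larger, which is exactly why hindcast skill alone persuades few people.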

      • I’m pessimistic about hindcasts working, because knowing the climate structure (like the variance of variables) and knowing the state of the system (like how warm the Atlantic surface water is at a given time) are different problems, and the latter is needed to make precise predictions with CO2 and aerosols measured a posteriori. And the latter is an impossible, weather-related problem.

  3. It also should be a requirement that every individual, real-world scientific principle involved in climate must be identifiable in the model. The scientific equations HAVE to be visible. Otherwise we have just a complex set of polynomials attempting to mimic a graph and a climate.

    • Dr. David Evans is providing the equations and some nice visuals of the climate models.
      Just wish that I had paid more attention during math class. Never too late to learn. 🙂

    • …that are attempting to mimic a highly adjusted surface record.
      To do this correctly the models must be run against the satellite record for the troposphere, versus the model prediction for the same. Nothing else is cogent to CAGW theory.

  4. Dr. David Evans is reintroducing his Solar Model over at joannenova.com.au.
    He is reviewing the current GCM now in detail. After that he will reintroduce his theory.
    Should be interesting.

  5. First, I asked Stephen Belcher, the head of the Met Office Hadley Centre, whether the recent extended winter was related to global warming. Shaking his famous “ghost stick”, and fingering his trademark necklace of sharks’ teeth and mammoth bones, the loin-clothed Belcher blew smoke into a conch, and replied,
    “Here come de heap big warmy. Bigtime warmy warmy. Is big big hot. Plenty big warm burny hot. Hot! Hot hot! But now not hot. Not hot now. De hot come go, come go. Now Is Coldy Coldy. Is ice. Hot den cold. Frreeeezy ice til hot again. Den de rain. It faaaalllll. Make pasty.”
    (from “When it comes to climate change, we have to trust our scientists, because they know lots of big scary words” by Sean Thomas, Telegraph blogs, June 19th, 2013)

    • To the day of this very early morning in NE Oregon, your re-quote of Sean remains the best there is in climate change comments. So good and I am insanely jealous of that wordsmith. Way better than any of my much drier, imaginatively poorer, remarks. The old New York Times political cartoons are dust under that man’s feet.

      • Wow – there’s another skeptic in Oregon? Don’t tell Charlie Hales or Kate Brown. They’ll hunt us down.
        Of course, if you’re on the east side of the state, you might be okay – I’m stationed right outside of Portlandia – greenies, wiccans, and ‘keep Portland weird’ bumper stickers. It’s enough to drive you nuts.

      • Replying to Joel: Well, being a trans-Cascadian Oregonian (Eugene and Frenchglen), I can report that there are plenty of skeptics in Eugene, but not in the same proportion as in Harney County. Those on the west side have to keep their heads down to avoid being hassled by the proponents of the Cultural Revolution. On the east side we don’t have to worry. Of course, on both sides of the Cascades, we’re all much better armed than said proponents.

    • I’m with Pamela on this one. Sometimes one comes across such a perfect expression of irony and wit that it is utterly impossible to improve upon it. This definitely qualifies. Simply sublime.

  6. A model has a learning phase in the past, a testing phase in the past, and after that you can use it for predictions. If you use data from the testing phase to tune your model, you cannot test it. So although it is difficult, you should not use recent data. If you like to model and make predictions, you should first study the subject of forecasting in general.
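The comment’s phases can be sketched as a strictly chronological split: fit on the training era, tune on the testing era, and judge only on data that played no part in either. The years and values below are placeholders:

```python
# Sketch of a strictly chronological train / test / holdout split.
# Years and values are illustrative placeholders.

def chronological_split(years, values, train_end, test_end):
    """Split a series into training / testing / held-out eras by cutoff years."""
    train = [(y, v) for y, v in zip(years, values) if y <= train_end]
    test = [(y, v) for y, v in zip(years, values) if train_end < y <= test_end]
    holdout = [(y, v) for y, v in zip(years, values) if y > test_end]
    return train, test, holdout

years = list(range(1950, 2016))
values = [0.01 * (y - 1950) for y in years]  # placeholder anomaly series
train, test, holdout = chronological_split(years, values, 1990, 2000)
print(len(train), len(test), len(holdout))  # 41 10 15
```

The discipline matters more than the code: once the testing era has been used to tune the model, it can no longer serve as evidence of skill, which is the comment’s point about recent data.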

  7. Does anyone think that the models are intended to produce a reasonable prognostication of future conditions? Look at how the modellers censor the results before release. We don’t get to see every run, and AFAIK some runs are curtailed when they go wild. How do they know what wild is? Why do they then lump all the results together in one graph? Is there no realization that some run consistently hot compared to observation? Why isn’t, say, the Canadian model ditched for poor performance? Because of reluctance to offend a modelling group? Or because this is a political enterprise intended to provide ammunition for a political agenda?
    I find that the assumption that those who run the models want accurate results conflicts with reality. And yes, I think the author knows that too, and that the value of this challenge is that it will be ignored — and we’ll all know what to make of that.

    • “intended to produce a reasonable prognostication of future conditions”
      No, they are intended to stampede the unthinking masses into buying the CAGW fraud.

      • The more money you want to spend, the higher your confidence level needs to be.
        For the changes they are demanding, we need at least 97% confidence.

      • Piffle!
        As the author states:
        “Models that cannot successfully predict over such periods require more trust than many people have when it comes to spending trillions of dollars — or even making drastic revisions to our economic system…”
        Confidence schmofidence!
        They cannot predict, so we simply have to have more TRUST when it comes to spending our TRILLIONS, and such trivialities as MAJOR REVISIONS TO OUR ECONOMIC SYSTEM!
        Am I hallucinating?
        Are these people tripping, or just way too high?
        Sorry to shout, but you literally cannot make this crap up… but they did!

      • Seriously, I think it would be best if these nutjobs just take the billions they have already raped from our national coffers and just go away!
        Follow Mike Tyson’s lead, and just fade into Bolivian.
        Which they may already be doing.

    • Depends… I’ve worked with as low as 51%. Basically, if you have to make a decision, you take the significance the data gives you.
      You all forget that this is applied science.

  8. This test has already been done; the models have been run against the actuals for twenty years and have failed. We know they don’t work. They were run using the then actual data. What’s the point of fiddling them again? They still won’t work because they are missing equations and algorithms they need because these are not known. As with any other computer program if you can’t accurately and completely specify the problem to be solved you won’t get the right answer. I fail to understand why so many seem to think these models are magic – they’re only computer programs. If you input guesses you’ll get guesswork out.

  9. Climate scientists can restart the climate change debate & win

    Why on Earth would they want to?
    1) They have won in the popular press. Read Nat. Geo. or Sci. Am.
    2) They have won in the Main Stream Media. The 97% consensus is given as fact.
    3) They have won in the scientific literature. Try publishing a skeptic paper in Nature Climate Change or Science (AAAS)
    4) They have won in the funding arena. M. Mann is said to have garnered over $10 million. The researcher behind the recent RICO-20 stunt has pulled in millions as well. This type of funding is simply unheard of in any other area of science.
    5) They have won in the policy arena. The destruction of the US electric grid and the war on fossil fuels proceeds apace. (the greenies love it)
    6) They have won across the government. “I hope there are no climate change den**rs in the Department of Interior,” – Sally Jewell, secretary of the Department of the Interior.
    7) They have won in public opinion. Skeptics are often harassed to the point where careers and livelihoods are threatened.
    In what way have they not won, and why should they care?

    • They do not appear to have persuaded mother Earth. She seems to be suggesting that they are wrong.
      As time goes by and the divergence between model predictions and reality widens (as will be the case should the ‘pause’ continue, notwithstanding a temporary 2015/16 El Niño blip), their position will become increasingly untenable and may fall like a pack of cards.
      This is why we no longer hear about Global Warming but now Climate Change, and why even Climate Change is being muddled with weather weirding/the proliferation of extreme weather events.

      • richard v,
        I agree. That’s imo an under-appreciated aspect of the public policy debate about climate change.
        The alarmists “own” the high ground. They dominate in journalism, academia, the major science agencies, etc. By the Third Assessment Report in 2001 they were ready to push for massive public policy changes. But the climate increasingly failed them. First the pause in atmospheric warming, then the pause in many (or most) forms of extreme weather (e.g., landfalling major hurricanes in America).
        But Mother Nature is fickle. One or two major events — magnified in the public mind by the massive alarmist machinery — and everything could change. A severe tornado season plus a big tropical storm hitting a major city — and the debate might change with great speed.
        Twenty or thirty years from now historians will decide if the current models were correct, but it might not matter. It’s like the 1970s joke about the end of a Soviet invasion of western Europe. Two Red Army generals are in Paris toasting their victory. One asks the other, “Who won the air war?”
        I recommend moving fast to resolve this debate during the “pause” — this pause in the debate, when cooler minds can be heard.

    • In what way have they not won? They have not won (yet) in totally silencing the critics and dissenters, which is why sources like wattsupwiththat are so valuable. Most importantly, the CAGW crowd have so far failed in their political objective of attaining a global, legally binding treaty that will require developed countries to actually try to reduce their GHG emissions by 70% by 2050, while paying the developing countries blackmail in the form of (at least) $100 billion a year in the Green Climate Fund. Politicians love to look “green”, but ultimately people vote for jobs and higher standards of living. After the failure of the Paris Conference of the Parties in December, the tactics internationally will change. They will try either to get groups of larger countries to form “Climate Clubs” to force others into stringent emissions cuts by imposing damaging trade sanctions, or they will try a “revolution from below” by a grassroots campaign aimed at recruiting the unions, churches, municipalities and environmental non-governmental organizations to intimidate and “shame” their opponents. There is a long war to be fought.

    • Tony,
      When we say people “won”, we usually mean by comparison with their stated goals. There have been no substantial public policy measures adopted in the US for mitigation of or adaptation to climate change. The reason for this failure is that climate change consistently ranks at the bottom of surveys asking the public about their policy priorities.
      That’s failure.

      • A) 300+ power plants closed or in the process of shutting down. Another 300 plants are slated for closure. Worse, these plants are not getting mothballed; the important parts are getting destroyed to comply with regulations. This will absolutely preclude the possibility of plant restarts once the disaster of this policy becomes apparent.
        B) I have lost track of the 100s of Billions poured into “renewable” energy schemes.
        C) The Ethanol Mandate:
        1) Compel its use (taxpayer pays)
        2) Subsidize its production (taxpayer pays)
        3) Tariff and trade barriers on imports. (taxpayer pays)

        no substantial public policy measures made in the US

        Are you kidding me?
        Are You Flipping Kidding Me?

      • TonyL,
        Attributing all of these things to climate change policy is incorrect.
        (1) Hundreds of power plants are shutting down for a wide range of reasons. Several generations of plants are obsolete due to age and new technology. Others have become uneconomic due to increased regulations on air pollution and massive changes in energy prices. These matters are complex.
        (2) Since the early 1970s (especially after the 1973 Arab oil embargo) a major goal of US public policy has been to develop alternative energy sources — both to reduce pollution and diversify our energy sources. The National Renewable Energy Laboratory was created in 1974.
        (3) The Ethanol mandates were created by the Energy Policy Act of 2005 and the Energy Independence and Security Act of 2007. They were designed to further several public policy goals, including reducing air pollution and fighting climate change — but providing “energy independence and security” was the most important (as the title suggests).

      • One hundred percent agree with TonyL. The EPA has been writing draconian regulations which are forcing the shutdown of coal-fired power plants by requiring reduced CO2 emissions and unrealistically low Hg levels. Coal would easily compete with natural gas on price if the regulations in place in 1990 were still the ones in place today. California’s EPA (the Air Resources Board) has instituted regulations and a CO2 tax on gasoline which result in gas costing $1 a gallon or more above most other states. Electricity prices in CA have sky-rocketed due to the mandated renewables policy. The state is in the process of spending 100+ billion dollars to build a useless high speed/low speed rail between LA and SF financed by the CO2 tax. Almost $35k of every $100k+ Tesla car is paid for by taxpayers in order to drive up sales.
        The rapid increase in the cost of energy drives up the cost of everything, including food. To say that the current Progressive climate policies have had little to no effect is either ignorance or a lie. These policies need to be reversed. The Green/Progressive agenda has almost accomplished its goal; they are just having a problem putting the last nail or two in the coffin.

      • If the plants were merely being moth balled for economic reasons, there would be no need to disable them as well.

    • There’s one obvious reply: Volkswagen.
      Who could have predicted this a couple of weeks ago? Things can change remarkably fast.
      I agree that the fight against scientific corruption is incredibly hard and may appear hopeless. But it will be won eventually, though possibly not in my lifetime.
      Many commentators have noted that the EU’s green policies (based on junk science) have directly led to this scandal, which has damaged the environment and probably killed at least tens of thousands of people. This is a perfect proof of what sceptics have been saying for years.
      A few years ago Mann himself admitted that the sceptics were winning (though of course he attributed it to massive funding from the fossil fuel companies – if only….)
      In the end the truth always wins.

    • Juergen,
      Models have been extensively tested by hindcasting, as shown in the citations I gave. However hindcasting is only the first stage of model verification, and by itself is generally considered insufficient: models are almost inevitably tuned to the past, whether consciously or unconsciously, by their developers.
      For similar reasons, drugs are tested in double-blind trials.
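[Editor's note: the difference between fitting the past and predicting the future can be sketched with a toy curve-fitting example. The data below are entirely synthetic and have nothing to do with any actual climate model; the point is only that a flexible model tuned to reproduce every past point exactly can still fail badly out of sample, while a simple trend does not.]

```python
# Toy illustration of why hindcast skill is insufficient: a model tuned to
# fit the "historical" record perfectly can still fail badly out of sample.
# Synthetic data only; no connection to any actual climate model.

def lagrange(xs, ys, x):
    """Evaluate the polynomial interpolating the points (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

def linear_fit(xs, ys):
    """Ordinary least-squares line through (xs, ys); returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# "Historical" record: a linear trend plus alternating noise.
xs = list(range(8))
ys = [x + 0.5 * (-1) ** x for x in xs]

# The flexible model (degree-7 interpolant) matches every past point exactly...
in_sample_err = max(abs(lagrange(xs, ys, x) - y) for x, y in zip(xs, ys))

# ...but its out-of-sample "forecast" at x = 10 (true trend value: 10)
# diverges wildly, while the simple trend stays close.
slope, icpt = linear_fit(xs, ys)
print(in_sample_err, lagrange(xs, ys, 10), slope * 10 + icpt)
```

The interpolant's in-sample error is zero, yet its forecast at x = 10 is off by orders of magnitude; the straight line misses by a fraction of a unit. This is the sense in which tuning to the past, conscious or not, says little about forecast skill.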

    • Yes!
      The models have the property “Great Skill” in forecasting. Furthermore, the “Great Skill” is a symmetrical property. This means that the models can predict the past with as great accuracy and precision as they can predict the future.
      “Making predictions is hard, especially about the future” – Yogi Berra.

      • The fudge factors are gathered from only 10 years of data. Somewhere in the literature I have seen backward calculations which show deviations similar to those we observe now …

  10. Notice the dated march of FAR, SAR, TAR and AR4 on the spaghetti graph (and what letters do we have left for the next batch of “-ARSe” acronyms?). Exactly how many restarts and do-overs do climate scientists get before voters get rid of every fund-granting climate-alarmist politician on the face of the Earth?

  11. There is an ole engineering design “saying” that goes: “If it doesn’t work on paper, then you don’t have any chance whatsoever of it working when you put it into practice”.
    Climate modeling computer programs DO NOT work on paper.
    “Faith, hope and parity”-based expectations of accurate “re-run” results are delusional thinking.

      • “Hence the need for a test both sides will consider fair.”
        Good luck with that!
        That comment gave me a big smile after a hard day. As if this were just an argument over a matter of science. This is a political battle, with the State and all its many minions reaching for ever more power. Even the dictators of old never thought of taxing and controlling the very air you breathe!
        We are seeing the US Empire build a police state, and it will not cease in its efforts to control you simply because you can show the facts are against them.

        The whole aim of practical politics is to keep the populace alarmed (and hence clamorous to be led to safety) by menacing it with an endless series of hobgoblins, all of them imaginary. — H. L. Mencken

      • Editor – FMw.
        I know I am right. As with the legal maxim “Ignorance of the Law is no excuse”, it matters not a “twit” whether it is Judicial Law or Scientific Law.
        “Hence the need for a test both sides will consider fair.”
        Scientific facts and religious beliefs are incompatible (oxymoronic, if you choose), therefore there is no possibility that a “test” could be created that both sides would consider fair.
        “because many people disagree with you.”
        You got that right, but very, very few have ever provided common-sense thinking, logical reasoning and/or intelligent deductions, along with supporting facts or evidence, that proved me wrong.
        I am strictly science-oriented, without any personal non-science emotional biases attached.

  12. Update to this post
    Roger Pielke Jr (Prof Environmental Studies, U CO-Boulder) proposed such a test in “Climate predictions and observations“, Nature Geoscience, April 2008. Excerpt:

    “To facilitate such comparisons the IPCC should
    (1) clearly define the exact variables in its projections and the appropriate corresponding verification (observational) datasets, and
    (2) clearly explain in a quantitative fashion the exact reasons for changes to its projections from assessment to assessment, in even greater detail than found in the statement in 1995 regarding aerosols and the carbon cycle.
    Once published, projections should not be forgotten but should be rigorously compared with evolving observations.”

    • Editor of the Fabius Maximus website:
      You say

      But we have a logjam because many people disagree with you. Both sides yelling at each other will not change that. Hence the need for a test both sides will consider fair.

      It is not possible to provide any test that “both sides will consider fair”.
      This is true whatever Pielke Jr or anybody else suggests as being such a test.
      No such test is possible because if it were then it would not be needed: Kiehl’s work would be sufficient to refute all except at most one unidentified climate model.

      I again explain the matter.
      None of the models – not one of them – could match the change in mean global temperature over the past century if it did not utilise a unique value of assumed cooling from aerosols. So, inputting actual values of the cooling effect (such as the determination by Penner et al.
      http://www.pnas.org/content/early/2011/07/25/1018526108.full.pdf?with-ds=yes )
      would make every climate model provide a mismatch between the global warming it hindcasts and the observed global warming for the twentieth century.
      This mismatch would occur because all the global climate models and energy balance models are known to provide indications which are based on
      1. the assumed degree of forcings resulting from human activity that produce warming, and
      2. the assumed degree of anthropogenic aerosol cooling input to each model as a ‘fiddle factor’ to obtain agreement between past average global temperature and the model’s indications of average global temperature.
      Nearly two decades ago I published a peer-reviewed paper that showed the UK’s Hadley Centre general circulation model (GCM) could not model climate, and only obtained agreement between past average global temperature and the model’s indications of average global temperature by forcing the agreement with an input of assumed anthropogenic aerosol cooling.
      The input of assumed anthropogenic aerosol cooling is needed because the model ‘ran hot’; i.e. it showed an amount and a rate of global warming greater than observed over the twentieth century. This failure of the model was compensated by the input of assumed anthropogenic aerosol cooling.
      And my paper demonstrated that the assumption of aerosol effects being responsible for the model’s failure was incorrect.
      (ref. Courtney RS, “An assessment of validation experiments conducted on computer models of global climate using the general circulation model of the UK’s Hadley Centre”, Energy & Environment, Volume 10, Number 5, pp. 491-502, September 1999).
      More recently, in 2007, Kiehl published a paper that assessed 9 GCMs and two energy balance models.
      (ref. Kiehl JT, “Twentieth century climate model response and climate sensitivity”, GRL, vol. 34, L22710, doi:10.1029/2007GL031383, 2007).
      Kiehl found the same as my paper except that each model he assessed used a different aerosol ‘fix’ from every other model. This is because they all ‘run hot’ but they each ‘run hot’ to a different degree.
      He says in his paper:

      One curious aspect of this result is that it is also well known [Houghton et al., 2001] that the same models that agree in simulating the anomaly in surface air temperature differ significantly in their predicted climate sensitivity. The cited range in climate sensitivity from a wide collection of models is usually 1.5 to 4.5 deg C for a doubling of CO2, where most global climate models used for climate change studies vary by at least a factor of two in equilibrium sensitivity.
      The question is: if climate models differ by a factor of 2 to 3 in their climate sensitivity, how can they all simulate the global temperature record with a reasonable degree of accuracy.
      Kerr [2007] and S. E. Schwartz et al. (Quantifying climate change–too rosy a picture?, available at http://www.nature.com/reports/climatechange, 2007) recently pointed out the importance of understanding the answer to this question. Indeed, Kerr [2007] referred to the present work and the current paper provides the ‘‘widely circulated analysis’’ referred to by Kerr [2007]. This report investigates the most probable explanation for such an agreement. It uses published results from a wide variety of model simulations to understand this apparent paradox between model climate responses for the 20th century, but diverse climate model sensitivity.

      And, importantly, Kiehl’s paper says:

      These results explain to a large degree why models with such diverse climate sensitivities can all simulate the global anomaly in surface temperature. The magnitude of applied anthropogenic total forcing compensates for the model sensitivity.

      And the “magnitude of applied anthropogenic total forcing” is fixed in each model by the input value of aerosol forcing.
      Kiehl’s Figure 2 can be seen here.
      Please note that the Figure is for 9 GCMs and 2 energy balance models, and its title is:

      Figure 2. Total anthropogenic forcing (W/m^2) versus aerosol forcing (W/m^2) from nine fully coupled climate models and two energy balance models used to simulate the 20th century.

      It shows that
      (a) each model uses a different value for “Total anthropogenic forcing” that is in the range 0.80 W/m^2 to 2.02 W/m^2
      but
      (b) each model is forced to agree with the rate of past warming by using a different value for “Aerosol forcing” that is in the range -1.42 W/m^2 to -0.60 W/m^2.
      In other words the models use values of “Total anthropogenic forcing” that differ by a factor of more than 2.5 and they are ‘adjusted’ by using values of assumed “Aerosol forcing” that differ by a factor of 2.4.
      So, each climate model emulates a different climate system. Hence, at most only one of them emulates the climate system of the real Earth because there is only one Earth. And the fact that they each ‘run hot’ unless fiddled by use of a completely arbitrary ‘aerosol cooling’ strongly suggests that none of them emulates the climate system of the real Earth.
      Richard
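
[Editor's note: Kiehl's compensation result lends itself to a toy numerical sketch. The zero-dimensional calculation below uses invented round numbers, not values from any actual GCM: two "models" whose climate sensitivities differ by a factor of two both reproduce the same 20th-century warming once each is assigned its own tuned aerosol forcing, yet they diverge as soon as the forcing changes.]

```python
# Toy sketch of the compensation described in Kiehl (2007): models with very
# different climate sensitivities can all match the historical warming if each
# is paired with its own aerosol "fix". All numbers are invented illustrations.

OBSERVED_WARMING = 0.8  # deg C over the 20th century, approximate
GHG_FORCING = 2.5       # W/m^2, illustrative greenhouse-gas forcing

def warming(sensitivity, ghg_forcing, aerosol_forcing):
    """Zero-dimensional equilibrium warming: dT = lambda * (F_ghg + F_aerosol)."""
    return sensitivity * (ghg_forcing + aerosol_forcing)

def tuned_aerosol(sensitivity, ghg_forcing, target=OBSERVED_WARMING):
    """Aerosol forcing this sensitivity needs in order to hit the observed warming."""
    return target / sensitivity - ghg_forcing

# Two "models" with sensitivities (deg C per W/m^2) a factor of two apart.
models = {"low": 0.4, "high": 0.8}
aerosols = {name: tuned_aerosol(s, GHG_FORCING) for name, s in models.items()}

# Both hindcast the 20th century "correctly"...
for name, s in models.items():
    print(name, aerosols[name], warming(s, GHG_FORCING, aerosols[name]))

# ...but once GHG forcing doubles (aerosol offsets held fixed), the tuned
# compensation breaks down and the projections diverge.
future = {name: warming(s, 2 * GHG_FORCING, aerosols[name])
          for name, s in models.items()}
print(future)
```

Both toy "models" report exactly 0.8 °C of historical warming, using aerosol forcings of -0.5 and -1.5 W/m^2 respectively, yet their projections under doubled forcing differ by a full degree: the same arithmetic that underlies the spread in Kiehl's Figure 2.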

      • For logical thinkers the issue of the “fairness” is resolved by placement of this issue in a logical context. This context is provided when “prediction” designates a kind of proposition; a “prediction” then has a probability of being true and thus the value that is assigned to this probability by a model can be tested by comparison to the value of the corresponding relative frequency in a sample drawn randomly from the underlying population and not used in the construction of the model. This process leads to the validation or falsification of the model. The IPCC climate models are insusceptible to validation or falsification because the context that they provide is not logical. In the illogical context that they do supply, IPCC-style “evaluation” is possible but fails to resolve the issue of the fairness.

      • Terry Oldberg:
        You may try to “resolve” the issue of “fairness” to your satisfaction but such a resolution would be meaningless. I am writing to explain why this is in the probably forlorn hope that you will understand.
        Logical thinkers know that “fairness” means whatever its user intends it to mean when s/he uses it. That is why “fairness” is only really useful to sophists and to children in school playgrounds. Also, it is why, as I said, it is not possible to provide any test that “both sides will consider fair”.
        And, as I explained, no such test is possible because if it were then it would not be needed: Kiehl’s work would be sufficient to refute all except at most one unidentified climate model.

        These matters will be obvious to you in the unlikely event that you learn the fundamental principles of logic.
        Richard

      • Terry Oldberg:
        OK. For sake of demonstration, I will assume that you do have some understanding of logic and ask you to show it.
        Please explain what you understand to be a definition of “fairness” that would enable the proposed “test that both sides will consider fair”.
        And while you are about it, at long last please say what you mean by the word “event”.
        Richard

        • richardscourtney:
          Unlike yourself, I would take a logical approach to finding a solution to the problem of the fairness. Logic features statements called “propositions.” I would define the word “prediction” such that it was a kind of a proposition. In logic, a proposition has a probability of being true. Every prediction of a model would have a probability of being true plus a value for this probability.
          A science has a theoretical side and an empirical side. Probabilities lie on the theoretical side. The empirical counterpart of a probability is a relative frequency.
          Relative frequencies are defined by the counts called “frequencies” in a sample that is drawn from a study’s statistical population. When the sample is selected randomly and unused in the construction of the model its relative frequency values provide for a test of the probability values that are asserted by the study’s model. If it passes this test the model is said to be “validated.” Otherwise it is said to be “falsified.”
          Your approach defines “prediction” such that it is not an example of a proposition. In this way you divorce the problem of the fairness from logic. The model can neither be validated nor falsified. However, it can be “evaluated.” Evaluation is a logically nonsensical concept that was invented by the IPCC after Vincent Gray pointed out to IPCC management that its claim to basing its assessments on validated models was false. Though evaluation is logically nonsensical it is the approach that you join the IPCC in favoring.
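
[Editor's note: the validation scheme described in this exchange, comparing a model's asserted probability against the relative frequency in a random sample not used to build the model, can be sketched in a few lines. The data and the acceptance threshold below are invented for illustration.]

```python
import random

# Sketch of validation by relative frequency: the model asserts a probability
# for an event; we compare it with the event's relative frequency in a random
# holdout sample that played no part in building the model. Synthetic data.

random.seed(42)

MODEL_PROBABILITY = 0.7  # the model's asserted probability of the event
TRUE_FREQUENCY = 0.7     # rate generating the holdout sample (unknown to model)
TOLERANCE = 0.05         # invented acceptance threshold for this sketch

# Holdout sample of event outcomes, drawn independently of the model.
sample = [random.random() < TRUE_FREQUENCY for _ in range(10_000)]
relative_frequency = sum(sample) / len(sample)

validated = abs(relative_frequency - MODEL_PROBABILITY) <= TOLERANCE
print(relative_frequency, "validated" if validated else "falsified")
```

A real statistical test would replace the fixed tolerance with a confidence interval, but the logical structure is the same: the asserted probability is a checkable claim only because the holdout sample is random and unused in constructing the model.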

      • Terry Oldberg:
        OK. I understand that reply: it demonstrates
        1. You are unable to provide the requested definitions of what you mean by “fairness” and an “event”.
        2. You don’t know or understand anything that constitutes logic (i.e. reasoning conducted or assessed according to strict principles of validity).
        3. You think verbosity constitutes cogency. But it does not (as you would know if you were capable of logical reasoning).
        Richard

        • richardscourtney:
          That’s an ad hominem argument. Does resort to an obviously fallacious argument signal that you are out of ammunition? If so, the decent thing for you to do is capitulate.

      • Terry Oldberg:
        I strongly commend that you undertake a course in basic logic.
        You will then learn that I have NOT made “an ad hominem argument”.
        I merely pointed out that your irrational bloviation demonstrated your total ignorance of logical principles: it does demonstrate that, and I stated how it demonstrates that.
        Richard

        • richardscourtney:
          According to dictionary.com: “An ad hominem argument is one that relies on personal attacks rather than reason or substance.” Let us examine your argument that
          “I merely pointed out that your irrational bloviation demonstrated your total ignorance of logical principles: it does demonstrate that, and I stated how it demonstrates that.”
          with an eye toward whether it relies on a personal attack rather than reason or substance.
          Not being of the form of a syllogism, this argument cannot rely on reason or substance. Do you provide a point-by-point refutation of my “bloviation”? No. Do you prove my “total ignorance of logical principles”? No. You have reason to believe, actually, that I am knowledgeable enough about logical principles to deliver tutorials about them to audiences of erudite people. Rather than a justified attack on my bad ideas, yours was an unjustified attack on my person.

      • Oldberg:
        Yes, that dictionary definition of ad hominem is correct.
        I did NOT make an ad hom. argument. I listed YOUR demonstrations of YOUR complete ignorance of logical principles. For example, do you deny that you have failed to state what you mean by the words “fairness” and “event” on which you have chosen to pontificate? Pointing out that you have failed to provide those requested definitions is NOT a “personal attack”: it is a statement of fact that your assertions are gibberish because they have no “substance” of any kind when they rely on undefined words.
        And your additional bloviation to which I am replying provides additional demonstration of your ignorance of how to argue logically.
        Your boorish behaviour does you no good and I suggest you stop it.
        Richard

        • richardscourtney:
          As I understand it, you assert that my complete ignorance of logical principles is proved by my failure to respond to your demand for me to provide my personal definitions for two words. It seems to me that this assertion is illogical for there is not a logical way in which the premise that person A failed to respond to person B’s demand for A’s personal definitions of words can yield the conclusion that B is completely ignorant of logical principles. If you can provide proof to the contrary please provide same.

      • Terry Oldberg:
        OK. You are now demonstrating that you are an idiot.
        I listed (indeed, I numbered) three different examples of your ignorance of logical principles that you provided.
        And your nonsense about one of the examples is silly.
        A basic principle of logic is that a person making an argument is required to define the terms he/she is using when requested. No amount of sophistry can hide the fact of your ignorance of that principle without which logical argument is not possible. And no amount of your idiocy can conceal the fact that you have failed to define what you mean by “fairness” and “event”.
        Richard

        • richardscourtney:
          In response to my post of Sept. 30 at 11:19 pm you fail to respond to my request for a proof of the contention that “…my complete ignorance of logical principles is proved by my failure to respond to your demand for me to provide my personal definitions for two words.” Is this because you are unable to prove it? If not, please post the proof.

      • Terry Oldberg:
        Having demonstrated your complete ignorance of logical principles and your idiocy, you now claim you cannot read by writing to me

        In response to my post of Sept. 30 at 11:19 pm you fail to respond to my request for a proof of the contention that “…my complete ignorance of logical principles is proved by my failure to respond to your demand for me to provide my personal definitions for two words.” Is this because you are unable to prove it? If not, please post the proof.

        I here wrote to you saying

        OK. For sake of demonstration, I will assume that you do have some understanding of logic and ask you to show it.
        Please explain what you understand to be a definition of “fairness” that would enable the proposed “test that both sides will consider fair”.
        And while you are about it, at long last please say what you mean by the word “event”.

        Your reply showed you cannot demonstrate ANY understanding of logic and I responded to that by here listing the “proof” of your total ignorance of logical principles which you had provided, and I later here explained one of the listed examples because you claimed you are too thick to understand it.
        I have had enough of your boorish behaviour and I will ignore any more of it.
        Richard

    • They are even worse. The models must be run against the satellite record, for that alone determines if any warming is cogent to CAGW theory.

  13. CMIP5 model predictions of the “hotspot”
    Thanks for highlighting the importance of scientific verification and validation.
    Evidence
    In his May 13, 2015 sworn testimony to Congress, John Christy evaluated 35-year predictions (1979 to 2015) of the latest “improved” CMIP5 models. Their predictions “only” show a 400% error for the “signature” anthropogenic “hotspot” of tropical tropospheric temperatures when checked against objective satellite temperature measurements.
    Methodology
    Evaluations by forecasting experts show that climate modelers violate most of the methodological principles of scientific forecasting; e.g., “Research on forecasting for the manmade global warming alarm”, testimony to the U.S. House Committee on Science, Space, and Technology by Armstrong, J. S., Green, K. C., & Soon, W., Energy and Environment (2011).
    Bias
    Climate modelers further fail to account for very large “Type B” systemic errors in their models (aka the “lemming factor”). See the Guide to the Expression of Uncertainty in Measurement, JCGM 100:2008.
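
[Editor's note: the "400% error" quoted above is, in effect, a trend ratio. A minimal sketch of that arithmetic follows, using round illustrative numbers rather than Christy's exact values.]

```python
# The "400% error" framing: a modeled warming trend expressed as a percentage
# of the observed trend. The trend values below are round illustrative numbers,
# not Christy's exact figures.

def trend_ratio_percent(modeled, observed):
    """Modeled trend as a percentage of the observed trend."""
    return 100.0 * modeled / observed

modeled_trend = 0.4    # deg C/decade, an illustrative model-mean-like value
observed_trend = 0.1   # deg C/decade, an illustrative satellite-like value

# A modeled trend four times the observed one reads as "400%".
print(trend_ratio_percent(modeled_trend, observed_trend))
```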

  14. Two basic errors in all their simulation models are the assumed sensitivity of global temperature to the atmospheric concentration of CO2 and the assumed contribution of anthropogenic emissions to the rise in atmospheric concentrations of CO2. Actual data do not agree.

  15. Looking at the adjusted temperature record, where NOAA and others cool the past and warm the present, is very interesting. Where before the models were tweaked by adjusting the input or other parameters (correctly or not) to force the models to match the actual temperature record, they now appear to be tweaking the temperature record to match the models.

  16. Folks ever see this (author)? Nice conclusion….
    “If this is the case, then we should expect that in the two decades following the phase catastrophe, the world’s mean temperature should be noticeably cooler i.e. the cooling should start in the late 2010s.”

  17. When I did a master’s, our numerical methods course (today it would be called modelling) taught us not to extrapolate numerical solutions to differential equations. One was supposed to use such solutions for interpolation only. A few years ago I met someone who had worked with numerical solutions of equations that modeled nuclear explosions. He said that they had the same rule. What has changed that climate modelers think they can extrapolate based on their models?
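
[Editor's note: the rule against extrapolating numerical solutions can be illustrated with the simplest possible case: forward Euler on y' = y, whose relative error compounds the further the solution is pushed. A toy sketch, not a claim about any climate code.]

```python
import math

# Forward Euler on y' = y, y(0) = 1 (exact solution: e^t), with a fixed step.
# The relative error is modest over a short interval but compounds with range,
# which is one reason numerical solutions are trusted for interpolation only.

H = 0.1  # fixed step size

def euler(f, y0, t_end, steps):
    """Integrate y' = f(y) from t = 0 to t_end with forward Euler."""
    h = t_end / steps
    y = y0
    for _ in range(steps):
        y += h * f(y)
    return y

def rel_error(t_end):
    """Relative error of the Euler solution versus the exact e^t."""
    approx = euler(lambda y: y, 1.0, t_end, round(t_end / H))
    return abs(approx - math.exp(t_end)) / math.exp(t_end)

# Error grows from roughly 5% at t = 1 to roughly 75% at t = 30,
# with the very same step size throughout.
print(rel_error(1.0), rel_error(10.0), rel_error(30.0))
```

The same step size that looks "validated" at short range is badly wrong at long range, and nothing in the short-range fit warned of it.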

  18. Larry
    What are YOUR credentials? If you are NOT a climate scientist with 20+ years of experience, or a math genius with impeccable statistics background, you are unqualified to discuss this subject. That’s what qualified engineers, scientists and mathematicians who are AGW skeptics have been hearing for the past 20 years. Why should anyone on either side listen to you?
    How can you look at the spaghetti plot in your article and think it even remotely matches the “pause” during the past 18+ years? A pause which no one in the AGW crowd predicted or even intimated might happen. All the AGW proponents expected the Earth to be 0.4C warmer today. They were all WRONG!
    It’s the Sun and natural cycles (ENSO, PDO, AMO, etc), not CO2, stupid! When observed data contradicts the models, the models are WRONG. Scrap them and start again! Better yet, eliminate the CO2 function and see how well the models work. You might be surprised!
    Bill

    • William,
      First, I don’t understand the relevance of your points to the simple test I proposed.
      Second, I consulted with several climate scientists when writing this.
      Third, that’s quite the appeal to authority. It’s especially odd given your pronouncement that “it’s the sun, stupid.” It sounds like you believe yourself to be the Pope of Science.

      • Second, I consulted with several climate scientists when writing this (simple test).
        And that was probably your 2nd mistake. The 1st one being in thinking that such a “test” could be created.
        The earth’s climate system is a dynamic system consisting of dozens of interactive variables that are constantly changing from hour to hour, day to day, week to week, month to month, and year to year, and is therefore never repeatable from one year to the next, or one century to the next.
        The only thing that is truly “cyclic” in the natural world is the changing of the equinoxes. Everything else occurs “randomly” due to the interactivity of said variables, and is therefore best described as “emergent phenomena”.

  19. My question: how does one model a dissipative process backward? That is, how is the entropy handled when going against the arrow of time? Reversibility is a property only of non-dissipative problem sets. If anyone ever runs the models backward, they are doing non-physical nonsense, or they are not modeling a sufficient analog of the real climate!
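The point about dissipation and the arrow of time can be shown numerically. The Python sketch below (grid size, step count, and coefficients are arbitrary illustrative choices) integrates the 1D heat equation forward, and then runs the same explicit scheme with the sign of time reversed: the dissipative forward run stays bounded, while the backward run amplifies tiny noise explosively:

```python
import numpy as np

rng = np.random.default_rng(0)

def step(u, alpha):
    # One explicit finite-difference step of du/dt ~ alpha * d2u/dx2
    # with periodic boundaries (alpha folds in dt/dx**2).
    return u + alpha * (np.roll(u, 1) - 2 * u + np.roll(u, -1))

u0 = np.sin(np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False))
u0 = u0 + 1e-6 * rng.standard_normal(64)   # tiny "measurement" noise

forward = u0.copy()
backward = u0.copy()
for _ in range(200):
    forward = step(forward, 0.2)     # dissipative: smooths, stays bounded
    backward = step(backward, -0.2)  # anti-diffusion: the noise explodes

print(f"forward max |u|:  {np.max(np.abs(forward)):.3f}")
print(f"backward max |u|: {np.max(np.abs(backward)):.3e}")
```

This is the standard ill-posedness of the backward heat equation: against the arrow of time, the shortest wavelengths in any noise grow fastest, which is one concrete version of the commenter's objection.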

  20. The models are being tested all the time, e.g., by scientists in China and India, and every time they are found deficient. The various atmospheric and ocean phenomena the models are meant to simulate (e.g., the monsoon) unfold in nature in ways different from how they unfold in the models. And it cannot be any other way: the models are physically and chemically incomplete, and their resolution is too poor to correctly simulate what today has to be called “subgrid physics,” e.g., convection and cloud formation/evolution. Yet this subgrid physics is essential to how weather patterns and climate evolve.

      • “>>> Can you give us some pointers… <<<”
        With pleasure:
        [1] doi:10.1175/JCLI-D-14-00740.1 (April 2015)
        [2] doi:10.1007/s00382-014-2269-3 (July 2015)
        [3] doi:10.1007/s00376-015-4157-0 (August 2015)
        [4] doi:10.1175/JCLI-D-14-00475.1 (April 2015) Here the authors are Americans, but they find severe problems in both CMIP3 and CMIP5 models
        [5] doi:10.1175/JCLI-D-14-00810.1 (April 2015)
        [6] doi:10.1007/s00382-014-2398-8 (May 2015)
        [7] doi:10.1007/s00704-014-1155-6 (April 2015)
        [8] doi:10.1002/2014JD022239 (March 2015)
        [9] doi:10.1007/s00382-014-2229-y (March 2015)
        [10] doi:10.1175/JCLI-D-14-00405.1 (March 2015)

      • Gus,
        Thank you for the citations!
        These (the first 3, at least) evaluate climate models’ ability to simulate weather phenomena (e.g., monsoons, Hadley circulation). I believe (from memory) that the IPCC reports acknowledge that.
        But this kind of criticism has not — and I believe will not — break the logjam. The question is about the key factor: the ability of models to forecast global atmospheric temperatures. IMO that has to be the focus — on the core, not peripheral issues.

  21. There are some folks who think all the changes in climate are driven by ENSO. We know the models can’t do ENSO, which means they would fail right from the start. Now add in ocean oscillations and it gets even worse. In my opinion, until the models can clearly predict ENSO, PDO, and AMO (at a minimum), they are completely worthless.

  22. If the consensus would like to convince people of anything they should abandon all the models and use only observed unfiddled data. Tell the truth, in other words.

  23. Why don’t the projections of each Report start at the existing conditions? It is as if each model run doesn’t recognize the present, but treats some theoretical past – indeed, the “real” situation – as the starting point from which the future builds.
    Some of these projections, then, say that this year is not what we measure, but actually 0.6C warmer than measured.

    • Easy, Douglas. The observed temperatures on the graphs that show a divergence from models are satellite-based. True believers use the globally averaged surface temperatures, which are “adjusted” (notionally: to correct for various sources of error) but (how convenient!!) more or less track the models.
      Because the models have what looks suspiciously like an exponential trend, this fix may not work for much longer (unless the earth cooperates and really does heat up, which appears a bit unlikely to this observer).

  24. Are there truly _no_ climate models which are in the public domain, and hence available for public testing?
    If there’s even just one, can’t we start there and publish the results ourselves?
    This is just software, after all. Run on the right hardware and with the true emissions data, the model’s accuracy or otherwise will become clear.

    • Here you can download GISS Model E:
      http://www.giss.nasa.gov/tools/modelE/
      The current incarnation of the GISS series of coupled atmosphere-ocean models is now available. Called ModelE, it provides the ability to simulate many different configurations of Earth System Models – including interactive atmospheric chemistry, aerosols, carbon cycle and other tracers, as well as the standard atmosphere, ocean, sea ice and land surface components.

    • Mark,
      I can sympathize with your view. However, let’s be generous at this point. When (if) we see the results from the first three assessment reports, then is the time to discuss the latest models.
      But today I don’t see why people consider climate models as sufficient basis for massive public policy changes — even assuming (as I do) that AR5’s WGI is mostly correct. Their major finding, operationally for public policy, is that anthropogenic greenhouse gases are responsible for more than half of the warming since 1950. This describes the past, not the future — and is only given at the 90% confidence level (below the 95% level usually required for science and public policy).
      The case for large bold action rests on the models. Let’s test the models, as a next step.

      • “is that anthropogenic greenhouse gases are responsible for more than half of the warming since 1950.”
        Then please explain how HadSST3 temps dropped from the 1950’s to about 1976 as CO2 was rising.
        And while you’re at it, please explain how SSTs dropped significantly in 2008, and then rebounded.
        It wasn’t CO2. What is the stated reason for the “other” half of the warming since 1950?

      • warrenlb says:
        You say “It wasn’t CO2. Justify your claim, please.”
        How can someone be so completely deluded about how real science works??
        Explaining the basics to warrenlb is like trying to teach a dog trigonometry. He is incapable of learning.
        For rational readers, here is how it works: the one making the conjecture or hypothesis has the onus of convincingly supporting it. But warrenlb is trying to re-frame the method in order to make skeptics prove a negative (prove that “It wasn’t CO2”).
        Skeptics do not have the onus to prove what “it wasn’t”. It is up to the alarmist misinformers to show that CO2 is causing the current global warming. But so far, all they have for an argument is their endless ‘appeal to authority’ logical fallacy, and their measurement-free assertions. And of course, global warming stopped many years ago.
        Dishonest and illogical word games like that appeal to the less bright, who tend to congregate on the alarmist side. But for the others, this is the correct statement:
        “You have made the conjecture that CO2 is the primary cause of global warming, and that it will cause runaway global warming if emissions continue. Justify your claim, please.”
        But they can’t; they’ve never even been able to produce a measurement of AGW — despite many decades of searching. They are convinced that CO2 emissions have a major effect. But they are completely incapable of finding the required evidence. Thus, they fall back on their baseless assertions that ‘it must be because of human CO2 emissions’.
        warrenlb cannot think straight. But for readers who can, that abject failure to find even a single measurement quantifying the fraction of AGW out of all global warming means one of two things:
        1. Either AGW does not exist, or
        2. AGW is such a tiny part of all global warming, which includes the natural recovery of the planet from the Little Ice Age, ocean and solar events, etc., that it is far too minuscule to measure. Since AGW is too tiny to measure, it can be completely disregarded as a non-problem.
        I think #2 is correct. But like everyone else’s opinion, that is not based on quantifiable measurements, because there are no such measurements.
        So dishonest propagandists on the alarmist side try to turn the burden upside down, and place the onus on scientific skeptics — the only honest kind of scientists. That leaves out warrenlb, as we see from his comment. That leaves out the UN/IPCC, too, which also has never been able to measure AGW.
        And that explains why alarmist scientists like Michael Mann will not engage in fair, moderated debates any more. The alarmist scientists have lost every debate held in a neutral venue. Skeptics easily demolished their arguments. That is to be expected, when they use warrenlb’s illogical attempts to convince people that skeptics must prove a negative. Wrong.
        So now alarmist scientists tuck tail and run from debates, relying on their mendacious, anti-science “consensus” arguments instead. We hear anything but honest science from the alarmist contingent, because they lack even the simplest measurements of what they insist must be happening, and Planet Earth is decisively falsifying their claims.

  25. In grade B science fiction movies, when the scientist asked the computer a ridiculous question the computer would answer, “Cannot compute – Insufficient Data”. If only climate models were as sophisticated as computers in grade B sci-fi movies.

  26. It’s a pity that IPCC authors weren’t made to individually submit their predictions for the next 5 / 10 / 15 / 20 / 25 / 30 / 40 / 50 years, so their subsequent

    • (Oops–I got cut off.)
      … subsequent predictions and warnings could be put in context.
      I think it would be a savvy political move for us contrarians to demand that this be done going forward, for IPCC authors of the next AR. In the interim, past IPCC authors and other big-name warmists should be challenged to “put your cards on the table”. Too bad there’s no betting market any more, where they could be challenged to “put up or shut up.”
      I repeat: I appeal to the merchants of doubt, in their lair in Skull Mountain, to get with this program and make “put your cards on the table” our mantra.

      • With the recent developments of our betters calling for the arrest of skeptics, and the Vatican appearing to be revving up the Inquisition-style rhetoric against “non-believers” climate change-wise, IPCC authors should be happy we aren’t going back to the days of killing people for wrong predictions.
        Although these guys seem to be “do as we say, not as we do”, so I can imagine they wouldn’t be upset if this was used against skeptics.

      • I agree heavily with this. If Gore had set up a bet that the Arctic would melt by 2018, we would all be much happier. Even the loser of the bet might be happier – Gore that he was wrong, and me for new opportunities for oil production.
        And the money could be given to charity.

      • Roger Knights,
        The UKMO did this in 2009. Watch the following video and have a good laugh.

        There’s also a website dedicated to spreading the message:
        http://ukclimateprojections.metoffice.gov.uk/
        It contains such gems as: by 2080, in the high emissions scenario, summer mean temperature in the south of England (London) is very unlikely to be greater than the 2009 summer mean temperature + 10°C. Current Jun/Jul/Aug mean is ~19°C, so 2080 likely to be less than 29°C (about the same as New Orleans now). Oddly, for a place next to the sea, in the same timeframe it is very unlikely to be wetter than it is now. Obviously, the boiling hot North Sea is not expected to generate extra-tropical storms and fire them in the direction of the city. What’s not to like? Warmer weather with no downside.

  27. We should have enough accurate data now to back-load temperatures and known natural phenomena against the models and get a realistic estimate of sensitivity. Why hasn’t that been done yet?
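One simple version of the back-loading idea is a regression of observed temperature anomaly on doublings of CO2. The Python sketch below uses entirely synthetic data (the CO2 series, the noise level, and the 2.0 C-per-doubling sensitivity are invented to show the mechanics, not real observations):

```python
import numpy as np

# Synthetic illustration only: all numbers here are made up.
rng = np.random.default_rng(1)
co2 = np.linspace(315.0, 400.0, 60)      # ppm, a notional 60-year span
true_sensitivity = 2.0                   # assumed C per doubling
x = np.log2(co2 / co2[0])                # doublings of CO2 since the start
temp = true_sensitivity * x + 0.05 * rng.standard_normal(60)

# "Back-loading": regress the temperature anomaly on doublings of CO2.
# The fitted slope is the empirical sensitivity estimate.
slope, intercept = np.polyfit(x, temp, 1)
print(f"estimated sensitivity: {slope:.2f} C per doubling")
```

With real data the hard part is exactly what the thread argues about: attributing how much of the anomaly is CO2-driven versus natural variability before the regression is meaningful.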

  28. First we had global warming, then climate change, then weather weirding. Now Greenwire reports we have “climate momentum”. The article gets one thing correct: it is filed under politics.
    http://www.eenews.net/tv/2015/09/25
    “POLITICS:
    “Greenwire’s Chemnick talks climate momentum following papal address
    “The Cutting Edge: Friday, September 25, 2015
    “As Pope Francis continues his U.S. tour, will his remarks on climate affect the tone of discussions in Congress and the momentum heading into this year’s Paris talks? On today’s The Cutting Edge, Greenwire reporter Jean Chemnick discusses the power of the pope following his historic address. She also talks about the growing momentum surrounding this year’s international climate talks in Paris.”

  29. Excuse the Wiki reference for the quote, but it deserves to be repeated on this thread: “All models are wrong, but some are useful”.
    https://en.wikipedia.org/wiki/George_E._P._Box
    “His name is associated with results in statistics such as Box–Jenkins models, Box–Cox transformations, Box–Behnken designs, and others. Box wrote that “essentially, all models are wrong, but some are useful” in his book on response surface methodology with Norman R. Draper.”
    https://en.wikipedia.org/wiki/All_models_are_wrong
    “Since all models are wrong the scientist cannot obtain a “correct” one by excessive elaboration. On the contrary following William of Occam he should seek an economical description of natural phenomena. Just as the ability to devise simple but evocative models is the signature of the great scientist so overelaboration and overparameterization is often the mark of mediocrity.”

  30. Before we challenge the climate change proponents to this test, we need to do a couple things:
    (1) Require that the model runs be compared to actual data, not the “adjusted”, “corrected”, “homogenized” data; and
    (2) Define before any tests are run the criteria (and their values) that will constitute verification.
    If you don’t do at least these two things first, before ever starting a run, the entire effort may well be a waste of money.
    BTW, lest anyone question my credentials, I spent about 20 years of my career verifying (or validating, since the words are so often used interchangeably) moderately complex models of physical systems, based on documented physics and algorithm derivations.
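Point (2), fixing the pass/fail criteria before any run, can be made concrete. The Python sketch below is a hypothetical protocol (the function name, the toy anomaly numbers, and the choice of baseline are all illustrative, not from any actual verification standard): the forecast passes only if it beats a pre-registered naive no-change baseline on RMSE:

```python
import numpy as np

# Hypothetical pre-registered verification criterion; everything here
# is illustrative, not an actual protocol.
def verify(forecast, observed, baseline):
    def rmse(a, b):
        return float(np.sqrt(np.mean((a - b) ** 2)))
    return {
        "forecast_rmse": rmse(forecast, observed),
        "baseline_rmse": rmse(baseline, observed),
        # Criterion fixed BEFORE any run: beat a naive no-change baseline.
        "passes": rmse(forecast, observed) < rmse(baseline, observed),
    }

# Toy anomaly series (degrees C): observations, a model forecast, and a
# "no further warming" persistence baseline.
observed = np.array([0.10, 0.15, 0.12, 0.22, 0.30])
forecast = np.array([0.12, 0.18, 0.20, 0.25, 0.33])
baseline = np.full(5, observed[0])

result = verify(forecast, observed, baseline)
print(result)
```

The design point is the commenter's: the criterion and its threshold are written down first, so the test cannot be redefined after the results are in.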

  31. This has effectively already been done, since the models were run under a “business as usual” scenario, in which CO2 was projected to continue growing at more or less its pace 25, 20 and 15 years ago.
    So the result is known, and the BAU scenario produced temperature forecasts for this decade laughably too high.

    • Lady G,
      “the models were run under a “business as usual” scenario”
      That’s not really correct.
      (1) The model runs shown are run under multiple scenarios. For example, in AR5 there were 4 scenarios used. Which of those lines on the graph were by the “business as usual” scenarios for each assessment report?
      (2) Don’t assume that “business as usual” means a continuation of current emission trends. For example, the RCP8.5 scenario in AR5 is often described as the “business as usual” scenario. That’s not remotely true. For details see “Is our certain fate a coal-burning climate apocalypse? No!
      In fact, none of the 4 RCPs used in AR5 represents a “business as usual” scenario.

      • Correct me if wrong, but in at least those models I’ve studied, three scenarios are run, one of which is without any curbs on emissions, ie BAU. That scenario roughly replicates present levels of CO2, but of course is always way too hot. The other two scenarios assume different levels of CO2 reduction, which hasn’t happened, but even those scenarios still overshoot observed temperatures.

      • Lady G,
        You still have not shown which — if any — of the scenarios used in the first 3 IPCC assessment reports has tracked actual emissions through 2015. The IPCC’s “Emissions Scenarios” report published in 2000 doesn’t help.
        You still have not shown which — if any — of the lines on AR5’s spaghetti graph correspond to models run using actual emissions over the last 25, 20, or 15 years.
        It’s not clear to me what you are attempting to say.

      • Do the models still use 1% annual growth in CO2 ppm for the “business as usual” case? Twenty-year data show growth of only about 0.55%.
        Also, exponential CO2 growth (a constant annual percentage) of any rate, combined with a forcing based on the log of CO2 content, will give a linear temperature increase with time, not an accelerating one as most models show.
        With so much variation between models, the entire basic common approach of these models is wrong. The biggest concern is CO2. Why not just work on its effect, with feedbacks, on average global temperatures, using simple energy-balance models that include balance at the surface?
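The arithmetic claim in the middle paragraph above checks out: with forcing proportional to the log of CO2 and CO2 growing at a constant annual percentage, the implied warming is exactly linear in time, since log2((1+r)^t) = t·log2(1+r). A short Python check (the 3.0 C-per-doubling sensitivity and 1%/yr growth rate are illustrative assumptions, not endorsed values):

```python
import numpy as np

# Assumed toy relation: delta_T = S * log2(C / C0), CO2 growing at a
# constant fraction r per year. S and r are illustrative assumptions.
S, C0, r = 3.0, 280.0, 0.01

years = np.arange(0, 101)
conc = C0 * (1.0 + r) ** years        # exponential concentration growth
delta_T = S * np.log2(conc / C0)      # log-forcing temperature response

# log2((1+r)**t) = t * log2(1+r), so the response is a straight line:
slope = S * np.log2(1.0 + r)
assert np.allclose(delta_T, slope * years)
print(f"constant warming rate: {slope:.4f} C/yr")
```

Any acceleration in modeled warming must therefore come from something other than this simple pairing, e.g., faster-than-exponential emissions scenarios or time-varying feedbacks.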

      • Editor,
        You are welcome!
        The paper identifies a major source of model error using empirical evidence. I doubt if models will be adjusted as a result because CAGW would cease to exist.

  32. I have a better idea. Rather than allowing the climate scientists to rerun their models, adjust parameters, and present us with the NEW results, let’s just identify the FIVE previous model runs that most closely matched the actual emissions and see how well those predictions turned out. There would be NO redo and no chance for additional “under-the-hood” tampering. There is no need to rerun the models; numerous runs have undoubtedly already been performed that come pretty close to matching actual emissions. More than likely this will NOT be done, because those models likely all predict temperatures near the upper end of the spaghetti graph.

  33. The climate models are a complete waste of money and time.
    Models are not data.
    Without data, there is no science.
    Real climate science is done by geologists, and other scientists, who work with real data — objects on the Earth that tell a tale of Earth’s past — not silly computer games making inaccurate predictions of the future.
    The factors that change the climate are not understood.
    Some factors are probably still unknown.
    Climate history studies have identified only two main climate conditions:
    1) Repeating mild warming / cooling cycles, and
    2) Ice sheets growing until they cover much of the Earth, and then melting.
    These climate conditions have only one suspected correlation with CO2, to the best of scientists’ ability to estimate past temperature and CO2 levels:
    (1) It appears that natural (unknown) factors that warm the oceans cause them to release CO2 into the air with a 500 to 1,000-year lag.
    There is no known correlation where rising CO2 leads, or is simultaneous with, global warming, and there is evidence of high CO2 levels in the past with no runaway greenhouse warming.
    Therefore, models based on the assumption that CO2 is “the climate controller” are WRONG, and even if they appeared to be accurate for a decade or two, or even for five decades, that would be nothing but a coincidence — not good science.
    You can never prove today’s climate model “predictions” wrong in your lifetime — you’d have to wait 100 years to “prove” them wrong.
    The climate change “debate” skipped the first step on the assumption ladder by assuming global warming is bad news.
    We’ve had global warming since 1850.
    It has been GOOD news for humans and green plants.
    Another degree or two F. of warming would be even better news.
    Unfortunately you are completely wrong about re-running the models.
    — The models belong in the garbage can.
    — The “scientists” who run them belong on the unemployment line.
    Humans have caused a lot of damage to the environment, and are still causing a lot of damage in Asia.
    But adding CO2 to the air does no damage:
    – It improves plant growth = good news
    – It may cause a small amount of warming = good news
    The only bad news concerning CO2 is smarmy people demonizing the gas, in an effort to
    (1) halt economic growth (hurts the poor the most),
    (2) halt the use of cheap sources of energy (hurts the poor the most),
    (3) halt population growth (affects the poor the most), and
    (4) some want to redistribute wealth as “climate change reparations”, maybe to compensate for damage caused by (1), (2) and (3).
    Free climate blog for non-scientists:
    No ads
    No money for me.
    A public service:
    http://www.elOnionBloggle.blogspot.com

  34. “… re-run the climate models from the first 3 IPCC reports with actual data (from their future): how well did they predict global temperatures? …”.
    =================================
    Why?
    As the anonymous writer says: “… few sensible people trust hindcasts, with their ability to be (even inadvertently) tuned to work …”.
    Unless they change their basic false assumptions about feedbacks etc. the resulting graphs would look the same with ‘armageddon’ postponed a couple of decades.
    The only purpose would be as a face-saving operation.

    • Christopher,
      “Few sensible people trust hindcasts”
      Running the models from the first three ARs with data from after those reports’ publication is not “hindcasting” in the usual sense. It’s a fair test of their predictive ability since the models cannot be tuned to their future. (It is a “hindcast” in the technical sense, as it uses data from our past.)
      “Unless they change …”
      You are confident you know the result. Probably the people from Naked Capitalism reading this post (it was on their daily news today) are equally sure — with the opposite view. Breaking this logjam requires more than both sides shouting with confidence at each other.
      Run the models. The answer will put the debate on a new foundation. We can only guess at what will happen then.

  35. Larry
    I like your proposal of demonstrating the veracity and potential improvement over time of GCMs. I would think that in an ideal world the model creators would be anxious to show off their handiwork. However, I don’t think it is going to happen. For the same reason that a person who owns a car that is ‘all show and no go’ would be reluctant to take it to a drag strip and suffer the embarrassment of demonstrating what everyone suspects, I’m afraid the modelers will not submit themselves to such scrutiny. Also, most climatologists seem unwilling to debate those who critique their work. Therefore, they will deny that there is any need for such oversight. My suspicion is that those most intimately familiar with the models are all too aware of their shortcomings, and to wash their laundry in public could endanger their future funding. In most contracts that are awarded, there are ‘milestones’ and performance specifications written into the contract. That isn’t the case for grant awards. Thus, it is in their best interest to avoid any kind of robust evaluation and continue to make promises they will not be held accountable for.

    • Clyde,
      You go to the very heart of this debate, the interaction of the public with scientists.
      My proposal is directed at the public as much as at climate scientists. Should they not do this test, they can be asked about this in the many forums for debate — and especially in Congress. We will learn much from their reluctance to test their models, perhaps as much as we might from a test of their models.
      We are not passengers in America, critiquing the performances of the crew — like the audience in the cheap seats at a baseball game. We are the crew.

  36. Dr David Evans has found a fault with the original climate model on which all others are based. Over at JoNova’s, this is the first of many posts explaining how the basic model works; in subsequent posts (3 to date) he explains where the physics went wrong. http://joannenova.com.au/2015/09/new-science-1-pushing-the-edge-of-climate-research-back-to-the-new-old-way-of-doing-science/
    Starting from scratch, the second and third posts explain in detail what the climate model is, with full mathematics and equations, before future posts dissect the physics error he has found after 2 years of research.
    Well worth a look. http://joannenova.com.au/2015/09/new-science-2-the-conventional-basic-climate-model-the-engine-of-certain-warming/
    http://joannenova.com.au/2015/09/new-science-3-the-conventional-basic-climate-model-in-full/

  37. Who can seriously defend these models as worthwhile? Junk is junk. They are no good for their purpose as they are all wrong! The propaganda value of them however keeps them alive.
    I wouldn’t trust the people from the same outfits that make the old models with any new models, as they are already prejudiced, unobjective, operating on false premises, and under political pressure to conform to the CO2 paradigm at all costs.
    Their models have already been reality tested as failures. No one in the private sector could get away with such abject failure for so long. Time to run away from those failures.
    It’s time for some competition.

  38. Conclusion
    Re-run the models. Post the results. More recent models presumably will do better, but firm knowledge about performance of the older models will give us useful information for the public policy debate. No matter what the results.

    I do not know whether old computer code is properly archived or not. Is it?
    If it is not, the project is unimplementable.
    If it is, a pointer to an online code repository would be appreciated, complete with timestamps and digital signatures to ensure it was never tampered with in the meantime.

  39. In my profession, there is no way we would let product owners – suppliers – test their own products and trust the results they report. Simply because:
    – it is likely that the product owner will optimize and adjust their product for the test conditions
    – it is likely that product owners would only report results favorable to their product
    – It is unlikely that the product owner would report all results
    Hence results reported by product owners are in general regarded trustworthy. The reported results are not regarded to be sufficient to evaluate capabilities, uncertainties and systematic errors for the products.
    This is why independent test laboratories are used to test and report the results from such testing. This is also why there are international standards for the independent laboratories like the standard:
    “ISO/IEC 17025 “General requirements for the competence of testing and calibration laboratories” is the main ISO standard used by testing and calibration laboratories. In most major countries, ISO/IEC 17025 is the standard for which most labs must hold accreditation in order to be deemed technically competent. In many cases, suppliers and regulatory authorities will not accept test or calibration results from a lab that is not accredited.” (Ref. Wikipedia)
    Intergovernmental Panel on Climate Change is very far from meeting the requirements and guidelines of this standard. Hence, IPCC is very far from meeting the requirements the authorities would impose on an independent party in the industry – to be able to accept the results from such an independent party in important matters. Intergovernmental Panel on Climate Change is nowhere near being qualified, or in position to become, an accredited test laboratory. This is overwhelmingly clear just from the principles governing IPCC.
    To be trustworthy, the models need to be tested against trustworthy empirical data, for conditions they have not been adjusted to match. Such tests need to be performed by an independent party: a party which has no interest whatsoever in the test results, and which is accredited in accordance with international standards for accreditation of testing laboratories.

    • “Hence results reported by product owners are in general regarded trustworthy.”
      Should have been:
      “Hence results reported by product owners are in general not regarded trustworthy.”

    • Science,
      “In my profession, there is no way we would let product owners – suppliers – test their own products and trust the results they report.”
      I agree. Validation by outside experts is an essential precaution for any high-stakes project. It’s the hard-won wisdom of the ages. “Trust but verify.” “Always cut the cards.”

      • EFM says “Validation by outside experts is an essential precaution for any high-stakes project.”
        Which is why we are all waiting with bated breath for the International Temperature Data Review Project (from the Global Warming Policy Foundation) to let us know their review results. Is anyone else looking forward to this?

  40. A few comments.
    First, I’m pretty sure this figure is the one that was in the draft report that was “released” early, and then was removed when the inevitable furor occurred and replaced with one that was less obviously failing. Perhaps I’m wrong, and don’t want to slog through my copy of AR5 to find out, but that’s my recollection.
    Second, the fundamental problem is that THIS figure — even broken down — isn’t right. Each of the lines drawn in the spaghetti graph isn’t “the” output from one of the climate models — it is the average over many runs with slightly perturbed initial conditions from the climate models. The number of runs being averaged is not even controlled model to model. The independence of the models is not assessed — there are something like 7 GISS models out of the 36 that (obviously) share substantial parts of their code, but all this does is weight GISS’s contribution to the final averages and envelopes disproportionately. There are far fewer than 36 independent models represented.
    Third, it isn’t really possible to run the models “with the actual numbers” as they weren’t run in the first place with actual numbers. We don’t have actual numbers to run them WITH. We have no idea what the state of the Earth’s fluid dynamical system is at any given instant in time because our measurements of it are incredibly sparse, even with ARGO and many surface stations. Sampling it in depth in the atmosphere and down to the bottom of the ocean in anything like a uniform or random grid of the whole planetary surface is simply not available and will not be available in the foreseeable future. Finally, what sampling we have omits far too much information. We do not have (for example) CO2 distributions, in depth. We do not have aerosol distributions, in depth, and recent evidence strongly suggests that the contributions of aerosols to cooling have been largely overestimated (and inconsistently estimated) in the GCMs to boot — we could rerun the models with aerosols reduced to something like their current most probable values but this has already been done for selected models and the result was that they produced something like half of the warming after they were adjusted to fit the reference period.
    Total climate sensitivity dropped to pretty much the no-feedback estimate of 1.5 C per doubling, utterly non-catastrophic in outlook, especially given that both ITER and Lockheed-Martin are now claiming that they have licked fusion, with LM promising a working fusion plant in (now) around four years, and ITER claiming that they are going to build a 500 MW facility starting immediately. If either or both of these claims are true, we MIGHT reach 450 ppm or even 500 ppm before electricity made from coal is as silly as whale oil and gaslights. Even if the Bern model is correct — which is still more than a bit contentious and dubious — and the residence time of CO2 in the atmosphere is centuries, that will only be a good thing as we will have restored a healthy amount of carbon dioxide to the atmosphere of a planet balanced on the edge of a cold catastrophe due to CO2 starvation. The low-water mark of CO2 in the Wisconsin glaciation was around 180-190 ppm, just over the point of mass extinction of broad species of plants. At 450 ppm CO2, temperatures would stabilize right about where they are now or perhaps a hair warmer (if a non-stationary process could be said to “stabilize”) and agriculture and the biosphere would retain the substantially boosted growth rate for C3 and some C4 plants, especially temperate zone trees and certain staple food crops.
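As a back-of-envelope check of the figures quoted above: under the standard logarithmic forcing approximation, a 1.5 °C-per-doubling sensitivity implies roughly 1 °C of equilibrium warming at 450 ppm, consistent with the claim that temperatures would stabilize near present values. The 280 ppm pre-industrial baseline used here is my assumption, not stated in the comment.

```python
import math

# Back-of-envelope check of the 1.5 °C-per-doubling figure quoted above,
# using the standard logarithmic forcing approximation.
SENSITIVITY = 1.5   # °C per doubling of CO2 (the no-feedback estimate cited)
C0 = 280.0          # assumed pre-industrial CO2 concentration, ppm

def warming(c_ppm, sensitivity=SENSITIVITY, c0=C0):
    """Equilibrium warming at concentration c_ppm under log forcing."""
    return sensitivity * math.log(c_ppm / c0) / math.log(2.0)

for c in (400, 450, 500):
    print(f"{c} ppm -> {warming(c):.2f} °C above pre-industrial")
```

By construction `warming(560)` is exactly 1.5 °C (one doubling of the assumed baseline).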
    rgb

  41. Mr. Kummer implies that if the models were to be run on the recorded atmospheric CO2 concentration of the past, this would produce “predictions.” This implication is inaccurate and misleading.
    A model that makes predictions has a logical structure that results from the status of a prediction as a kind of proposition. For science, logic is probabilistic. Thus every proposition has a probability of being true. It follows that every prediction has a probability of being true. A model that makes predictions assigns a numerical value to each of these probabilities.
    Science has theoretical and empirical sides. Probabilities belong to the theoretical side. Their empirical counterparts are called “relative frequencies.” Values are assigned to relative frequencies by counting concrete objects called “sampling units” in a sample that is drawn from the population underlying the model. These values provide a check on the values that are assigned to the corresponding probabilities by the model. A model that passes this test in a sample that was not used in the construction of the model is said to be “validated.” Otherwise, it is said to be “falsified.”
    The climate models of yesterday and today possess none of the logic-related attributes that I have described. A consequence is that they are insusceptible to validation. In the parlance of the IPCC they are “evaluated.” The result of the exercise proposed by Mr. Kummer is an “evaluation,” but while this word sounds like “validation” it refers to a process that is non-logical.
    In a logical context a “prediction” is a kind of proposition. If we conform to logic in naming concepts, Mr. Kummer’s “predictions” are not predictions at all.
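The validation procedure Mr. Oldberg describes, comparing a model's claimed probability against the relative frequency counted in a sample not used to build the model, can be sketched as follows. All numbers are invented for illustration; the event and tolerance are hypothetical.

```python
import random

random.seed(1)

# Toy sketch: a model assigns a probability to each prediction; we check it
# against the relative frequency observed in an out-of-sample dataset.
P_MODEL = 0.7  # model's claimed probability that "next period is warmer"

# Hypothetical out-of-sample outcomes, generated here with a true rate of 0.7
outcomes = [random.random() < 0.7 for _ in range(1000)]

relative_frequency = sum(outcomes) / len(outcomes)
validated = abs(relative_frequency - P_MODEL) < 0.05  # crude tolerance

print(f"claimed probability:         {P_MODEL}")
print(f"observed relative frequency: {relative_frequency:.3f}")
print("validated" if validated else "falsified")
```

The essential point is that the test data must not have been used to construct the model; otherwise the comparison tells us nothing.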

    • Terry O.,
      “Mr. Kummer implies that if the models were to be run on the recorded atmospheric CO2 concentration of the past, this would produce “predictions.” This implication is inaccurate and misleading.”
      I give and use the IPCC definitions of these terms. I believe that’s the best way to communicate clearly with the public.
      “Mr. Kummer’s “predictions” are not predictions at all.”
      These are predictions of climate scientists, not mine. I’m recommending a specific use of them as a test to restart the public debate about climate change.

      • Editor of the Fabius Maximus website:
        Thanks for taking the time to reply.
        “Prediction” is usually used in reference to a logical concept in which a “prediction” is an example of a proposition. When the same word is also used in reference to an illogical concept, in which a “prediction” is not an example of a proposition, the result is to lead many people to mistake equivocations for syllogisms, thus drawing false or unproved conclusions from global warming arguments. Drawing false or unproved conclusions from global warming arguments leads to logically unfounded public policy.
        Rather than fostering this state of affairs we can and should oppose it by reserving “prediction” for use in reference to a kind of proposition. Under this usage there are no circumstances in which the global warming models of today make predictions.

  42. I have to wonder why it is so important to you that people who don’t even understand their own work triumph in their glorious struggle to ruin the lives of millions of people they’ve never met. And on the basis of graphical legerdemain no less.

  43. The problem is that those models are vastly outdated. That’s always going to be the problem. Now, if you were to run CURRENT models and they replicate past history from then until now, you’ve got something. Do they? Does anybody have links?

    • Daryl,
      As mentioned in the post, hindcasting — predicting the past — is a first step to validate models, but only a weak one. Unless strict methodological protocols are followed, models tend to be “tuned” to match past results — deliberately or inadvertently.
      Similar problems plague drug testing, hence their use of double-blind trials.
      Predictions are the gold standard of testing. As you note, we can assume (but not know) that current models are better than older ones. But testing older models on new data (i.e., from their future) can give us confidence in newer ones — or reasons to be skeptical.
      Either way, we’ll know more than we do today.
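The tuning hazard mentioned in the reply above can be illustrated with a deliberately over-fitted toy model: a model flexible enough to match the past perfectly (here, an interpolating polynomial) hindcasts flawlessly yet fails badly on data from its future, while a modest model does fine. All data here are invented; this is not climate model output.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy sketch of the tuning problem: "tuned to match past results" is modelled
# as fitting a polynomial with as many parameters as data points.
x_past = np.linspace(0.0, 1.0, 15)      # "past" (normalized time)
x_future = np.linspace(1.05, 1.3, 10)   # "future" (out of sample)
true = lambda x: 0.6 * x                # hypothetical underlying trend
y_past = true(x_past) + rng.normal(0, 0.1, x_past.size)
y_future = true(x_future) + rng.normal(0, 0.1, x_future.size)

def rmse(pred, obs):
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

simple_fit = np.polyfit(x_past, y_past, 1)               # modest 2-parameter model
tuned_fit = np.polyfit(x_past, y_past, x_past.size - 1)  # "tuned" to the past

simple_future_rmse = rmse(np.polyval(simple_fit, x_future), y_future)
tuned_future_rmse = rmse(np.polyval(tuned_fit, x_future), y_future)
tuned_hindcast_rmse = rmse(np.polyval(tuned_fit, x_past), y_past)

print(f"tuned model hindcast RMSE: {tuned_hindcast_rmse:.2e}")  # near zero
print(f"simple model future RMSE:  {simple_future_rmse:.3f}")
print(f"tuned model future RMSE:   {tuned_future_rmse:.3f}")
```

The perfect hindcast is precisely what makes the tuned model worthless as evidence; only the out-of-sample comparison discriminates between the two.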

      • Editor of the Fabius Maximus website:
        You’ve drifted back into equivocating. You could avoid this by making your arguments in a disambiguated language such as the one developed at http://wmbriggs.com/post/7923/ . With use of this language or an equivalent, it can be shown that a prediction is a kind of proposition and that a projection is not. That a prediction is a kind of proposition forges a tie between the associated study and logic. That a projection is not a kind of proposition breaks the tie between the study and logic.
        Models predict but modèles project. Models are susceptible to validation but modèles are insusceptible to it. Modèles are susceptible to evaluation but models are insusceptible to it.
        The IPCC’s “models” are modèles. Thus they are insusceptible to validation and the studies that produced them were divorced from logic.
        It is impossible for a modèle to convey information to a policy maker about the outcomes from his policy decisions. It is possible for a model to convey this information to a policy maker. Thus, it is not currently possible for our climate to be regulated, but it would be possible were climatologists to switch from building modèles to building models.

  44. This is to note for the record that richardscourtney has announced his retirement from debate over the issue of whether I am completely ignorant of logical principles. He has done so without providing the proof that I requested of his contention that I am completely ignorant of these principles. Thus, Courtney’s contention stands as an application of the fallacy of proof by assertion. As it attacks me personally, Courtney’s contention stands also as an application of the ad hominem fallacy. As it defames me, Courtney’s contention is illegal.
