Validation Of A Climate Model Is Mandatory: The Invaluable Work of Dr. Vincent Gray

Guest Opinion: Dr. Tim Ball

Early Awareness

Vincent Gray, M.A., Ph.D. is one of the most effective critics of the Intergovernmental Panel on Climate Change (IPCC) through his NZ Climate Truth Newsletter and other publications. He prefaces comments to the New Zealand Climate Science Coalition as follows.

As an Expert Reviewer for the Intergovernmental Panel on Climate Change for eighteen years, that is to say, from the very beginning, I have submitted thousands of comments to all of the Reports. My comments on the Fourth IPCC Report, all 1,898 of them, are to be found at IPCC (2007) and my opinions of the IPCC are in Gray (2008b).

His most recent publication is “The Global Warming Scam and the Climate Change Super Scam” that builds on his very effective first critique, The Greenhouse Delusion: A Critique of “Climate Change 2001”. We now know that the 2001 Report included the hockey stick and Phil Jones global temperature record, two items of evidence essential to the claim of human causes of global warming. In the summary of that book he notes,

· There are huge uncertainties in the model outputs which are recognized but unmeasured. They are so large that adjustment of model parameters can give model results which fit almost any climate, including one with no warming, and one that cools.

· No model has ever successfully predicted any future climate sequence. Despite this, future “projections” for as far ahead as several hundred years have been presented by the IPCC as plausible future trends, based on largely distorted “storylines”, combined with untested models.

· The IPCC have provided a wealth of scientific information on the climate, but have not established a case that increases in carbon dioxide are causing any harmful effects.

On page 58 of the book, he identifies what is one of the most serious limitations of the computer models.

No computer model has ever been validated. An early draft of Climate Change 95 had a chapter titled “Climate Models – Validation” as a response to my comment that no model had ever been validated. They changed the title to “Climate Models – Evaluation” and changed the word “validation” in the text to “evaluation”, without describing what would need to be done in order to validate a model.

Without a successful validation procedure, no model should be considered to be capable of providing a plausible prediction of future behaviour of the climate.


What is Validation?

The traditional approach to validation involved initializing the model at some point in the past and running it forward to see whether it recreated the known climate that followed. The general term applied was “hindcasting” (sometimes “hindsight forecasting”). There is a major limitation because of the time it takes a computer to recreate the historic conditions. Steve McIntyre at Climate Audit illustrated the problem:

Caspar Ammann said that GCMs (General Circulation Models) took about 1 day of machine time to cover 25 years. On this basis, it is obviously impossible to model the Pliocene-Pleistocene transition (say the last 2 million years) using a GCM as this would take about 219 years of computer time.
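McIntyre's arithmetic is easy to check. A back-of-the-envelope Python sketch, using only the figures quoted above:

```python
# Back-of-the-envelope check of the Ammann/McIntyre estimate:
# ~1 day of machine time per 25 simulated years.
SIM_YEARS_PER_MACHINE_DAY = 25
transition_years = 2_000_000          # Pliocene-Pleistocene transition

machine_days = transition_years / SIM_YEARS_PER_MACHINE_DAY
machine_years = machine_days / 365.25
print(f"{machine_days:,.0f} machine-days = {machine_years:.0f} years")
# → 80,000 machine-days = 219 years
```

The 219-year figure in the quote follows directly.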

Also, models are unable to simulate current or historic conditions because we don’t have accurate knowledge or measures. The IPCC concede this in Chapter 9 of the 2013 Report.

Although crucial, the evaluation of climate models based on past climate observations has some important limitations. By necessity, it is limited to those variables and phenomena for which observations exist.

Proper validation is “crucial” but seriously limited because we don’t know what was going on historically. Modelers reduce the number of variables to circumvent limited computer capacity and the lack of data or knowledge of mechanisms.

However, as O’Keefe and Kueter explain:

As a result, very few full-scale GCM projections are made. Modelers have developed a variety of short cut techniques to allow them to generate more results. Since the accuracy of full GCM runs is unknown, it is not possible to estimate what impact the use of these short cuts has on the quality of model outputs.

One problem is that a variable considered inconsequential currently, may be crucial under different conditions. This problem occurred in soil science when certain minerals, called “trace minerals”, were considered of minor importance and omitted from soil fertility calculations. In the 1970s, the objective was increased yields through massive application of fertilizers. By the early 80s, yields declined despite added fertilizer. Apparently, the plants could not take up fertilizer minerals without some trace minerals. In the case of wheat, it was zinc, which was the catalyst for absorption of the major chemical fertilizers.

It is now a given in the climate debate that an issue or a person attacked by anthropogenic global warming (AGW) advocates is one dealing with the truth. The attack shows they know the truth and are deliberately deflecting from it for political objectives. Skeptical Science is a perfect example, and their attempt to justify validation of the models begins with an attack on Freeman Dyson’s observation that,

“[Models] are full of fudge factors that are fitted to the existing climate, so the models more or less agree with the observed data. But there is no reason to believe that the same fudge factors would give the right behaviour in a world with different chemistry, for example in a world with increased CO2 in the atmosphere.”

They use “reliability” instead of validation and use the term “hindcasting”, but in a different context.

“If a model can correctly predict trends from a starting point somewhere in the past, we could expect it to predict with reasonable certainty what might happen in the future.”

They claim, using their system, that

Models successfully reproduce temperatures since 1900 globally, by land, in the air and the ocean.


Climate models have to be tested to find out if they work. We can’t wait for 30 years to see if a model is any good or not; models are tested against the past, against what we know happened.

It is 25 years since the first IPCC model predictions (projections) and already the lie is exposed in Figure 1.


Source: University of Alabama’s John Christy presentation to the House Committee on Natural Resources on May 15, 2015.

Figure 1

Fudging To Assure Reliability Masquerading As Validation

Attempts at validation during the 120 years of the instrumental period also proved problematic, for the same reasons as for the historical record. A major challenge was the cooling period from 1940 to 1980, because it coincided with the greatest increase in human production of CO2. This contradicted the most basic assumption of the AGW hypothesis, that a CO2 increase causes a temperature increase. Freeman Dyson described the practice, generally known as “tweaking” and discussed in several WUWT articles. It is the practice of covering up and making up evidence designed to maintain the lies that are the computer models.

They sought an explanation in keeping with their philosophy that any anomaly, or now a disruption, is, by default, due to humans. They tweaked the model with human-sourced sulfate, a particulate that blocks sunlight and produces cooling. They applied it until the model output matched the temperature curve. The problem was that after 1980 warming began again, but sulfate levels continued. Everything they do suffers from T. H. Huxley’s truth: “The great tragedy of science, the slaying of a beautiful hypothesis by an ugly fact.”

As Gray explained,

Instead of validation, and the traditional use of mathematical statistics, the models are “evaluated” purely from the opinion of those who have devised them. Such opinions are partisan and biased. They are also nothing more than guesses.


He also points out that in the section titled Model Evaluation of the 2001 Report they write,

We fully recognise that many of the evaluation statements we make contain a degree of subjective scientific perception and may contain much “community” or “personal” knowledge. For example, the very choice of model variables and model processes that are investigated are often based upon the subjective judgment and experience of the modelling community.

The 2013 IPCC Physical Science Basis Report Admits There Is No Validation


Chapter 9 of the 2013 IPCC Report is titled Evaluation of Climate Models. They claim some improvements in the evaluation, but it is still not validation.

Although crucial, the evaluation of climate models based on past climate observations has some important limitations. By necessity, it is limited to those variables and phenomena for which observations exist.

In many cases, the lack or insufficient quality of long-term observations, be it a specific variable, an important process, or a particular region (e.g., polar areas, the upper troposphere/lower stratosphere (UTLS), and the deep ocean), remains an impediment. In addition, owing to observational uncertainties and the presence of internal variability, the observational record against which models are assessed is ‘imperfect’. These limitations can be reduced, but not entirely eliminated, through the use of multiple independent observations of the same variable as well as the use of model ensembles.

The approach to model evaluation taken in the chapter reflects the need for climate models to represent the observed behaviour of past climate as a necessary condition to be considered a viable tool for future projections. This does not, however, provide an answer to the much more difficult question of determining how well a model must agree with observations before projections made with it can be deemed reliable. Since the AR4, there are a few examples of emergent constraints where observations are used to constrain multi-model ensemble projections. These examples, which are discussed further in Section 9.8.3, remain part of an area of active and as yet inconclusive research.

Their Conclusion


Climate models of today are, in principle, better than their predecessors. However, every bit of added complexity, while intended to improve some aspect of simulated climate, also introduces new sources of possible error (e.g., via uncertain parameters) and new interactions between model components that may, if only temporarily, degrade a model’s simulation of other aspects of the climate system. Furthermore, despite the progress that has been made, scientific uncertainty regarding the details of many processes remains.

These quotes are from the Physical Science Basis Report, which means the media and policymakers don’t read them. What they get is a small Box (2.1) on page 56 of the Summary for Policymakers (SPM). It is carefully worded to imply everything is better than it was in AR4. The opening sentence reads,

Improvements in climate models since the IPCC Fourth Assessment Report (AR4) are evident in simulations of continental- scale surface temperature, large-scale precipitation, the monsoon, Arctic sea ice, ocean heat content, some extreme events, the carbon cycle, atmospheric chemistry and aerosols, the effects of stratospheric ozone and the El Niño-Southern Oscillation.

The only thing they concede is that

The simulation of large-scale patterns of precipitation has improved somewhat since the AR4, although models continue to perform less well for precipitation than for surface temperature. Confidence in the representation of processes involving clouds and aerosols remains low.

Ironically, these comments face the same challenge of validation because the reader doesn’t know the starting point. If your model doesn’t work, then “improved somewhat” is meaningless.

All of this confirms the validity of Dr Gray’s comments that validation is mandatory for a climate model and that,

“No computer model has ever been validated.”




Without a successful validation procedure, no model should be considered to be capable of providing a plausible prediction of future behaviour of the climate.


106 thoughts on “Validation Of A Climate Model Is Mandatory: The Invaluable Work of Dr. Vincent Gray”

  1. “The IPCC have provided a wealth of scientific information on the climate” slightly off the subject, but it would be nice if the IPCC morphed into something like the International Panel on Climate Analysis and leveraged their wealth of information for more objective pursuits than anti CO2 politics…

  2. When this is debated with the alarmist groups, they all say that the ice cores are OK, the Mann–UEA work is valid, the tree rings are valid, the geological ones are good. Now the research is presented to indicate that no model is correct.

    Is it not correct that none of the modelling groups will release the data sets, math, locations, test points and other information required to peer review the hypotheses presented?

    • Back when government climate scientists were still honest, ie. before being corrupted by the AGW gravy train, prehistoric tree rings were read and analyzed to infer wet-dry years, not temperature.

      Here is an example of old-school dendrochronology science from a 35 year-old tree ring display at the MesaVerde National Park Main Visitors Museum:

      • Joel, you are right in saying that some trees have rainfall as the factor limiting their growth. However, there are [also] trees located just below the snowline of the coastal Canadian Rocky Mountains that have their growth predominantly limited by temperature. The rings in these trees can be used to establish a reasonably good proxy temperature record. Tree rings can be used as a temperature proxy when care is taken to establish that air temperature is the primary limiting factor for growth.

      • I do not question that T affects ring growth for a given year. The difficulty I find is in disentangling moisture from temperature when soil moisture is the predominant effect on ring growth. How would a wet but slightly cooler spring look different from a not-so-wet but warmer one? And the growth rate is normally higher in the spring, I think. There are too many confounding moisture and water-timing issues to disentangle; getting a T resolvable to w/i +/-3 C just seems ludicrous IMO.

      • Ian Wilson writes “However, there are [also] trees that are located just below the snowline of the coastal Canadian Rocky Mountains that have their growth predominantly limited by temperature. “

        Apart from when it doesn’t and we label it “divergence” you mean?
        Oh look, a squirrel.

      • Another problem: what area of the world has a microclimate that tracks global average temperature year after year for hundreds of years? You need a special “thermometer” tree, AND it must be located in an area of the world that exactly mimics global average temperatures.

    • that have their growth predominantly limited by temperature.
      nope. above freezing temperatures will still not grow trees unless there is liquid water.

  3. We don’t have enough historical data to accurately hindcast, however we do have enough historical data to know what the temperature of the planet was within 0.3C?
    Something doesn’t fit here.

      • They are all like a nest full of baby birds – mouths wide open and squawking for more more more. If you give me more I will deliver the AGW climate change, maybe fact or not, to you. More more more.

      • The output of that nest full of baby birds bears a striking relationship with the output of most climate scientists.

      • “They can read bank statements and count money.”

        I’d like to be able to do that. Please send me some money so that I can practise.

    • Mark, hindcasting requires the model to run FORWARD from a time in the past. The model requires a large set of dynamic inputs other than temperature. I’ve run large dynamic models in my career, but we spun them up by describing a fully static condition, which I assume isn’t very practical for a climate model. I’ve never sat down to quiz a climate modeler, but it seems to me they must spend a bunch of time trying to adjust a modern reanalysis product to conditions in the past (does anybody know what they do to set the initial conditions?)…

      • Contribution from working group I; on the scientific basis; to the fifth assessment report by IPCC
        Chapter 9
        Evaluation of Climate Models
        Box 9.1 (continued)
        “With very few exceptions .. modelling centres do not routinely describe in detail how they tune their models. Therefore the complete list of observational constraints toward which a particular model is tuned is generally not available. However, it is clear that tuning involves trade-offs; this keeps the number of constraints that can be used small and usually focuses on global mean measures related to budgets of energy, mass and momentum. It has been shown for at least one model that the tuning process does not necessarily lead to a single, unique set of parameters for a given model, but that different combinations of parameters can yield equally plausible models. Hence the need for model tuning may increase model uncertainty. There have been recent efforts to develop systematic parameter optimization methods, but owing to model complexity they cannot yet be applied to fully coupled climate models.”
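The tuning non-uniqueness the IPCC describes (“different combinations of parameters can yield equally plausible models”) is easy to illustrate with a toy example. Everything below is invented for illustration: a stand-in “model” with two free parameters tuned against a single observed constraint.

```python
import numpy as np

# Toy illustration of non-unique tuning: a stand-in "model" with two
# free parameters (a, b) tuned to a single observed constraint.
# Many distinct parameter pairs match the target equally well.
target = 3.0                          # hypothetical observed global-mean value

def model_mean(a, b):
    # the observable happens to depend only on the sum a + b
    return a + b

grid = np.linspace(0.0, 3.0, 31)      # candidate parameter values
good_pairs = [(a, b) for a in grid for b in grid
              if abs(model_mean(a, b) - target) < 1e-9]

print(len(good_pairs))                # many "equally plausible" tunings
```

With one constraint and two knobs, an entire line of parameter settings reproduces the target, which is the trade-off the boxed text describes.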

  4. “These limitations can be reduced, but not entirely eliminated, through the use of multiple independent observations of the same variable as well as the use of model ensembles.”

    The average of 70 piles of dog poo is what???

    • but without the models as “future truth” (a completely Orwellian term) on global temperatures, the climate scam collapses.
      And along with it a collapse of the dream of the watermelon Malthusians’ world depopulation and crony capitalists’ dreams of carbon trading schemes. Thus they will undertake whatever means they can get to achieve those ends.

      • Your deep knowledge of the coming climate change catastrophe cult is presented in very interesting words. Concise and correct, although I’d remove the word: “watermelon”, in respect to watermelons.

        “models as future truth” is a brilliant thought.

        The climate change cult is the latest version of the anti-economic growth, anti-capitalism crowd — it’s much easier to promote chronic slow growth-high unemployment socialism … when you never mention socialism, and talk about ‘saving the Earth” instead.

        I will never forgive the #@$%&$ “environmentalists” for killing millions of people by getting DDT banned in the 1970s, allowing malaria to accelerate again — so the poorest, most helpless children in the world died from malaria due to bad science, and the false demonization of DDT.

        The climate change cult is a political / secular religion movement — the climate models, and those smarmy “bribed” by government grants climate modelers who get to play computer games for a living, are just props to allow Democrat politicians to gain more power over the private sector (and indirectly for giving more money to the crony green businessmen who bribe those Democrat politicians with contributions).

        It’s all about money and power — although some smug climate cult members are more interested in telling others how to live — micro-managing what light bulbs they can buy, for one example — and not in it to obtain wealth, like their former climate pope ‘Al Gorlioni’ (now replaced by the real pope).

        Yes, the three best known “scientists” for the climate change cult are Al Gore, the Pope, and Bill Nye the science guy — one of them doesn’t even have a science degree — hard to believe this is not a fictional movie and soon we will wake up and it will be over.
        The climate in 2015 is better than it has been in hundreds of years.
        The increased CO2 in the air is great news for plants.
        Even more CO2 in the air would be better news for plants.
        The slight warming since 1850 was needed, and welcome.
        Slightly more warming would be even better.
        The climate in 2015 is better than when all of us were born.
        We should be happy about the climate in 2015 — I am.
        But the smarmy leftists work hard to make lots of people worry about the climate — they teach children that economic growth is evil, when it really brings people out of poverty
        The world would be much better off if the climate change cult members were shipped to another planet — where they would soon find out the climate / temperature constantly changes there too.
        Their 40+ years of climate scaremongering is a well-financed scam to grab money and power.

        They are working hard to micro-manage your life.

        And they have your children fearing that the Earth is doomed.

        The climate change cult is more than misinformed — they are evil.


    • Contribution from working group I; on the scientific basis; to the fifth assessment report by IPCC:

      “The climate change projections in this report are based on ensembles of climate models. The ensemble mean is a useful quantity to characterize the average response to external forcings, but does not convey any information on the robustness of this response across models, its uncertainty and/or likelihood or its magnitude relative to unforced climate variability.”

      “There is some debate in the literature on how the multi-model ensembles should be interpreted statistically. This and past IPCC reports treat the model spread as some measure of uncertainty, irrespective of the number of models, which implies an ‘indistinguishable’ interpretation.”

      I agree with you – so would IPCC – if they had any scientific integrity at all rather than serving dog poo also in their assessment.

  5. For modern soothsaying, one only needs a wondrous computer simulation to suck in the gullible.
    The science is in, the science tells us, the science is settled.
    Oh yeah, of course finding “the science” has turned out to be similar to hunting the Snark.

  6. And no computer model ever will be validated, there is a reason for that.

    Anyone who claims that meaningful predictions can be made over any significant time period for an effectively infinitely large, open-ended, non-linear, feedback-driven chaotic system (where we don’t know all the feedbacks, and even for the ones we do know, we are unsure of the signs of some critical ones) – hence subject to, inter alia, extreme sensitivity to initial conditions – is either a charlatan or a computer salesman (and it seems we get plenty of both on here).

    Ironically, the first person to point this out was Edward Lorenz – a climate scientist.

    You can add as much computing power as you like, the result is purely to produce the wrong answer faster.

    Note in particular “hence subject to inter alia extreme sensitivity to initial conditions”.
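That sensitivity can be shown directly with Lorenz’s own 1963 system. This is a minimal sketch (simple forward-Euler integration with the standard parameter values), not a climate model: two runs whose starting points differ by one part in a billion end up far apart.

```python
import numpy as np

# Lorenz (1963) system, integrated with simple forward-Euler steps.
# Two trajectories starting one part in a billion apart diverge until
# their separation is comparable to the size of the attractor itself.
def step(s, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([x + dt * sigma * (y - x),
                     y + dt * (x * (rho - z) - y),
                     z + dt * (x * y - beta * z)])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-9, 0.0, 0.0])    # tiny perturbation in x only
for _ in range(5000):                  # 50 time units
    a, b = step(a), step(b)

print(np.linalg.norm(a - b))           # separation is no longer tiny
```

The perturbation is a billion times smaller than the state, yet after 50 time units the two runs are on different parts of the butterfly.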

    • Exactly why not one single AGW/climate change paper has been peer reviewed and raised to the class of theory. The information as presented does not permit other reviewing scientists to recreate the program and to validate or disprove the presentation.

      It is all about the run for the Government Grant money that is funneled through the nonprofit egreens so it can be said that donations finance the University research. Not true and not factual.

    • But doesn’t the total picture come down to one simple linear coefficient, the ECS?
      Nice and simple, and probably why I will never have one of the two Nobel Prizes embedded in the Navier–Stokes equations, where I wasted too much time pursuing what I believed was a complicated nonlinear problem.

    • No, celestial mechanics models have been validated and used for useful computational predictions — Hill’s lunar theory even predates computers and was used for manual computation.

      • “Hill’s lunar theory even predates computers and was used for manual computation.”

        Hill’s lunar theory does not attempt to model a non-linear chaotic system with a very large number of feedbacks, the majority of which are not known, and therefore it is able to be analysed successfully with a high degree of reliability.

        We are discussing attempts to use computer models to predict the climate, a completely different issue entirely. Do try to keep up.

      • Hill’s lunar theory is a computational simplification of the restricted three-body problem, a non-linear mechanical system. The restricted three-body problem is chaotic. The point being that some chaotic systems can be computed over 10’s, even 100’s, of years accurately. The assertion that climate models cannot do so because they are chaotic is questionable, and chaos does not excuse their failures.

        Is that “up” enough for you, catweazle666?

      • Even more important, where are the studies showing the departures of a climate model for nearby initial conditions over time? I’ve not seen that analysis for any climate model. My guess is that such analyses aren’t done, so there is no evaluation of how accurately the initial conditions must be known for useful 10, 20, 50, or 100 year prediction.

      • Oh, dear, teach your grandmother to suck eggs much do you, Philip?

        “Is that “up” enough for you, catweazle666?”

        So your argument is that modelling “a computation simplification to the restricted three body problem” is in some way comparable to creating models of chaotic systems of the order of magnitude of the Earth’s climate for the purpose of making meaningful predictions?

        No, sorry, you’re not even close.

    • catweazle666, it’s obvious that I have nothing to teach you about snide. And since you seem unable to understand basic English but, instead, distorted my message twice in your replies, let me close with only one suggestion — find and study a copy of your professional society’s code of ethics.

      • “catweazle666, it’s obvious that I have nothing to teach you about snide.”

        Actually Philip, it is obvious that you have nothing to teach me about anything – especially non-linear systems, except perhaps spectacularly missing the point, at which you excel.

  7. It’s not just climate models. Somehow in the last 50 years modeling has become central to all the physical sciences, not only as a way of exploring empirical study but as a way to validate hypotheses. It’s just that in most other disciplines there is no green movement with a large bet on the 00 to completely skew the results and “cheat” the evidence. Nothing is ever “discovered” in a model, though it has become chic to say that.

    • Of course. Models can be manipulated to support the agenda of choice and deliver a facade of scientific credibility.

  8. Why can’t they just get their brains around the fact that the evolution of climate is chaotic, and no amount of modelling, computer upgrades and fudging will give a long-term prediction that is guaranteed to match what really happens. Too many dimensions and convolutions to stand a cat in hell’s chance, e.g. increased effective insolation => more cloud => decreased effective insolation.
    Why is this obvious to me but not to the hordes of environmentalists?

  9. Well the IPCC are going to have a hard time finding signs of warming across the northern Atlantic side of the NH if the ‘Arctic blast’ weather pattern makes a habit of turning up in most of the winters in North America. Just watch what a cooling effect that would have on the northern Atlantic. Which, combined with a more zonal, southern-tracking jet stream across the Atlantic and the persistence of low pressure over Northern Russia during the summer months, is just the sort of thing to trigger climate cooling across the Atlantic side of the NH.

  10. Surely the glass isn’t empty.

    Are there any models which are accurately reflecting observations over time?

    • If they do it is due to heavy adjustments:
      “When initialized with states close to the observations, models ‘drift’ towards their imperfect climatology (an estimate of the mean climate), leading to biases in the simulations that depend on the forecast time. The time scale of the drift in the atmosphere and upper ocean is, in most cases, a few years … Biases can be largely removed using empirical techniques a posteriori … The bias correction or adjustment linearly corrects for model drift … The approach assumes that the model bias is stable over the prediction period (from 1960 onward in the CMIP5 experiment). This might not be the case if, for instance, the predicted temperature trend differs from the observed trend … The bias adjustment itself is another important source of uncertainty in climate predictions … There may be nonlinear relationships between the mean state and the anomalies, that are neglected in linear bias adjustment techniques..”
      (Ref: Contribution from working group I; on the scientific basis; to the fifth assessment report by IPCC)

      “Biases can be largely removed using empirical techniques a posteriori”
      A variant of the Texas sharpshooter fallacy – draw the bullseye after the shooting.
      In small scale a fallacy – in large scale more like a fraud.
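The linear drift adjustment described in that AR5 passage can be sketched in a few lines. Everything here is synthetic: a made-up “observed” trend, made-up hindcasts with a built-in drift, and the very assumption AR5 flags, that the bias is stable and linear in lead time.

```python
import numpy as np

# Sketch of a posteriori linear bias ("drift") adjustment, AR5-style.
# All data are synthetic: the truth and hindcasts are invented here.
rng = np.random.default_rng(1)
lead = np.arange(10)                              # forecast lead times 0..9
truth = 0.02 * lead                               # "observed" anomaly trend
# 20 hindcasts that drift away from truth (bias = 0.5 + 0.05*lead):
hindcasts = truth + 0.5 + 0.05 * lead + rng.normal(0, 0.05, (20, 10))

drift = hindcasts.mean(axis=0) - truth            # mean bias at each lead time
slope, intercept = np.polyfit(lead, drift, 1)     # linear fit to the drift
adjusted = hindcasts - (intercept + slope * lead) # linearly corrected forecasts

residual = np.abs(adjusted.mean(axis=0) - truth).max()
print(residual)   # small, but only because the bias really was linear and stable
```

The correction works here only because the synthetic bias was built to be linear and stable; if the real drift changes over the prediction period, as the quoted passage concedes it might, the adjustment leaves an unknown error behind.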

  11. The climate is chaotic on all scales. The warmist reply can be this:

    Easterbrook is in the neighborhood of saying the climate is a boundary values problem. Skeptics can say it’s an initial values problem. With Easterbrook’s graphic, he doesn’t show a bistable system. That kind of system would have at least two sets of boundaries, like the two lobes of the Lorenz butterfly.

    • Climate tipping points (bifurcations) are a favorite talking point for the alarmists. That implies an IVP.

      • I’ve seen them argue, IVP is not a problem as we will end up with higher boundaries for temperatures. So we don’t know the exact temperature at any one time, yet the boundaries will be higher.

      • Climate modelers claim the initial values problem disappears using a combination of climate model spin-up, and then taking anomalies. I.e., they implicitly assume their models are perfect, and only the measured parameters introduce errors.

        The errors are supposed to be constant, linear response theory is assumed to apply to climate models throughout their projection range (never tested, never demonstrated), and projection errors are assumed to subtract away, leaving completely reliable anomaly trends.

        I have most of that nonsense in writing, from the climate modeler reviewers of the manuscript I’ve been trying to publish for, lo, these last 2.5 years. That’s the mental gymnastics they use to rationalize their physically meaningless work; their scientifically vacant careers.

      • Pat Frank wrote: “Climate modelers claim the initial values problem disappears using a combination of climate model spin-up, and then taking anomalies. I.e., they implicitly assume their models are perfect, and only the measured parameters introduce errors.”

        As best I can tell, the opposite is true. Most AOGCMs are run several times using a fixed set of parameters to get some idea of how much initialization conditions influence model output. One run might produce 3.2 degC of warming and a second run 3.5 degC of warming. The error in parameterization is assumed to be covered by combining the output of two dozen models that use different parameterizations, but AR4 acknowledged failing to systematically account for parameter uncertainty in a statistically meaningful way. They called their models an “ensemble of opportunity”. Instead the IPCC uses “expert judgment” (i.e. handwaving) to characterize the 90% CI of all model output as “likely” rather than “very likely”, and thereby correct for uncertainties they can’t assess.
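The “ensemble of opportunity” arithmetic is easy to sketch. The warming values below are invented for illustration; the point is that the spread measures disagreement among the models sampled, not a calibrated probability for the real climate.

```python
import numpy as np

# Minimal sketch of treating multi-model spread as "uncertainty"
# (the AR4 "ensemble of opportunity"). The warming values are invented.
warming = np.array([2.1, 2.8, 3.2, 3.5, 4.1, 2.6, 3.9, 3.0])  # degC, one per model

mean = warming.mean()                   # ensemble mean
spread = warming.std(ddof=1)            # inter-model standard deviation
lo, hi = np.percentile(warming, [5, 95])

print(f"mean {mean:.2f} C, spread {spread:.2f} C, 90% range [{lo:.2f}, {hi:.2f}]")
# This range describes disagreement among the sampled models, not a
# statistically grounded probability for the real climate.
```

Nothing in this calculation depends on any model being right; eight wrong models still produce a mean and a spread.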

    • The warmists have another problem. How long does CO2 stay in the atmosphere? The IPCC is telling us hundreds of years. However, if you look at the amount of CO2 being produced versus what actually ends up in the atmosphere, there is a huge discrepancy. So much so that if we stopped producing CO2, then at the current rate of depletion (plant growth dies at 150 ppm), near-complete depletion could occur in 100 years. Don’t take my word for it; the numbers are readily available from NOAA, along with the historical amounts and rate of increase. Where would the decades-to-centuries claim be a valid argument? For some reason, the largest yearly increase in CO2 is still 1998. That will almost surely change this year since I’ve made it an issue. I’m predicting 4 ppm will be the official number.

    • Indeed, paleoclimatology shows evidence of only two distinct climate regimes, glacial and interglacial. There is no paleoclimatic evidence for a third, higher-temperature regime. It seems likely that the Stefan-Boltzmann relation responds so strongly to raised temperature as to make a significantly higher regime all but impossible. Hence there is zero evidence for “tipping points” to a higher temperature regime, and their supposition by alarmists such as Hansen is unwarranted, unprofessional and irresponsible.

  12. Take away the rent seeking grant money. Take away the government’s ability to issue taxpayers money to the grant mongers. Suddenly the natural climate cycles would be perfectly acceptable, warts and all.

  13. For policy-making purposes, the climate models take CO2 emissions as a significant input.
    CO2 emissions are dependent on the economy (industry).
    So climate models are economic models with vastly more complex extra computer models added on.

    If you think we can model the economy, you’ve missed the last 10 years.
    So how can climate models be reliable?

  14. The Pope’s job in Paris will be to declare by Papal Edict that the IPCC climate models are “immaculate contraptions” dictated by God. “Validation” is against the nature of God and therefore sacrilege; thus the offender must be cleansed by purifying fire after the holy beating by bullwhip.

    For the public viewing of the cleansing events the Vatican will be charging 1000 euros per person with an additional carbon tax of 50 euros per person.

    Ha ha

    • “immaculate contraptions” — great neologism. Let’s set aside April 22 for the Feast of the Immaculate Contraption.

    • The problem with the surface temperature record is that it is not real data. It’s raw data that has been cooked, adjusted, modified, calibrated, and smudged until it shows what the scientists want to see.
      The idea that we know the temperature of the earth to within 5 C, much less to the 0.1 C claimed by the activists, is ludicrous to anyone who has actually studied the subject.

    • The question is how much the surface record has been affected by land-use changes and urban heat islands. It’s one thing to say there is a strong human signal in the temperature record, but quite another to say the changes are due largely to CO2.

    • Oh dear, they seem to have lost something. There was a big cooling event from 1945 to 1975 which brought the temperature down all the way to the level of 1900 – 1910. It was there in early versions of the data. But now it is gone. They must have lost it somehow. Maybe they should go look for it. With any luck they can find it and put it back.

  15. But the global warming – according to the models – should occur in the air. CO2 is a gas on this planet.
    So the satellites, which measure the temperature in the atmosphere, should reflect the models if the models were right. But the models don’t work.

    And as for the land data, that is so corrupted by Urban Heat Islands that it’s contentious to say the least. For example, we had a record temperature here in the UK recently; can you guess where?
    Heathrow by the runway.
    What a coincidence. Or rather, what a contaminated record.

    Stick with the best data. And don’t ignore it unless you have a clear reason to, as we do with the land record.

    • Not just UHI, but also a multitude of micro-site issues, as documented by Anthony and his surface stations project. There are also the numerous station change issues that are both undocumented and unadjusted for.
      Finally, we don’t have anywhere near a sufficient number of sensors to claim any confidence whatsoever, that we know what the “average” temperature of the earth is. Even if we had perfect sensors.

      • Yes. That’s why NOAA created the USCRN.
        If the USCRN had been established in 1950, there would be no debate.
        BTW, some USCRN stations came online in 2002, those show zero warming.
        The completed USCRN was nominally set at 2008; pooling all available data from that network shows a zero anomaly for average US surface temperatures. There are still a few stations being added in Alaska, but those will have no impact on the existing stations. Also, there is no climatic reason to merge Alaska and Hawaii with the continental US.
        The only global temperature estimate that even comes close to what the USCRN will provide is from satellites.

        The overlap of USCRN and global satellite coverage concurs that there has been no warming AT ALL since 2002. This is further confirmed by a few good Antarctic stations that show zero warming since 1958.

      • I believe it was the late John Daly who did a study with CA stations. He merely grouped the stations by the population of the county the sensors were located in. He found a near linear relationship between population density and temperature increase, with the most rural counties showing little to no warming over the length of their record.

    • McC, are you suggesting that a thermometer set in the middle of a large expanse of tarmac, in the suburbs of a huge city, and constantly blasted by the exhausts of hundreds of enormous, kerosene-burning jet engines, might not give a totally accurate reading of the overall temperature of the UK?

      You must be a paid shill for Big Coal.

    • And further to your point, the CO2 greenhouse effect should increase the temperature difference between the lower and the higher layers of the atmosphere. This increase should be observable quite independently from the temperature trends at the surface, or in any individual layer of the atmosphere, and should thus be a much more robust touchstone of man-made global warming. Yet, there seems to be little discussion of this parameter.

  16. Last sentence should include ‘and therefore should not be used to inform policy making’.

  17. The graph showing the (failure of the) IPCC projections should have a vertical line that shows where (in what year) hindcasts end and forecasts begin. That would help the viewer decide whether there is reason to believe the models are mostly built from curve-fitting or from climate physics.

    • The models use both climate physics and curve fitting. The problem is that 1) the curves are poorly fit, 2) the climate physics includes too many unknowns, 3) the models are under-constrained – a given goodness of fit does not provide a unique set of values for the adjustable parameters.

      • Sure, but aren’t the models suspiciously accurate up to circa 1995?
        A cynical person might think most of the models were made in 1995-2000, just by observing how they start failing miserably from then on.

  18. What is the point of validating the models with “hind casting” when they keep rewriting history by modifying the past record?

    • In fact, if they use a data set like HadCrut or GISS with its artificially created warming trends, they will introduce that unrealistic, artificial warming trend into their models.

      They will therefore always end up higher than reality.

    • And if they continue to adjust HadCrut and GISS to create even bigger artificial warming trends, the model divergence from reality will also get bigger..

      Fun ! :-)

    • “What is the point of validating the models with “hind casting” when they keep rewriting history by modifying the past record?”

      That is how they validate the models. They decide what the past should have been, adjust it to fit, and then build that into the models. The models then match the past, which proves that they accurately forecast the future. What could be wrong with that?

  19. My professional work is in an area where we rely on accurate measurements. Uncertainty and systematic errors in the measurement results (estimates, if you like) have great impact. The model-based measurements (hence the models) are based on physics. These are the things we require of a model-based measurement (a quantitative theoretical model) before we rely on it:
    – The theory is about causal relations between quantities that can be measured
    – The measurands are well defined
    – The measurands can be quantified
    – The uncertainty of the measurands has been determined by statistical and / or quantitative analysis
    – The functional relationships, the mechanisms, have been explained in a plausible manner
    – The functional relationships, the mechanisms, have been expressed in mathematical terms
    – The functional relationships between variables and parameters have been combined into a model
    – The influencing variables which have a significant effect on the accuracy of the model are identified
    – The model has been demonstrated to consistently predict outputs within stated uncertainties
    – The model has been demonstrated to consistently predict outputs without significant systematic errors
    – The model has been tested by an independent party under conditions it has not been adjusted to match

    More on the reason behind these requirements here:

    Whenever we have a falsifying experience of some kind, we have to re-establish the reliability of the model by repeating all the relevant steps. When a new measurement principle is proposed, we start from scratch. Even though our area is immensely less complicated than climate modelling, we have a lot of falsifying experiences – normally in the form of significant systematic errors caused by influencing variables or parameters that are not properly accounted for. I never stop being surprised by the errors we uncover in the last three steps.

    The models the IPCC relies on are very far from satisfying these criteria.

    • The actual intended output of the models, through an obscure but well understood mechanism, is funding. In this regard they seem to perform very well, indeed. Of course, such models are much easier to construct without all the rigorous constraints you list imposed on them.

  20. Dear IPCC: What is the optimum/ideal surface temperature? How much variation is acceptable/safe? Why?

  21. I agree with Tim and Vincent.

    Re: false fabricated aerosols data in climate computer models – posts since 2006:

    We’ve known the warmists’ climate models were false alarmist nonsense for a long time.

    As I wrote (above) in 2006:

    “I suspect that both the climate computer models and the input assumptions are not only inadequate, but in some cases key data is completely fabricated – for example, the alleged aerosol data that forces models to show cooling from ~1940 to ~1975…. …the modelers simply invented data to force their models to history-match; then they claimed that their models actually reproduced past climate change quite well; and then they claimed they could therefore understand climate systems well enough to confidently predict future catastrophic warming?”,

    I suggest that my 2006 suspicion had been validated – see also the following from 2009:

    Allan MacRae (03:23:07) 28/06/2009 [excerpt]

    Repeating Hoyt : “In none of these studies were any long-term trends found in aerosols, although volcanic events show up quite clearly.”

    Here is an email just received from Douglas Hoyt [in 2009 – my comments in square brackets]:

    It [aerosol numbers used in climate models] comes from the modelling work of Charlson where total aerosol optical depth is modeled as being proportional to industrial activity.

    [For example, the 1992 paper in Science by Charlson, Hansen et al]

    or [the 2000 letter report to James Baker from Hansen and Ramaswamy]

    where it says [para 2 of covering letter] “aerosols are not measured with an accuracy that allows determination of even the sign of annual or decadal trends of aerosol climate forcing.”

    Let’s turn the question on its head and ask to see the raw measurements of atmospheric transmission that support Charlson.
    Hint: There aren’t any, as the statement from the workshop above confirms.


    There are actual measurements by Hoyt and others that show NO trends in atmospheric aerosols, but volcanic events are clearly evident.

    So Charlson, Hansen et al ignored these inconvenient aerosol measurements and “cooked up” (fabricated) aerosol data that forced their climate models to better conform to the global cooling that was observed pre~1975.

    Voila! Their models could hindcast (model the past) better using this fabricated aerosol data, and therefore must predict the future with accuracy. (NOT)

    That is the evidence of fabrication of the aerosol data used in climate models that (falsely) predict catastrophic humanmade global warming.

    And we are going to spend trillions and cripple our Western economies based on this fabrication of false data, this model cooking, this nonsense?


  22. The traditional definition of validation involved running the model backward to recreate a known climate condition. The general term applied was “hindsight forecasting”.

    Back-testing-style validation is mostly a waste of time. It can only prove a model is wrong; it can’t prove a model is right. I could probably make a model of pink noise where, if I iterate long enough on the random number seed, I can find a curve that matches HadCrut, GISS, or whichever temperature history you pick (don’t forget the release version!). The model would back-test perfectly, but be completely silly.

    In fact, the code is fairly trivial. I’ll just go do it and report back. Might take a couple of hours of compute time…

    “With four parameters I can fit an elephant, and with five I can make him wiggle his trunk” (attributed to John von Neumann). A pink-noise generator has piles of parameters in the random-number state…
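    A minimal sketch of that seed-search trick (Python rather than Octave, AR(1) “red” noise as a crude stand-in for pink noise, and a synthetic trend-plus-cycle series standing in for any real temperature record – all assumptions of this illustration):

```python
# Seed-search "back-testing" sketch: detrend a target series, hunt through
# random seeds for a noise realization that fits the residual, add the
# trend back. The target is synthetic, standing in for an observed record.
import numpy as np

n = 120                      # 120 "years" of a synthetic stand-in record
t = np.arange(n, dtype=float)
target = 0.008 * t + 0.1 * np.sin(2 * np.pi * t / 60.0)  # trend + 60-yr cycle

# (1) fit and remove the linear trend
slope, intercept = np.polyfit(t, target, 1)
residual = target - (slope * t + intercept)

def red_noise(seed, n, phi=0.9, scale=0.05):
    """AR(1) 'reddened' noise, a crude stand-in for pink noise."""
    rng = np.random.default_rng(seed)
    eps = rng.normal(scale=scale, size=n)
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = phi * x[i - 1] + eps[i]
    return x

# (2) iterate on the random seed until some realization fits the residual
best_seed = min(range(2000),
                key=lambda s: np.mean((red_noise(s, n) - residual) ** 2))

# (3) add the trend back: a "model" with no physics that back-tests nicely
model = slope * t + intercept + red_noise(best_seed, n)
rmse = np.sqrt(np.mean((model - target) ** 2))
print(best_seed, rmse)
```

    The winning seed back-tests far better than a typical one, yet encodes no physics whatsoever – which is the whole point.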


    • Yes. My Lotus 1-2-3 model of the global temperature reproduces the past perfectly (it’s nothing more than a simple table). But it’s useless for predicting future temperature.

      Reproducing the past is the old fallacy of “testing on the training data”. The fact that your model can reproduce the training data tells you nothing about whether or not it represents the physical reality correctly.

      • Reproducing the past is the old fallacy of “testing on the training data”.

        One way to address that is to divide the history into two different components: one the training set, the other the test set. Of course, that assumes you have enough data to divide in two. We don’t really have enough data to even use as a training set, period. With ocean oscillations at 60 years we need at least 120, and preferably 240*, years of reliable data just for the training; so we need 480 years of data in total. It doesn’t exist. Also, the anthropogenic CO2 signal is only about 70 years old…

        We’ll start to have enough data in the year 2099, with 120 years of satellite temperature records, all of it under anthropogenic CO2 increases. (Argo started in ~2005, so maybe it should be the year 2125 if the heat wants to hide in the ocean.)

        Humans really have a hard time with the “sorry not enough data” outcome. They’d rather just make things up than be resigned to that outcome…


        * Nyquist requires 2x the period; however, that assumes error-free sampling. With errors, you begin to have something useful at 4x – oscilloscopes, for example, use 4x-8x. Yes, Nyquist is symmetrical: it applies to both the low-frequency and the high-frequency side.
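        The train/test point above is easy to demonstrate with a toy example: a deliberately overparameterized polynomial scores almost perfectly on its training window and fails badly on the held-out continuation (the series here is synthetic by assumption):

```python
# "Testing on the training data": fit an overparameterized polynomial to
# the first half of a synthetic noisy series, then score it on both halves.
# The series (trend + cycle + noise) is an assumption of this illustration.
import numpy as np

rng = np.random.default_rng(42)
x = np.arange(100, dtype=float) / 100.0
series = x + 0.2 * np.sin(2 * np.pi * x / 0.6) + rng.normal(0.0, 0.05, 100)

x_train, x_test = x[:50], x[50:]
coeffs = np.polyfit(x_train, series[:50], deg=12)  # deliberately overfit

train_rmse = np.sqrt(np.mean((np.polyval(coeffs, x_train) - series[:50]) ** 2))
test_rmse = np.sqrt(np.mean((np.polyval(coeffs, x_test) - series[50:]) ** 2))
# near-perfect on the training window, wildly wrong on the held-out half
print(train_rmse, test_rmse)
```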

    • In the spirit of demonstrating absurdity by being absurd, here’s a back-tested pink-noise model of the surface temperature as published by GISS.

      Hey look, it even captures 60 year cycles in it. It must be right!

      Of course this is silly. But it’s just as silly as taking a million lines of code and claiming it can predict the future of climate. In this case I (1) calculated the linear trend and detrended the data, (2) iterated on the 600+-element input state of Octave’s random-normal number generator until I found a sufficiently small root-mean-square error, and (3) added the trend back. It’s actually very few lines of code, but a whole pile of hidden state in the random number generator – about as opaque and hidden as a million lines of grad-student and PhD code. Both have emergent behaviors that can’t be traced directly to physical processes.

      Again, what I’m demonstrating is backtesting only verifies that your model isn’t horribly wrong. It does not prove correctness. Most especially if the training and test set are the same. The pinknoise + trend model here isn’t horribly wrong, however it’s not correct. Just like climate models.

      I’ll be tickled pink if the temperature actually follows the predicted track!


      source code:

  23. There is in fact a 1-D climate model that was verified by millions of observations: the 1976 US Standard Atmosphere, which remains the gold standard today. The hundreds of physicists, physical chemists, meteorologists, rocket scientists, etc. who worked on this massive effort mathematically derived, and verified with millions of observations, the Maxwell/Clausius/Carnot/Feynman gravito-thermal greenhouse effect; they did not use a single radiative-transfer calculation and, furthermore, completely removed CO2 from their physical model of the atmosphere.

    • That model models a vertical slice of the atmosphere; it doesn’t model change over time. Still, it’s cool that simplified physics can sometimes be applied to complex systems and still give a reasonable outcome. It doesn’t usually happen that way.
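    For reference, the lower layers of that standard are simple enough to sketch in a few lines: temperature is piecewise linear in geopotential height, with a 6.5 K/km lapse in the troposphere and an isothermal layer at 216.65 K from 11 to 20 km. The sketch below covers only those first two layers:

```python
# Lowest two layers of the 1976 US Standard Atmosphere temperature profile.
def std_atmosphere_temp_k(h_km):
    """Temperature (K) vs geopotential height (km), 0-20 km only."""
    if h_km <= 11.0:
        return 288.15 - 6.5 * h_km      # troposphere: 6.5 K/km lapse
    if h_km <= 20.0:
        return 216.65                   # lower stratosphere: isothermal
    raise ValueError("this sketch covers 0-20 km only")

print(std_atmosphere_temp_k(0.0))   # 288.15 K (15 C at sea level)
print(std_atmosphere_temp_k(11.0))  # 216.65 K at the tropopause
```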

  24. I always describe computer-based climate modeling as “Double-precision arithmetic operations against estimated data.” Pretty much meaningless.

  25. One of Vincent Gray’s masterpieces is “The triumph of doublespeak – how UNIPCC fools most of the people all of the time” (26 June 2009, ). Vincent recognized before anyone else that the IPCC was using the equivocation fallacy for the purpose of deceiving people and that this strategy had been so successful as to have achieved “the triumph of doublespeak” over logic. “Doublespeak” was a synonym for “equivocation.”

  26. What’s up with Figure 1 showing a 5-year running mean of the average of 2 satellite datasets from 1979 or 1980 to 2014? One of the two satellite datasets started with December 1978 and the other with January 1979, assuming the satellite datasets are the main ones of concern to the global warming debate – the TLT ones by UAH and RSS. If the satellite data is not a 5-year running mean, then why does it not show the 1998 spike as the all-time high, and why does it show a slight warming trend from 1997 onwards that C. M. of B. likes to assert, based on the RSS TLT, is completely lacking?

    • I agree, the graphic does not look correct. In the satellite era 1998 was clearly, and by a large margin, the “warmest year ever”, but the graphic, with apparently yearly data points, does not depict that.

  27. Before trying to validate computational climate models, one has to make them comprehensible, for no incomprehensible complexity can ever be validated.

    Just for the taste of it.

    It is a peculiar property of Keplerian orbits around a star that, as long as the solar constant is indeed constant (it is not), annual average incoming radiation at the ToA (Top of Atmosphere) is exactly the same for the two hemispheres. That is so because of Kepler’s Second Law of Planetary Motion (“A line segment joining a planet and the Sun sweeps out equal areas during equal intervals of time”) – conservation of angular momentum in disguise.

    It says that while the planet proceeds by a small angle along its orbit, the time needed to do so is proportional to the square of its instantaneous distance from the star. At the same time, the incoming radiation flux is inversely proportional to that same quantity. Therefore the integrated incoming radiation is strictly proportional to the angular distance travelled.

    If there were no precession (a negligible effect at first approximation), the equinoxes would be exactly 180 degrees apart along the orbit, so annual insolation is the same for the two hemispheres. Q.E.D.
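    The proof is easy to check numerically: along a Kepler ellipse the time spent per unit of swept angle scales as r^2 (Second Law) while the flux scales as 1/r^2, so the energy received over any arc depends only on the angle swept. A short sketch, using Earth’s eccentricity and relative units:

```python
# Numerical check: energy received per element of swept angle is
# (1/r^2) * (r^2) = 1, so any two half-orbits split 180 degrees apart
# receive equal totals. Eccentricity is Earth's; units are relative.
import numpy as np

e = 0.0167
theta = np.linspace(0.0, 2.0 * np.pi, 200_000, endpoint=False)
r = (1.0 - e**2) / (1.0 + e * np.cos(theta))  # ellipse, focus at the star
flux = 1.0 / r**2          # inverse-square flux at the planet
dt = r**2                  # time per unit swept angle (Second Law)

energy = flux * dt         # identically 1 for every angle element
half = len(theta) // 2
print(energy[:half].sum() / energy[half:].sum())  # ratio is 1: equal halves
```

    Note that the two halves take different amounts of time to traverse; only the integrated insolation is equal.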

    In the case of Earth, the clear-sky albedo of the Southern Hemisphere is much lower than that of the Northern one (by some 6 W per square meter). That’s because the fraction of surface covered by oceans is higher in the Southern Hemisphere (4:5 vs. 2:3), and water is almost black under clear-sky conditions, while land surface is not. In spite of this, annual average reflected shortwave radiation is almost the same for the two hemispheres, the difference being less than 0.1 W per square meter (as observed by the CERES satellites).

    That means absorbed radiation (sunshine) also exhibits a high level of interhemispheric symmetry. That symmetry is brought about by clouds: the Southern Hemisphere has a higher fraction of its surface covered by cloud than the Northern one, and not only that, but the difference cancels the differences in clear-sky albedo nicely.

    The usual explanation is that this symmetry is brought about by the regulated positioning of the ITCZ (InterTropical Convergence Zone) with its bright cloud band. However, that can’t be the full truth, because in general the ITCZ is located in the Northern Hemisphere (about 5 degrees north of the Equator), so even the mid-latitudes would have to be cloudier in the South.

    The mystery is made even deeper by the fact that there is no such symmetry in OLR (Outgoing Longwave Radiation). The observed asymmetry there is an order of magnitude higher: the Northern Hemisphere radiates out 1.2 W per square meter more on average. The difference is accounted for by warm surface-water transport across the equator.

    One can try to understand this in the context of entropy production (EP). The vast majority of entropy production in the terrestrial climate system occurs when incoming shortwave radiation with a high color temperature (5778 K) gets absorbed and thermalized. Compared to this, the entropy increases associated with reflected shortwave radiation and with the conversion of heat to outgoing longwave are small.

    Therefore, if there is a sweet spot for rate of entropy production in the climate system, it could explain such a symmetry.

    The trouble is it’s utterly incomprehensible what’s actually going on.

    There is such a thing as MEPP (Maximum Entropy Production Principle). In climate science this approach was pioneered by Paltridge, but was only applied to internal processes, where rate of entropy production is negligible compared to absorption.

    The principle itself is pretty general, and applies to all reproducible nonequilibrium thermodynamic systems (as shown by Dewar, 2003). However, the climate system is not reproducible, that is, microstates belonging to the same macrostate can evolve to different macrostates in a short time due to its chaotic nature.

    Indeed, we find the rate of entropy production could easily be increased in the climate system by making Earth just a little bit darker, that is, by lowering its albedo. But that does not happen: Earth is not pitch black as seen from the outside, not even close to it.

    The only precondition for maximum entropy production listed by Dewar but missing from the climate system is reproducibility, so chaos must have a profound effect on both albedo and its regulation. Unfortunately, a theoretical treatment of irreproducible nonequilibrium thermodynamic systems is missing, so there is nothing to say about them on theoretical grounds.

    The fact that the observed interhemispheric symmetry in the average rate of entropy production is replicated by no computational climate model is only a minor issue compared to their incomprehensible state.

    Albedo is clearly regulated (otherwise it could not be the same for the two hemispheres), its regulation is chaotic (done by clouds – genuinely chaotic, fractal-like objects), and its set point (~30%) has no theoretical explanation whatsoever.

    Therefore its dependence on changing atmospheric composition is unknown. Until this issue is resolved, it is both premature and pointless to construct sophisticated computational models.

  28. Don’t overlook the recent article

    1. According to Cowtan, the models compute air temperature and hence output air-temperature projections, not land/ocean temperatures, and therefore the model output corresponds with what the satellites are measuring.

    So in any verification test, the validation should be tested against the satellite observations.

    2. However, whilst the models output air temperatures, they were not tuned on the basis of satellite data but rather on a mix of land thermometer and ocean surface temperatures.

    So apples were used as the input, and pears are what is output.

    3. Of course, the land/sea thermometer record has been so bastardised, corrupted and polluted by endless adjustments/homogenisation, station drop-outs, UHI etc, that very bad apples were used in the input/tuning process.

    Given this, and ignoring the problem that we have insufficient knowledge of how the climate works and the problems inherent in non-linear chaotic systems, it is no surprise that the output projections are so far from reality that they are essentially simply cr*p.

  29. I’m wrapping up a summary paper on verification and validation in multiphysics simulations. The main conclusion is that we don’t do either well. Even in the simple cases of two monolithic (single physics) components coupled together, we sometimes lose an order of convergence (this means we need finer and finer grids to get accurate results). Without convergence, we can’t do a good job of verifying the models (verifying is ensuring the mathematics are completed sufficiently well and ideally ends with a high confidence estimate of numerical error associated with the discretization and selected solution method). Without good verification, validation is impossible (validation is characterizing the uncertainty in a simulation’s ability to reproduce reality, so without good experimental or observational data, validation is impossible as well). Then, when we want to predict outside of the validation space (beyond where we have data), the uncertainties rapidly increase.

    With this as the case for simple models, climate models, if they were to use the same metrics as the rest of the modelling community, would have such large error bars on their simulation results that they would be embarrassed to publish them.
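    The verification step described above can be illustrated with a toy problem: estimate the observed order of convergence of a solver from successive grid refinements. Forward Euler on y' = -y should show first order; a coupled multiphysics code that “loses an order” would show a noticeably smaller p instead.

```python
# Toy verification exercise: recover the observed order of convergence of
# forward Euler on y' = -y, y(0) = 1 from three grid refinements.
import math

def euler(n_steps, t_end=1.0):
    h, y = t_end / n_steps, 1.0
    for _ in range(n_steps):
        y += h * (-y)   # forward Euler update for y' = -y
    return y

exact = math.exp(-1.0)
errors = [abs(euler(n) - exact) for n in (100, 200, 400)]

# observed order from successive error ratios under halving of h
p1 = math.log(errors[0] / errors[1], 2)
p2 = math.log(errors[1] / errors[2], 2)
print(p1, p2)  # both close to 1, the theoretical order of forward Euler
```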

    • With this as the case for simple models, climate models, if they were to use the same metrics as the rest of the modelling community, would have such large error bars on their simulation results that they would be embarrassed to publish them.

      Exactly right. Figure 1 here is a representative example of their actual error bars, more details here (2.9 MB pdf). Propagated error bars come to, at least, ±15 C after a projection century.
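      The growth law behind such propagated error bars is simple to sketch. The ±1.5 C per step below is a placeholder chosen purely so the root-sum-square lands on the ±15 C figure quoted above – it is not the paper’s actual forcing uncertainty; the point is that uncorrelated per-step errors compound as the square root of the number of iterations, and correlated ones linearly:

```python
# Placeholder numbers: 1.5 C per annual iteration is chosen so the
# root-sum-square lands on the quoted ±15 C after a century; it is not
# the actual cloud-forcing error from the paper.
import math

step_sigma_c = 1.5   # assumed uncertainty added per annual iteration (C)
years = 100

rss_c = step_sigma_c * math.sqrt(years)  # uncorrelated errors: sqrt growth
linear_c = step_sigma_c * years          # fully correlated worst case
print(rss_c, linear_c)  # 15.0 and 150.0
```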

  30. Despite the “tweaking” and “fine tuning” the computer models are still coming up with nonsense predictions when compared to actual measurements of the temperature. It is now apparent that the warmists have given up on making the models match the temperature and are now adjusting the temperature measurements to match the models.

  31. Hypothesis: The Asymmetry of Evil

    The Asymmetry of Evil proposes that Evil is much more powerful than Good, because Evil, however incompetent, can be swift and devastating in its effect, while Good can only partially mitigate the resulting harm, and only with great skill, effort, cost and time.

    The Asymmetry of Evil is described in the following examples:
    – Any vandal can destroy a great work of art in an instant, which took a genius years to create.
    – Any thug can injure someone in an instant, but our best doctors can only mitigate the harm, and only with effort, cost and time.
    – Any thief can steal a cherished possession in an instant, which the victim took years to earn or to create.
    – Any liar can blurt a falsehood, but it can take years for an honest person to disprove it.

    The Asymmetry of Evil states that any villain can cause great and irreparable harm in an instant, but our best citizens can only mitigate and not fully reverse the harm and can only do so with skill, effort, cost and the passage of time.

    The villain cannot create a great work of art, the villain cannot properly raise a child, the villain is not honest in his or her life, but the villain can damage or destroy the great and the good by his or her acts of evil: destruction, violence, theft and deceit.

    Such is the Asymmetry of Evil – Evil is more powerful than Good – because Evil can be utterly incompetent, yet can cause great and lasting harm in an instant: but Good can only mitigate, and can never fully repair the harm done by Evil, and can only mitigate with effort and cost, over a much longer time.

    Here’s an example, showing how reparations cost 7 million times the cost of doing the evil…

    ONE 9/11 TALLY: $3.3 TRILLION
    By SHAN CARTER and AMANDA COX Published: September 8, 2011
    Al Qaeda spent roughly half a million dollars to destroy the World Trade Center and cripple the Pentagon. What has been the cost to the United States? In a survey of estimates by The New York Times, the answer is $3.3 trillion, or about $7 million for every dollar Al Qaeda spent planning and executing the attacks. While not all of the costs have been borne by the government — and some are still to come — this total equals one-fifth of the current national debt. All figures are shown in today’s dollars.

    Climate science also provides such examples, where nonsensical hypotheses promoting catastrophic humanmade global warming have been proposed by scoundrels and adopted by imbeciles, and have cost society trillions of dollars of squandered scarce resources. Ethical and learned individuals have spent decades disproving falsehoods that were “cooked up” by scoundrels in scant days or weeks, and yet the falsehoods still linger in the press and the public consciousness.

    When the unlimited wealth and power of the State collaborates to do evil, the damage can be enormous and the limited resources of individuals to remediate can easily be overwhelmed by the continued deceit and power of the State.

    In other words Anthony, it’s going to be a long and difficult road.

    I suggest that natural global cooling, which I believe will commence by about 2020, will put an end to global warming alarmist nonsense.

    Regards to all, Allan

    • Allan,

      I offer one comment that might help explain the concept of intentional evil. The money channels are very interesting. Any research funded by public money must be public. So, to keep the results private and secret, the EPA, NOAA, NASA and many other agencies issue grants to the Sierra Club, Greenpeace and the other green nonprofits.

      Those grant monies are then merged with private funds, whereupon the recipients can claim that the basic data, math, methods, equipment and, in the end, the methods of creating the computer models are proprietary property hidden behind nondisclosure agreements. The only reason to do this is so there can be no peer review while they create a desired result. We have all been scammed by a very deceitful group that wants to use climate to control economies and nations.

  32. Determination of local climate is complex but determination of average global climate, i.e. a single average temperature trajectory for the entire planet is simple.

    Proof that CO2 has no effect on climate and identification of the two factors that do cause reported climate change (sunspot number is the only independent variable) are at (new update with 5-year running-average smoothing of measured average global temperature (AGT), the near-perfect explanation of AGT since before 1900; R^2 = 0.97+).

  33. “We can’t wait for 30 years to see if a model is any good or not…”

    Well after 25 years we see that the model’s predictions were worthless.

    Then the authors say, “Climate models of today are, in principle, better than their predecessors.” But upon what do they base this conclusion? It is entirely possible that within 25 years the climate models of today will be just as worthless as the models of 25 years ago.

    Sorry guys, unless someone invents a time machine I don’t think GCMs can ever be validated.

  34. Dr. Tim Ball, Thx for clearing the view.

    Sadly, 21st-century climate models remind one of

    ‘The Turk’, an 18th-century fake chess-playing machine.

    Regards – Hans

  35. Why are those IPCC scientists blind to reality? Why do they remain silent? Why do they not own up and admit the IPCC’s dangerous man-made global warming hypothesis is up the creek without a paddle? When are they ever going to admit that the IPCC is simply wrong?

Comments are closed.