Validation Of A Climate Model Is Mandatory: The Invaluable Work of Dr. Vincent Gray

Guest Opinion: Dr. Tim Ball

Early Awareness

Vincent Gray, M.A., Ph.D., is one of the most effective critics of the Intergovernmental Panel on Climate Change (IPCC) through his NZ Climate Truth Newsletter and other publications. He prefaces his comments to the New Zealand Climate Science Coalition as follows:

As an Expert Reviewer for the Intergovernmental Panel on Climate Change for eighteen years, that is to say from the very beginning, I have submitted thousands of comments to all of the Reports. My comments on the Fourth IPCC Report, all 1,898 of them, are to be found at IPCC (2007), and my opinions of the IPCC are in Gray (2008b).

His most recent publication is “The Global Warming Scam and the Climate Change Super Scam”, which builds on his very effective first critique, The Greenhouse Delusion: A Critique of “Climate Change 2001”. We now know that the 2001 Report included the hockey stick and the Phil Jones global temperature record, two items of evidence essential to the claim of human causes of global warming. In the summary of that book he notes,

· There are huge uncertainties in the model outputs which are recognized but unmeasured. They are so large that adjustment of model parameters can give model results which fit almost any climate, including one with no warming and one that cools.

· No model has ever successfully predicted any future climate sequence. Despite this, future “projections” for as far ahead as several hundred years have been presented by the IPCC as plausible future trends, based on largely distorted “storylines”, combined with untested models.

· The IPCC have provided a wealth of scientific information on the climate, but have not established a case that increases in carbon dioxide are causing any harmful effects.

On page 58 of the book, he identifies one of the most serious limitations of the computer models.

No computer model has ever been validated. An early draft of Climate Change 95 had a Chapter titled “Climate Models – Validation” as a response to my comment that no model has ever been validated. They changed the title to “Climate Model – Evaluation” and changed the word “validation” in the text to “evaluation”, rather than describing what might need to be done in order to validate a model.

Without a successful validation procedure, no model should be considered to be capable of providing a plausible prediction of future behaviour of the climate.

 

What is Validation?

The traditional definition of validation involved running the model over a known period of the past to see whether it could recreate the climate that actually occurred. The general term applied was “hindsight forecasting”, now usually called hindcasting. There is a major limitation: the machine time it takes a computer to recreate the historic conditions. Steve McIntyre at Climateaudit illustrated the problem:

Caspar Ammann said that GCMs (General Circulation Models) took about 1 day of machine time to cover 25 years. On this basis, it is obviously impossible to model the Pliocene-Pleistocene transition (say the last 2 million years) using a GCM as this would take about 219 years of computer time.
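McIntyre’s arithmetic is easy to verify. A minimal back-of-envelope sketch in Python (the one-day-per-25-model-years figure is Ammann’s, as quoted above):

    # Check the machine-time arithmetic quoted above.
    # Assumed figure (Ammann, via McIntyre): 1 day of machine time per 25 model years.
    machine_days_per_model_year = 1 / 25
    span_years = 2_000_000  # Pliocene-Pleistocene transition, roughly the last 2 million years
    machine_days = span_years * machine_days_per_model_year
    print(f"{machine_days:,.0f} machine days = {machine_days / 365.25:,.0f} years of computing")
    # -> 80,000 machine days, about 219 years of continuous computer time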

Also, models are unable to simulate current or historic conditions because we don’t have accurate knowledge or measures. The IPCC concedes this in Chapter 9 of the 2013 Report.

Although crucial, the evaluation of climate models based on past climate observations has some important limitations. By necessity, it is limited to those variables and phenomena for which observations exist.

Proper validation is “crucial” but seriously limited because we don’t know what was going on historically. Modelers reduce the number of variables to work around limited computer capacity and the lack of data or knowledge of mechanisms.

However, as O’Keefe and Kueter explain:

As a result, very few full-scale GCM projections are made. Modelers have developed a variety of short cut techniques to allow them to generate more results. Since the accuracy of full GCM runs is unknown, it is not possible to estimate what impact the use of these short cuts has on the quality of model outputs.

One problem is that a variable considered inconsequential currently, may be crucial under different conditions. This problem occurred in soil science when certain minerals, called “trace minerals”, were considered of minor importance and omitted from soil fertility calculations. In the 1970s, the objective was increased yields through massive application of fertilizers. By the early 80s, yields declined despite added fertilizer. Apparently, the plants could not take up fertilizer minerals without some trace minerals. In the case of wheat, it was zinc, which was the catalyst for absorption of the major chemical fertilizers.

It is now a given in the climate debate that an issue or a person attacked by anthropogenic global warming (AGW) advocates is one dealing with the truth. The attack shows they know the truth and are deliberately deflecting from it for political objectives. Skepticalscience is a perfect example, and their attempt to justify validation of the models begins with an attack on Freeman Dyson’s observation that,

“[Models] are full of fudge factors that are fitted to the existing climate, so the models more or less agree with the observed data. But there is no reason to believe that the same fudge factors would give the right behaviour in a world with different chemistry, for example in a world with increased CO2 in the atmosphere.”

They use “reliability” instead of validation and use the term “hindcasting”, but in a different context.

“If a model can correctly predict trends from a starting point somewhere in the past, we could expect it to predict with reasonable certainty what might happen in the future.”

They claim, using their system that,

Models successfully reproduce temperatures since 1900 globally, by land, in the air and the ocean.

And,

Climate models have to be tested to find out if they work. We can’t wait for 30 years to see if a model is any good or not; models are tested against the past, against what we know happened.
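What “testing against the past” amounts to can be made concrete. A minimal sketch of a hindcast skill check, with invented series standing in for model output and observations (this illustrates the idea, not any group’s actual procedure):

    import numpy as np

    def hindcast_rmse(simulated, observed):
        """Root-mean-square error between a hindcast and the observed record."""
        simulated, observed = np.asarray(simulated), np.asarray(observed)
        return float(np.sqrt(np.mean((simulated - observed) ** 2)))

    # Invented annual temperature anomalies (deg C) over a past decade.
    observed = np.array([0.10, 0.12, 0.08, 0.15, 0.20, 0.18, 0.22, 0.25, 0.21, 0.28])
    hindcast = np.array([0.05, 0.14, 0.10, 0.20, 0.24, 0.25, 0.30, 0.33, 0.30, 0.38])

    print(f"RMSE = {hindcast_rmse(hindcast, observed):.3f} C")
    # Passing such a test is a necessary condition, not a sufficient one:
    # a tuned model can match the past without its physics being right.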

It is 25 years since the first IPCC model predictions (projections) and already the lie is exposed in Figure 1.


Source: University of Alabama’s John Christy presentation to the House Committee on Natural Resources on May 15, 2015.

Figure 1

Fudging To Assure Reliability Masquerading As Validation

Attempts at validation during the 120 years of the instrumental period also proved problematic, for the same reasons as for the historical record. A major challenge was the cooling period from 1940 to 1980, because it coincided with the greatest increase in human production of CO2. This contradicted the most basic assumption of the AGW hypothesis, that a CO2 increase causes a temperature increase. Freeman Dyson described the practice generally known as “tweaking”, discussed in several WUWT articles: the practice of covering up and making up evidence designed to maintain the lies that are the computer models.

They sought an explanation in keeping with their philosophy that any anomaly, or now a disruption, is, by default, due to humans. They tweaked the model with human-sourced sulfate, a particulate that blocks sunlight and produces cooling, and applied it until the model output matched the temperature curve. The problem was that after 1980 warming began again while sulfate levels remained high. Everything they do suffers from the truth of T. H. Huxley: “The great tragedy of science, the slaying of a beautiful hypothesis by an ugly fact.”
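The “tweaking” is, at bottom, curve fitting: choose the size of a poorly measured forcing so that model output matches the observed record. A minimal sketch of the idea, with invented numbers (the series and the fitted parameter are illustrative, not any modelling centre’s actual code):

    import numpy as np

    # Invented series for illustration (deg C anomalies over the 1940-1980 cooling).
    observed   = np.array([0.10, 0.05, 0.00, -0.05, -0.10])
    co2_signal = np.array([0.10, 0.15, 0.20,  0.25,  0.30])  # warming the model produces on its own
    sulfate    = np.array([0.00, 0.10, 0.20,  0.30,  0.40])  # assumed aerosol cooling pattern

    # Choose the sulfate scaling k minimizing the squared error of
    # model = co2_signal - k * sulfate (closed-form least squares).
    residual_target = co2_signal - observed
    k = np.dot(sulfate, residual_target) / np.dot(sulfate, sulfate)
    model = co2_signal - k * sulfate
    print(f"fitted k = {k:.2f}, worst mismatch = {np.abs(model - observed).max():.3f} C")
    # A perfect fit is guaranteed by construction; it says nothing about whether
    # k is physically right, which is exactly Dyson's point about fudge factors.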

As Gray explained,

Instead of validation, and the traditional use of mathematical statistics, the models are “evaluated” purely from the opinion of those who have devised them. Such opinions are partisan and biased. They are also nothing more than guesses.

 

He also points out that in the section titled Model Evaluation of the 2001 Report they write,

We fully recognise that many of the evaluation statements we make contain a degree of subjective scientific perception and may contain much “community” or “personal” knowledge. For example, the very choice of model variables and model processes that are investigated are often based upon the subjective judgment and experience of the modelling community.

The 2013 IPCC Physical Science Basis Report Admits There Is No Validation

 

Chapter 9 of the 2013 IPCC Report is titled Evaluation of Climate Models. They claim some improvements in the evaluation, but it is still not validation.

Although crucial, the evaluation of climate models based on past climate observations has some important limitations. By necessity, it is limited to those variables and phenomena for which observations exist.

In many cases, the lack or insufficient quality of long-term observations, be it a specific variable, an important process, or a particular region (e.g., polar areas, the upper troposphere/lower stratosphere (UTLS), and the deep ocean), remains an impediment. In addition, owing to observational uncertainties and the presence of internal variability, the observational record against which models are assessed is ‘imperfect’. These limitations can be reduced, but not entirely eliminated, through the use of multiple independent observations of the same variable as well as the use of model ensembles.

The approach to model evaluation taken in the chapter reflects the need for climate models to represent the observed behaviour of past climate as a necessary condition to be considered a viable tool for future projections. This does not, however, provide an answer to the much more difficult question of determining how well a model must agree with observations before projections made with it can be deemed reliable. Since the AR4, there are a few examples of emergent constraints where observations are used to constrain multi-model ensemble projections. These examples, which are discussed further in Section 9.8.3, remain part of an area of active and as yet inconclusive research.
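What a model ensemble buys, and what it does not, can be shown in a few lines. A minimal sketch with five invented projections (the numbers stand in for no actual models):

    import numpy as np

    # Invented end-of-century warming projections (deg C) from five hypothetical models.
    ensemble = np.array([1.8, 2.4, 2.9, 3.3, 4.1])

    mean = ensemble.mean()          # the headline "multi-model mean"
    spread = ensemble.std(ddof=1)   # spread across models, often read as uncertainty
    print(f"ensemble mean = {mean:.2f} C, spread (1 sigma) = {spread:.2f} C")
    # The spread measures disagreement among models, not the distance of any
    # model from reality, which is the validation question.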

Their Conclusion

 

Climate models of today are, in principle, better than their predecessors. However, every bit of added complexity, while intended to improve some aspect of simulated climate, also introduces new sources of possible error (e.g., via uncertain parameters) and new interactions between model components that may, if only temporarily, degrade a model’s simulation of other aspects of the climate system. Furthermore, despite the progress that has been made, scientific uncertainty regarding the details of many processes remains.

These quotes are from the Physical Science Basis Report, which means the media and policymakers don’t read them. What they get is a small Box (2.1) on page 56 of the Summary for Policymakers (SPM). It is carefully worded to imply everything is better than it was in AR4. The opening sentence reads,

Improvements in climate models since the IPCC Fourth Assessment Report (AR4) are evident in simulations of continental-scale surface temperature, large-scale precipitation, the monsoon, Arctic sea ice, ocean heat content, some extreme events, the carbon cycle, atmospheric chemistry and aerosols, the effects of stratospheric ozone and the El Niño-Southern Oscillation.

The only thing they concede is that

The simulation of large-scale patterns of precipitation has improved somewhat since the AR4, although models continue to perform less well for precipitation than for surface temperature. Confidence in the representation of processes involving clouds and aerosols remains low.

Ironically, these comments face the same challenge of validation because the reader doesn’t know the starting point. If your model doesn’t work, then “improved somewhat” is meaningless.

All of this confirms the validity of Dr Gray’s comments that validation is mandatory for a climate model and that,

No computer model has ever been validated.

 

And

 

Without a successful validation procedure, no model should be considered to be capable of providing a plausible prediction of future behaviour of the climate.

Comments
Another Scott
August 8, 2015 11:47 am

“The IPCC have provided a wealth of scientific information on the climate” slightly off the subject, but it would be nice if the IPCC morphed into something like the International Panel on Climate Analysis and leveraged their wealth of information for more objective pursuits than anti CO2 politics…

August 8, 2015 11:48 am

When this is debated with the alarmist groups they all say that the ice cores are OK, the Mann – UEA work is valid, the tree rings are valid, the geological ones are good. Now the research is presented to indicate that no model is correct.
Is it not correct that none of the computer modeling groups will release the data sets, math, locations, test points and other information required to peer review the hypothesis presented?

Joel O'Bryan
Reply to  profitup10
August 8, 2015 4:38 pm

Back when government climate scientists were still honest, i.e. before being corrupted by the AGW gravy train, prehistoric tree rings were read and analyzed to infer wet-dry years, not temperature.
Here is an example of old-school dendrochronology science from a 35 year-old tree ring display at the MesaVerde National Park Main Visitors Museum:
http://i60.tinypic.com/24v4siq.jpg

Ian Wilson
Reply to  Joel O'Bryan
August 8, 2015 7:01 pm

Joel, you are right in saying that for some trees rainfall is the factor limiting their growth. However, there are [also] trees located just below the snowline of the coastal Canadian Rocky Mountains that have their growth predominantly limited by temperature. The rings in these trees can be used to establish a reasonably good proxy temperature record. Tree rings can be used as a temperature proxy when care is taken to establish that air temperature is the primary limiting factor for growth.

Reply to  Joel O'Bryan
August 9, 2015 11:29 am

Ian, let’s see your physical theory that will convert tree ring metrics into a temperature.

Reply to  Joel O'Bryan
August 9, 2015 11:30 am

And, by the way, the physical theory that demonstrates the case of temperature-limited growth.

Reply to  Joel O'Bryan
August 9, 2015 3:39 pm

I do not question that T affects ring growth for a given year. The difficulty I find is in disentangling moisture from temperature when soil moisture is the predominant effect on ring growth. How would a wet but slightly cooler spring look different from a not-so-wet but warmer spring? And the growth rate is normally higher in the spring, I think. There are too many confounding moisture and water-timing issues to disentangle; getting a T resolvable to within +/-3 C just seems ludicrous IMO.

Reply to  Joel O'Bryan
August 9, 2015 8:56 pm

Ian Wilson writes “However, there are [also] trees that are located just below the snowline of the coastal Canadian Rocky Mountains that have their growth predominantly limited by temperature. “

Apart from when it doesn’t and we label it “divergence” you mean?
Oh look, a squirrel.

Reply to  Joel O'Bryan
August 9, 2015 9:12 pm

Another problem: what area of the world has a microclimate that tracks global average temperature year after year for hundreds of years? You need a special “thermometer” tree, AND it must be located in an area of the world that exactly mimics global average temperatures.

ferdberple
Reply to  profitup10
August 9, 2015 6:38 pm

that have their growth predominantly limited by temperature.
=====================
Nope. Above-freezing temperatures will still not grow trees unless there is liquid water.

MarkW
August 8, 2015 11:50 am

We don’t have enough historical data to accurately hindcast, however we do have enough historical data to know what the temperature of the planet was within 0.3C?
Something doesn’t fit here.

Reply to  MarkW
August 8, 2015 11:57 am

Alarmist and Grant Scientists cannot do basic math. WAG theory is at work here.

Non Nomen
Reply to  profitup10
August 8, 2015 1:33 pm

They can read bank statements and count money. That’s enough basics.

Reply to  Non Nomen
August 8, 2015 1:39 pm

They are all like a nest full of baby birds – mouths wide open and squawking for more more more. If you give me more I will deliver the AGW climate change, maybe fact or not, to you. More more more.

MarkW
Reply to  profitup10
August 8, 2015 1:51 pm

The output of that nest full of baby birds bears a striking relationship with the output of most climate scientists.

RoHa
Reply to  profitup10
August 9, 2015 2:49 am

“They can read bank statements and count money.”
I’d like to be able to do that. Please send me some money so that I can practise.

Reply to  MarkW
August 8, 2015 2:16 pm

Mark, hindcasting requires the model to run FORWARD from a time in the past. The model requires a large set of dynamic inputs other than temperature. I’ve run large dynamic models in my career, but we spun them up by describing a fully static condition, which I assume isn’t very practical for a climate model. I’ve never sat down to quiz a climate modeler, but it seems to me they must spend a bunch of time trying to adjust a modern reanalysis product to conditions in the past (does anybody know what they do to set the initial conditions?)…

Science or Fiction
Reply to  Fernando Leanme
August 8, 2015 2:55 pm

Contribution from working group I; on the scientific basis; to the fifth assessment report by IPCC
Chapter 9
Evaluation of Climate Models
Box 9.1 (continued)
“With very few exceptions .. modelling centres do not routinely describe in detail how they tune their models. Therefore the complete list of observational constraints toward which a particular model is tuned is generally not available. However, it is clear that tuning involves trade-offs; this keeps the number of constraints that can be used small and usually focuses on global mean measures related to budgets of energy, mass and momentum. It has been shown for at least one model that the tuning process does not necessarily lead to a single, unique set of parameters for a given model, but that different combinations of parameters can yield equally plausible models. Hence the need for model tuning may increase model uncertainty. There have been recent efforts to develop systematic parameter optimization methods, but owing to model complexity they cannot yet be applied to fully coupled climate models.”
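The non-uniqueness the Box describes, where “different combinations of parameters can yield equally plausible models”, can be shown in miniature. A toy sketch with invented numbers; because historical greenhouse forcing and industrial aerosol cooling rose roughly in step, quite different parameter pairs fit the same record:

    import numpy as np

    def toy_model(sensitivity, aerosol_factor, forcing, cooling):
        """Toy response: sensitivity * forcing minus aerosol_factor * cooling."""
        return sensitivity * forcing - aerosol_factor * cooling

    forcing = np.array([0.5, 1.0, 1.5, 2.0])  # invented forcing series
    cooling = np.array([0.2, 0.4, 0.6, 0.8])  # invented aerosol series (proportional to forcing)
    target  = np.array([0.3, 0.6, 0.9, 1.2])  # invented "observed" response

    # Two very different parameter pairs that match the target equally well:
    for s, a in [(0.8, 0.5), (1.2, 1.5)]:
        fit = toy_model(s, a, forcing, cooling)
        print(f"sensitivity={s}, aerosol_factor={a}: max error = {np.abs(fit - target).max():.3f}")
    # Both reproduce the past exactly; they diverge as soon as forcing and
    # cooling stop moving in step, i.e., in the projections.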

August 8, 2015 11:52 am

“These limitations can be reduced, but not entirely eliminated, through the use of multiple independent observations of the same variable as well as the use of model ensembles.”

The average of 70 piles of dog poo is what???

Reply to  Joel O’Bryan
August 8, 2015 12:06 pm

but without the models as “future truth” (a completely Orwellian term) on global temperatures, the climate scam collapses.
And along with it collapse the watermelon Malthusians’ dream of world depopulation and the crony capitalists’ dreams of carbon trading schemes. Thus they will use whatever means they can to achieve those ends.

Reply to  Joel O’Bryan
August 9, 2015 10:49 am

Your deep knowledge of the coming climate change catastrophe cult is presented in very interesting words. Concise and correct, although I’d remove the word: “watermelon”, in respect to watermelons.
“models as future truth” is a brilliant thought.
The climate change cult is the latest version of the anti-economic growth, anti-capitalism crowd — it’s much easier to promote chronic slow growth-high unemployment socialism … when you never mention socialism, and talk about ‘saving the Earth” instead.
I will never forgive the #$%&$ “environmentalists” for killing millions of people by getting DDT banned in the 1970s, allowing malaria to accelerate again — so the poorest, most helpless children in the world died from malaria due to bad science, and the false demonization of DDT.
The climate change cult is a political / secular religion movement — the climate models, and those smarmy “bribed” by government grants climate modelers who get to play computer games for a living, are just props to allow Democrat politicians to gain more power over the private sector (and indirectly for giving more money to the crony green businessmen who bribe those Democrat politicians with contributions).
It’s all about money and power — although some smug climate cult members are more interested in telling others how to live — micro-managing what light bulbs they can buy, for one example — and not in it to obtain wealth, like their former climate pope ‘Al Gorlioni’ (now replaced by the real pope).
Yes, the three best known “scientists” for the climate change cult are Al Gore, the Pope, and Bill Nye the science guy — one of them doesn’t even have a science degree — hard to believe this is not a fictional movie and soon we will wake up and it will be over.
.
The climate in 2015 is better than it has been in hundreds of years.
.
The increased CO2 in the air is great news for plants.
.
Even more CO2 in the air would be better news for plants.
.
The slight warming since 1850 was needed, and welcome.
.
Slightly more warming would be even better.
.
The climate in 2015 is better than when all of us were born.
.
We should be happy about the climate in 2015 — I am.
.
But the smarmy leftists work hard to make lots of people worry about the climate — they teach children that economic growth is evil, when it really brings people out of poverty
.
The world would be much better off if the climate change cult members were shipped to another planet — where they would soon find out the climate / temperature constantly changes there too.
.
Their 40+ years of climate scaremongering is a well-financed scam to grab money and power.
They are working hard to micro-manage your life.
And they have your children fearing that the Earth is doomed.
The climate change cult is more than misinformed — they are evil.
Climate change blog for non-scientists:
http://www.elOnionBloggle.blogspot.com

mike restin
Reply to  Joel O’Bryan
August 8, 2015 12:33 pm

Art?

Evan Jones
Editor
Reply to  Joel O’Bryan
August 8, 2015 1:39 pm

With or without the p factor?

eyesonu
Reply to  Joel O’Bryan
August 8, 2015 2:15 pm

Average amount of food source of dung beetles?

Science or Fiction
Reply to  Joel O’Bryan
August 8, 2015 3:09 pm

Contribution from working group I; on the scientific basis; to the fifth assessment report by IPCC:
“The climate change projections in this report are based on ensembles of climate models. The ensemble mean is a useful quantity to characterize the average response to external forcings, but does not convey any information on the robustness of this response across models, its uncertainty and/or likelihood or its magnitude relative to unforced climate variability.”
“There is some debate in the literature on how the multi-model ensembles should be interpreted statistically. This and past IPCC reports treat the model spread as some measure of uncertainty, irrespective of the number of models, which implies an ‘indistinguishable’ interpretation.”
I agree with you – and so would the IPCC, if they had any scientific integrity at all, rather than serving dog poo in their assessment too.

August 8, 2015 12:00 pm

For modern soothsaying, one only needs a wondrous computer simulation to suck in the gullible.
The science is in, the science tells us, the science is settled.
Oh yeah, of course finding “the science” has turned out to be similar to hunting the Snark.

cnxtim
August 8, 2015 12:02 pm

Validating a lie? Now there is a concept..

Evan Jones
Editor
Reply to  cnxtim
August 8, 2015 1:40 pm

It is a concept called politics.

catweazle666
August 8, 2015 12:09 pm

And no computer model ever will be validated; there is a reason for that.
Anyone who claims that meaningful predictions can be made over any significant time period for an effectively infinitely large, open-ended, non-linear, feedback-driven chaotic system (where we don’t know all the feedbacks, and even for the ones we do know, we are unsure of the signs of some critical ones) – hence subject, inter alia, to extreme sensitivity to initial conditions – is either a charlatan or a computer salesman (and it seems we get plenty of both on here).
Ironically, the first person to point this out was Edward Lorenz – a climate scientist.
You can add as much computing power as you like; the result is purely to produce the wrong answer faster.
Note in particular “hence subject, inter alia, to extreme sensitivity to initial conditions”.
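That sensitivity is easy to reproduce. A minimal sketch integrating Lorenz’s 1963 system from two starting states that differ by one part in a million (standard parameters, crude Euler stepping, purely to show the divergence):

    import numpy as np

    def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        """One Euler step of the Lorenz 1963 system."""
        x, y, z = state
        deriv = np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
        return state + dt * deriv

    a = np.array([1.0, 1.0, 1.0])
    b = a + np.array([1e-6, 0.0, 0.0])  # perturbed by one part in a million

    for step in range(1, 3001):
        a, b = lorenz_step(a), lorenz_step(b)
        if step % 1000 == 0:
            print(f"t = {step * 0.01:5.1f}  separation = {np.linalg.norm(a - b):.3e}")
    # The separation grows roughly exponentially until it saturates at the
    # size of the attractor: the forecast has lost all memory of its start.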

Reply to  catweazle666
August 8, 2015 12:15 pm

Exactly why not one single AGW / climate change paper has been peer reviewed and raised to the class of theory: the information as presented does not permit other reviewing scientists to recreate the program and to validate or disprove the presentation.
It is all about the run for the Government Grant money that is funneled through the nonprofit egreens, so it can be said that donations finance the University research. Not true and not factual.

RT
Reply to  catweazle666
August 8, 2015 2:37 pm

But doesn’t the total picture come down to one simple linear coefficient, the ECS?
Nice and simple, and probably why I will never have one of the two Nobel prizes embedded in the Navier–Stokes equations, where I wasted too much time pursuing what I believed was a complicated nonlinear problem.

Alan Robertson
Reply to  catweazle666
August 8, 2015 2:39 pm

Models never get it right
and never will, you see
This truth’s been known
for all these years
Since Lorenz ’63
http://journals.ametsoc.org/doi/pdf/10.1175/1520-0469%281963%29020%3C0130%3ADNF%3E2.0.CO%3B2

Reply to  catweazle666
August 8, 2015 3:52 pm

No, celestial mechanics models have been validated and used for useful computational predictions — Hill’s lunar theory even predates computers and was used for manual computation.

catweazle666
Reply to  Philip Lee
August 8, 2015 5:13 pm

“Hill’s lunar theory even predates computers and was used for manual computation.”
Hill’s lunar theory does not attempt to model a non-linear chaotic system with a very large number of feedbacks, the majority of which are not known, and it is therefore able to be analysed successfully with a high degree of reliability.
We are discussing attempts to use computer models to predict the climate, a completely different issue entirely. Do try to keep up.

Reply to  Philip Lee
August 11, 2015 12:11 pm

Hill’s lunar theory is a computational simplification of the restricted three-body problem, a non-linear mechanical system. The restricted three-body problem is chaotic. The point being that some chaotic systems can be computed accurately over tens, even hundreds, of years. The assertion that climate models cannot do so because they are chaotic is questionable, and chaos does not excuse their failures.
Is that “up” enough for you, catweazle666?

Reply to  Philip Lee
August 11, 2015 12:47 pm

Even more important, where are the studies showing the departures of a climate model for nearby initial conditions over time? I’ve not seen that analysis for any climate model. My guess is that such analyses aren’t done, so there is no evaluation of how accurately the initial conditions must be known for useful 10, 20, 50, or 100 year prediction.

catweazle666
Reply to  Philip Lee
August 11, 2015 12:49 pm

Oh, dear, teach your grandmother to suck eggs much do you, Philip?
“Is that “up” enough for you, catweazle666?”
So your argument is that modelling “a computation simplification to the restricted three body problem” is in some way comparable to creating models of chaotic systems of the order of magnitude of the Earth’s climate for the purpose of making meaningful predictions?
No, sorry, you’re not even close.

Reply to  catweazle666
August 11, 2015 9:58 pm

catweazle666, it’s obvious that I have nothing to teach you about snide. And since you seem unable to understand basic English but, instead, distorted my message twice in your replies, let me close with only one suggestion — find and study a copy of your professional society’s code of ethics.

catweazle666
Reply to  Philip Lee
August 12, 2015 1:11 pm

“catweazle666, it’s obvious that I have nothing to teach you about snide.”
Actually Philip, it is obvious that you have nothing to teach me about anything – especially non-linear systems, except perhaps spectacularly missing the point, at which you excel.

August 8, 2015 12:17 pm

It’s not just climate models. Somehow in the last 50 years modeling has become central to all the physical sciences, not only as a way of exploring empirical study but as a way to validate hypotheses. It’s just that in most other disciplines there is no green movement with a large bet on the 00 to completely skew the results and “cheat” the evidence. Nothing is ever “discovered” in a model, though it has become chic to say that.

Reply to  fossilsage
August 8, 2015 1:17 pm

Of course. Models can be manipulated to support the agenda of choice and deliver a facade of scientific credibility.

son of mulder
August 8, 2015 12:20 pm

Why can’t they just get their brains around the fact that the evolution of climate is chaotic, and that no amount of modelling, computer upgrades and fudging will give a long-term prediction that is guaranteed to match what really happens? There are too many dimensions and convolutions for it to stand a cat in hell’s chance, e.g. increased effective insolation => more cloud => decreased effective insolation.
Why is this obvious to me but not to the hordes of environmentalists?

taxed
August 8, 2015 12:31 pm

Well, the IPCC are going to have a hard time finding signs of warming across the northern Atlantic side of the NH if the ‘Arctic blast’ weather pattern makes a habit of turning up in most of the winters in North America. Just watch what a cooling effect that would have on the northern Atlantic. That, combined with a more zonal, southern-tracking jet stream across the Atlantic and the persistence of low pressure over Northern Russia during the summer months, is just the sort of thing to trigger climate cooling across the Atlantic side of the NH.

John
August 8, 2015 12:34 pm

Surely the glass isn’t empty.
Are there any models which are accurately reflecting observations over time?

Science or Fiction
Reply to  John
August 8, 2015 3:31 pm

If they do it is due to heavy adjustments:
“When initialized with states close to the observations, models ‘drift’ towards their imperfect climatology (an estimate of the mean climate), leading to biases in the simulations that depend on the forecast time. The time scale of the drift in the atmosphere and upper ocean is, in most cases, a few years … Biases can be largely removed using empirical techniques a posteriori … The bias correction or adjustment linearly corrects for model drift … The approach assumes that the model bias is stable over the prediction period (from 1960 onward in the CMIP5 experiment). This might not be the case if, for instance, the predicted temperature trend differs from the observed trend … The bias adjustment itself is another important source of uncertainty in climate predictions … There may be nonlinear relationships between the mean state and the anomalies, that are neglected in linear bias adjustment techniques..”
(Ref: Contribution from working group I; on the scientific basis; to the fifth assessment report by IPCC)
“Biases can be largely removed using empirical techniques a posteriori”
A variant of the Texas sharpshooter fallacy – draw the target after the shooting.
On a small scale a fallacy – on a large scale more like a fraud.
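A minimal sketch of the constant-offset bias adjustment the quoted passage describes, with invented numbers (real schemes are more elaborate, but the logic is the same):

    import numpy as np

    # Invented hindcast-period series (deg C): the model runs systematically warm.
    observed = np.array([0.10, 0.12, 0.15, 0.14, 0.18])
    modelled = np.array([0.55, 0.60, 0.62, 0.59, 0.66])

    bias = (modelled - observed).mean()  # assume the drift is a constant offset
    adjusted = modelled - bias           # the "a posteriori" correction
    print(f"estimated bias = {bias:.2f} C")
    print("adjusted:", np.round(adjusted, 3))
    # The correction assumes the bias stays the same into the prediction
    # period, precisely the assumption the IPCC text flags as possibly false.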

Ragnaar
August 8, 2015 1:13 pm

The climate is chaotic on all scales. The warmist reply can be this:
http://www.easterbrook.ca/steve/wp-content/Climate-change-as-a-change-in-boundary-conditions.png
Easterbrook is in the neighborhood of saying the climate is a boundary values problem. Skeptics can say it’s an initial values problem. With Easterbrook’s graphic, he doesn’t show a bistable system. That kind of would have at least the sets of boundaries, like the two lobes of the Lorenz butterfly.

Joel O'Bryan
Reply to  Ragnaar
August 8, 2015 5:22 pm

Climate tipping points (bifurcations) are a favorite talking point for the alarmists. That implies an IVP.

Ragnaar
Reply to  Joel O'Bryan
August 8, 2015 7:39 pm

I’ve seen them argue that the IVP is not a problem, as we will end up with higher boundaries for temperatures. So we don’t know the exact temperature at any one time, yet the boundaries will be higher.

Reply to  Joel O'Bryan
August 9, 2015 11:43 am

Climate modelers claim the initial values problem disappears using a combination of climate model spin-up, and then taking anomalies. I.e., they implicitly assume their models are perfect, and that only the measured parameters introduce errors.
These errors are assumed to be constant, that linear response theory applies to climate models throughout their projection range (never tested, never demonstrated), and that projection errors subtract away, leaving completely reliable anomaly trends.
I have most of that nonsense in writing, from the climate modeler reviewers of the manuscript I’ve been trying to publish for, lo, these last 2.5 years. That’s the mental gymnastics they use to rationalize their physically meaningless work; their scientifically vacant careers.
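For reference, the anomaly procedure being criticized is simple arithmetic. A minimal sketch with invented numbers:

    import numpy as np

    # Invented absolute temperatures (deg C) from a model run after spin-up.
    model_absolute = np.array([13.1, 13.2, 13.4, 13.5, 13.8, 14.0])

    baseline = model_absolute[:3].mean()   # reference period: first three values
    anomalies = model_absolute - baseline  # what gets reported
    print(f"baseline = {baseline:.2f} C")
    print("anomalies:", np.round(anomalies, 2))
    # Subtracting a baseline removes a constant offset. It does not remove
    # errors that grow or vary over the projection, which is the point at issue.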

Frank
Reply to  Joel O'Bryan
August 11, 2015 11:43 am

Pat Frank wrote: “Climate modelers claim the initial values problem disappears using a combination of climate model spin-up, and then taking anomalies. I.e., they implicitly assume their models are perfect, and only the measured parameters introduce errors.”
As best I can tell, the opposite is true. Most AOGCMs are run several times using a fixed set of parameters to get some idea of how much initial conditions influence model output. One run might produce 3.2 degC of warming and a second run 3.5 degC. The error in parameterization is assumed to be covered by combining the output of two dozen models that use different parameterizations, but AR4 acknowledged failing to systematically account for parameter uncertainty in a statistically meaningful way. They called their models an “ensemble of opportunity”. Instead the IPCC uses “expert judgment” (i.e. handwaving) to characterize the 90% CI of all model output as “likely” rather than “very likely” and thereby correct for uncertainties they can’t assess.

Ragnaar
Reply to  Ragnaar
August 8, 2015 7:36 pm

Correction. Should read:
That kind would have at least two sets of boundaries…

Reply to  Ragnaar
August 8, 2015 7:51 pm

The warmists have another problem. How long does CO2 stay in the atmosphere? The IPCC is telling us hundreds of years. However, if you compare the amount of CO2 being produced with what actually ends up in the atmosphere, there is a huge discrepancy. So much so that if we stopped producing CO2, then at the current rate of depletion (plant growth fails at 150 ppm), near-complete depletion could occur in 100 years. Don’t take my word for it; the numbers are readily available from NOAA, as are the historical amounts and rate of increase. Where would the decades-to-centuries claim be a valid argument? For some reason, the largest yearly increase in CO2 is still 1998. That will almost surely change this year since I’ve made it an issue. I’m predicting 4 ppm will be the official number.
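The arithmetic behind this argument can be sketched with assumed round numbers (roughly the 2015 CO2 level, sink uptake of about 2 ppm per year, and the 150 ppm floor mentioned above; all values are assumptions for illustration):

    # Back-of-envelope version of the depletion argument above.
    current_co2 = 400.0  # ppm, roughly the 2015 level
    sink_uptake = 2.0    # ppm/yr currently absorbed by sinks (assumed to stay constant)
    plant_floor = 150.0  # ppm below which plant growth is said to fail

    years_to_floor = (current_co2 - plant_floor) / sink_uptake
    print(f"{years_to_floor:.0f} years to fall to {plant_floor:.0f} ppm")  # 125 years
    # On the order of a century, as claimed; but the result hinges on sinks
    # continuing to absorb at today's rate after emissions stop, which is
    # exactly what is disputed.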

Reply to  Ragnaar
August 9, 2015 10:17 am

Indeed, paleoclimatology shows evidence of only two distinct climate regimes, glacial and interglacial. There is no paleological evidence for a third, higher-temperature regime. It seems likely that the Stefan–Boltzmann relation acts so strongly against raised temperature as to make a significantly higher regime all but impossible. Hence there is zero evidence for “tipping points” to a higher temperature regime, and their supposition by alarmists such as Hansen is unwarranted, unprofessional and irresponsible.
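The strength of that Stefan–Boltzmann response is easy to quantify. A minimal blackbody sketch (the real atmosphere adds emissivity and feedbacks, so these are illustrative numbers only):

    # Outgoing blackbody radiation rises steeply with temperature: P = sigma * T^4.
    SIGMA = 5.670e-8  # W m^-2 K^-4, Stefan-Boltzmann constant

    for T in (288.0, 290.0, 292.0):  # kelvin; 288 K is near the mean surface temperature
        print(f"T = {T:.0f} K -> P = {SIGMA * T**4:.1f} W/m^2")
    # dP/dT = 4 * sigma * T^3, about 5.4 W/m^2 per kelvin at 288 K: every degree
    # of warming is opposed by several extra W/m^2 of emission.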

Ted G
August 8, 2015 1:23 pm

Take away the rent seeking grant money. Take away the government’s ability to issue taxpayers money to the grant mongers. Suddenly the natural climate cycles would be perfectly acceptable, warts and all.

August 8, 2015 1:45 pm

For policy-making purposes, the climate models take CO2 emissions as a significant input.
The CO2 emissions are dependent on the economy (industry).
So climate models are economic models with vastly more complex extra computer models added on.
If you think we can model the economy, you’ve missed the last 10 years.
So how can climate models be reliable?

601nan
August 8, 2015 1:47 pm

The Pope’s job in Paris will be to declare by Papal Edict that the IPCC climate models are “immaculate contraptions” dictated by God. “Validation” is against the nature of God and therefore sacrilege; thus the offender must be cleansed by purifying fire after the holy beating by bullwhip.
For the public viewing of the cleansing events the Vatican will be charging 1000 euros per person with an additional carbon tax of 50 euros per person.
Ha ha

Reply to  601nan
August 9, 2015 11:48 am

“immaculate contraptions” — great neologism. Let’s set aside April 22 for the Feast of the Immaculate Contraption.

Barry
August 8, 2015 1:51 pm

I’m not sure why John Christy refuses to acknowledge the surface temperature record, which actually follows the red line rather closely:
http://www.ncdc.noaa.gov/cag/time-series/global/globe/land_ocean/12/5/1880-2015
And since most people live on the surface of the earth, that data would seem pretty important.

MarkW
Reply to  Barry
August 8, 2015 2:07 pm

The problem with the surface temperature record is that it is not real data. It’s raw data that has been cooked, adjusted, modified, calibrated, and smudged until it shows what the scientists want to see.
The idea that we know the temperature of the earth to within 5C, much less the 0.1C claimed by the activists, is ludicrous to anyone who has actually studied the subject.

Erik Magnuson
Reply to  Barry
August 8, 2015 2:11 pm

The question is how much the surface record has been affected by land use changes and urban heat islands. It’s one thing to say there is a strong human signal in the temperature record, but quite another to say the changes are due largely to CO2.

TonyL
Reply to  Barry
August 8, 2015 2:22 pm

Oh dear, they seem to have lost something. There was a big cooling event from 1945 to 1975 which brought the temperature down all the way to the level of 1900 – 1910. It was there in early versions of the data. But now it is gone. They must have lost it somehow. Maybe they should go look for it. With any luck they can find it and put it back.

August 8, 2015 2:04 pm

But the global warming – according to the models – should occur in the air. CO2 is a gas on this planet.
So the satellites should reflect the models if they were right. They measure the temperature in the atmosphere. But the models don’t work.
And as for the land data, that is so corrupted by Urban Heat Islands that it’s contentious to say the least. For example, we had a record temperature here in the UK recently; can you guess where?
Heathrow by the runway.
What a coincidence. Or rather, what a contaminated record.
Stick with the best data. And don’t ignore data unless you have a clear reason to, as we do with the land record.

MarkW
Reply to  MCourtney
August 8, 2015 2:10 pm

Not just UHI, but also a multitude of micro-site issues, as documented by Anthony and his surface stations project. There are also the numerous station change issues that are both undocumented and unadjusted for.
Finally, we don’t have anywhere near a sufficient number of sensors to claim, with any confidence whatsoever, that we know what the “average” temperature of the earth is. Even if we had perfect sensors.

bw
Reply to  MarkW
August 8, 2015 2:37 pm

Yes. That’s why NOAA created the USCRN.
If the USCRN had been established in 1950, there would be no debate.
BTW, some USCRN stations came online in 2002, those show zero warming.
The completed USCRN was nominally in place by 2008; pooling all available data from that network shows a zero anomaly for average US surface temperatures. There are still a few stations being added in Alaska, but those will have no impact on the remaining stations. Also, there is no climatic reason to merge Alaska and Hawaii with the continental US.
The only global temperature estimate that even comes close to what the USCRN will provide is from satellites.
The overlap of USCRN and global satellite coverage concurs that there has been no warming AT ALL since 2002. This is further confirmed by a few good Antarctic stations that show zero warming since 1958.

AndyG55
Reply to  MarkW
August 8, 2015 3:35 pm

Actually, all 3 of USCRN, RSS and UAH show a slight cooling trend since 2005.
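For readers who want to check such claims, the trend in question is just a least-squares slope. A minimal sketch with invented anomaly values standing in for any of the three datasets:

    import numpy as np

    years = np.arange(2005, 2015)
    # Invented annual anomalies (deg C), for illustration only.
    anoms = np.array([0.30, 0.25, 0.28, 0.20, 0.26, 0.31, 0.22, 0.24, 0.23, 0.27])

    slope, intercept = np.polyfit(years, anoms, 1)  # ordinary least squares
    print(f"trend = {slope * 10:+.3f} C/decade")
    # The sign and size of such short-period trends are sensitive to the start
    # and end years chosen, which is why trend claims need stated uncertainties.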

MarkW
Reply to  MarkW
August 8, 2015 3:50 pm

I believe it was the late John Daly who did a study with CA stations. He merely grouped the stations by the population of the county the sensors were located in. He found a near-linear relationship between population density and temperature increase, with the most rural counties showing little to no warming over the length of their record.

RoHa
Reply to  MCourtney
August 9, 2015 4:23 am

McC, are you suggesting that a thermometer set in the middle of a large expanse of tarmac, in the suburbs of a huge city, and constantly blasted by the exhausts of hundreds of enormous, kerosene-burning jet engines, might not give a totally accurate reading of the overall temperature of the UK?
You must be a paid shill for Big Coal.

Reply to  MCourtney
August 9, 2015 6:14 am

And further to your point, the CO2 greenhouse effect should increase the temperature difference between the lower and the higher layers of the atmosphere. This increase should be observable quite independently of the temperature trends at the surface, or in any individual layer of the atmosphere, and should thus be a much more robust touchstone of man-made global warming. Yet there seems to be little discussion of this parameter.

Kevin Hearle
August 8, 2015 2:04 pm

The last sentence should include ‘and therefore should not be used to inform policy making’.

mountainape5
August 8, 2015 2:12 pm

Erm and how do you propose they justify new taxes?

Björn from Sweden
August 8, 2015 2:22 pm

The graph showing the failure of the IPCC projections should have a vertical line marking where (what year) hindcasts end and forecasts begin. That would help the viewer decide whether there is reason to believe the models are mostly built from curve-fitting or from climate physics.

Reply to  Björn from Sweden
August 9, 2015 6:20 am

The models use both climate physics and curve fitting. The problem is that 1) the curves are poorly fit, 2) the climate physics includes too many unknowns, and 3) the models are under-constrained: a given goodness of fit does not provide a unique set of values for the adjustable parameters.

Björn from Sweden
Reply to  Michael Palmer
August 9, 2015 7:26 am

Sure, but aren’t the models suspiciously accurate up to ca. 1995?
A cynical person might think most of the models were made in 1995–2000, just by observing how they start failing miserably from then on.

Trevor H
August 8, 2015 2:27 pm

What is the point of validating the models with “hindcasting” when they keep rewriting history by modifying the past record?

AndyG55
Reply to  Trevor H
August 8, 2015 3:38 pm

In fact, if they use a data set like HadCRUT or GISS with its artificially created warming trends, they will introduce that unrealistic, artificial warming trend into their models.
They will therefore always end up higher than reality.

AndyG55
Reply to  Trevor H
August 8, 2015 3:40 pm

And if they continue to adjust HadCRUT and GISS to create even bigger artificial warming trends, the model divergence from reality will also get bigger.
Fun ! 🙂

RoHa
Reply to  Trevor H
August 9, 2015 4:30 am

“What is the point of validating the models with “hindcasting” when they keep rewriting history by modifying the past record?”
That is how they validate the models. They decide what the past should have been, adjust it to fit, and then build that into the models. The models then match the past, which proves that they accurately forecast the future. What could be wrong with that?

Science or Fiction
August 8, 2015 2:29 pm

My professional work is within an area where we rely on accurate measurements. Uncertainty and systematic errors in the measurement results (estimates, if you like) have great impact. The model-based measurements (hence the models) are based on physics. These are the things we require of a model-based measurement (quantitative theoretical model) before we rely on it:
– The theory is about causal relations between quantities that can be measured
– The measurands are well defined
– The measurands can be quantified
– The uncertainty of the measurands has been determined by statistical and/or quantitative analysis
– The functional relationships, the mechanisms, have been explained in a plausible manner
– The functional relationships, the mechanisms, have been expressed in mathematical terms
– The functional relationships between variables and parameters have been combined into a model
– The influencing variables which have a significant effect on the accuracy of the model are identified
– The model has been demonstrated to consistently predict outputs within stated uncertainties
– The model has been demonstrated to consistently predict outputs without significant systematic errors
– The model has been tested by an independent party on conditions it has not been adjusted to match
More on the reasoning behind these requirements here:
https://dhf66.wordpress.com/2015/05/25/requirements-to-a-reliable-quantitative-theoretical-model-explained/
Whenever we have a falsifying experience of some kind, we have to re-establish the reliability of the model by repeating all relevant steps. When a new measurement principle is proposed, we start from scratch. Even though our area is immensely less complicated than climate models, we do have a lot of falsifying experiences – normally in the form of significant systematic errors caused by influencing variables or parameters which are not properly accounted for. I never stop being surprised by the errors we uncover in the last three steps.
The models the IPCC relies on are very far from satisfying these criteria.
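The last three requirements in the list above lend themselves to a mechanical check. A minimal sketch of testing whether predictions fall within stated uncertainties and are free of systematic error (invented numbers):

    import numpy as np

    # Invented predictions, stated 1-sigma uncertainties, and observations.
    predicted   = np.array([10.2, 11.0, 11.8, 12.5])
    uncertainty = np.array([ 0.4,  0.4,  0.5,  0.5])
    observed    = np.array([10.1, 11.3, 12.0, 13.4])

    residuals = observed - predicted
    within = np.abs(residuals) <= 2 * uncertainty  # roughly a 95% coverage test
    systematic = residuals.mean()                  # should be near zero if unbiased
    print("within 2 sigma:", within)
    print(f"mean residual (systematic error estimate) = {systematic:+.2f}")
    # A model failing either test would need its reliability re-established
    # by repeating the relevant steps, as the comment describes.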

TonyL
Reply to  Science or Fiction
August 8, 2015 2:41 pm

The actual intended output of the models, through an obscure but well understood mechanism, is funding. In this regard they seem to perform very well, indeed. Of course, such models are much easier to construct without all the rigorous constraints you list imposed on them.

August 8, 2015 2:49 pm

Dear IPCC: What is the optimum/ideal surface temperature? How much variation is acceptable/safe? Why?

August 8, 2015 3:09 pm

I agree with Tim and Vincent.
Re: false fabricated aerosols data in climate computer models – posts since 2006:
http://wattsupwiththat.com/2015/05/26/the-role-of-sulfur-dioxide-aerosols-in-climate-change/#comment-1946228
We’ve known the warmists’ climate models were false alarmist nonsense for a long time.
As I wrote (above) in 2006:
“I suspect that both the climate computer models and the input assumptions are not only inadequate, but in some cases key data is completely fabricated – for example, the alleged aerosol data that forces models to show cooling from ~1940 to ~1975…. …the modelers simply invented data to force their models to history-match; then they claimed that their models actually reproduced past climate change quite well; and then they claimed they could therefore understand climate systems well enough to confidently predict future catastrophic warming?”,
I suggest that my 2006 suspicion has been validated – see also the following from 2009:
http://wattsupwiththat.com/2009/06/27/new-paper-global-dimming-and-brightening-a-review/#comment-151040
Allan MacRae (03:23:07) 28/06/2009 [excerpt]
Repeating Hoyt : “In none of these studies were any long-term trends found in aerosols, although volcanic events show up quite clearly.”
___________________________
Here is an email just received from Douglas Hoyt [in 2009 – my comments in square brackets]:
It [aerosol numbers used in climate models] comes from the modelling work of Charlson where total aerosol optical depth is modeled as being proportional to industrial activity.
[For example, the 1992 paper in Science by Charlson, Hansen et al]
http://www.sciencemag.org/cgi/content/abstract/255/5043/423
or [the 2000 letter report to James Baker from Hansen and Ramaswamy]
http://74.125.95.132/search?q=cache:DjVCJ3s0PeYJ:www-nacip.ucsd.edu/Ltr-Baker.pdf+%22aerosol+optical+depth%22+time+dependence&cd=4&hl=en&ct=clnk&gl=us
where it says [para 2 of covering letter] “aerosols are not measured with an accuracy that allows determination of even the sign of annual or decadal trends of aerosol climate forcing.”
Let’s turn the question on its head and ask to see the raw measurements of atmospheric transmission that support Charlson.
Hint: There aren’t any, as the statement from the workshop above confirms.
__________________________
IN SUMMARY
There are actual measurements by Hoyt and others that show NO trends in atmospheric aerosols, but volcanic events are clearly evident.
So Charlson, Hansen et al ignored these inconvenient aerosol measurements and “cooked up” (fabricated) aerosol data that forced their climate models to better conform to the global cooling that was observed pre~1975.
Voila! Their models could hindcast (model the past) better using this fabricated aerosol data, and therefore must predict the future with accuracy. (NOT)
That is the evidence of fabrication of the aerosol data used in climate models that (falsely) predict catastrophic humanmade global warming.
And we are going to spend trillions and cripple our Western economies based on this fabrication of false data, this model cooking, this nonsense?
*************************************************

Chris Hanley
August 8, 2015 3:33 pm

There are only 35 years of reliable observations of the two variables of concern:
http://www.climate4you.com/images/MSU%20UAH%20GlobalMonthlyTempSince1979%20AndCO2.gif