The temperature forecasting track record of the IPCC

Guest essay by Euan Mearns of Energy Matters

In geology we use computer models to simulate complex processes. A good example would be 4D simulation of fluid flow in oil and gas reservoirs. These reservoir models are likely every bit as complex as computer simulations of Earth’s atmosphere. An important part of the modelling process is to compare model realisations with what actually comes to pass after oil or gas production has begun. It is called history matching. At the outset, the models are always wrong, but as more data is gathered they are updated and refined to the point that they have skill in hindcasting what just happened and forecasting what the future holds. This informs the commercial decision-making process.

The IPCC (Intergovernmental Panel on Climate Change) has now published five major reports, beginning with the First Assessment Report (FAR) in 1990. This provides an opportunity to compare what was forecast with what has come to pass. Examining past reports is quite enlightening since it reveals what the IPCC has learned in the last 24 years.

I conclude that nothing has been learned other than how to obfuscate, mislead and deceive.

Figure 1 Temperature forecasts from the FAR (1990). Is this the best forecast the IPCC has ever made? It is clearly stated in the caption that each model uses the same emissions scenario. Hence the differences between Low, Best and High estimates are down to different physical assumptions such as climate sensitivity to CO2. Holding the key variable constant (CO2 emissions trajectory) allows the reader to see how different scientific judgements play out. This is the correct way to do this. All models are initiated in 1850 and by the year 2000 already display significant divergence. This is what should happen. So how does this compare to what came to pass and with subsequent IPCC practice?

I am aware that many others will have carried out this exercise before and in a much more sophisticated way than I do here. The best example I am aware of was done by Roy Spencer [1] who produced this splendid chart that also drew some criticism.

Figure 2 Comparison of multiple IPCC models with reality compiled by Roy Spencer. The point that reality tracks along the lower boundary of the models has been made many times by IPCC sceptics. The only scientists this reality appears to have escaped are those attached to the IPCC.

My approach is much simpler and cruder. I have simply cut and pasted IPCC graphics into Excel charts, where I compare the IPCC forecasts with the HadCRUT4 temperature reconstructions. As we shall see, the IPCC has an extraordinarily lax approach to temperature datums, and in each example a different adjustment has to be made to HadCRUT4 to make it comparable with the IPCC framework.
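To make that processing reproducible, here is a minimal sketch (in Python rather than Excel, and mine rather than the IPCC’s) of the two operations involved: averaging monthly HadCRUT4 anomalies into annual values and shifting the series by a constant to match a chart’s datum. The file name, the two-column decimal-year format (roughly what WoodForTrees exports) and the +0.5˚C shift are illustrative assumptions.

from collections import defaultdict

def annual_means(path):
    # Average monthly anomalies into calendar-year means.
    sums, counts = defaultdict(float), defaultdict(int)
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) < 2:
                continue
            try:
                dec_year, anomaly = float(parts[0]), float(parts[1])
            except ValueError:
                continue  # skip header or footer lines
            year = int(dec_year)
            sums[year] += anomaly
            counts[year] += 1
    return {y: sums[y] / counts[y] for y in sorted(sums)}

annual = annual_means("hadcrut4_monthly.txt")      # assumed local download
shifted = {y: t + 0.5 for y, t in annual.items()}  # constant datum shift, e.g. +0.5 C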

Figure 3 Comparison of the FAR (1990) temperature forecasts with HadCRUT4. HadCRUT4 data was downloaded from WoodForTrees [2] and annual averages calculated.

Figure 3 shows how the temperature forecasts from the FAR (1990) [3] compare with reality. I cannot easily find the parameters used to define the Low, Best and High models, but the report states that a range of climate sensitivities from 1.5 to 4.5˚C is used. It should be abundantly clear that the Low model is the one that lies closest to the reality of HadCRUT4. The High model is already running about 1.2˚C too warm in 2013.
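As a toy illustration of why models sharing one emissions trajectory should diverge (this is my sketch, not IPCC code), feed a single assumed CO2 path through the standard logarithmic forcing approximation and vary only the sensitivity. The 0.4% per year CO2 growth, the instantaneous-equilibrium shortcut and the 1.5/2.5/4.5˚C values are illustrative assumptions spanning the stated range.

import math

C0 = 280.0        # assumed pre-industrial CO2 (ppm)
GROWTH = 0.004    # assumed ~0.4% per year exponential CO2 growth
SENSITIVITIES = {"Low": 1.5, "Best": 2.5, "High": 4.5}   # deg C per doubling

for year in (1900, 1950, 2000, 2050, 2100):
    conc = C0 * math.exp(GROWTH * (year - 1850))
    warming = {name: s * math.log2(conc / C0) for name, s in SENSITIVITIES.items()}
    print(year, {name: round(dT, 2) for name, dT in warming.items()})

Run it and the three curves sit almost on top of each other early on and then pull steadily apart, which is the behaviour visible in Figure 1 by the year 2000.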

Figure 4 The TAR (2001) introduced the hockey stick. The observed temperature record is spliced onto the proxy record and the model record is spliced onto the observed record, so no opportunity to examine the veracity of the models is offered. But 13 years have since passed, and we can see how reality compares with the models over that short period.

I could not find a summary of the Second Assessment Report (SAR) from 1995 and so jump to the TAR (Third Assessment Report) from 2001 [4]. This was the report (I believe) that introduced the hockey stick (Figure 4). In the imaginary world of the IPCC, Northern Hemisphere temperatures were constant from 1000 to 1900 AD, with not the faintest trace of the Medieval Warm Period or the Little Ice Age, during which real people either prospered or died by the million. The actual temperature record is spliced onto the proxy record and the model world is spliced onto that to create a picture of future temperature catastrophe. So how does this compare with reality?

Figure 5 From 1850 to 2001 the IPCC background image is plotting observations (not model output) that agree with the HadCRUT4 observations. Well done IPCC! The detail of what has happened since 2001 is shown in Figure 6. To have any value or meaning all of the models should have been initiated in 1850. We would then see that the majority are running far too hot by 2001.

Figure 5 shows how HadCRUT4 compares with the model world. The fit from 1850 to 2001 is excellent. That is because the background image is simply plotting observations over this period. I have nevertheless had to subtract 0.6˚C from HadCRUT4 to get it to match those observations, while for the 1990 report I had to add 0.5˚C. The 250 year x-axis scale makes it difficult to see how models initiated in 2001 compare with the 13 years of observations since. Figure 6 shows a blow-up of the detail.
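Part of the reason a different offset is needed each time is that anomaly series are quoted against different reference periods: HadCRUT4 is expressed relative to 1961-1990, while the AR4 figure discussed later, for example, uses a 1980-1999 base. Re-expressing a series on another base is a one-line operation; the sketch below is mine, with illustrative period choices.

def rebaseline(series, start, end):
    # Re-express {year: anomaly} relative to the mean of start..end inclusive.
    ref = [t for y, t in series.items() if start <= y <= end]
    offset = sum(ref) / len(ref)
    return {y: t - offset for y, t in series.items()}

# e.g. shift annual anomalies from a 1961-1990 base to a 1980-1999 base:
# annual_1980_1999 = rebaseline(annual, 1980, 1999)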

Figure 6 The single vertical grid line is the year 2000. The blue line is HadCRUT4 (reality) moving sideways while all of the models are moving up.

The detailed excerpt illustrates the nature of the problem in evaluating IPCC models. While real world temperatures have moved sideways since about 1997 and all the model trends are clearly going up, there is really not enough time to evaluate the models properly. To be scientifically valid the models should have been run from 1850, as before (Figure 1), but they have not. Had they been, by 2001 they would have been widely divergent (as in 1990) and it would be easy to pick the winners. Instead they are conveniently brought together by initiating the models at around the year 2000. Scientifically this is bad practice.

Figure 7 IPCC future temperature scenarios from AR4, published in 2007. It seems that the IPCC has taken on board the need to initiate models in the past, but in this case the effective initiation date stays at 2000, offering the same 14 years to compare models with what came to pass.

For the Fourth Assessment Report (AR4) [5] we move on to 2007 and the summary shown in Figure 7. By this stage I’m unsure what the B1 to A1FI scenarios mean. The caption to this figure in the report says this:

Figure SPM.5. Solid lines are multi-model global averages of surface warming (relative to 1980–1999) for the scenarios A2, A1B and B1, shown as continuations of the 20th century simulations. Shading denotes the ±1 standard deviation range of individual model annual averages. The orange line is for the experiment where concentrations were held constant at year 2000 values. The grey bars at right indicate the best estimate (solid line within each bar) and the likely range assessed for the six SRES marker scenarios. The assessment of the best estimate and likely ranges in the grey bars includes the AOGCMs in the left part of the figure, as well as results from a hierarchy of independent models and observational constraints. {Figures 10.4 and 10.29}

Implicit in this caption is the assertion that the pre-year-2000 black line is a simulation produced by the post-2000 models (my bold). The orange line denotes constant CO2, and the fact that it is virtually flat shows that the IPCC at that time believed variance in CO2 was the only process capable of producing temperature change on Earth. I don’t know if the B1 to A1FI scenarios all use the same or different CO2 increase trajectories. What I do know for sure is that it is physically impossible for models that incorporate a range of physical input variables, initiated in the year 1900, to be closely aligned and to converge on the year 2000 as shown here, as demonstrated by the IPCC models published in 1990 (Figure 1).

So how do the 2007 simulations stack up against reality?

Figure 7 Comparison of AR4 models with reality. Since 2000, reality is tracking along the lower bound of the models as observed by Roy Spencer and many others. If anything, reality is aligned with the zero anthropogenic forcing model shown in orange.

Last time out I had to subtract 0.6˚C to align reality with the IPCC models. Now I have to add 0.6˚C to HadCRUT4 to achieve alignment. And the luxury of tracking history from 1850 has now been curtailed to 1900. The pre-2000 simulations align pretty well with observed temperatures from 1940, even though we already know it is impossible for them to have been produced by a large number of different computer models programmed to do different things. How can this be? Post-2000, reality seems to be aligned best with the orange no-CO2-rise / no-anthropogenic-forcing model.

From 1900 to 1950 the alleged simulations do not in fact reproduce reality at all well (Figure 8). The actual temperature record rises at a steeper gradient than the model record. And reality has much greater variability due to natural processes that the IPCC by and large ignore.

Figure 8 From 1900 to 1950 the alleged AR4 simulations actually do a very poor job of simulating reality, HadCRUT4 in blue.

Figure 9 The IPCC view from AR5 (2014). The inconvenient 1900 to 1950 mismatch observed in AR4 is dealt with by simply chopping the chart to start at 1950. The flat blue line is essentially equivalent to the flat orange line shown in AR4.

The fifth assessment report (AR5) was published this year and the IPCC current view on future temperatures is shown in Figure 9 [6]. The inconvenient mismatch of alleged model data with reality in the period 1900 to 1950 is dealt with by chopping that time interval off the chart. A very simple simulation picture is presented. Future temperature trajectories are shown for a range of Representative Concentration Pathways (RCPs). This is completely the wrong approach, since the IPCC is no longer modelling climate but rather different human, societal and political choices that result in different CO2 trajectories. Skepticalscience provides these descriptions [7]:

RCP2.6 was developed by the IMAGE modeling team of the PBL Netherlands Environmental Assessment Agency. The emission pathway is representative of scenarios in the literature that lead to very low greenhouse gas concentration levels. It is a “peak-and-decline” scenario; its radiative forcing level first reaches a value of around 3.1 W/m2 by mid-century, and returns to 2.6 W/m2 by 2100. In order to reach such radiative forcing levels, greenhouse gas emissions (and indirectly emissions of air pollutants) are reduced substantially, over time (Van Vuuren et al. 2007a). (Characteristics quoted from van Vuuren et al. 2011)

AND

RCP 8.5 was developed using the MESSAGE model and the IIASA Integrated Assessment Framework by the International Institute for Applied Systems Analysis (IIASA), Austria. This RCP is characterized by increasing greenhouse gas emissions over time, representative of scenarios in the literature that lead to high greenhouse gas concentration levels (Riahi et al. 2007).

This is Mickey Mouse science speak. In essence they show that 32 models programmed with a low future emissions scenario have lower temperature trajectories than 39 models programmed with high future emissions trajectories.
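For scale, here is a back-of-envelope sketch (mine, not the IPCC’s) of the equilibrium warming implied by the nominal 2100 forcing levels that give the two pathways their names (2.6 and 8.5 W/m2), across the stated 1.5-4.5˚C sensitivity range. It uses dT = S * F / F2x with F2x of roughly 3.7 W/m2 for a doubling of CO2; ocean lag is ignored, so the numbers are purely illustrative. The spread due to sensitivity within a single scenario is of the same order as the spread between the scenarios, which is precisely the dimension the RCP presentation hides.

F2X = 3.7   # approximate radiative forcing (W/m2) for a doubling of CO2

for scenario, forcing in (("RCP2.6", 2.6), ("RCP8.5", 8.5)):
    for sensitivity in (1.5, 3.0, 4.5):
        dT = sensitivity * forcing / F2X
        print(f"{scenario}: S = {sensitivity} C per doubling -> roughly {dT:.1f} C at equilibrium")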

The models are initiated in 2005 (the better practice of using a year-2000 datum, as employed in AR4, is ditched) and from 1950 to 2005 it is alleged that 42 models provide a reasonable version of reality (see below). We do not know which, if any, of the 71 post-2005 models are included in the pre-2005 group. We do know that pre-2005 each of the models should be using actual CO2 and other greenhouse gas concentrations, and since they are all closely aligned we must assume they all use similar climate sensitivities. What the reader really wants to see is how varying climate sensitivity influences different models using fixed CO2 trajectories, and this is clearly not done. The modelling work shown in Figure 9 is effectively worthless. Nevertheless, let us see how it compares with reality.

Figure 10 Comparison of reality with the AR5 model scenarios.

With models initiated in 2005 we have only 8 years to compare models with reality. This time I have to subtract 0.3˚C from HadCRUT4 to get alignment with the models. Pre-2005 the models allegedly reproduce reality from 1950. Pre-1950 we are denied a view of how the models performed. Post-2005 it is clear that reality is tracking along the lower limit of the two uncertainty envelopes that are plotted. This is an observation made by many others [e.g. 1].

Concluding comments

  • To achieve alignment of the HadCRUT4 reality with the IPCC models the following temperature corrections need to be applied: 1990 +0.5; 2001 -0.6; 2007 +0.6; 2014 -0.3. I cannot think of any good reason to continuously change the temperature datum other than to create a barrier to auditing the model results.
  • Comparing models with reality is severely hampered by the poor practice adopted by the IPCC in data presentation. Back in 1990 it was done the correct way: all models were initiated in 1850 and used the same CO2 emissions trajectories. The variations in model output are consequently controlled by physical parameters such as climate sensitivity, and with the 164 years that have passed since 1850 it is straightforward to select the models that provide the best match with reality. In 1990 it was quite clear that the “Low Model” was best, almost certainly pointing to a low climate sensitivity.
  • There is no good scientific reason for the IPCC not adopting today the correct approach adopted in 1990, other than to obscure the fact that the sensitivity of the climate to CO2 is likely much less than 1.5˚C, based on my and others’ assertion that a component of the twentieth-century warming is natural.
  • Back in 1990, the IPCC view on climate sensitivity was a range from 1.5 to 4.5˚C. In 2014 the IPCC view on climate sensitivity is a range from 1.5 to 4.5˚C. 24 years have passed and billions of dollars spent and absolutely nothing has been learned! The wool has been pulled over the eyes of policy makers, governments and the public to the extent of total brainwashing. Trillions of dollars have been misallocated on energy infrastructure that will ultimately lead to widespread misery among millions.
  • In the UK, if a commercial research organisation were found cooking research results in order to make money with no regard for public safety, it would find the authorities knocking at its door.

References

[1] Roy Spencer: 95% of Climate Models Agree: The Observations Must be Wrong

[2] Wood For Trees

[3] IPCC: First Assessment Report – FAR

[4] IPCC: Third Assessment Report – TAR

[5] IPCC: Fourth Assessment Report – AR4

[6] IPCC: Fifth Assessment Report – AR5

[7] Skepticalscience: The Beginner’s Guide to Representative Concentration Pathways

84 Comments
Alan Poirier
June 12, 2014 8:28 am

Nice piece of work. No one should be surprised. CO2 is not a pollutant. It is not the reason the globe has warmed. Abbott gets it, Harper gets it. The next Republican president will get it too. Now, if Cameron and Merkel fall into line, we can end this madness.

Resourceguy
June 12, 2014 8:31 am

The AMO is already doing its own thing–down.

June 12, 2014 8:50 am

A rather simple transfer function applied to the best estimate of TSI (Wang, et al. 2005) produces an excellent estimator of the HADCRUT3 record going back to the invention of the thermometer. Kudos to HADCRUT because that fact tends to validate its algorithm. That fact also provides a simple explanation for the failure of models to forecast temperature: climatologists are trying to forecast the Sun from global average surface temperature. This is analogous to understanding what they are thinking by reading their EEGs.
The existence of that transfer function also shows that IPCC’s claim for the existence of AGW based on the GHE plus fingerprints of human activity on the CO2 record can’t be true unless man’s CO2 emissions are affecting the Sun. To the extent that AGW might exist, it is not measurable.

rgbatduke
June 12, 2014 9:16 am

You are missing several important issues in this discussion — many of which I have gone over on this list. First, you are implicitly buying into the notion that the Multimodel Ensemble Mean (MME mean) is “the prediction of the models” when in reality, there is no such thing as the prediction of the models. There are only the predictions of each model, one at a time. This completely explains the incorrect mean variation, but it also obscures the fact that actually, the individual models have many times greater variability than nature, not less. Further, they have the wrong autocorrelation times. Basically, they make the climate fluctuate far more wildly than it actually does, and with the wrong spectrum of relaxation times. Analysis of their fluctuation-dissipation is (I suspect) direct evidence of incorrect dynamics.
But even this isn’t sufficient, because each model is not presenting individual model runs from a given set of initial conditions; the data they contribute to the MME mean is already an average — the Perturbed Parameter Ensemble average — over many runs with slightly different initial conditions and parametric settings. The idea is for the model to predict the “average” future by erasing the enormous spread of results produced by the fact that the models are nonlinear and chaotic, so that butterfly-effect perturbations, far smaller than we can actually measure, cause the same model, started from basically the same initial conditions, to sometimes lead to rapid extreme heating, sometimes to rapid extreme cooling, and sometimes to stuff that is in between or switching rapidly from one to the other.
If you analyze the fluctuations of the individual model runs and compare their relative variation to that of the real climate, again they generally have completely incorrect relaxation times, nearly an order of magnitude too much variance, and sketch out an envelope with no practical predictive value. And this is (as you say) ignoring the fact that the models are invariably normalized “close to the present” — either in the “reference period” shown in grey in the figures above or — to minimize the growing divergence — they re-normalize them to maximally correspond in (say) 1990 at the expense of the agreement in the reference period visible in e.g. figure 9.2a of AR5.
The open abuse of the axioms and rules of ordinary statistics in all of the figures above and in the assertions of “confidence”, from the draft or from the actual report, is — again in my opinion — one of the greatest scandals in modern science.
To abuse Winston Churchill just a bit: Never have so many concluded so much from so little actual statistically defensible computation and argumentation based on data.
rgb
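As a toy numerical illustration of this point (synthetic series only, not CMIP output; the run count, trend and noise settings are arbitrary), averaging many noisy runs produces something far smoother than any individual run, which is how a multi-model mean conceals how much the individual simulations actually fluctuate:

import random
import statistics

random.seed(0)
N_YEARS, N_RUNS, TREND = 100, 30, 0.02   # arbitrary toy settings

def fake_run(noise=0.15):
    # One synthetic "model run": a shared linear trend plus autocorrelated noise.
    level, wiggle, out = 0.0, 0.0, []
    for _ in range(N_YEARS):
        wiggle = 0.7 * wiggle + random.gauss(0.0, noise)
        level += TREND
        out.append(level + wiggle)
    return out

runs = [fake_run() for _ in range(N_RUNS)]
mme_mean = [statistics.mean(r[i] for r in runs) for i in range(N_YEARS)]

trend_line = [(i + 1) * TREND for i in range(N_YEARS)]
single_scatter = statistics.pstdev([r[i] - trend_line[i] for r in runs for i in range(N_YEARS)])
mean_scatter = statistics.pstdev([mme_mean[i] - trend_line[i] for i in range(N_YEARS)])
print(f"typical single-run scatter about the trend: {single_scatter:.3f}")
print(f"scatter of the average of all runs:         {mean_scatter:.3f}")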

emsnews
June 12, 2014 9:24 am

Any graphs showing global warming must be first extended to include all the ice ages. This gives us perspective. Minus that, it is utterly meaningless for predicting the future since the future is very heavily weighted towards repeating the past.
And in the past, the warmest times are right when an Ice Age ends, and we are not nearly so warm today as when we first figured out how to do agriculture and tame animals.

Neil
June 12, 2014 9:26 am

Try using Mike’s Nature trick to hide the decline in correlation between observed and projected temperatures.
No one will notice then.

Doug Hoffman
June 12, 2014 9:37 am

Nice article but I have to disagree with the statement that oil and gas reservoir models are every bit as complex as computer simulations of Earth’s atmosphere. There are some similarities, both based on partial differential equations (actually difference equations when translated into computer code), but modeling the atmosphere has a greater number of contributing factors. Indeed, the climate models, as complex as they are, are not complex enough. On top of that is the matter of scale. The size of an oil field is much smaller than the size of the planet’s entire atmosphere. The reservoir simulation can use a much finer computational grid. One of the major problems with atmospheric simulation is the coarseness of the grid, on the order of hundreds of kilometers on a side for each cell. This is larger than many important phenomena, like tropical storms and clouds. Using a finer grid increases the computational cost for each time step. With current computer technology this simply cannot be done today. So for these and many other reasons, oil and gas reservoir simulations can provide useful output but climate models have failed miserably.

richardscourtney
June 12, 2014 9:47 am

rgbatduke:
I write to draw attention to your post at June 12, 2014 at 9:16 am.
Your entire post is good, but I wave a football scarf at these two excerpts.

First, you are implicitly buying into the notion that the Multimodel Ensemble Mean (MME mean) is “the prediction of the models” when in reality, there is no such thing as the prediction of the models. There are only the predictions of each model, one at a time. This completely explains the incorrect mean variation, but it also obscures the fact that actually, the individual models have many times greater variability than nature, not less. Further, they have the wrong autocorrelation times. Basically, they make the climate fluctuate far more wildly than it actually does, and with the wrong spectrum of relaxation times.

and

If you analyze the fluctuations of the individual model runs and compare their relative variation to that of the real climate, again they generally have completely incorrect relaxation times, nearly an order of magnitude too much variance, and sketch out an envelope with no practical predictive value.

Yes! Oh, yes!
Richard

Editor
June 12, 2014 9:53 am

Euan Mearns: If memory serves, the IPCC used different base years for anomalies for each of their reports, and sometimes the base years varied within the report. What are they for the illustrations you’ve chosen? And are your presentations of HADCRUT4 data referenced to the same base years?

June 12, 2014 9:57 am

“….. obfuscate, mislead and deceive.”
Why am I not surprised?

Steven Mosher
June 12, 2014 10:05 am

“24 years have passed and billions of dollars spent and absolutely nothing has been learned!”
http://www.amazon.com/Neutrino-Hunters-Thrilling-Particle-Universe/dp/0374220638
Interesting story about how science works
“In 1911 Lise Meitner and Otto Hahn performed an experiment that showed that the energies of electrons emitted by beta decay had a continuous rather than discrete spectrum. This was in apparent contradiction to the law of conservation of energy, as it appeared that energy was lost in the beta decay process. ”
So here we have an experiment with real data and real observations, that contradict the law of the conservation of energy.
Question: do scientists collectively run around yelling “falsified”? Nope.
20 years later….
“The neutrino was originally postulated, in 1931, almost as “a form of scientific witchcraft”, says Jayawardhana. “When scientists couldn’t account for energy that went missing during radioactive decay, one theorist found it necessary to invent a new particle to account for that missing energy,” he adds. The theorist was the physicist Wolfgang Pauli.”
Basically, Pauli postulates a unicorn.
more than 20 years later we get Project Poltergeist
http://documentaryheaven.com/project-poltergeist/
Next enter in the late 60s
John Bahcall. The first experiment to test his solar neutrino theory
http://en.wikipedia.org/wiki/Homestake_Experiment
Oops, the experiment detects 1/3 of what theory predicts.
Feynman talks to Bahcall. What does Feynman say? Does Feynman reject the theory that is falsified by the data?
Nope: see page 91 of the book. When faced with a conflict between theory and data Feynman says
“we dont know”: we dont know whether the theory is wrong or the data is somehow wrong.. or if there is something we are missing.
fast forward another 30 years and the mystery is solved.. 90 years or so from the initial problem.
So, the idea that 24 years is a long time for zero progress is unscientific. If we look at the data, the history of science, we see that the theory that scientific progress can be judged within a couple of decades is not a good theory. A data-driven look at how science actually works shows the following
1. discrepancy between theory and observation settles nothing. As Feynman explained to Bahcall, ‘we dont know’. could be the theory, could be the data..
2. Progress is not continuous
3. no working scientist ever made the skeptical argument that it “wasnt their job” to come up
with a better more complete theory.

Max Hugoson
June 12, 2014 10:06 am

And, my consistent, and complete complaint. THERE IS NO BASIS TO USE AVERAGE TEMPERATURES. Complete and utter, indefensible nonsense. Unless one is doing ENTHALPY, i.e. the ENERGY PER UNIT VOLUME OF THE ATMOSPHERE, which demands a rather COMPLETE and detailed humidity profile, all the time and historically, almost all of this “work” is worse than the wizard of Oz in the Wizard of OZ. I’ve asked, I’ve begged, I’ve pleaded, for a simple “first pass” at taking, merely, surface station humidities with TEMPERATURES and trying to figure out local ENTHALPY. And then to see if there is ANY significant trend there.

Mike Smith
June 12, 2014 10:16 am

“Back in 1990, the IPCC view on climate sensitivity was a range from 1.5 to 4.5˚C. In 2014 the IPCC view on climate sensitivity is a range from 1.5 to 4.5˚C. 24 years have passed and billions of dollars spent and absolutely nothing has been learned!”
Bingo! And I’m 97% certain the climate sensitivity will turn out to be somewhat less than 1˚C.

TheOtherJohnInCA
June 12, 2014 10:30 am

Alan Poirier gives Republicans too much credit. Americans do not elect people based on how knowledgeable or reasonable they are. They are elected based on how well they campaign. Even the best American presidents of the latter half of the 20th century only won because they were the better campaigners.
We have no idea of what the next POTUS thinks of AGW. At best, we will know what the MSM deigns to share about what he/she knows, and there is no guarantee that what they share will reflect his/her views or reality.

Steven Mosher
June 12, 2014 10:45 am

“First, you are implicitly buying into the notion that the Multimodel Ensemble Mean (MME mean) is “the prediction of the models” when in reality, there is no such thing as the prediction of the models. There are only the predictions of each model, one at a time.”
of course there is a prediction of the models. You simply average them. This is a simple mathematical operation.
it is very simple. The best model is the model that matches the data best.
That model is a simple average of all models.
1. The model of models exists. (just do the math)
2. The model of models outperforms any given model. (just do the math)
3. what works should be preferred over what doesnt work.
Of course we currently dont like the approach of taking the average of all models.
From a purist standpoint it’s a hack. There are all sorts of things wrong with the approach.
But it works better than other approaches. So absent a better alternative.. you go with what you have.. That doesnt necessarily mean you would use it for policy,
So, we might argue that a model of models is not the approach we would prefer to take.
We would prefer to select one model and improve it. But pragmatically and practically,
we often do things that ‘dont make sense” because it is the best ( but not perfect) we currently have. Until another approach demonstrates better performance you are kinda stuck with what works best.
its fun to look at what folks do for hurricanes
http://www.nhc.noaa.gov/verification/verify6.shtml

Eustace Cranch
June 12, 2014 10:46 am

Steven Mosher says:
June 12, 2014 at 10:05 am
OK, fine.
So we are to implement painful and hideously expensive global policies, reducing worldwide standards of living, over “we don’t know?”

Neil
June 12, 2014 10:49 am

@Steven Mosher,
Very true and very interesting. I don’t recall a cap-and-trade on the ‘missing’ energy, though.

June 12, 2014 10:49 am

The solution to the IPCC’s problem is simple. They need more hot air.

William Astley
June 12, 2014 10:53 am

In support of and further comment to:
“Comparing models with reality is severely hampered by the poor practice adopted by the IPCC in data presentation..”
I have commercial experience in testing and debugging complex modeling software (senior technical specialist, primary responsibility for scenario identification and commercial definition to facilitate debugging/model development to ensure the model meets the needs of technical users). It is standard and an absolutely necessary practice to force the modeling software supplier to run specific, defined modeling scenarios to validate or invalidate models, to help debug the software, and to provide insight into the weighting of parameters/requirements of the modeling software. Payment for the modeling software is contingent on performance for the multiple ‘testing’ scenarios.
It is pathetic, embarrassing, ridiculous that the IPCC does not include a modeling scenario that starts in 1850, that includes the rise of atmospheric CO2, and that compares actual temperature rise to predicted temperature rise, as that scenario would provide unequivocal proof that the general circulation models (GCMs) that are the basis for the extreme AGW agenda have multiple fundamental errors and that the majority of the warming in the last 150 years was due to something else besides the increase in atmospheric CO2.
There are multiple observations that indicate the GCM are incorrect. For example:
A) High Latitudinal Warming (lack of tropical region warming) Paradox
The warming in the last 150 years is primarily at high latitudes, which does not match what the GCMs predict. As changes to atmospheric CO2 are quickly equalized in the atmosphere, the potential for CO2 to increase the temperature of the earth should be the same by latitude, if all other factors were the same.
As the actual increase in forcing due to the increase in atmospheric CO2 is directly proportional to the amount of long wave radiation that was emitted at the latitude in question before the increase in CO2, the largest forcing change and the greatest increase in temperature should have occurred at the equator. This is not observed.
http://wattsupwiththat.files.wordpress.com/2012/12/ipcc-ar5draft-fig-1-4.gif
http://arxiv.org/ftp/arxiv/papers/0809/0809.0581.pdf
Limits on CO2 Climate Forcing from Recent Temperature Data of Earth
The global atmospheric temperature anomalies of Earth reached a maximum in 1998 which has not been exceeded during the subsequent 10 years (William: 16 years and counting). The global anomalies are calculated from the average of climate effects occurring in the tropical and the extratropical latitude bands. El Niño/La Niña effects in the tropical band are shown to explain the 1998 maximum while variations in the background of the global anomalies largely come from climate effects in the northern extratropics. These effects do not have the signature associated with CO2 climate forcing. (William: This observation indicates something is fundamentally incorrect with the IPCC models, likely negative feedback in the tropics due to increased or decreased planetary cloud cover to resist forcing). However, the data show a small underlying positive trend that is consistent with CO2 climate forcing with no-feedback. (William: This indicates a significant portion of the 20th century warming was due to something other than CO2 forcing.)
….The effects in the northern extratropics are not consistent with CO2 forcing alone.
An underlying temperature trend of 0.062±0.010ºK/decade was estimated from data in the tropical latitude band. Corrections to this trend value from solar and aerosols climate forcings are estimated to be a fraction of this value. The trend expected from CO2 climate forcing is 0.070g ºC/decade, where g is the gain due to any feedback. If the underlying trend is due to CO2 then g ~1. Models giving values of greater than 1 would need a negative climate forcing to partially cancel that from CO2. This negative forcing cannot be from aerosols.
These conclusions are contrary to the IPCC [2007] statement: “[M]ost of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations.”
B) Lack of high altitude warming in tropical troposphere paradox
In addition to the latitudinal forcing change, the actual forcing due to the increase in atmospheric CO2 is altitude dependent. At lower altitudes in the atmosphere there is more CO2 (the amount of forcing depends on the number of molecules per unit volume, and there are more CO2 molecules at the surface of the planet due to the higher pressure) and there is more water vapor (due to both temperature and atmospheric pressure). As almost half of the CO2 absorption spectrum overlaps with water, and the greenhouse gas potential to cause warming saturates with increasing concentration (the actual forcing change is logarithmic, so twice as much increase in CO2 is required to cause the same forcing, which is a very good way to validate or invalidate the GCMs as CO2 rises), theoretically the majority of the forcing change should occur higher in the troposphere where there are fewer water and CO2 molecules. The warming of the surface of the earth then occurs due to long wave radiation that is emitted from the higher regions of the troposphere back down to the surface of the earth. This higher troposphere warming is not observed.
Analyzing piecewise segments where there were temperature changes and/or CO2 changes would show that a key missing forcing function (four different mechanisms related to solar magnetic cycle changes) is causing the planet to warm and cool and that something specifically inhibits the CO2 forcing mechanism above around 180 ppm.
http://icecap.us/images/uploads/DOUGLASPAPER.pdf
A comparison of tropical temperature trends with model predictions
We examine tropospheric temperature trends of 67 runs from 22 ‘Climate of the 20th Century’ model simulations and try to reconcile them with the best available updated observations (in the tropics during the satellite era). Model results and observed temperature trends are in disagreement in most of the tropical troposphere, being separated by more than twice the uncertainty of the model mean. In layers near 5 km, the modelled trend is 100 to 300% higher than observed, and, above 8 km, modelled and observed trends have opposite signs. These conclusions contrast strongly with those of recent publications based on essentially the same data.

PeterB in Indianapolis
June 12, 2014 11:09 am

” 1. discrepancy between theory and observation settles nothing. As Feynman explained to Bahcall, ‘we dont know’. could be the theory, could be the data..
2. Progress is not continuous
3. no working scientist ever made the skeptical argument that it “wasnt their job” to come up
with a better more complete theory.”
Response to #1 – you omitted – perhaps intentionally, Feynman’s 3rd part of the explanation of “we don’t know”, which was WE COULD BE MISSING SOMETHING.
In the case of climate science, there are probably a myriad of things that scientists are still missing….
Response to #2 – yes progress is not continuous. In some cases, progress is even impeded by scientists who are clinging to a theory which is incorrect.
Response to #3 – Perhaps certain scientists need to be classified as “non-working scientists” in some cases, because some seem awfully reluctant to come up with a “better, more complete theory”.
Some theories, at least based upon available evidence, appear to be a good match to reality. Some theories appear to be an ok approximation of reality, but appear to be “missing something” so that they need to be modified to be made better and/or more complete. Some theories appear to be not so good at approximating reality, and no matter how much lipstick you try to put on the pig, it is still going to be a pig.
The current dominant theory of “climate change” certainly doesn’t appear, based on the evidence, to fall into the first category. We shall see (probably sooner rather than later) whether it falls into category 2 or category 3.

Joel O’Bryan
June 12, 2014 11:10 am

Preaching to the choir here at WUWT. The CAGW zealots take it on faith that man-made CO2 is “disrupting” the climate. Science and skeptical methods mean nothing to them; witness John Holdren. Too many scientists who feed at the trough of government grants don’t want to endanger their paycheck by getting blackballed by John Holdren’s dishonest NSF-puppet grant machine.
The lies from the Obama Administration are best understood as a CAGW Onion Theory.
This theory posits as follows:
The top lie is best embodied by the National Climate Assessment, a political statement, designed to scare a naive public, deceptively packaged as “science” to enable a regulatory assault on US energy production via environmental and ecological justifications. Justifications such as the expanded scope of the Clean Water Act and the Endangered Species Act are prime examples of the deception occurring at this level. This layer keeps the environmental nut-jobs as a reliable base for Democrats.
Removing that environmental layer of lies of Climate Change reveals the next layer. Those “enhanced regulations” expand the power and enable Executive branch control over a larger fraction of the economy. The lie here also enables more private lawsuits against industry and especially oil and gas — simply a payoff to the tort bar. This layer keeps lawyers firmly in the Democrat Party camp.
Removing this layer of lies, reveals the reach of the Federal government into areas previously off-limits such as private and state-lands that were the purview of States Rights via the 10th Amendment, i.e. powers not specifically granted to the Federal government are reserved to the People or the States. This is where things get interesting. Congress has not been complicit with this taking of individual and States’ rights. This has been enabled by actors like Harry Reid keeping the Congress dysfunctional since Democrats can’t control it without the House of Representatives. Individually, even many Democrat Congressmen and Senators express dismay at the Power grabs of the EPA and Fish and Wildlife Service, but in reality they do nothing but talk. They lie, just like Harry Reid. Congress refused to pass Cap and Trade in 2010, now the EPA is trying to force the States to impose it. This onion layer animates the dishonesty of the John Podesta camp.
Peel that layer back and another deeper, darker, more sinister layer is revealed. That is one of growing Imperial Presidency power, justified by its adherents by the layer of Congressional dysfunction above. This one is truly antithetical to the intent of the framers of the US Constitution. President Obama makes claims on formerly judicial areas by deciding not to enforce various provisions of law, calling them unconstitutional and thereby appointing himself judicial powers. Indeed, we see the most ardent CAGW zealots claiming that a dictator is needed to impose CO2 emission reductions on an unwilling population. This so far is the most dangerous level.
To be considered a theory, this Onion Model of the current AGW alarmism must make testable predictions. It does.
Prediction: Because of where it leads, the narcissist Obama will next attempt to remain in power beyond his constitutionally limited term, based on a claim of the need to save the planet from Climate Disruption/Change.

Greg
June 12, 2014 11:15 am

No surprise that models match fairly well over 1960-1990; they are tuned to that period, as John Kennedy of the Met Office confirmed in our discussion at Climate Etc.
You can model any period fairly well if you have enough ‘forcing’ variables and a few “parameters”.
When you have to start clipping off the bits that don’t fit, you acknowledge your model does not work.

Joel O’Bryan
June 12, 2014 11:19 am

I neglected to include the Sustainable Energy (i.e. wind and solar) industries as a layer just below the top environmental layer. The solar industry, best embodied by Elon Musk, feeds voraciously at the trough of tax subsidies and tax write-offs. Billionaire Tom Steyer also operates at this level as he uses the environmental layer above while he makes huge billion-dollar bets on Green industry initiatives. His $100 million buy-out/bribe of Senate Democrats is merely an “investment” to help further the big bets that will return 10-20 fold more.

Greg
June 12, 2014 11:36 am

The pre-1900 period is important because there is a notable drop. This does not fit the simplistic CO2 + noise model.

euanmearns
June 12, 2014 11:41 am

rgbatduke: I accept what you say that model averages cannot represent a prediction – but that is what the IPCC present. Roy Spencer’s approach displays the variability you describe, but I seem to recall Roy ran into some criticism about his normalisation procedure that I found difficult to evaluate.

by erasing the enormous spread of results produced by the fact that the models are nonlinear and chaotic so that butterfly-effect perturbations cause the same model

This in itself tells us the models are wrong since we know that Earth’s climate is stable. This is what happens when you only model positive feedbacks – some real and some imagined – and ignore the negative feedbacks that must exist to maintain stability. I’m guessing the main negative feedback is convection and clouds. The models could be programmed to maintain uniform surface temperature by making convection a variable. I’m not saying that is correct but it would certainly stabilise the models.
Worse still, unstable models have now resulted in the concept of unstable climate.
Doug Hoffman: You may be right, but it would not surprise me if the model for a large reservoir had a similar number of cells to a climate model (I could be totally wrong here). Reservoir models suffer from the same scaling / resolution problems that climate models suffer from, but on a different scale. They have to include a lot of complex variables such as the PVT characteristics of petroleum and water, the porosity and permeability distribution of the reservoir and the aquifer below it, pressure, temperature etc.
The main point I’d make is that a reservoir modelling program will employ one reservoir modeller who understands most of the principles and how to work the software (Petrel) and he will call on the expertise of a geologist, a geophysicist, a reservoir engineer and a petrophysicist – all experts in different aspects of interpretation. A large model may represent 5 man years work. Those working on it have only one priority and that is to get it right. Compare this with the cash swallowing morass that is climate science.

Chuck Nolan
June 12, 2014 11:53 am

Mike Smith says:
June 12, 2014 at 10:16 am
“Back in 1990, the IPCC view on climate sensitivity was a range from 1.5 to 4.5˚C. In 2014 the IPCC view on climate sensitivity is a range from 1.5 to 4.5˚C. 24 years have passed and billions of dollars spent and absolutely nothing has been learned!”
Bingo! And I’m 97% certain the climate sensitivity will turn out to be somewhat less than 1˚C
————————————-
97% certain? You made that up.
Where’s the raw data, the code, references and reviewer’s comments.
I’d like a list of your co-authors. Please include names, CVs and each person’s individual contributions to the research.
cn

Billy Liar
June 12, 2014 11:58 am

Steven Mosher says:
June 12, 2014 at 10:05 am
… The Thrilling Chase for the Ghostly Missing Heat …

richardscourtney
June 12, 2014 11:58 am

Steven Mosher:
At June 12, 2014 at 10:05 am you respond to

“24 years have passed and billions of dollars spent and absolutely nothing has been learned!”

by providing a load of irrelevant waffle about neutrino research and an assertion that research often has interruptions to progress of some decades.
Your response is an improvement on your usual practice of posting brief and abusive ambiguities, but it is equally laughable.
The expenditure on AGW research has been running in excess of US$ 5 billion a year for three decades: the US alone has been spending in excess of US$ 2.5 billion a year.
Nothing has resulted except your hope that something useful may result in future. Well, if half that money had been spent on e.g. providing sanitation in the developing world then something useful would have resulted.
And nothing useful is likely to result from AGW research conducted in accordance with your post at June 12, 2014 at 10:45 am. It advocates the statistically unsupportable action of averaging the outputs of different GCMs and your excuse for the action is this nonsense

Of course we currently dont like the approach of taking the average of all models.
From a purist standpoint it’s a hack. There are all sorts of things wrong with the approach.
But it works better than other approaches. So absent a better alternative.. you go with what you have.. That doesnt necessarily mean you would use it for policy,
So, we might argue that a model of models is not the approach we would prefer to take.
We would prefer to select one model and improve it. But pragmatically and practically,
we often do things that ‘dont make sense” because it is the best ( but not perfect) we currently have. Until another approach demonstrates better performance you are kinda stuck with what works best.

You admit “There are all sorts of things wrong with the approach.”
then claim “But it works better than other approaches.”
so you assert “you go with what you have”.
NO! No scientist would adopt a procedure s/he knows is “wrong” because of lack of something better.
A scientist assesses if a procedure is adequate, then uses an adequate procedure and rejects an inadequate procedure.
And an honest person certainly does NOT include the indications of a “wrong” procedure in a “Summary For Policymakers” if there is doubt that it should “be used for policy”.
Richard

The Ghost Of Big Jim Cooley
June 12, 2014 12:00 pm

Mr Mosher, be so kind as to answer:
If we really don’t know, do you advocate that we do nothing until we know something?
I am not in the field of science; I’m a service engineer. Faced with a technical problem I would try to ascertain the cause. But if I don’t know what is causing it, then I would never advise my client to keep spending money until the problem is solved. And if my thoughts and actions don’t actually correspond to solving the issue, then I am clearly on the wrong track. I try something else.
I used to love the world of science, and envy scientists. Now, I don’t think my opinion of either could be lower. Science appears to be moving toward a religion. I appreciate what you say when you say that maybe we don’t know, so why do scientists and those that represent them continually make out that they do know?

blackadderthe4th
June 12, 2014 12:02 pm

AGU and Richard Alley, did the IPCC get its early projections right?

Sure did!

mebbe
June 12, 2014 12:12 pm

I took the average of 3 GCM outputs by entering their values in my desktop calculator.
Not only am I a modeler, I’m a computer modeler.

Jimbo
June 12, 2014 12:17 pm

The fifth assessment report (AR5) was published this year and the IPCC current view on future temperatures is shown in Figure 9 [6]. The inconvenient mismatch of alleged model data with reality in the period 1900 to 1950 is dealt with by chopping that time interval off the chart.

As long as they continue to fail then expect further choppings on their next report.

Jimbo
June 12, 2014 12:29 pm

“Back in 1990, the IPCC view on climate sensitivity was a range from 1.5 to 4.5˚C. In 2014 the IPCC view on climate sensitivity is a range from 1.5 to 4.5˚C. 24 years have passed and billions of dollars spent and absolutely nothing has been learned!”

Oh but they have learned a lot about climate sensitivity – but won’t say. Therefore the range stays the same. 😉

Brian
June 12, 2014 12:34 pm

Mr Mosher @ 10:05am
I’m pretty sure that no one was trying to create sweeping international policy based on the un-proven neutrino.

kadaka (KD Knoebel)
June 12, 2014 12:35 pm

To achieve alignment of the HadCRUT4 reality with the IPCC models the following temperature corrections need to be applied: 1990 +0.5; 2001 -0.6; 2007 +0.6; 2014 -0.3. I cannot think of any good reason to continuously change the temperature datum other than to create a barrier to auditing the model results.
This would all be so much easier if we would simply align reality with the models.
Maybe Gavin will finish what Hansen started so they can check that one off the list.

euanmearns
June 12, 2014 12:36 pm

Bob Tisdale: The HadCRUT4 datum is 1961 to 1990 according to this source:
http://www.cru.uea.ac.uk/cru/data/temperature/
CRU:UEA – whoever they may be 😉 It’s possible that rgbatduke has hit on the reason for this continuously changing at the IPCC, but it seems to me there is a ±0.6˚C fluctuation in the baseline of the models. There is no excuse for the IPCC not making succinct comments about their methodology in the summary reports (“readers may note that the baseline has changed by x˚C since the last report because…”), but they don’t do it. I suspect 1) because they don’t know what they are doing and 2) if they did know, they would not know why. I simply don’t have the time to dig through the thousands of pages of the main reports.
Steven Mosher: I have no problem with blue-skies scientists hammering out data, theory and empiricism over decades or centuries. If it were not for energy policies being based on the findings of this fledgling, imperfect science then I probably wouldn’t give a damn. But the politicization, the suspicion that politics may be directing scientific outcomes, and the consequences for society of misguided energy policies are what drive me. I know many feel the same, given the ascendancy of the Green movement and its influence. In another post I say this:

While European strategy has failed to reduce global emissions, CO2 emissions have failed to raise global temperatures since 1997 as well. This is the key point. If it were the case that temperatures were rising out of control then we should all be backing measures to address the problem. But they are not (Figure 1).

euanmearns
June 12, 2014 12:46 pm

blackadderthe4th – good vid, but it doesn’t tell the whole story, since they made a number of forecasts and only one was right: the one with the low climate sensitivity. So the speaker is actually being disingenuous.

kadaka (KD Knoebel)
June 12, 2014 12:58 pm

Dear Moderators,
It appears all these images are hosted on euanmearns-dot-com and were never copied over to the (free) WUWT WP account. As Energy Matters does have a donate button and does not appear to be hosted for free on WP or similar, I’m wondering if WUWT is blasting through Mearns’ bandwidth allotment and racking him up a significant bill.
[Don’t know right now, thank you for the heads up. .mod]

HAS
June 12, 2014 1:11 pm

Just to add to the confusion about what model to use, I’ve never understood why the absolute global temperatures they produce should differ from each other and the historic record, and only the anomalies show some measure of agreement.

rgbatduke
June 12, 2014 1:30 pm

of course there is a prediction of the models. You simply average them. This is a simple mathematical operation.
it is very simple. The best model is the model that matches the data best.
That model is a simple average of all models.
1. The model of models exists. (just do the math)
2. The model of models outperforms any given model. (just do the math)
3. what works should be preferred over what doesnt work.
Of course we currently dont like the approach of taking the average of all models.
From a purist standpoint it’s a hack. There are all sorts of things wrong with the approach.
But it works better than other approaches. So absent a better alternative.. you go with what you have.. That doesnt necessarily mean you would use it for policy,

Except, of course, that we are using it for policy and being told that it is reliable as pretty much the sole basis for the many statements of “confidence” scattered throughout, say, AR5 especially in the SPM.
I won’t address your model of models nonsense, because that’s precisely what it is. Oh, wait, yes I will.
First, there is no “statistical ensemble of independent and identically distributed models each representing perfect physics”. Really, there isn’t for any problem, not just this one. The term ensemble, especially when used in physics, has a very precise meaning:
http://en.wikipedia.org/wiki/Statistical_ensemble_%28mathematical_physics%29
and it is this precise meaning that describing the CMIP5 collection as a “MultiModel Ensemble” is attempting to co-opt. Obviously calling this collection an “ensemble” is sheer nonsense and/or wishful thinking. And yes, there are indeed ensembles used in climate science, and even used for good, not evil. If you visit here:
http://en.wikipedia.org/wiki/Climate_ensemble
you will find that there are two marginally defensible uses of the term ensemble in climate science and two indefensible uses. The two defensible uses are:
* The perturbed physics ensemble, which attempts to “average” over our ignorance — which is basically what ensembles in statistical physics always do (and you can take me as a modest expert in this as I did Monte Carlo computations in statistical physics for maybe 15 years, many of them gigaflop-years of total computation back when this was expensive). These basically jiggle the not-precisely-known parameters to see what happens within the plausible phase space of their presumed values.
* The initial condition ensemble, which tries to average over the chaotic nature of the simulations (weather prediction is where Lorenz discovered deterministic chaos in the first place). Unfortunately, the whole point of chaos is the divergence of future trajectories with some sort of Lyapunov exponent describing how fast even tiny perturbations within this “ensemble” fill the phase space of possible futures, along with the fact that this phase space itself is structured by attractors in high numbers of (fractally distributed) dimensions. Hence weather prediction runs out of gas in a matter of weeks, and no amount of additional computation can keep up with the growth in the size of the phase space integrated over even fairly tightly constrained initial conditions.
The two questionable uses are:
* The “forcing ensemble”. Do even the people who coined this name know what it means? Seriously. This basically means that they take CO_2 and make it go up at different schedules, and, with an entire mountain of additional assumptions on how it works to force the climate, claim that they can extract a “warming signal” that isn’t just built into the assumptions in the first place but now it is backed by an “ensemble” of computations and hence has some statistical relevance. Nonsense! In any event, they aren’t statistically sampling a space of forcings in any meaningful way, because there is no such thing. To the extent that this is either reliable or confirmable by comparison with reality, it is already implicit in the perturbed parameter and initial condition ensembles. The only reason it even has a name is to sell people on the danger of “forcings”.
* The “Grand Ensemble”, or an ensemble of ensembles. This is what the MME is pretending to be, but note well the diagram — it is not, not even remotely, a grand ensemble, which is basically a layering of the two valid ensembles above, perturbed physics and perturbed initial conditions. Indeed, there isn’t any particular need for the layering — one can perturb physics and initial conditions in a single computation and in chaotic problems. Furthermore, we know perfectly well that perturbing both the physics and the initial conditions in a single computation in nonlinear chaotic dynamical open systems does not produce the same distribution of outcomes as perturbing the physics and initial conditions independently as if the two problems are in some sense separable. Or rather, it had better not — because the only meaningful “ensemble” average is one that averages over our appropriately distributed, unbiased ignorance. Since we are simultaneously ignorant of physics and initial conditions and do not know the distribution or bias of our ignorance, the sole point of using ensemble methods at all is to compare a perturbed parameter ensemble (where both physics and initial conditions are sampled) and then compare the predictions to reality, one model at a time, to discover if our models are sampling the correct ranges of either one, with the correct distribution of future outcomes.
The default assumption in all uses of statistical mechanics in physics is that suitably averaged reality does the most probable thing, not the least probable thing, nearly all of the time. There are lots of reasons for this, but the heart of them all is the Central Limit Theorem. Once one averages over the details of a correct ensemble, those details cease to matter as the CLT kicks in and the sample means start to be normally distributed around the true mean. And I’d be embarrassed if I told you how old I was when I finally had this epiphanic realization, in spite of taking courses that attempted to convey it to me on numerous occasions.
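A quick, generic numerical illustration of that CLT point (nothing climate-specific; an exponential distribution stands in for any skewed quantity being averaged):

```python
# Minimal CLT demonstration: means of samples drawn from a skewed (exponential)
# distribution cluster around the true mean, with scatter shrinking like 1/sqrt(N).
import numpy as np

rng = np.random.default_rng(1)
true_mean = 1.0                      # mean of an Exponential(1) distribution

for n in (1, 10, 100, 1000):
    sample_means = rng.exponential(true_mean, size=(10000, n)).mean(axis=1)
    print(f"N={n:5d}  mean of means={sample_means.mean():.3f}  "
          f"std of means={sample_means.std():.3f}  (theory {true_mean/np.sqrt(n):.3f})")
```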
In AR5, the PPE is defined and used — per model — for good. Or it would be good if the outcomes of the PPE runs were individually compared to reality with an eye to rejecting bad models, which never seems to happen. In AR5, the MME is defined and used — collectively — for pure evil! Even the authors of chapter 9 acknowledge that there is no statistically defensible reason for flat averaging a bunch of PPE means from many non-independent models and then asserting that the result somehow is a normal distribution around a true mean expected behavior.
rgb

richard verney
June 12, 2014 1:37 pm

Steven Mosher says:
June 12, 2014 at 10:05 am
//////////////
Whilst there is some merit in your point, you conveniently overlook the substantial difference.
How much money has been thrown at climate science in all its guises? How much on research into neutrinos? Orders of magnitude more have been spent on climate science, and we are not as far forward as we were in the 1970s. Climate science has regressed, not taken a step forward.

June 12, 2014 2:30 pm

Eustace Cranch:

So we are to implement painful and hideously expensive global policies, reducing worldwide standards of living, over “we don’t know?”

This is the crux of the matter. If climate science were some esoteric branch of physics that might not deliver an answer for 200 years, no one would care. But it’s not. It lies at the heart of global politics, the credibility of science today, and the welfare of human populations. And so we do care.
And it’s not that “we don’t know”. It’s that the evidence points strongly in one direction, and that is (IMO) that CO2 has a marginal impact on global surface temperatures: it could be anywhere from zero up to 1.5˚C per doubling of CO2.
To lay my cards on the table, I’m more concerned about deforestation and over-fishing. There have to be limits somewhere to what we can safely do to Earth’s ecosystems. The fact that we don’t know where these safe limits lie comes down to a lack of scientific rigour among those doing the work.

June 12, 2014 3:08 pm

Joel O’Bryan says:
June 12, 2014 at 11:10 am [ … ]
That’s my concern. If martial law is ever declared, we know what’s coming next.
+++++++++++++++++++++
blackadderthe4th,
Please. Richard Alley knows where his bread is buttered. Just because he falsely asserts that global warming is continuing, that doesn’t make it so.
Global warming has stopped. That’s what the real world is clearly telling us.

emsnews
June 12, 2014 3:48 pm

Even more clearly, global temperature has been declining for the last 9,000+ years. We are seeing jagged bumps up and down while the moving staircase is definitely going downwards, relentlessly.

Jimbo
June 12, 2014 3:57 pm

Steven Mosher says:
June 12, 2014 at 10:45 am

Mosher, are you looking for a new career in Climastrological modeling? Pisces says there is no future in that. That post of yours is full of garbage.

Steven Mosher says:
June 12, 2014 at 10:05 am

More garbage. Leave the models alone for a second and look at the temperature. You don’t need math for that.
How did we get to the stage where the obvious failure of the models turns into a contest over who can waffle best in the English language?

Jimbo
June 12, 2014 3:59 pm

Sheesh! I forgot that Mosher is actually good in English. Sorry Mosh.

Jimbo
June 12, 2014 4:06 pm

In my last comment I actually forgot that Mosh is good in English. Honestly. I did not mean to hint at credentials at all. My subconscious insight was not intended. Maybe Mosh can learn something from this. I have.

pat
June 12, 2014 4:19 pm

analyse this. Bloomberg trying to reinforce the ridiculous ABC/WaPo poll claiming Americans want CC action, even if the cost to them is significant:
11 June: Bloomberg: Lisa Lerer: Americans by 2 to 1 Would Pay More to Curb Climate Change
Americans are willing to bear the costs of combating climate change, and most are more likely to support a candidate seeking to address the issue. By an almost two-to-one margin, 62 percent to 33 percent, Americans say they would pay more for energy if it would mean a reduction in pollution from carbon emissions, according to the Bloomberg National Poll.
While Republicans were split, with 46 percent willing to pay more and 49 percent opposed to it, 82 percent of Democrats and 60 percent of independents say they’d accept higher bills…
The EPA proposal is likely to be modified during a public comment period, and a bipartisan coalition of coal-state lawmakers have vowed to pass legislation to block them…
Obama’s proposal has divided his party along regional lines. While Democratic Senate candidates in Iowa and Colorado back the emission limits, others in coal states such as West Virginia and Kentucky have distanced themselves from them…
http://www.bloomberg.com/news/2014-06-10/americans-by-2-to-1-would-pay-more-to-curb-climate-change.html
the sample is the equivalent of approx 72 Australians being polled; only 5% are really concerned about CC, yet Bloomberg/Selzer get an alleged huge majority willing to pay for action in the two pertinent questions! CAGW figures are always suspect.
Bloomberg News National Poll – SELZER & COMPANY
June 10 (Bloomberg) — The Bloomberg News National Poll, conducted June 6-9 for Bloomberg News by Selzer & Co. of Des Moines, IA, is based on interviews with 1,005 U.S. adults ages 18 or older.
http://media.bloomberg.com/bb/avfile/rg._mQ264POU

Jimbo
June 12, 2014 4:47 pm

Why all the intricacies of language?
THE MODELS FAILED! That is all that matters. Emergency over. Climate sensitivity is not as bad as we previously thought! The jig is almost up. Please stop with the details and look at the failure: it’s called the embarrassed naked elephant in the room.
http://www.euanmearns.com/wp-content/uploads/2014/06/CMIP5-90-models-global-Tsfc-vs-obs-thru-2013.png
In any other science things would have moved on by now. What keeps this particular con going is the possibility of losing a LOT OF MONEY. Hey, I didn’t tell the BBC to invest huge chunks of their pensions into climate schemes. There are many other examples from individuals.

bobl
June 12, 2014 4:47 pm

@mosher
Steven
The model-of-models approach is fatally flawed unless the models that are shown to be wrong are incrementally excluded from the analysis. Nobody has ever gotten closer to an answer by averaging a correct value with a wildly incorrect value. The objective is to converge upon a model that is representative of temperature change over time. To do that, models that have failed must be excluded from the analysis; instead we see the same motley ensemble with no predictive value, because at best only one of them is correct and at worst none of them are correct.
Any individual model for which the observed temperature falls outside the 2-sigma envelope for any significant cumulative period, let’s say more than 8% of 30 years (2.5 years or more), should be dropped forthwith. If it turns out that all the models are excluded, then clearly it’s back to the drawing board for climate science, because they have something very wrong. Instead we tenaciously grasp onto models that we know don’t work?
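As an illustration only, here is a minimal sketch of the screening rule proposed above, with made-up arrays standing in for model runs and observations (nothing here is real CMIP or HadCRUT4 data):

```python
# Sketch of the proposed screen: drop any model whose own 2-sigma run envelope
# fails to contain the observations for more than 8% of a 30-year window.
# All arrays below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(42)
years = 30
obs = 0.01 * np.arange(years) + 0.1 * rng.standard_normal(years)   # fake observations

# Fake ensembles: model name -> array of runs, shape (n_runs, years)
models = {
    "model_A": 0.01 * np.arange(years) + 0.15 * rng.standard_normal((8, years)),  # built to track obs
    "model_B": 0.04 * np.arange(years) + 0.15 * rng.standard_normal((8, years)),  # built to run hot
}

def keep_model(runs, obs, max_frac=0.08):
    mean = runs.mean(axis=0)
    sigma = runs.std(axis=0, ddof=1)
    outside = np.abs(obs - mean) > 2.0 * sigma
    return outside.mean() <= max_frac          # True -> keep, False -> drop

for name, runs in models.items():
    print(name, "keep" if keep_model(runs, obs) else "drop")
```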

Jimmy Finley
June 12, 2014 4:54 pm

rgbatduke says:
June 12, 2014 at 9:16 am
Aww, c’mon, RGB, tell us what you really think! Guffaw. And then Mosh steps into the pit with some drivel, and gets about 52 broadsides. Someone ought to figure out how to make this “climate science” a Punch and Judy show, charging admission at the door for the next breathless revelations, which of course will change by the next show. Couldn’t be more fun, except that Punch (warmists, communists, hate humans, etc.) actually wants to kill Judy (everybody else).

June 12, 2014 4:59 pm

… To abuse Winston Churchill just a bit: Never have so many concluded so much from so little actual statistically defensible computation and argumentation based on data.
rgb

It has been obvious since the 1980s that the “scientists” cannot predict the future climate, and yet billions (trillions, perhaps?) have been squandered on the results of these worthless computer models. It would be far better to get some pagans to examine the entrails of a chicken. Far better.
Can we find a Roman haruspex to practice climate divination via inspection of entrails? Any experts in divination out there?

June 12, 2014 5:01 pm

Thanks, Euan Mearns. Very good work.

Niff
June 12, 2014 5:08 pm

“In the UK, if a commercial research organisation were found cooking research results in order to make money with no regard for public safety they would find the authorities knocking at their door.”
…But of course the IPCC sits under no jurisdiction that can even poke a stick at them, never mind prosecute them.

bobl
June 12, 2014 5:18 pm

Niff,
This is a very good point. Maybe we should stop banging our heads against a brick wall and instead simply try to bring about some reform that makes the UN subject to legal challenge through an international court. Much of the UN’s stupidity comes about because of its lawlessness.
In particular, such a reform may make it possible to constrain UN assaults on national sovereignty via treaties, by allowing persons disadvantaged by those treaties to sue the UN for damages.

June 12, 2014 6:10 pm

Mearns says: Comparing models with reality is severely hampered by the poor practice adopted by the IPCC … . … The variations in model output are consequently controlled by physical parameters like climate sensitivity … .There is no good scientific reason for the IPCC not adopting today the correct approach … that the sensitivity of the climate TO CO2 is likely much less than 1.5˚C … . … [T]he IPCC view on climate sensitivity was a range from 1.5 to 4.5˚C. … The wool has been pulled over the eyes of policy makers, governments and the public to the extent of total brain washing. Bold, caps added.
IPCC says: Climate sensitivity refers to the … change in the annual mean global surface temperature [MGST] following a doubling of the atmospheric equivalent carbon dioxide concentration. Bold added.
A skein of that wool is over the author’s own eyes. Reality is that no one measures, and no climatologist is known who tries to measure, the rise in temperature FOLLOWING a rise in atmospheric CO2 concentration (C). A principle of science is that a cause must precede its effects, and to its credit, IPCC properly defines climate sensitivity as an effect, i.e., an event following the rise in CO2, and models it accordingly. However, once defined, twice forgotten, apparently. No one in this field even attempts to assess the lead/lag relationship between CO2 and MGST. The definition is a cover for what climatologists actually do.
Reality is that what they measure is the rise in temperature DURING a rise in CO2, and they rationalize that T must be the effect of C past. It’s a bootstrap: AGW is evidence of AGW. But the Law of Solubility teaches that a rise in temperature causes water to emit CO2, and the temperature-dependent flux from the ocean, being 15 times the greater (in GtC/yr), swamps man’s feeble emissions.
AGW has the Cause & Effect relationship exactly back-to-front. Global warming is the cause of the rise in atmospheric CO2. Nevertheless, one can always measure the relative rates of Delta-T to 2 x C, and some use the mere existence of that ratio as implicit evidence that C causes T. It doesn’t. T causes C, and reality is that climate sensitivity as defined is zero.
Yet the wrong number is still calculable from measurements to be inserted into models.
How fortuitous for everyone that the ratio is smaller than IPCC’s lower limit. The toast fell jelly side up.
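For what it is worth, the lead/lag assessment described above is straightforward to sketch. Here is a generic Python illustration; the two series, the lag range, and the three-step offset are synthetic placeholders, not real CO2 or HadCRUT4 data, so it demonstrates only the method, not either side’s conclusion:

```python
# Generic lead/lag sketch: correlate detrended "temperature" against detrended "CO2"
# shifted by various lags, and report the lag with the strongest correlation.
# A positive best lag means the first series leads the second; negative, the reverse.
import numpy as np

rng = np.random.default_rng(7)
n = 150
co2 = np.cumsum(rng.standard_normal(n))                 # placeholder "CO2" series
temp = np.roll(co2, 3) + 0.5 * rng.standard_normal(n)   # placeholder "T", lagging by 3 steps

def detrend(x):
    t = np.arange(len(x))
    slope, intercept = np.polyfit(t, x, 1)
    return x - (slope * t + intercept)

def lag_correlation(x, y, lag):
    # Correlation of y(t) with x(t - lag); lag > 0 means x leads y.
    if lag > 0:
        return np.corrcoef(x[:-lag], y[lag:])[0, 1]
    if lag < 0:
        return np.corrcoef(x[-lag:], y[:lag])[0, 1]
    return np.corrcoef(x, y)[0, 1]

x, y = detrend(co2), detrend(temp)
lags = range(-10, 11)
corrs = [lag_correlation(x, y, k) for k in lags]
best = max(zip(lags, corrs), key=lambda kv: abs(kv[1]))
print("best lag (steps) and correlation:", best)
```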

Stupendus
June 12, 2014 7:44 pm

Just take any model, any one of them, divide by 3 (my estimate of the CO2 fudge factor they use) and bingo, the model matches reality.
My thesis… take out the value for CO2 forcing and the model will work… there, done. Now you can send the Nobel Prize to me at…

June 12, 2014 8:26 pm

Mosher writes “Basically, Pauli postulates a unicorn.”
And he was right. The difference between AGW theory and conservation of energy is that conservation of energy is a strong principle, whereas sensitivity predictions of 3˚C are weak.
If Pauli’s unicorn didn’t exist then our understanding of nature would have been turned on its head. If sensitivity turned out to be 0.5˚C it would be no big deal, and we’d work out what feedbacks make that so. It wouldn’t be earth-shattering.

June 12, 2014 10:11 pm

rgbatduke says:
June 12, 2014 at 9:16 am
Excellent.

rgbatduke
June 12, 2014 10:29 pm

Nobody has ever gotten closer to an answer by averaging a correct value with a wildly incorrect value. The objective is to converge upon a model that is representative of temperature change over time. To do that, models that have failed must be excluded from the analysis; instead we see the same motley ensemble with no predictive value, because at best only one of them is correct and at worst none of them are correct.

And yet, that is precisely what they do not do in computing the MME mean as explicitly described and stated in AR5, chapter 9, section 9.2.2.3. Nor do they account for the fact that some model results included as “one model” represent (themselves) the mean results of hundreds of runs, while others only managed to complete a run or three, so that for some models 100 votes turns out to have the same statistical weight as one vote for some others. Also explicitly acknowledged in AR5, 9.2.2.3. Nor do they account for the fact that e.g. GISS has some six or seven models that are all closely related variants of the same basic program that are all counted as “independent” samples in this pseudo-“ensemble” which both strongly biases the mean of the entire collection towards whatever that variant produces and makes the “error estimate” even more meaningless than it already was. Oh, wait, no it doesn’t. That’s impossible, isn’t it, just like a negative pressure. And yes, AR5 explicitly acknowledges that error in 9.2.2.3. It even acknowledges that this creates “challenges” in interpreting any MME results as having any predictive or other value whatsoever.
What it doesn’t do is present the fact that all of this averaging and averaging of averages carefully conceals the fact that the actual model runs, one at a time, don’t generally look anything like the real climate. And that’s the rub. There is no such thing as an “average result” for the climate produced by any model. If a model produces daytime temperatures of 400 C and nighttime temperatures of 200 C, we cannot average this and assert “Look, our model produces average temperatures of 300 C, and that’s pretty close, so our model might be right.” If we have two models and one produces averages of 400 C and the other produces averages of 200 C, we cannot average them to 300 C and then crow that we’ve produced successful models. If we produce a model that shows it getting hot at night and cool during the day, but otherwise gets an average temperature that is in perfect agreement with measurements — sorry, Charlie, still a failed model, and no, we cannot average it with other failed models in any way justified by the laws of statistics and claim that doing the averaging makes it any more likely to be correct.
If it did (or rather, if it appeared to), it would be pure luck, as there is no theory of statistics that proves or asserts that human errors or biases are uniformly and symmetrically distributed in such a way that they generally cancel. Indeed, we have many centuries of history that prove the exact opposite.
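A toy numerical illustration of the weighting and pseudo-replication complaints above (all numbers are invented for illustration; nothing here comes from AR5 or any actual CMIP5 model):

```python
# Toy illustration of two of the MME complaints: (1) a model mean built from 100 runs
# gets the same "vote" as a model that completed a single run; (2) several closely
# related variants of one code base pull the flat multi-model mean toward whatever
# they happen to produce. All numbers invented.
import numpy as np

# (model name, per-model mean warming trend, number of runs behind that mean)
models = [
    ("big_ensemble_model", 0.10, 100),
    ("single_run_model",   0.30,   1),
    ("variant_1",          0.25,   3),   # three near-identical variants of one code base
    ("variant_2",          0.26,   3),
    ("variant_3",          0.25,   3),
]

means = np.array([m[1] for m in models])
runs = np.array([m[2] for m in models])

flat_mean = means.mean()                        # what a flat MME average does
run_weighted = np.average(means, weights=runs)  # weighting by number of runs instead

print(f"flat multi-model mean: {flat_mean:.3f}")
print(f"run-weighted mean    : {run_weighted:.3f}")
# The two differ, and neither choice has a statistical justification if the models
# are not independent, identically distributed samples of anything.
```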
It is remarkable that otherwise intelligent people will go to such great lengths to defend what is fairly obviously a horrendous abuse of statistics undertaken for the sole purpose of defending a point they already believe to be true, independent of any possible evidence or argument that might contradict it. And to be fair, this happens with monotonous frequency on both sides of this particular debate. Warmists are “certain” that they are right and a catastrophe is inevitable, they are equally zealous in their self-righteous condemnation of anybody that disagrees, and in their willingness to spend any amount of other people’s wealth, health, happiness and life to take measures that even the proponents acknowledge will have no meaningful effect. Deniers are equally “certain” that there is no such thing as a greenhouse effect, that more CO_2 is if anything desirable, that everybody who disagrees with them in this is a commie pinko liberal, and that it is all part of a giant conspiracy.
I personally have no idea if the hypothesis is right or wrong. I don’t think anybody does. I think it is about as well-founded in actual evidence and argument as belief in Santa Claus or the Greek Pantheon, I think that the climate models that predict it are way beyond merely “dubious”, but just because climate models are broken and generally suck doesn’t mean that the basic assertion is wrong, only that we cannot trust the means most often used to try to prove it right. I do think that looking over the correspondence between CO_2 and temperature over the Phanerozoic Era (the last 600 million years, where we have decent data on both via a variety of proxies) that — well, there isn’t any. Correspondence, that is. CO_2 levels have generally but irregularly descended from 7000 ppm to the recent low water mark of 190 ppm (which really was nearly catastrophic). Temperatures have fluctuated over a much smaller relative range and are flat to rising slightly over the exact same interval. There is simply no visible first order correlation between atmospheric CO_2 and global average temperature visible anywhere in the geological record. Nobody who looked at the data, or a scatter plot of the data, would conclude “Gee, CO_2 is correlated with temperature.” They’d conclude the opposite.
rgb

Mark Luhman
June 12, 2014 10:30 pm

Euan, I find it funny how in the private marketplace you get paid for your results; with good results you can continue working. Yet in the public marketplace the reverse is true. Bad results mean more money to study or fix them. You can see that in climate science, the post office, and the VA. Yet somehow we should be happy that government so lavishly rewards incompetence.

Girma
June 12, 2014 11:12 pm

Euan Mearns
“…makes it difficult to see how models initiated in 2001 now compare with 13 years of observations since.”
You can find it here:
http://www.ipcc.ch/publications_and_data/ar4/wg1/en/figure-ts-26.html
Please make the comparison with observations and update your post so that it will tell the story that you want to tell in a single graph.
This figure of the IPCC’s is probably the most important figure for the near-term projection and it IS WRONG.
Thank you Euan for your work.

kowalk
June 13, 2014 12:24 am

I’m always wondering how there can be a sensible notion of ‘global’ temperature. We know climate zones and seasons of the year. From the CET record it is well known that summer temperatures have increased much less (0.3°C) than winter temperatures (about 1.3°C) over 350 years. Thus the Atlantic climate seems to reduce the stress of cold winters, while summer temperatures increase only mildly. Is there anybody in the UK/Scotland/Ireland complaining about this fact? The IPCC should inform their clients (I mean the taxpayers) whether they can really expect a CAGW, or only a comfortable local warming.

richardscourtney
June 13, 2014 12:39 am

rgbatduke:
Thank you for your trenchant posts throughout this thread.
I write to support your post at June 12, 2014 at 10:29 pm.
I always summarise the issue of why averaging GCM results is an error by saying
Average wrong is wrong.
This is comprehensible to people with no knowledge of statistical concepts such as independent data sets.
Richard

June 13, 2014 12:50 am

The problem here is that the FAR 1990 “business as usual” scenario assumes that we would have gotten more than 1.2 W/m² of additional forcing since 1990 (see FAR Annex, Figure A.6). But the actual emissions we have seen since 1990 have been lower than FAR’s BAU scenario for CO2, lower for methane, and lower for CFCs (see FAR Annex, Figure A.3, and compare to NOAA’s Aggregate Greenhouse Gas Index at http://www.esrl.noaa.gov/gmd/aggi/aggi.html). Which means the actual forcing we have seen since 1990 is about half that of FAR’s BAU scenario, and is in fact very close to FAR’s Scenario D.
And the predicted temperature rise under Scenario D is pretty close to what we have actually experienced (See FAR Chapter 6, Figure 6.11). Which means that the FAR models pretty much get it right, if we give them the right inputs.
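As a rough, illustrative cross-check of the “about half” claim, here is a back-of-envelope sketch using the standard simplified CO2-only forcing expression from Myhre et al. (1998); the concentrations are approximate annual means and the other gases are ignored, so this is indicative only:

```python
# Back-of-envelope CO2-only forcing change since 1990, using the simplified
# expression dF = 5.35 * ln(C / C0) W/m^2 (Myhre et al. 1998). Concentrations are
# approximate annual means; methane, N2O and CFCs (covered by the NOAA AGGI) are
# deliberately left out.
import math

c_1990 = 354.0   # ppm, approximate
c_2013 = 396.0   # ppm, approximate

delta_f = 5.35 * math.log(c_2013 / c_1990)
print(f"CO2 forcing change 1990-2013: ~{delta_f:.2f} W/m^2")
# Roughly 0.6 W/m^2 from CO2 alone, i.e. well below the FAR BAU assumption quoted above.
```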

June 13, 2014 2:28 am

Reblogged this on The GOLDEN RULE and commented:
A scientific look at the IPCC’s failure to produce realistic computer models, the basis for alarmist action against our society’s welfare.
“Examining past reports is quite enlightening since it reveals what the IPCC has learned in the last 24 years.
I conclude that nothing has been learned other than how to obfuscate, mislead and deceive.”

Bill Illis
June 13, 2014 5:46 am

We can’t really take Mosher seriously in this thread, since he is active in adjusting the temperature trend upward so that it starts to get closer to the climate models’ predictions.
He shouldn’t be talking about the scientific method when he is violating its basic premise about objective observations.

BioBob
June 13, 2014 6:33 am

rgbatduke said: there is no theory of statistics that proves or asserts that human errors or biases are uniformly and symmetrically distributed in such a way that they generally cancel. Indeed, we have many centuries of history that prove the exact opposite.
———————
This witticism deserves amplification!! Thanks for that.
Good thing I had already swallowed my coffee, otherwise it would have been all over my screen & keyboard.

Dougmanxx
June 13, 2014 6:55 am

All “anomalies”. What was the “average temperature” they predicted? Or did they even predict one? Since we know “adjustments” to the past “data” are ongoing, I think this is of critical importance. Why rely on an “anomaly”, which can be achieved by fiddling with past data? Pin them down to an actual “average temperature”!

June 13, 2014 8:42 am

So, the idea that 24 years is a long time for zero progress is unscientific.
That’s great, except for the part where warmists decided over a decade ago that the science was settled and we had to start wrecking the economy RIGHT NOW or we’re all doomed. Oh yeah, and spend tens of billions annually on the research that’s either totally unnecessary (settled!) or totally unproductive (reality).
Neutrino theory proponents… not so much.

Peter Foster
June 13, 2014 10:32 am

The divergence between the IPCC and reality did not start in 2000; it started in 1930. If one accepts the HadCRU/GISS/NOAA reconstructions of global temperature, then one has to accept that all the meteorologists of the world misread their thermometers, i.e. from 1930 to 1940 they progressively read the thermometers higher than they should have, from 1940 to 1975 they went from reading them too high to reading them too low, and from 1975 to the present they have progressively under-read them.
HadCRU and co came to the salvation of these errant meteorologists by correcting their collective errors. These are the adjustments they make to the historic record.
Not only that, but the corrections for station deterioration and UHI are all but non-existent.
The latest corrections to the Iceland record illustrate the point. Compare the GISS 2011 temperature reconstruction for Reykjavik with their 2013 reconstruction. Like many places in the world, including the US, the earlier graphs show that the 1940 period was as warm as or warmer than today, so essentially the trend (crest to crest of the 60-year warming/cooling cycle) is flat.
Ahh, but then GISS, HadCRU and co do not want to recognise that this natural cycle exists.
The question is: why are their clearly faulty reconstructions continually used?
Rather than doing their thing of trying to manufacture a global temperature with the paucity of data they have, would it not be better to take a number of well-distributed stations with good siting history etc. and simply average the annual mean temperatures from these stations?
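Peter Foster’s suggested approach is simple enough to sketch. The file name and column names below are hypothetical placeholders, so this only illustrates the plain averaging he describes, not an actual dataset or an endorsement of the method:

```python
# Sketch of the simple approach suggested above: average the annual mean temperatures
# of a fixed set of well-sited stations, with no gridding, infilling or adjustment.
# "stations.csv" and its columns (year, station, tmean_c) are hypothetical.
import csv
from collections import defaultdict

sums = defaultdict(float)
counts = defaultdict(int)

with open("stations.csv", newline="") as f:
    for row in csv.DictReader(f):
        year = int(row["year"])
        sums[year] += float(row["tmean_c"])
        counts[year] += 1

for year in sorted(sums):
    print(year, round(sums[year] / counts[year], 2))
```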

Chris R.
June 13, 2014 2:42 pm

Two points with respect to Steven Mosher’s attempted comparison with high-energy physics.
One, and relatively trivial: I do not know where this author got the idea that Hahn & Meitner discovered that the beta decay spectrum was continuous in 1911. Meitner & Hahn wrote to Rutherford in 1911:

RaE (Bismuth-210) is the worst of all. We can only obtain a fairly broad band. We formerly thought that it was as narrow as the other bands [as found in other emitters], but that is not true.

But Hahn thought a secondary process was interfering with the spectrum; he didn’t think that the spectrum was continuous. That discovery was made by James Chadwick, later discoverer of the neutron, just before World War 1 (1914) (see Chadwick’s Nobel Prize biography, http://chadwick.nobmer.com/1.htm ).
Second, the reason that the neutrino postulate was not immediately laughed out of existence was that the continuous spectrum observed in beta decay was a genuine crisis for physics. Conservation of energy is one of the most fundamental principles of physics, not just modern physics, going all the way back to Newton. Yet here, in this one area, it looked as though energy was not conserved. Despite Mosh’s derogatory “unicorn” comment, Pauli’s postulate of a light, uncharged particle that interacted weakly and thus was very hard to detect was, in fact, the most conservative possible explanation put forward for this crisis. (Conservative in the sense of not requiring any fundamentally new physics.) Cowan & Reines did in fact, in 1956, directly demonstrate the existence of the neutrino.
Now, note the differences.
(1) The principles involved for high-energy physics were simple. Particles interacting with each other according to relatively well-understood laws behaved pretty much as expected, except in the domain of beta decay. High-energy physics was an emerging field, but there were principles that were the absolute BEDROCK of physics being challenged. Conservation of energy had been observed in macroscopic form for 3 centuries. The most conservative, cautious interpretation was that a new particle must exist. This new particle allowed the laws of physics that had been observed at the macroscopic scale, and in other respects at the microscopic scale, to remain intact.
(2) By contrast, climate modeling is using computer codes to solve physics problems that are, on the scale of the entire Earth, not well understood. Yes, fluid flow, Navier-Stokes equations, yes, yes – but here there are a number of competing and confounding factors (clouds, actions at different scales, and don’t forget potential numerical analysis inaccuracies) which simply do not exist in the [beta] decay problem.
Conclusion: Mosher’s analogy and plea for patience is quite a bit of a reach.

June 13, 2014 2:51 pm

Well, in any case, Climate is and Climate does its climate thing, and we have thermometers to measure the facts – and as Alexius Meinong reminds us: “Truth is a purely human construct, but facts are eternal.” So I thought, hmmm…, temperature arises from the zeroth law of thermodynamics and all the rest is nothing but energy shovelling around. What happens, therefore, if I try measuring ‘Climate’ in energy terms, like kWh or joules, even treating the IPCC forecast of a 4°C temperature rise by 2100 as ‘gospel’? I tried, with the result at http://cleanenergypundit.blogspot.co.uk/2014/06/eating-sun-fourth-estatelondon-2009.html .
I published my whole spreadsheet with its calculation methods shown, so that anyone can replicate it in about 10 minutes flat, I reckon. It is then also possible to change any input value and see what happens to the end result in a split second. E.g. I have chosen 80 years as the time to achieve the 4°C temperature rise ‘consensed’ by the politicians of the IPCC. If anyone thinks that should be some other period, just put it into your spreadsheet, again for an instant result. The same applies to any other input value anyone wishes to explore.
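I have not checked the linked spreadsheet, but the basic arithmetic of putting a temperature rise into energy terms can be sketched as follows (a back-of-envelope for the atmosphere alone, using standard values for its mass and specific heat; the ocean, which holds vastly more heat, is ignored here):

```python
# Back-of-envelope: how much energy does a 4 C warming of the atmosphere represent,
# and what continuous power does that imply if it accumulates over 80 years?
# Uses standard values for atmospheric mass and specific heat; ocean heat uptake,
# which is far larger, is deliberately ignored.
M_ATM = 5.14e18        # kg, total mass of the atmosphere
CP_AIR = 1004.0        # J/(kg K), specific heat of dry air at constant pressure
DT = 4.0               # K, the temperature rise taken as 'gospel' above
YEARS = 80             # the 80-year horizon chosen in the comment

energy_j = M_ATM * CP_AIR * DT
energy_kwh = energy_j / 3.6e6
power_w = energy_j / (YEARS * 365.25 * 24 * 3600)

print(f"energy: {energy_j:.2e} J  ({energy_kwh:.2e} kWh)")
print(f"average power over {YEARS} years: {power_w / 1e12:.1f} TW")
```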

Bill Illis
June 13, 2014 6:45 pm

Computer models are like fashion models. Seductive, unreliable, easily corrupted and they make ordinarily sensible people make fools of themselves.

Village Idiot
June 14, 2014 12:07 am

“These reservoir models are likely every bit as complex as computer simulations of Earth’s atmosphere.”
Now that sentence made me choke on my coffee! Is it worth reading any further?

Village Idiot
June 14, 2014 12:58 am

So HadCRUT4 = reality, not one global temperature estimate. Interesting concept (and choice)

Mark BLR
June 14, 2014 7:34 am

“I could not find a summary of the Second Assessment Report (SAR) from 1994 …”
The nearest graph in the (1995) SAR to the ones in the article is probably “Figure 6.24 : Extreme range of possible changes in global mean temperature”, on page 323 of the SAR WG1 report (/ page 337 [ of 558 ] of the PDF “photocopied” file on the IPCC website) …

Jasper Gee
June 14, 2014 11:21 am

Steven Mosher June 12 2014 10:05 am begins by saying “24 years have past …”
So, he doesn’t even realise the difference between “past” and “passed”. What other elementary things doesn’t he know?

woohoo02
June 14, 2014 2:08 pm

The fraud is evident. What we need is the funds to take these charlatans to court and make them prove their case. Maybe crowdfunding?

June 14, 2014 2:36 pm

@Euan Mearns
It’s A1FI (for Fossil Intensive), not A1F1.

Editor
June 15, 2014 6:43 am

euanmearns says: “Bob Tisdale: The HadCRUT4 datum is 1961 to 1990 according to this source…”
Sorry for the delay in responding to your incomplete answer.
My question pertained to the data AND the models. Both are in your illustrations. Your answer only pertained to the data.

June 15, 2014 1:10 pm

Just a small add-on to the comments by the author, Hoffman and Mearns: I’d say a petroleum reservoir model is produced by a multidisciplinary team. It’s definitely not a geology model. These models are better described as 3D dynamic (sometimes compositional), (sometimes coupled with geomechanics), (sometimes non-isothermal), and so on. The model grids can have millions of cells, and they can also get obscenely complex because they include fault offsets, layer discontinuities, and of course the producing and injection wells.
I don’t dare criticize the climate modelers because I haven’t been on the inside of their workflows. I assume climate modelers have the means to run regional coupled models which feed upscaled boundary conditions to GCMs and vice versa?

June 16, 2014 7:31 am

rgb, brilliant as usual.
I became more than passingly interested in climate change because of my background in simulations, and realized we were being fed manure while being told it was pudding. This got me interested in looking at the surface data, and what I found was that global temperature averages were vastly different from the actual measurements.
Now, from a science point of view, sure, so what. But they are basing policy on complete crap, policy that is far from benign.
It all stinks!

June 17, 2014 1:17 am

Girma, June 12: It’s an interesting chart. The main thing it shows is that the central forecasts have not changed with time. In other words, the IPCC think they got it right in 1990 and have no cause to update their models in light of new evidence that has cost billions to acquire. Here is where the fudge is: they initiate all the models in 2000 and not at the published initiation dates. And they are applying “random” datum corrections to create an illusion of forecast skill.