The temperature forecasting track record of the IPCC

Guest essay by Euan Mearns of Energy Matters

In geology we use computer models to simulate complex processes. A good example would be 4D simulation of fluid flow in oil and gas reservoirs. These reservoir models are likely every bit as complex as computer simulations of Earth’s atmosphere. An important part of the modelling process is to compare model realisations with what actually comes to pass after oil or gas production has begun. This is called history matching. At the outset the models are always wrong, but as more data is gathered they are updated and refined to the point that they have skill in hindcasting what just happened and forecasting what the future holds. This informs the commercial decision-making process.

The IPCC (Intergovernmental Panel on Climate Change) has now published five major reports, beginning with the First Assessment Report (FAR) in 1990. This provides an opportunity to compare what was forecast with what has come to pass. Examining past reports is quite enlightening, since it reveals what the IPCC has learned in the last 24 years.

I conclude that nothing has been learned other than how to obfuscate, mislead and deceive.

Figure 1 Temperature forecasts from the FAR (1990). Is this the best forecast the IPCC has ever made? It is clearly stated in the caption that each model uses the same emissions scenario. Hence the differences between Low, Best and High estimates are down to different physical assumptions such as climate sensitivity to CO2. Holding the key variable constant (CO2 emissions trajectory) allows the reader to see how different scientific judgements play out. This is the correct way to do this. All models are initiated in 1850 and by the year 2000 already display significant divergence. This is what should happen. So how does this compare to what came to pass and with subsequent IPCC practice?

I am aware that many others will have carried out this exercise before, and in a much more sophisticated way than I do here. The best example I am aware of was done by Roy Spencer [1], who produced this splendid chart that also drew some criticism.

Figure 2 Comparison of multiple IPCC models with reality compiled by Roy Spencer. The point that reality tracks along the low boundary of the models has been made many times by IPCC sceptics. The only scientists this reality appears to have escaped are those attached to the IPCC.

My approach is much simpler and cruder. I have simply pasted IPCC graphics into Excel charts, where I compare the IPCC forecasts with the HadCRUT4 temperature reconstructions. As we shall see, the IPCC has an extraordinarily lax approach to temperature datums, and in each example a different adjustment has to be made to HadCRUT4 to make it comparable with the IPCC framework.

Figure 3 Comparison of the FAR (1990) temperature forecasts with HadCRUT4. HadCRUT4 data was downloaded from WoodForTrees [2] and annual averages calculated.
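For anyone wanting to reproduce this step, the monthly-to-annual reduction is simple. A minimal Python sketch is given below; the input tuples are made-up placeholders, not real HadCRUT4 values, which arrive as monthly (year, month, anomaly) records:

```python
# Sketch: reduce monthly anomaly records to annual means, as done for Figure 3.
# The data fed in here is illustrative only, not the actual HadCRUT4 download.
from collections import defaultdict

def annual_averages(monthly):
    """monthly: iterable of (year, month, anomaly) -> {year: mean anomaly}."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for year, _month, anom in monthly:
        sums[year] += anom
        counts[year] += 1
    return {y: sums[y] / counts[y] for y in sums}
```

Feeding in the twelve monthly values for each year then yields one annual anomaly per year, which is what is plotted against the FAR forecasts.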

Figure 3 shows how the temperature forecasts from the FAR (1990) [3] compare with reality. I cannot easily find the parameters used to define the Low, Best and High models, but the report states that a range of climate sensitivities from 1.5 to 4.5˚C is used. It should be abundantly clear that the Low model is the one that lies closest to the reality of HadCRUT4. The High model is already running about 1.2˚C too warm in 2013.

Figure 4 The TAR (2001) introduced the hockey stick. The observed temperature record is spliced onto the proxy record, the model record is spliced onto the observed record, and no opportunity to examine the veracity of the models is offered. But 13 years have since passed, and we can see how reality compares with the models over that very short time period.

I could not find a summary of the Second Assessment Report (SAR, 1995) and so jump to the Third Assessment Report (TAR) from 2001 [4]. This was the year (I believe) that the hockey stick was born (Figure 4). In the imaginary world of the IPCC, Northern Hemisphere temperatures were constant from 1000 to 1900 AD, with not the faintest trace of the Medieval Warm Period or the Little Ice Age, during which real people either prospered or died by the million. The actual temperature record is spliced onto the proxy record and the model world is spliced onto that to create a picture of future temperature catastrophe. So how does this compare with reality?

Figure 5 From 1850 to 2001 the IPCC background image is plotting observations (not model output) that agree with the HadCRUT4 observations. Well done IPCC! The detail of what has happened since 2001 is shown in Figure 6. To have any value or meaning all of the models should have been initiated in 1850. We would then see that the majority are running far too hot by 2001.

Figure 5 shows how HadCRUT4 compares with the model world. The fit from 1850 to 2001 is excellent. That is because the background image is simply plotting observations in this period. I have nevertheless had to subtract 0.6˚C from HadCRUT4 to get it to match the observations, while a decade earlier I had to add 0.5˚C. The 250-year x-axis scale makes it difficult to see how models initiated in 2001 now compare with the 13 years of observations since. Figure 6 shows a blow-up of the detail.
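The shifting datums complained of here amount to re-baselining an anomaly series. A sketch of what a consistent re-baselining looks like is below; the years and values are illustrative, not the actual IPCC reference periods:

```python
# Sketch: shift an anomaly series so its mean over a chosen reference period
# is zero. Using one fixed reference period throughout would remove the need
# for the ad-hoc corrections of 0.5-0.6 degrees described in the text.
def rebaseline(series, ref_years):
    """series: {year: anomaly}. Returns anomalies relative to the mean over ref_years."""
    ref_mean = sum(series[y] for y in ref_years) / len(ref_years)
    return {y: a - ref_mean for y, a in series.items()}
```

Re-baselining changes every value by the same constant, so trends and model-observation gaps are unaffected; only the zero line moves, which is why an undocumented datum change is an obstacle to auditing rather than a substantive adjustment.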

Figure 6 The single vertical grid line is the year 2000. The blue line is HadCRUT4 (reality) moving sideways while all of the models are moving up.

The detailed excerpt illustrates the nature of the problem in evaluating IPCC models. While real-world temperatures have moved sideways since about 1997 and all the model trends are clearly going up, there is really not enough time to evaluate the models properly. To be scientifically valid the models should have been run from 1850, as before (Figure 1), but they have not been. Had they been, by 2001 they would have been widely divergent (as in 1990) and it would be easy to pick the winners. But they are brought together conveniently by initiating the models at around the year 2000. Scientifically this is bad practice.

Figure 7 IPCC future temperature scenarios from AR4 published in 2007. It seems that the IPCC has taken on board the need to initiate models in the past and in this case the initiation date stays at 2000 offering the same 14 years to compare models with what came to pass.

For the Fourth Assessment Report (AR4) [5] we move on to 2007 and the summary shown in Figure 7. By this stage I’m unsure what the B1 to A1FI scenarios mean. The caption to this figure in the report says:

Figure SPM.5. Solid lines are multi-model global averages of surface warming (relative to 1980–1999) for the scenarios A2, A1B and B1, shown as continuations of the 20th century simulations. Shading denotes the ±1 standard deviation range of individual model annual averages. The orange line is for the experiment where concentrations were held constant at year 2000 values. The grey bars at right indicate the best estimate (solid line within each bar) and the likely range assessed for the six SRES marker scenarios. The assessment of the best estimate and likely ranges in the grey bars includes the AOGCMs in the left part of the figure, as well as results from a hierarchy of independent models and observational constraints. {Figures 10.4 and 10.29}

Implicit in this caption is the assertion that the pre-year-2000 black line is a simulation produced by the post-2000 models. The orange line denotes constant CO2, and the fact that this is a virtually flat line shows that the IPCC at that time believed that variance in CO2 was the only process capable of producing temperature change on Earth. I don’t know if the B1 to A1FI scenarios all use the same or different CO2 increase trajectories. What I do know for sure is that it is physically impossible for models that incorporate a range of physical input variables, initiated in the year 1900, to be closely aligned and to converge on the year 2000 as shown here, as demonstrated by the IPCC models published in 1990 (Figure 1).

So how do the 2007 simulations stack up against reality?

Figure 7 Comparison of AR4 models with reality. Since 2000, reality is tracking along the lower bound of the models as observed by Roy Spencer and many others. If anything, reality is aligned with the zero anthropogenic forcing model shown in orange.

Last time out I had to subtract 0.6˚C to align reality with the IPCC models. Now I have to add 0.6˚C to HadCRUT4 to achieve alignment. And the luxury of tracking history from 1850 has now been curtailed to 1900. The pre-2000 simulations align pretty well with observed temperatures from 1940, even though we already know it is impossible for the pre-2000 simulations to have been produced by a large number of different computer models programmed to do different things. How can this be? Post-2000, reality seems to be aligned best with the orange no-CO2-rise / no-anthropogenic-forcing model.

From 1900 to 1950 the alleged simulations do not in fact reproduce reality at all well (Figure 8). The actual temperature record rises at a steeper gradient than the model record. And reality has much greater variability due to natural processes that the IPCC by and large ignore.

Figure 8 From 1900 to 1950 the alleged AR4 simulations actually do a very poor job of simulating reality, HadCRUT4 in blue.

Figure 9 The IPCC view from AR5 (2014). The inconvenient mismatch 1900 to 1950 observed in AR4 is dealt with by simply chopping the chart to 1950. The flat blue line is essentially equivalent to the flat orange line shown in AR4.

The Fifth Assessment Report (AR5) was published this year, and the IPCC’s current view on future temperatures is shown in Figure 9 [6]. The inconvenient mismatch of alleged model data with reality in the period 1900 to 1950 is dealt with by chopping that time interval off the chart. A very simple simulation picture is presented. Future temperature trajectories are shown for a range of Representative Concentration Pathways (RCPs). This is completely the wrong approach, since the IPCC is no longer modelling climate but different human, societal and political choices that result in different CO2 trajectories. Skepticalscience provides these descriptions [7]:

RCP2.6 was developed by the IMAGE modeling team of the PBL Netherlands Environmental Assessment Agency. The emission pathway is representative of scenarios in the literature that lead to very low greenhouse gas concentration levels. It is a “peak-and-decline” scenario; its radiative forcing level first reaches a value of around 3.1 W/m2 by mid-century, and returns to 2.6 W/m2 by 2100. In order to reach such radiative forcing levels, greenhouse gas emissions (and indirectly emissions of air pollutants) are reduced substantially over time (Van Vuuren et al. 2007a). (Characteristics quoted from van Vuuren et al. 2011)

AND

RCP 8.5 was developed using the MESSAGE model and the IIASA Integrated Assessment Framework by the International Institute for Applied Systems Analysis (IIASA), Austria. This RCP is characterized by increasing greenhouse gas emissions over time, representative of scenarios in the literature that lead to high greenhouse gas concentration levels (Riahi et al. 2007).

This is Mickey Mouse science speak. In essence they show that 32 models programmed with a low future emissions scenario have lower temperature trajectories than 39 models programmed with high future emissions trajectories.

The models are initiated in 2005 (the better practice of using a year-2000 datum as employed in AR4 is ditched) and from 1950 to 2005 it is alleged that 42 models provide a reasonable version of reality (see below). We do not know which, if any, of the 71 post-2005 models are included in the pre-2005 group. We do know that pre-2005 each of the models should be using actual CO2 and other greenhouse gas concentrations, and since they are all closely aligned we must assume they all use similar climate sensitivities. What the reader really wants to see is how varying climate sensitivity influences different models using fixed CO2 trajectories, and this is clearly not done. The modelling work shown in Figure 9 is effectively worthless. Nevertheless, let us see how it compares with reality.

Figure 10 Comparison of reality with the AR5 model scenarios.

With models initiated in 2005 we have only 8 years to compare models with reality. This time I have to subtract 0.3˚C from HadCRUT4 to get alignment with the models. Pre-2005 the models allegedly reproduce reality from 1950. Pre-1950 we are denied a view of how the models performed. Post-2005 it is clear that reality is tracking along the lower limit of the two uncertainty envelopes that are plotted. This is an observation made by many others [e.g. 1].

Concluding comments

  • To achieve alignment of the HadCRUT4 reality with the IPCC models the following temperature corrections need to be applied: 1990 +0.5˚C; 2001 −0.6˚C; 2007 +0.6˚C; 2014 −0.3˚C. I cannot think of any good reason to continually change the temperature datum other than to create a barrier to auditing the model results.
  • Comparing models with reality is severely hampered by the poor practice adopted by the IPCC in data presentation. Back in 1990 it was done the correct way: all models were initiated in 1850 and used the same CO2 emissions trajectories. The variations in model output are consequently controlled by physical parameters like climate sensitivity, and with the 164 years that have passed since 1850 it is straightforward to select the models that provide the best match with reality. In 1990 it was quite clear that the “Low Model” was best, almost certainly pointing to a low climate sensitivity.
  • There is no good scientific reason for the IPCC not adopting today the correct approach adopted in 1990 other than to obscure the fact that the sensitivity of the climate to CO2 is likely much less than 1.5˚C based on my and others’ assertion that a component of the Twentieth Century warming is natural.
  • Back in 1990, the IPCC view on climate sensitivity was a range from 1.5 to 4.5˚C. In 2014 the IPCC view on climate sensitivity is a range from 1.5 to 4.5˚C. 24 years have passed and billions of dollars spent and absolutely nothing has been learned! The wool has been pulled over the eyes of policy makers, governments and the public to the extent of total brainwashing. Trillions of dollars have been misallocated on energy infrastructure that will ultimately lead to widespread misery among millions.
  • In the UK, if a commercial research organisation were found cooking research results in order to make money with no regard for public safety they would find the authorities knocking at their door.

References

[1] Roy Spencer: 95% of Climate Models Agree: The Observations Must be Wrong

[2] Wood For Trees

[3] IPCC: First Assessment Report – FAR

[4] IPCC: Third Assessment Report – TAR

[5] IPCC: Fourth Assessment Report – AR4

[6] IPCC: Fifth Assessment Report – AR5

[7] Skepticalscience: The Beginner’s Guide to Representative Concentration Pathways


Alan Poirier

Nice piece of work. No one should be surprised. CO2 is not a pollutant. It is not the reason the globe has warmed. Abbott gets it, Harper gets it. The next Republican president will get it too. Now, if Cameron and Merkel fall into line, we can end this madness.

Resourceguy

The AMO is already doing its own thing–down.

A rather simple transfer function applied to the best estimate of TSI (Wang, et al. 2005) produces an excellent estimator of the HADCRUT3 record going back to the invention of the thermometer. Kudos to HADCRUT because that fact tends to validate its algorithm. That fact also provides a simple explanation for the failure of models to forecast temperature: climatologists are trying to forecast the Sun from global average surface temperature. This is analogous to understanding what they are thinking by reading their EEGs.
The existence of that transfer function also shows that IPCC’s claim for the existence of AGW based on the GHE plus fingerprints of human activity on the CO2 record can’t be true unless man’s CO2 emissions are affecting the Sun. To the extent that AGW might exist, it is not measurable.

rgbatduke

You are missing several important issues in this discussion — many of which I have gone over on this list. First, you are implicitly buying into the notion that the Multimodel Ensemble Mean (MME mean) is “the prediction of the models” when in reality, there is no such thing as the prediction of the models. There are only the predictions of each model, one at a time. This completely explains the incorrect mean variation, but it also obscures the fact that actually, the individual models have many times greater variability than nature, not less. Further, they have the wrong autocorrelation times. Basically, they make the climate fluctuate far more wildly than it actually does, and with the wrong spectrum of relaxation times. Analysis of their fluctuation-dissipation is (I suspect) direct evidence of incorrect dynamics.
But even this isn’t sufficient, because each model is not presenting individual model runs from a given set of initial conditions; the data they contribute to the MME mean is already an average — the Perturbed Parameter Ensemble average — over many runs with slightly different initial conditions and parametric settings. The idea is for the model to predict the “average” future by erasing the enormous spread of results produced by the fact that the models are nonlinear and chaotic, so that butterfly-effect perturbations cause the same model, started from initial conditions differing by far less than we can actually measure, sometimes to lead to rapid extreme heating, sometimes to rapid extreme cooling, and sometimes to something in between, or switching rapidly from one to the other.
If you analyze the fluctuations of the individual model runs and compare their relative variation to that of the real climate, again they generally have completely incorrect relaxation times, nearly an order of magnitude too much variance, and sketch out an envelope with no practical predictive value. And this is (as you say) ignoring the fact that the models are invariably normalized “close to the present” — either in the “reference period” shown in grey in the figures above or — to minimize the growing divergence — they re-normalize them to maximally correspond in (say) 1990 at the expense of the agreement in the reference period visible in e.g. figure 9.2a of AR5.
The open abuse of the axioms and rules of ordinary statistics in all of the figures above and in the assertions of “confidence”, from the draft or from the actual report, is — again in my opinion — one of the greatest scandals in modern science.
To abuse Winston Churchill just a bit: Never have so many concluded so much from so little actual statistically defensible computation and argumentation based on data.
rgb
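The variance and autocorrelation comparison described in the comment above can be sketched in a few lines; the diagnostics below are standard, but any series you feed them here would be synthetic placeholders, not actual model runs:

```python
# Toy sketch: sample variance and lag-1 autocorrelation of a time series, the
# two statistics the comment says separate individual model runs (too much
# variance, wrong relaxation times) from the observed record.
def variance(x):
    m = sum(x) / len(x)
    return sum((v - m) ** 2 for v in x) / len(x)

def lag1_autocorr(x):
    m = sum(x) / len(x)
    num = sum((x[i] - m) * (x[i + 1] - m) for i in range(len(x) - 1))
    den = sum((v - m) ** 2 for v in x)
    return num / den
```

Applied to each model run and to the observations separately, these two numbers give a quick first check of the claim: runs with an order of magnitude more variance, or a much shorter decorrelation time, than the observed series are fluctuating in a way nature does not.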

emsnews

Any graphs showing global warming must be first extended to include all the ice ages. This gives us perspective. Minus that, it is utterly meaningless for predicting the future since the future is very heavily weighted towards repeating the past.
And in the past, the warmest times are right when an Ice Age ends, and we are not nearly so warm today as when we first figured out how to do agriculture and tame animals.

Neil

Try using Mike’s Nature trick to hide the decline in correlation between observed and projected temperatures.
No one will notice then.

Nice article, but I have to disagree with the statement that oil and gas reservoir models are every bit as complex as computer simulations of Earth’s atmosphere. There are some similarities, both being based on partial differential equations (actually difference equations when translated into computer code), but modeling the atmosphere has a greater number of contributing factors. Indeed, the climate models, as complex as they are, are not complex enough. On top of that is the matter of scale. The size of an oil field is much smaller than that of the planet’s entire atmosphere, so the reservoir simulation can use a much finer computational grid. One of the major problems with atmospheric simulation is the coarseness of the grid, on the order of hundreds of kilometers on a side for each cell. This is larger than many important phenomena, like tropical storms and clouds. Using a finer grid increases the computational cost for each time step, and with current computer technology this simply cannot be done. So for these and many other reasons, oil and gas reservoir simulations can provide useful output but climate models have failed miserably.

richardscourtney

rgbatduke:
I write to draw attention to your post at June 12, 2014 at 9:16 am which is here.
Your entire post is good, but I wave a football scarf at these two excerpts.

First, you are implicitly buying into the notion that the Multimodel Ensemble Mean (MME mean) is “the prediction of the models” when in reality, there is no such thing as the prediction of the models. There are only the predictions of each model, one at a time. This completely explains the incorrect mean variation, but it also obscures the fact that actually, the individual models have many times greater variability than nature, not less. Further, they have the wrong autocorrelation times. Basically, they make the climate fluctuate far more wildly than it actually does, and with the wrong spectrum of relaxation times.

and

If you analyze the fluctuations of the individual model runs and compare their relative variation to that of the real climate, again they generally have completely incorrect relaxation times, nearly an order of magnitude too much variance, and sketch out an envelope with no practical predictive value .

Yes! Oh, yes!
Richard

Euan Mearns: If memory serves, the IPCC used different base years for anomalies for each of their reports, and sometimes the base years varied within the report. What are they for the illustrations you’ve chosen? And are your presentations of HADCRUT4 data referenced to the same base years?

Oldseadog

“….. obfuscate, mislead and deceive.”
Why am I not surprised?

“24 years have passed and billions of dollars spent and absolutely nothing has been learned!”
http://www.amazon.com/Neutrino-Hunters-Thrilling-Particle-Universe/dp/0374220638
Interesting story about how science works
“In 1911 Lise Meitner and Otto Hahn performed an experiment that showed that the energies of electrons emitted by beta decay had a continuous rather than discrete spectrum. This was in apparent contradiction to the law of conservation of energy, as it appeared that energy was lost in the beta decay process. ”
So here we have an experiment with real data and real observations, that contradict the law of the conservation of energy.
Question: do scientists collectively run around yelling “falsified”? Nope.
20 years later….
“The neutrino was originally postulated, in 1931, almost as “a form of scientific witchcraft”, says Jayawardhana. “When scientists couldn’t account for energy that went missing during radioactive decay, one theorist found it necessary to invent a new particle to account for that missing energy,” he adds. The theorist was the physicist Wolfgang Pauli.”
Basically, Pauli postulates a unicorn.
More than 20 years later we get Project Poltergeist:
http://documentaryheaven.com/project-poltergeist/
Next enter in the late 60s
John Bahcall. The first experiment to test his solar neutrino theory
http://en.wikipedia.org/wiki/Homestake_Experiment
Oops: the experiment detects 1/3 of what theory predicts.
Feynman talks to Bahcall. What does Feynman say? Does Feynman reject the theory that is falsified by the data?
Nope: see page 91 of the book. When faced with a conflict between theory and data, Feynman says
“we don’t know”: we don’t know whether the theory is wrong, or the data is somehow wrong, or if there is something we are missing.
Fast forward another 30 years and the mystery is solved, 90 years or so from the initial problem.
So the idea that 24 years is a long time for zero progress is unscientific. If we look at the data, the history of science, we see that the theory that scientific progress can be judged within a couple of decades is not a good theory. A data-driven look at how science actually works shows the following:
1. Discrepancy between theory and observation settles nothing. As Feynman explained to Bahcall, “we don’t know”: could be the theory, could be the data.
2. Progress is not continuous.
3. No working scientist ever made the skeptical argument that it “wasn’t their job” to come up with a better, more complete theory.

And, my consistent and complete complaint: THERE IS NO BASIS TO USE AVERAGE TEMPERATURES. Complete and utter, indefensible nonsense. Unless one is doing ENTHALPY, i.e. the ENERGY PER CUBIC VOLUME OF THE ATMOSPHERE, which demands a rather COMPLETE and detailed humidity profile, all the time and historically, almost all of this “work” is worse than the wizard of Oz in the Wizard of OZ. I’ve asked, I’ve begged, I’ve pleaded for a simple “first pass” at taking, merely, surface station humidities with TEMPERATURES and trying to figure out local ENTHALPY. And then to see if there is ANY significant trend there.
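The “first pass” enthalpy calculation being asked for is easy to sketch. Assuming the standard Magnus approximation for saturation vapour pressure and textbook constants (approximations throughout; this is an illustration, not a vetted meteorological routine), it might look like this:

```python
import math

# Sketch: specific enthalpy of moist air (kJ per kg of dry air) from
# temperature (deg C), relative humidity (%) and pressure (hPa). Uses the
# Magnus approximation for saturation vapour pressure; all constants are
# standard textbook approximations.
def moist_enthalpy(t_c, rh_pct, p_hpa=1013.25):
    es = 6.112 * math.exp(17.67 * t_c / (t_c + 243.5))  # saturation vapour pressure, hPa
    e = (rh_pct / 100.0) * es                           # actual vapour pressure, hPa
    w = 0.622 * e / (p_hpa - e)                         # mixing ratio, kg/kg
    return 1.006 * t_c + w * (2501.0 + 1.86 * t_c)      # sensible + latent terms
```

Two parcels at 30˚C but at 20% and 90% relative humidity differ by tens of kJ/kg, which is exactly the commenter’s point: averaging temperatures alone ignores the energy carried by water vapour.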

Mike Smith

“Back in 1990, the IPCC view on climate sensitivity was a range from 1.5 to 4.5˚C. In 2014 the IPCC view on climate sensitivity is a range from 1.5 to 4.5˚C. 24 years have passed and billions of dollars spent and absolutely nothing has been learned!”
Bingo! And I’m 97% certain the climate sensitivity will turn out to be somewhat less than 1˚C.

TheOtherJohnInCA

Alan Poirier gives Republicans too much credit. Americans do not elect people based on how knowledgeable or reasonable they are. They are elected based on how well they campaign. Even the best American presidents of the latter half of the 20th century only won because they were the better campaigners.
We have no idea what the next POTUS thinks of AGW. At best, we will know what the MSM deigns to share about what he/she knows, and there is no guarantee that what they share will reflect his/her views or reality.

“First, you are implicitly buying into the notion that the Multimodel Ensemble Mean (MME mean) is “the prediction of the models” when in reality, there is no such thing as the prediction of the models. There are only the predictions of each model, one at a time.”
Of course there is a prediction of the models. You simply average them. This is a simple mathematical operation.
It is very simple. The best model is the model that matches the data best.
That model is a simple average of all models.
1. The model of models exists (just do the math).
2. The model of models outperforms any given model (just do the math).
3. What works should be preferred over what doesn’t work.
Of course we currently don’t like the approach of taking the average of all models.
From a purist standpoint it’s a hack. There are all sorts of things wrong with the approach.
But it works better than other approaches, so absent a better alternative you go with what you have. That doesn’t necessarily mean you would use it for policy.
So, we might argue that a model of models is not the approach we would prefer to take.
We would prefer to select one model and improve it. But pragmatically and practically,
we often do things that “don’t make sense” because they are the best (but not perfect) we currently have. Until another approach demonstrates better performance you are kinda stuck with what works best.
It’s fun to look at what folks do for hurricanes:
http://www.nhc.noaa.gov/verification/verify6.shtml
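The “model of models” being debated here is just a pointwise average, and its skill can be scored against observations with an RMSE. A toy sketch follows; any series fed to it would be synthetic placeholders, not CMIP output:

```python
# Toy sketch: the multi-model ensemble mean ("model of models") is a
# pointwise average of the individual model series, scored here against
# observations by root-mean-square error.
def mme_mean(runs):
    """runs: list of equal-length model series -> pointwise ensemble mean."""
    return [sum(vals) / len(vals) for vals in zip(*runs)]

def rmse(pred, obs):
    """Root-mean-square error between a prediction and observations."""
    return (sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)) ** 0.5
```

Comparing `rmse(mme_mean(runs), obs)` with `rmse(run, obs)` for each individual run is the calculation behind the claim that the ensemble mean outperforms any single model, and equally behind the counter-claim that the mean is not a physical prediction of anything.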

Eustace Cranch

Steven Mosher says:
June 12, 2014 at 10:05 am
OK, fine.
So we are to implement painful and hideously expensive global policies, reducing worldwide standards of living, over “we don’t know?”

Neil

@Steven Mosher,
Very true and very interesting. I don’t recall a cap-and-trade on the ‘missing’ energy, though.

Gunga Din

The solution to the IPCC’s problem is simple. They need more hot air.

William Astley

In support of and further comment to:
“Comparing models with reality is severely hampered by the poor practice adopted by the IPCC in data presentation.”
I have commercial experience in testing and debugging complex modeling software (senior technical specialist, primary responsibility for scenario identification and commercial definition to facilitate debugging/model development to ensure the model meets the needs of technical users). It is standard and an absolute necessary practice to force the modeling software supplier to run specific, defined modeling scenarios to validate or invalidate models, to help debug the software, and to provide insight into the weighting of parameters/requirements of the modeling software. Payment for the modeling software is contingent on performance for the multiple ‘testing’ scenarios.
It is pathetic, embarrassing and ridiculous that the IPCC does not include a modeling scenario that starts in 1850, that includes the rise of atmospheric CO2, and that compares actual temperature rise to predicted temperature rise, as that scenario would provide unequivocal proof that the general circulation models (GCMs) that are the basis for the extreme AGW agenda have multiple fundamental errors, and that the majority of the warming in the last 150 years was due to something else besides the increase in atmospheric CO2.
There are multiple observations that indicate the GCMs are incorrect. For example:
A) High Latitudinal Warming (lack of tropical region warming) Paradox
The warming in the last 150 years is primarily at high latitudes, which does not match what the GCMs predict. As changes to atmospheric CO2 are quickly equalized in the atmosphere, the potential for CO2 to increase the temperature of the earth should be the same at every latitude, if all other factors were the same.
As the actual increase in forcing due to the increase in atmospheric CO2 is directly proportional to the amount of long wave radiation that was emitted at the latitude in question before the increase in CO2, the largest forcing change and the greatest increase in temperature should have occurred at the equator. This is not observed.
http://wattsupwiththat.files.wordpress.com/2012/12/ipcc-ar5draft-fig-1-4.gif
http://arxiv.org/ftp/arxiv/papers/0809/0809.0581.pdf
Limits on CO2 Climate Forcing from Recent Temperature Data of Earth
The global atmospheric temperature anomalies of Earth reached a maximum in 1998 which has not been exceeded during the subsequent 10 years (William: 16 years and counting). The global anomalies are calculated from the average of climate effects occurring in the tropical and the extratropical latitude bands. El Niño/La Niña effects in the tropical band are shown to explain the 1998 maximum while variations in the background of the global anomalies largely come from climate effects in the northern extratropics. These effects do not have the signature associated with CO2 climate forcing. (William: This observation indicates something is fundamentally incorrect with the IPCC models, likely negative feedback in the tropics due to increased or decreased planetary cloud cover resisting forcing). However, the data show a small underlying positive trend that is consistent with CO2 climate forcing with no-feedback. (William: This indicates a significant portion of the 20th century warming was due to something other than CO2 forcing.)
….The effects in the northern extratropics are not consistent with CO2 forcing alone.
An underlying temperature trend of 0.062±0.010 ºK/decade was estimated from data in the tropical latitude band. Corrections to this trend value from solar and aerosol climate forcings are estimated to be a fraction of this value. The trend expected from CO2 climate forcing is 0.070g ºC/decade, where g is the gain due to any feedback. If the underlying trend is due to CO2 then g ~ 1. Models giving values of g greater than 1 would need a negative climate forcing to partially cancel that from CO2. This negative forcing cannot be from aerosols.
These conclusions are contrary to the IPCC [2007] statement: “[M]ost of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations.”
B) Lack of high altitude warming in tropical troposphere paradox
In addition to the latitude dependence of the forcing change, the actual forcing due to the increase in atmospheric CO2 is altitude dependent. At lower altitudes in the atmosphere there is more CO2 (the amount of forcing depends on the number of molecules per unit volume, and there are more CO2 molecules near the surface of the planet because of the higher pressure) and there is more water vapor (due to both temperature and atmospheric pressure). As almost half of the CO2 absorption spectrum overlaps with water, and the greenhouse warming potential saturates with increasing concentration (the forcing change is logarithmic, so twice as large an increase in CO2 is required to cause the same forcing change, which is a very good way to validate or invalidate the GCMs as CO2 rises), theoretically the majority of the forcing change should occur higher in the troposphere where there is less water vapor and there are fewer CO2 molecules. The warming of the surface of the earth then occurs due to long wave radiation that is emitted from the higher regions of the troposphere back down to the surface. This higher troposphere warming is not observed.
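The logarithmic relation mentioned above is usually written with the simplified expression ΔF ≈ 5.35·ln(C/C0) W/m². That formula is the standard textbook approximation, not something taken from this thread; a minimal sketch showing that each doubling of CO2 adds the same forcing increment:

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Simplified logarithmic CO2 forcing in W/m^2: dF = 5.35 * ln(C/C0).
    The 5.35 coefficient is the commonly quoted approximation."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# Equal forcing per doubling: 280->560 ppm and 560->1120 ppm each add ~3.7 W/m^2
first_doubling = co2_forcing(560) - co2_forcing(280)
second_doubling = co2_forcing(1120) - co2_forcing(560)
print(round(first_doubling, 2), round(second_doubling, 2))  # 3.71 3.71
```

This equal-increment-per-doubling property is exactly what the comment means by saturation: going from 280 to 560 ppm buys the same forcing as going from 560 all the way to 1120 ppm.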
Analyzing piecewise segments where there were temperature changes and/or CO2 changes would show that a key missing forcing function (four different mechanisms related to solar magnetic cycle changes) is causing the planet to warm and cool, and that something specifically inhibits the CO2 forcing mechanism above around 180 ppm.
http://icecap.us/images/uploads/DOUGLASPAPER.pdf
A comparison of tropical temperature trends with model predictions
We examine tropospheric temperature trends of 67 runs from 22 ‘Climate of the 20th Century’ model simulations and try to reconcile them with the best available updated observations (in the tropics during the satellite era). Model results and observed temperature trends are in disagreement in most of the tropical troposphere, being separated by more than twice the uncertainty of the model mean. In layers near 5 km, the modelled trend is 100 to 300% higher than observed, and, above 8 km, modelled and observed trends have opposite signs. These conclusions contrast strongly with those of recent publications based on essentially the same data.

PeterB in Indianapolis

” 1. discrepancy between theory and observation settles nothing. As Feynman explained to Bahcall, ‘we dont know’. could be the theory, could be the data..
2. Progress is not continuous
3. no working scientist ever made the skeptical argument that it “wasnt their job” to come up
with a better more complete theory.”
Response to #1 – you omitted – perhaps intentionally, Feynman’s 3rd part of the explanation of “we don’t know”, which was WE COULD BE MISSING SOMETHING.
In the case of climate science, there are probably a myriad of things that scientists are still missing….
Response to #2 – yes progress is not continuous. In some cases, progress is even impeded by scientists who are clinging to a theory which is incorrect.
Response to #3 – Perhaps certain scientists need to be classified as “non-working scientists” in some cases, because some seem awfully reluctant to come up with a “better, more complete theory”.
Some theories, at least based upon available evidence, appear to be a good match to reality. Some theories appear to be an ok approximation of reality, but appear to be “missing something” so that they need to be modified to be made better and/or more complete. Some theories appear to be not so good at approximating reality, and no matter how much lipstick you try to put on the pig, it is still going to be a pig.
The current dominant theory of “climate change” certainly doesn’t appear, based on the evidence to fall into the first category. We shall see (probably sooner rather than later) whether it falls into category 2 or category 3.

joelobryan

Preaching to choir here at WUWT. The CAGW zealots take it on faith that man-made CO2 is “disrupting” the climate. Science and skeptical methods mean nothing to them, evidence John Holdren. Too many scientists who feed at the trough of government grants don’t want to endanger their paycheck by getting black balled by John Holdren’s dishonest NSF-puppet grant machine.
The lies from the Obama Administration are best understood via a CAGW Onion Theory.
This theory posits as follows:
The top lie is best embodied by the National Climate Assessment, a political statement designed to scare a naive public, deceptively packaged as “science” to enable a regulatory assault on US energy production via environmental and ecological justifications. The expanded scope of the Clean Water Act and the Endangered Species Act are prime examples of the deception occurring at this level. This layer keeps the environmental nut-jobs as a reliable base for Democrats.
Removing that environmental layer of lies about Climate Change reveals the next layer. Those “enhanced regulations” expand Executive branch power and control over a larger fraction of the economy. The lie here also enables more private lawsuits against industry, and especially oil and gas — simply a payoff to the tort bar. This layer keeps lawyers firmly in the Democrat Party camp.
Removing this layer of lies reveals the reach of the Federal government into areas previously off-limits, such as private and state lands that were the purview of States’ Rights via the 10th Amendment, i.e. powers not specifically granted to the Federal government are reserved to the People or the States. This is where things get interesting. Congress has not been complicit in this taking of individual and States’ rights. It has been enabled by actors like Harry Reid keeping Congress dysfunctional, since Democrats can’t control it without the House of Representatives. Individually, even many Democrat Congressmen and Senators express dismay at the power grabs of the EPA and Fish and Wildlife Service, but in reality they do nothing but talk. They lie, just like Harry Reid. Congress refused to pass Cap and Trade in 2010; now the EPA is trying to force the States to impose it. This onion layer animates the dishonesty of the John Podesta camp.
Peel that layer back and another deeper, darker sinister layer is revealed. That is one of growing Imperial Presidency power justified by the adherents by the layer above of Congressional dysfunction. This one is truly antithetical to the intent of the framers of the US Constitution. President Obama makes claims on former judicial areas by deciding not to enforce various provisions of law by calling them unconstitutional, thereby appointing himself Judicial powers. Indeed, we see the most ardent CAGW zealots claiming that a dictator is needed to impose CO2 emission reductions on an unwilling population. This so far is the most dangerous level.
To be considered a theory, this Onion Model of the current AGW alarmism must make testable predictions. And it does.
Prediction: Because of where this leads, the narcissist Obama will next attempt to remain in power beyond his constitutionally limited term, based on a claim of the need to save the planet from Climate Disruption/Change.

Greg

No surprise that the models match fairly well over 1960-1990; they are tuned to that period, as John Kennedy of the Met Office confirmed in our discussion at Climate Etc.
You can model any period fairly well if you have enough ‘forcing’ variables and a few “parameters”.
When you have to start clipping off the bits that don’t fit, you acknowledge your model does not work.

joelobryan

I neglected to include the Sustainable Energy (i.e. wind and solar) industries as a layer just below the top environmental layer. The solar industry, best embodied by Elon Musk, feeds voraciously at the trough of tax subsidies and tax write-offs. Billionaire Tom Steyer also operates at this level: he uses the environmental layer above while making huge billion-dollar bets on Green industry initiatives. His $100 million buy-out/bribe of Senate Democrats is merely an “investment” to help further the big bets that will return 10-20 fold more.

Greg

The pre-1900 period is important because there is a notable drop. This does not fit the simplistic CO2 + noise model.

rgbatduke: I accept what you say that model averages cannot represent a prediction – but that is what the IPCC present. Roy Spencer’s approach displays the variability you describe, but I seem to recall Roy ran into some criticism about his normalisation procedure that I found difficult to evaluate.

by erasing the enormous spread of results produced by the fact that the models are nonlinear and chaotic so that butterfly-effect perturbations cause the same model

This in itself tells us the models are wrong since we know that Earth’s climate is stable. This is what happens when you only model positive feedbacks – some real and some imagined – and ignore the negative feedbacks that must exist to maintain stability. I’m guessing the main negative feedback is convection and clouds. The models could be programmed to maintain uniform surface temperature by making convection a variable. I’m not saying that is correct but it would certainly stabilise the models.
Worse still, unstable models have now resulted in the concept of unstable climate.
Doug Hoffman: You may be right, but it would not surprise me if the model for a large reservoir had a similar number of cells to a climate model (I could be totally wrong here). Reservoir models suffer from the same scaling/resolution problems that climate models suffer from, but on a different scale. They have to include a lot of complex variables such as the PVT characteristics of petroleum and water, the porosity and permeability distribution of the reservoir and the aquifer below it, pressure, temperature etc.
The main point I’d make is that a reservoir modelling program will employ one reservoir modeller who understands most of the principles and how to work the software (Petrel) and he will call on the expertise of a geologist, a geophysicist, a reservoir engineer and a petrophysicist – all experts in different aspects of interpretation. A large model may represent 5 man years work. Those working on it have only one priority and that is to get it right. Compare this with the cash swallowing morass that is climate science.

Chuck Nolan

Mike Smith says:
June 12, 2014 at 10:16 am
“Back in 1990, the IPCC view on climate sensitivity was a range from 1.5 to 4.5˚C. In 2014 the IPCC view on climate sensitivity is a range from 1.5 to 4.5˚C. 24 years have past and billions of dollars spent and absolutely nothing has been learned!”
Bingo! And I’m 97% certain the climate sensitivity will turn out to be somewhat less than 1˚C
————————————-
97% certain? You made that up.
Where’s the raw data, the code, references and reviewer’s comments.
I’d like a list of your co-authors. Please include names, CVs and each person’s individual contributions to the research.
cn

Billy Liar

Steven Mosher says:
June 12, 2014 at 10:05 am
… The Thrilling Chase for the Ghostly Missing Heat …

richardscourtney

Steven Mosher:
At June 12, 2014 at 10:05 am you respond to

“24 years have past and billions of dollars spent and absolutely nothing has been learned! ”

by providing a load of irrelevant waffle about neutrino research and an assertion that research often has interruptions to progress of some decades.
Your response is an improvement on your usual practice of posting brief and abusive ambiguities, but it is equally laughable.
The expenditure on AGW research has been running in excess of US$ 5 billion a year for three decades: the US alone has been spending in excess of US$ 2.5 billion a year.
Nothing has resulted except your hope that something useful may result in future. Well, if half that money had been spent on e.g. providing sanitation in the developing world then something useful would have resulted.
And nothing useful is likely to result from AGW research conducted in accordance with your post at June 12, 2014 at 10:45 am. It advocates the statistically unsupportable action of averaging the outputs of different GCMs and your excuse for the action is this nonsense

Of course we currently dont like the approach of taking the average of all models.
From a purist standpoint it’s a hack. There are all sorts of things wrong with the approach.
But it works better than other approaches. So absent a better alternative.. you go with what you have.. That doesnt necessarily mean you would use it for policy,
So, we might argue that a model of models is not the approach we would prefer to take.
We would prefer to select one model and improve it. But pragmatically and practically,
we often do things that ‘dont make sense” because it is the best ( but not perfect) we currently have. Until another approach demonstrates better performance you are kinda stuck with what works best.

You admit “There are all sorts of things wrong with the approach.”
then claim “But it works better than other approaches.”
so you assert “you go with what you have”.
NO! No scientist would adopt a procedure s/he knows is “wrong” because of lack of something better.
A scientist assesses if a procedure is adequate, then uses an adequate procedure and rejects an inadequate procedure.
And an honest person certainly does NOT include the indications of a “wrong” procedure in a “Summary For Policymakers” if there is doubt that it should “be used for policy”.
Richard

The Ghost Of Big Jim Cooley

Mr Mosher, be so kind as to answer:
If we really don’t know, do you advocate that we do nothing until we know something?
I am not in the field of science; I’m a service engineer. Faced with a technical problem I would try to ascertain the cause. But if I don’t know what is causing it, then I would never advise my client to keep spending money until the problem is understood. And if my thoughts and actions don’t actually correspond to solving the issue, then I am clearly on the wrong track. I try something else.
I used to love the world of science, and envy scientists. Now, I don’t think my opinion of either could be lower. Science appears to be turning into a religion. I appreciate your point that maybe we don’t know, so why do scientists and those that represent them continually make out that they do know?

blackadderthe4th

AGU and Richard Alley, did the IPCC get its early projections right?

Sure did!

mebbe

I took the average of 3 GCM outputs by entering their values in my desktop calculator.
Not only am I a modeler, I’m a computer modeler.

Jimbo

The fifth assessment report (AR5) was published this year and the IPCC current view on future temperatures is shown in Figure 9 [6]. The inconvenient mismatch of alleged model data with reality in the period 1900 to 1950 is dealt with by chopping that time interval off the chart.

As long as they continue to fail, expect further chopping in their next report.

Jimbo

“Back in 1990, the IPCC view on climate sensitivity was a range from 1.5 to 4.5˚C. In 2014 the IPCC view on climate sensitivity is a range from 1.5 to 4.5˚C. 24 years have past and billions of dollars spent and absolutely nothing has been learned!”

Oh but they have learned a lot about climate sensitivity – but won’t say. Therefore the range stays the same. 😉

Brian

Mr Mosher @ 10:05am
I’m pretty sure that no one was trying to create sweeping international policy based on the un-proven neutrino.

kadaka (KD Knoebel)

To achieve alignment of the HadCRUT4 reality with the IPCC models the following temperature corrections need to be applied: 1990 +0.5; 2001 -0.6; 2007 +0.6; 2014 -0.3. I cannot think of any good reason to continuously change the temperature datum other than to create a barrier to auditing the model results.
This would all be so much easier if we would simply align reality with the models.
Maybe Gavin will finish what Hansen started so they can check that one off the list.

euanmearns

Bob Tisdale: The HadCRUT4 datum is 1961 to 1990 according to this source:
http://www.cru.uea.ac.uk/cru/data/temperature/
CRU:UEA – whoever they may be 😉 It’s possible that rgbatduke has hit on the reason for this continuous changing at the IPCC, but it seems to me there is a ±0.6 K fluctuation in the baseline of the models. There is no excuse for the IPCC not making succinct comments about their methodology in the summary reports: “readers may note that the baseline has changed by x˚C since the last report because…” But they don’t do it. I suspect 1) they don’t know what they are doing, and 2) if they did know, they would not know why. I simply don’t have the time to dig through the thousands of pages of the main reports.
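For what it’s worth, shifting an anomaly series onto a new baseline period is mechanically trivial, which makes the absence of any explanation in the reports all the stranger. A hypothetical sketch (function name and toy data are my own, purely illustrative, not CRU’s actual procedure):

```python
def rebaseline(years, anomalies, ref_start, ref_end):
    """Shift an anomaly series so its mean over the reference period is zero."""
    ref = [a for y, a in zip(years, anomalies) if ref_start <= y <= ref_end]
    offset = sum(ref) / len(ref)
    return [round(a - offset, 3) for a in anomalies]

# Toy series: move from one datum to a 1981-2010-style reference period.
years = [1980, 1990, 2000, 2010]
anoms = [0.1, 0.2, 0.4, 0.5]
print(rebaseline(years, anoms, 1981, 2010))  # [-0.267, -0.167, 0.033, 0.133]
```

Note the shift is a single constant subtracted from every point: changing the baseline moves the whole curve up or down but cannot change trends, which is why an unexplained baseline change between reports mainly frustrates auditing rather than altering the science.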
Steven Mosher: I have no problem with blue-skies scientists hammering out data, theory and empiricism over decades or centuries. If it were not for energy policies being based on the findings of this fledgling, imperfect science, then I probably wouldn’t give a damn. But the politicization, the suspicion that politics may be directing scientific outcomes, the consequences for society of misguided energy policies, and the ascendancy of the Green movement and its influence are what drive me. I know many feel the same. In another post I say this:

While European strategy has failed to reduce global emissions, CO2 emissions have failed to raise global temperatures since 1997 as well. This is the key point. If it were the case that temperatures were rising out of control then we should all be backing measures to address the problem. But they are not (Figure 1).

euanmearns

blackadderthe4th – good vid, but it doesn’t tell the whole story, since they made a number of forecasts and only one was right: the one with the low climate sensitivity. So the speaker is actually being disingenuous.

kadaka (KD Knoebel)

Dear Moderators,
It appears all these images are hosted on euanmearns-dot-com and were never copied over to the (free) WUWT WP account. As Energy Matters does have a donate button and does not appear to be hosted for free on WP or similar, I’m wondering if WUWT is blasting through Mearns’ bandwidth allotment and racking him up a significant bill.
[Don’t know right now, thank you for the heads up. .mod]

HAS

Just to add to the confusion about which model to use, I’ve never understood why the absolute global temperatures they produce should differ from each other and from the historic record, while only the anomalies show some measure of agreement.

rgbatduke

of course there is a prediction of the models. You simply average them. This is a simple mathematical operation.
it is very simple. The best model is the model that matches the data best.
That model is a simple average of all models.
1. The model of models exists. (just do the math)
2. The model of models outperforms any given model. (just do the math)
3. what works should be preferred over what doesnt work.
Of course we currently dont like the approach of taking the average of all models.
From a purist standpoint it’s a hack. There are all sorts of things wrong with the approach.
But it works better than other approaches. So absent a better alternative.. you go with what you have.. That doesnt necessarily mean you would use it for policy,

Except, of course, that we are using it for policy and being told that it is reliable as pretty much the sole basis for the many statements of “confidence” scattered throughout, say, AR5 especially in the SPM.
I won’t address your model of models nonsense, because that’s precisely what it is. Oh, wait, yes I will.
First, there is no “statistical ensemble of independent and identically distributed models each representing perfect physics”. Really, there isn’t for any problem, not just this one. The term ensemble, especially when used in physics, has a very precise meaning:
http://en.wikipedia.org/wiki/Statistical_ensemble_%28mathematical_physics%29
and it is this precise meaning that describing the CMIP5 collection as a “MultiModel Ensemble” is attempting to co-opt. Obviously calling this collection an “ensemble” is sheer nonsense and/or wishful thinking. And yes, there are indeed ensembles used in climate science, and even used for good, not evil. If you visit here:
http://en.wikipedia.org/wiki/Climate_ensemble
you will find that there are two marginally defensible uses of the term ensemble in climate science and two indefensible uses. The two defensible uses are:
* The perturbed physics ensemble, which attempts to “average” over our ignorance — which is basically what ensembles in statistical physics always do (and you can take me as a modest expert in this as I did Monte Carlo computations in statistical physics for maybe 15 years, many of them gigaflop-years of total computation back when this was expensive). These basically jiggle the not-precisely-known parameters to see what happens within the plausible phase space of their presumed values.
* The initial condition ensemble, which tries to average over the chaotic nature of the simulations (weather prediction is where Lorenz discovered deterministic chaos in the first place). Unfortunately, the whole point of chaos is the divergence of future trajectories with some sort of Lyapunov exponent describing how fast even tiny perturbations within this “ensemble” fill the phase space of possible futures, along with the fact that this phase space itself is structured by attractors in high numbers of (fractally distributed) dimensions. Hence weather prediction runs out of gas in a matter of weeks, and no amount of additional computation can keep up with the growth in the size of the phase space integrated over even fairly tightly constrained initial conditions.
The two questionable uses are:
* The “forcing ensemble”. Do even the people who coined this name know what it means? Seriously. This basically means that they take CO_2 and make it go up at different schedules, and, with an entire mountain of additional assumptions on how it works to force the climate, claim that they can extract a “warming signal” that isn’t just built into the assumptions in the first place but now it is backed by an “ensemble” of computations and hence has some statistical relevance. Nonsense! In any event, they aren’t statistically sampling a space of forcings in any meaningful way, because there is no such thing. To the extent that this is either reliable or confirmable by comparison with reality, it is already implicit in the perturbed parameter and initial condition ensembles. The only reason it even has a name is to sell people on the danger of “forcings”.
* The “Grand Ensemble”, or an ensemble of ensembles. This is what the MME is pretending to be, but note well the diagram — it is not, not even remotely, a grand ensemble, which is basically a layering of the two valid ensembles above, perturbed physics and perturbed initial conditions. Indeed, there isn’t any particular need for the layering — one can perturb physics and initial conditions in a single computation and in chaotic problems. Furthermore, we know perfectly well that perturbing both the physics and the initial conditions in a single computation in nonlinear chaotic dynamical open systems does not produce the same distribution of outcomes as perturbing the physics and initial conditions independently as if the two problems are in some sense separable. Or rather, it had better not — because the only meaningful “ensemble” average is one that averages over our appropriately distributed, unbiased ignorance. Since we are simultaneously ignorant of physics and initial conditions and do not know the distribution or bias of our ignorance, the sole point of using ensemble methods at all is to compare a perturbed parameter ensemble (where both physics and initial conditions are sampled) and then compare the predictions to reality, one model at a time, to discover if our models are sampling the correct ranges of either one, with the correct distribution of future outcomes.
The default assumption in all uses of statistical mechanics in physics is that suitably averaged reality does the most probable thing, not the least probable thing, nearly all of the time. There are lots of reasons for this, but the heart of them all is the Central Limit Theorem. Once one averages over the details of a correct ensemble, those details cease to matter as the CLT kicks in and the sample means start to be normally distributed around the true mean. And I’d be embarrassed if I told you how old I was when I finally had this epiphanic realization, in spite of taking courses that attempted to convey it to me on numerous occasions.
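The CLT behaviour described here is easy to demonstrate for genuinely independent, identically distributed samples (which, per the argument above, the CMIP5 collection is not). A toy illustration of my own, not from the thread:

```python
import random
import statistics

random.seed(0)

# Population: exponential with mean 1.0 -- strongly skewed, far from normal.
# Yet the means of repeated samples of size 100 cluster around the true mean,
# with spread ~ population_sd / sqrt(n) = 1 / sqrt(100) = 0.1.
sample_means = [
    statistics.fmean(random.expovariate(1.0) for _ in range(100))
    for _ in range(2000)
]
print(round(statistics.fmean(sample_means), 2))   # ~1.0
print(round(statistics.stdev(sample_means), 2))   # ~0.1
```

The CLT delivers this only because every draw comes from the same distribution and the draws are independent; averaging a handful of non-independent, differently-biased model runs carries no such guarantee, which is the point being made above.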
In AR5, the PPE is defined and used — per model — for good. Or it would be good if the outcomes of the PPE runs were individually compared to reality with an eye to rejecting bad models, which never seems to happen. In AR5, the MME is defined and used — collectively — for pure evil! Even the authors of chapter 9 acknowledge that there is no statistically defensible reason for flat averaging a bunch of PPE means from many non-independent models and then asserting that the result somehow is a normal distribution around a true mean expected behavior.
rgb

richard verney

Steven Mosher says:
June 12, 2014 at 10:05 a
//////////////
Whilst there is some merit in your point, you conveniently overlook the substantial difference.
How much money has been thrown at climate science in all its guises? How much on research into neutrinos? Orders of magnitude more have been spent on climate science, and we are not as far forward as we were in the 1970s. Climate science has regressed, not taken a step forward.

Eustace Cranch:

So we are to implement painful and hideously expensive global policies, reducing worldwide standards of living, over “we don’t know?”

This is the crux of the matter. If climate science were some esoteric branch of physics that might deliver its answers in 200 years’ time, no one would care. But it’s not. It lies at the heart of global politics, the credibility of science today, and the welfare of human populations. And so we do care.
And it’s not that “we don’t know”. It’s that the evidence points strongly in one direction, and that is (IMO) that CO2 has a marginal impact on global surface temperatures: somewhere from zero up to 1.5˚K per doubling of CO2.
To lay my cards on the table, I’m more concerned about deforestation and overfishing. There have to be limits somewhere to what we can safely do to Earth’s ecosystems. The fact that we don’t know where these safe limits lie comes down to a lack of scientific rigour among those doing the work.

Joel O’Bryan says:
June 12, 2014 at 11:10 am [ … ]
That’s my concern. If martial law is ever declared, we know what’s coming next.
+++++++++++++++++++++
blackadderthe4th,
Please. Richard Alley knows where his bread is buttered. Just because he falsely asserts that global warming is continuing, that doesn’t make it so.
Global warming has stopped. That’s what the real world is clearly telling us.

emsnews

Even more clearly, global warming has been declining for the last 9,000+ years. We are seeing the jagged bumps up and down while the moving staircase is definitely going downwards relentlessly.

Jimbo

Steven Mosher says:
June 12, 2014 at 10:45 am

Mosher, are you looking for a new career in Climastrological modeling? Pisces says there is no future in it. That post of yours is full of garbage.

Steven Mosher says:
June 12, 2014 at 10:05 am

More garbage. Leave the models alone for a second and look at the temperature. You don’t need math for that.
How did we get to the stage where the obvious failure of the models turns into a contest over who can waffle best in the English language?

Jimbo

Sheesh! I forgot that Mosher is actually good in English. Sorry Mosh.

Jimbo

In my last comment I actually forgot that Mosh is good in English. Honestly. I did not mean to hint at credentials at all. My subconscious insight was not intended. Maybe Mosh can learn something from this. I have.

pat

analyse this: Bloomberg trying to reinforce the ridiculous ABC/WaPo poll claiming Americans want CC action, even if the cost to them is significant:
11 June: Bloomberg: Lisa Lerer: Americans by 2 to 1 Would Pay More to Curb Climate Change
Americans are willing to bear the costs of combating climate change, and most are more likely to support a candidate seeking to address the issue. By an almost two-to-one margin, 62 percent to 33 percent, Americans say they would pay more for energy if it would mean a reduction in pollution from carbon emissions, according to the Bloomberg National Poll.
While Republicans were split, with 46 percent willing to pay more and 49 percent opposed to it, 82 percent of Democrats and 60 percent of independents say they’d accept higher bills…
The EPA proposal is likely to be modified during a public comment period, and a bipartisan coalition of coal-state lawmakers have vowed to pass legislation to block them…
Obama’s proposal has divided his party along regional lines. While Democratic Senate candidates in Iowa and Colorado back the emission limits, others in coal states such as West Virginia and Kentucky have distanced themselves from them…
http://www.bloomberg.com/news/2014-06-10/americans-by-2-to-1-would-pay-more-to-curb-climate-change.html
the sample is the equivalent of approx 72 Australians being polled; only 5% really concerned about CC, yet Bloomberg/Selzer get an alleged huge majority willing to pay for action in the two pertinent questions! CAGW figures are always suspect.
Bloomberg News National Poll – SELZER & COMPANY
June 10 (Bloomberg) — The Bloomberg News National Poll, conducted June 6-9 for Bloomberg News by Selzer & Co. of Des Moines, IA, is based on interviews with 1,005 U.S. adults ages 18 or older…
http://media.bloomberg.com/bb/avfile/rg._mQ264POU

Jimbo

Why all the intricacies of language?
THE MODELS FAILED! That is all that matters. Emergency over. Climate sensitivity is not as bad as we previously thought! The jig is almost up. Forget the details, look at the failure: it’s the embarrassed naked elephant in the room.
http://www.euanmearns.com/wp-content/uploads/2014/06/CMIP5-90-models-global-Tsfc-vs-obs-thru-2013.png
In any other science, things would have moved on by now. What keeps this particular con going is the prospect of losing a LOT OF MONEY. Hey, I didn’t tell the BBC to invest huge chunks of their pensions in climate schemes. There are many other examples from individuals.

bobl

@mosher
Steven
The model-of-models approach is fatally flawed unless the models that are shown to be wrong are incrementally excluded from the analysis. Nobody has ever gotten closer to an answer by averaging a correct value with a wildly incorrect value. The objective is to converge on a model that is representative of temperature change over time. To do that, models that have failed must be excluded from the analysis; instead we see the same motley ensemble with no predictive value, because at best only one of them is correct and at worst none of them are correct.
Any individual model for which the temperature falls outside the 2-sigma envelope for any significant cumulative period, let’s say more than 8% of 30 years (2.5 years or more), should be dropped forthwith. If it turns out that all the models are excluded, then clearly it’s back to the drawing board for climate science, because they have something very wrong. Instead we tenaciously grasp onto models that we know don’t work.
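One loose reading of this exclusion criterion can be sketched as follows. The function, the threshold names, and the toy data are all hypothetical; a real test would use published model runs against an observational series, and would define the 2-sigma envelope from the models’ own spread rather than observed variability as done here:

```python
import statistics

def prune_models(model_runs, observations, sigma=2.0, max_miss_frac=0.08):
    """Keep only models that stay within +/- sigma standard deviations of the
    observations for all but max_miss_frac of the record (the '8% of the
    period' idea above, interpreted loosely against observed variability)."""
    band = sigma * statistics.stdev(observations)
    kept = {}
    for name, run in model_runs.items():
        misses = sum(1 for m, o in zip(run, observations) if abs(m - o) > band)
        if misses / len(observations) <= max_miss_frac:
            kept[name] = run
    return kept

obs = [0.0, 0.1, 0.2, 0.1, 0.3, 0.2, 0.4, 0.3, 0.5, 0.4]
runs = {
    "tracks_obs": [o + 0.05 for o in obs],   # small constant bias: kept
    "runs_hot":   [o + 0.60 for o in obs],   # far outside the band: dropped
}
print(sorted(prune_models(runs, obs)))  # ['tracks_obs']
```

The design point is that exclusion is cumulative and per-model: a model is judged on the fraction of the record it spends outside the envelope, so a single excursion does not disqualify it, but a persistent miss does.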