Guest essay Energy Matters
In geology we use computer models to simulate complex processes. A good example is 4D simulation of fluid flow in oil and gas reservoirs. These reservoir models are likely every bit as complex as computer simulations of Earth’s atmosphere. An important part of the modelling process is to compare model realisations with what actually comes to pass after oil or gas production has begun. This is called history matching. At the outset the models are always wrong, but as more data is gathered they are updated and refined to the point that they have skill in hindcasting what just happened and forecasting what the future holds. This informs the commercial decision-making process.
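History matching of this kind can be sketched in a few lines. The following is a toy illustration, not real reservoir software: the exponential decline-curve model, the rates and the tolerances are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy reservoir proxy: exponential production decline q(t) = q0 * exp(-d * t).
# Each model realisation draws its own initial rate q0 and decline rate d.
n_models = 2000
q0 = rng.uniform(800.0, 1200.0, n_models)   # initial rate, bbl/day (invented)
d = rng.uniform(0.05, 0.30, n_models)       # decline rate per year (invented)

def forecast(t):
    """Production rate of every realisation at times t (years)."""
    return q0 * np.exp(-d * t)

# Synthetic "observed" history: pretend the real field declines at d = 0.12.
t_obs = np.arange(1, 6)
observed = 1000.0 * np.exp(-0.12 * t_obs)

# History matching: keep only realisations within 5% of every observation.
misfit = np.abs(forecast(t_obs[:, None]) - observed[:, None]) / observed[:, None]
keep = (misfit < 0.05).all(axis=0)

print(f"{keep.sum()} of {n_models} realisations survive history matching")
print(f"surviving decline rates cluster near the true value: "
      f"{d[keep].min():.3f} to {d[keep].max():.3f}")
```

As more years of production data arrive, fewer realisations survive the filter and the forecast range narrows — which is the sense in which the models gain skill.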
The IPCC (Intergovernmental Panel on Climate Change) has now published 5 major reports, beginning with the First Assessment Report (FAR) in 1990. This provides an opportunity to compare what was forecast with what has come to pass. Examining past reports is quite enlightening, since it reveals what the IPCC has learned in the last 24 years.
I conclude that nothing has been learned other than how to obfuscate, mislead and deceive.
Figure 1 Temperature forecasts from the FAR (1990). Is this the best forecast the IPCC has ever made? It is clearly stated in the caption that each model uses the same emissions scenario. Hence the differences between Low, Best and High estimates are down to different physical assumptions such as climate sensitivity to CO2. Holding the key variable constant (CO2 emissions trajectory) allows the reader to see how different scientific judgements play out. This is the correct way to do this. All models are initiated in 1850 and by the year 2000 already display significant divergence. This is what should happen. So how does this compare to what came to pass and with subsequent IPCC practice?
I am aware that many others will have carried out this exercise before, and in a much more sophisticated way than I do here. The best example I know of is by Roy Spencer [1], who produced this splendid chart, which also drew some criticism.
Figure 2 Comparison of multiple IPCC models with reality compiled by Roy Spencer. The point that reality tracks along the low boundary of the models has been made many times by IPCC sceptics. The only scientists that this reality appears to have escaped are those attached to the IPCC.
My approach is much simpler and cruder. I have simply cut and pasted IPCC graphics into Excel charts, where I compare the IPCC forecasts with the HadCRUT4 temperature reconstructions. As we shall see, the IPCC has an extraordinarily lax approach to temperature datums, and in each example a different adjustment has to be made to HadCRUT4 to make it comparable with the IPCC framework.
Figure 3 Comparison of the FAR (1990) temperature forecasts with HadCRUT4. HadCRUT4 data was downloaded from WoodForTrees [2] and annual averages calculated.
Figure 3 shows how the temperature forecasts from the FAR (1990) [3] compare with reality. I cannot easily find the parameters used to define the Low, Best and High models, but the report states that a range of climate sensitivities from 1.5 to 4.5˚C is used. It should be abundantly clear that the Low model is the one that lies closest to the reality of HadCRUT4. The High model is already running about 1.2˚C too warm in 2013.
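The processing behind these comparisons — annual averages from a monthly anomaly file, then a constant datum shift onto the IPCC chart — amounts to very little code. In this sketch the monthly series is fabricated; it merely stands in for the real HadCRUT4 download.

```python
import numpy as np

# Fabricated stand-in for the monthly HadCRUT4 series (anomaly in degC).
years = np.repeat(np.arange(2010, 2014), 12)
months = np.tile(np.arange(12), 4)
anomaly = 0.45 + 0.05 * np.sin(2 * np.pi * months / 12)  # fake seasonal wiggle

# Annual averages: mean of the 12 monthly values in each calendar year.
annual = {int(y): float(anomaly[years == y].mean()) for y in np.unique(years)}

# Datum adjustment: shift the whole series onto the IPCC chart's baseline
# (+0.5 degC for the FAR comparison, per the text).
adjusted = {y: v + 0.5 for y, v in annual.items()}
print(adjusted)
```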
Figure 4 The TAR (2001) introduced the hockey stick. The observed temperature record is spliced onto the proxy record and the model record is spliced onto the observed record, and no opportunity to examine the veracity of the models is offered. But 13 years have since passed, and we can see how reality compares with the models over that short period.
I could not find a summary of the Second Assessment Report (SAR, 1995) and so jump to the TAR (Third Assessment Report) from 2001 [4]. This was the year (I believe) that the hockey stick was born (Figure 4). In the imaginary world of the IPCC, Northern Hemisphere temperatures were constant from 1000 to 1900 AD, with not the faintest trace of the Medieval Warm Period or the Little Ice Age, when real people either prospered or died by the million. The actual temperature record is spliced onto the proxy record, and the model world is spliced onto that to create a picture of future temperature catastrophe. So how does this compare with reality?
Figure 5 From 1850 to 2001 the IPCC background image is plotting observations (not model output) that agree with the HadCRUT4 observations. Well done IPCC! The detail of what has happened since 2001 is shown in Figure 6. To have any value or meaning all of the models should have been initiated in 1850. We would then see that the majority are running far too hot by 2001.
Figure 5 shows how HadCRUT4 compares with the model world. The fit from 1850 to 2001 is excellent. That is because the background image is simply plotting observations in this period. I have nevertheless had to subtract 0.6˚C from HadCRUT4 to get it to match the observations, while a decade earlier I had to add 0.5˚C. The 250-year x-axis scale makes it difficult to see how models initiated in 2001 now compare with the 13 years of observations since. Figure 6 shows a blow-up of the detail.
Figure 6 The single vertical grid line is the year 2000. The blue line is HadCRUT4 (reality) moving sideways while all of the models are moving up.
The detailed excerpt illustrates the nature of the problem in evaluating IPCC models. While real world temperatures have moved sideways since about 1997 and all the model trends are clearly going up, there is really not enough time to evaluate the models properly. To be scientifically valid the models should have been run from 1850, as before (Figure 1), but they have not. Had they been, by 2001 they would have been widely divergent (as in 1990) and it would be easy to pick the winners. But they are conveniently brought together by initiating the models at around the year 2000. Scientifically this is bad practice.
Figure 7 IPCC future temperature scenarios from AR4, published in 2007. It seems that the IPCC has taken on board the need to initiate models in the past; in this case the initiation date stays at 2000, offering 14 years to compare models with what came to pass.
For the Fourth Assessment Report (AR4) [5] we move on to 2007 and the summary shown in Figure 7. By this stage I’m unsure what the B1 to A1FI scenarios mean. The caption to this figure in the report says:
Figure SPM.5. Solid lines are multi-model global averages of surface warming (relative to 1980–1999) for the scenarios A2, A1B and B1, shown as continuations of the 20th century simulations. Shading denotes the ±1 standard deviation range of individual model annual averages. The orange line is for the experiment where concentrations were held constant at year 2000 values. The grey bars at right indicate the best estimate (solid line within each bar) and the likely range assessed for the six SRES marker scenarios. The assessment of the best estimate and likely ranges in the grey bars includes the AOGCMs in the left part of the figure, as well as results from a hierarchy of independent models and observational constraints. {Figures 10.4 and 10.29}
Implicit in this caption is the assertion that the pre-year-2000 black line is a simulation produced by the post-2000 models (my bold). The orange line denotes constant CO2, and the fact that it is a virtually flat line shows that the IPCC at that time believed that variance in CO2 was the only process capable of producing temperature change on Earth. I don’t know if the B1 to A1FI scenarios all use the same or different CO2 increase trajectories. What I do know for sure is that it is physically impossible for models that incorporate a range of physical input variables, initiated in the year 1900, to be closely aligned and to converge on the year 2000 as shown here, as demonstrated by the IPCC models published in 1990 (Figure 1).
So how do the 2007 simulations stack up against reality?
Figure 7 Comparison of AR4 models with reality. Since 2000, reality is tracking along the lower bound of the models as observed by Roy Spencer and many others. If anything, reality is aligned with the zero anthropogenic forcing model shown in orange.
Last time out I had to subtract 0.6˚C to align reality with the IPCC models. Now I have to add 0.6˚C to HadCRUT4 to achieve alignment. And the luxury of tracking history from 1850 has now been curtailed to 1900. The pre-2000 simulations align pretty well with observed temperatures from 1940, even though we already know it is impossible for the pre-2000 simulations to have been produced by a large number of different computer models programmed to do different things. How can this be? Post-2000, reality seems best aligned with the orange no-CO2-rise / no-anthropogenic-forcing model.
From 1900 to 1950 the alleged simulations do not in fact reproduce reality at all well (Figure 8). The actual temperature record rises at a steeper gradient than the model record. And reality has much greater variability due to natural processes that the IPCC by and large ignore.
Figure 8 From 1900 to 1950 the alleged AR4 simulations actually do a very poor job of simulating reality, HadCRUT4 in blue.
Figure 9 The IPCC view from AR5 (2014). The inconvenient mismatch 1900 to 1950 observed in AR4 is dealt with by simply chopping the chart to 1950. The flat blue line is essentially equivalent to the flat orange line shown in AR4.
The Fifth Assessment Report (AR5) was published this year, and the IPCC’s current view on future temperatures is shown in Figure 9 [6]. The inconvenient mismatch of alleged model data with reality in the period 1900 to 1950 is dealt with by chopping that time interval off the chart. A very simple simulation picture is presented. Future temperature trajectories are shown for a range of Representative Concentration Pathways (RCPs). This is completely the wrong approach, since the IPCC is no longer modelling climate but different human, societal and political choices that result in different CO2 trajectories. Skepticalscience provides these descriptions [7]:
RCP2.6 was developed by the IMAGE modeling team of the PBL Netherlands Environmental Assessment Agency. The emission pathway is representative of scenarios in the literature that lead to very low greenhouse gas concentration levels. It is a “peak-and-decline” scenario; its radiative forcing level first reaches a value of around 3.1 W/m2 by mid-century, and returns to 2.6 W/m2 by 2100. In order to reach such radiative forcing levels, greenhouse gas emissions (and indirectly emissions of air pollutants) are reduced substantially, over time (Van Vuuren et al. 2007a). (Characteristics quoted from van Vuuren et.al. 2011)
AND
RCP 8.5 was developed using the MESSAGE model and the IIASA Integrated Assessment Framework by the International Institute for Applied Systems Analysis (IIASA), Austria. This RCP is characterized by increasing greenhouse gas emissions over time, representative of scenarios in the literature that lead to high greenhouse gas concentration levels (Riahi et al. 2007).
This is Mickey Mouse science speak. In essence they show that 32 models programmed with a low future emissions scenario produce lower temperature trajectories than 39 models programmed with high future emissions trajectories.
The models are initiated in 2005 (the better practice of using a year-2000 datum as employed in AR4 is ditched), and from 1950 to 2005 it is alleged that 42 models provide a reasonable version of reality (see below). We do not know which, if any, of the 71 post-2005 models are included in the pre-2005 group. We do know that, pre-2005, each of the models should be using actual CO2 etc. concentrations, and since they are all closely aligned we must assume they all use similar climate sensitivities. What the reader really wants to see is how varying climate sensitivity influences different models using fixed CO2 trajectories, and this is clearly not done. The modelling work shown in Figure 9 is effectively worthless. Nevertheless, let us see how it compares with reality.
Figure 10 Comparison of reality with the AR5 model scenarios.
With models initiated in 2005 we have only 8 years to compare models with reality. This time I have to subtract 0.3˚C from HadCRUT4 to get alignment with the models. Pre-2005, the models allegedly reproduce reality from 1950. Pre-1950, we are denied a view of how the models performed. Post-2005, it is clear that reality is tracking along the lower limit of the two uncertainty envelopes that are plotted. This is an observation made by many others [e.g. 1].
Concluding comments
- To align the HadCRUT4 reality with the IPCC models, the following temperature corrections need to be applied: 1990 +0.5˚C; 2001 −0.6˚C; 2007 +0.6˚C; 2014 −0.3˚C. I cannot think of any good reason to continuously change the temperature datum other than to create a barrier to auditing the model results.
- Comparing models with reality is severely hampered by the poor practice adopted by the IPCC in data presentation. Back in 1990 it was done the correct way: all models were initiated in 1850 and used the same CO2 emissions trajectories. The variations in model output are consequently controlled by physical parameters like climate sensitivity, and with the 164 years that have passed since 1850 it is straightforward to select the models that provide the best match with reality. In 1990 it was quite clear that the “Low Model” was best, almost certainly pointing to a low climate sensitivity.
- There is no good scientific reason for the IPCC not adopting today the correct approach adopted in 1990, other than to obscure the fact that the sensitivity of the climate to CO2 is likely much less than 1.5˚C, based on my and others’ assertion that a component of the twentieth-century warming is natural.
- Back in 1990, the IPCC view on climate sensitivity was a range from 1.5 to 4.5˚C. In 2014 the IPCC view on climate sensitivity is a range from 1.5 to 4.5˚C. 24 years have passed and billions of dollars spent and absolutely nothing has been learned! The wool has been pulled over the eyes of policy makers, governments and the public to the extent of total brainwashing. Trillions of dollars have been misallocated on energy infrastructure that will ultimately lead to widespread misery among millions.
- In the UK, if a commercial research organisation were found cooking research results in order to make money with no regard for public safety, it would find the authorities knocking at its door.
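The shifting offsets in the first bullet are consistent with each report plotting anomalies against a different reference period: re-baselining a series is just a constant shift equal to its mean over the new reference window. The sketch below uses invented temperatures, not actual HadCRUT4 values, purely to show the mechanics.

```python
import numpy as np

# Invented absolute temperatures: a simple linear warming trend, degC.
years = np.arange(1950, 2014)
temps = 14.0 + 0.012 * (years - 1950)

def anomalies(years, temps, ref_start, ref_end):
    """Anomalies relative to the mean over [ref_start, ref_end]."""
    ref = temps[(years >= ref_start) & (years <= ref_end)].mean()
    return temps - ref

a_6190 = anomalies(years, temps, 1961, 1990)   # HadCRUT4's native 1961-1990 datum
a_8099 = anomalies(years, temps, 1980, 1999)   # AR4's 1980-1999 datum (Figure SPM.5)

# The two series differ by a constant offset -- the kind of shift the text
# has to apply by hand to overlay HadCRUT4 on each report's chart.
offset = (a_6190 - a_8099).mean()
print(f"constant offset between baselines: {offset:.3f} degC")
```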
References
[1] Roy Spencer: 95% of Climate Models Agree: The Observations Must be Wrong
[2] Wood For Trees
[3] IPCC: First Assessment Report – FAR
[4] IPCC: Third Assessment Report – TAR
[5] IPCC: Fourth Assessment Report – AR4
[6] IPCC: Fifth Assessment Report – AR5
[7] Skepticalscience: The Beginner’s Guide to Representative Concentration Pathways
Mike Smith says:
June 12, 2014 at 10:16 am
“Back in 1990, the IPCC view on climate sensitivity was a range from 1.5 to 4.5˚C. In 2014 the IPCC view on climate sensitivity is a range from 1.5 to 4.5˚C. 24 years have past and billions of dollars spent and absolutely nothing has been learned!”
Bingo! And I’m 97% certain the climate sensitivity will turn out to be somewhat less than 1˚C
————————————-
97% certain? You made that up.
Where’s the raw data, the code, references and reviewer’s comments.
I’d like a list of your co-authors. Please include names, CVs and each person’s individual contributions to the research.
cn
Steven Mosher says:
June 12, 2014 at 10:05 am
… The Thrilling Chase for the Ghostly Missing Heat …
Steven Mosher:
At June 12, 2014 at 10:05 am you respond by providing a load of irrelevant waffle about neutrino research and an assertion that research often has interruptions to progress of some decades.
Your response is an improvement on your usual practice of posting brief and abusive ambiguities, but it is equally laughable.
The expenditure on AGW research has been running in excess of US$ 5 billion a year for three decades: the US alone has been spending in excess of US$ 2.5 billion a year.
Nothing has resulted except your hope that something useful may result in future. Well, if half that money had been spent on e.g. providing sanitation in the developing world then something useful would have resulted.
And nothing useful is likely to result from AGW research conducted in accordance with your post at June 12, 2014 at 10:45 am. It advocates the statistically unsupportable action of averaging the outputs of different GCMs, and your excuse for the action is this nonsense:
You admit “There are all sorts of things wrong with the approach.”
then claim “But it works better than other approaches.”
so you assert “you go with what you have”.
NO! No scientist would adopt a procedure s/he knows is “wrong” because of lack of something better.
A scientist assesses if a procedure is adequate, then uses an adequate procedure and rejects an inadequate procedure.
And an honest person certainly does NOT include the indications of a “wrong” procedure in a “Summary For Policymakers” if there is doubt that it should “be used for policy”.
Richard
Mr Mosher, be so kind as to answer:
If we really don’t know, do you advocate that we do nothing until we know something?
I am not in the field of science; I’m a service engineer. Faced with a technical problem I would try to ascertain the cause. But if I don’t know what is causing it, I would never advise my client to keep spending money until the problem is solved. And if my thoughts and actions don’t actually correspond to solving the issue, then I am clearly on the wrong track. I try something else.
I used to love the world of science, and envy scientists. Now, I don’t think my opinion of either could be lower. Science appears to be moving toward a religion. I appreciate what you say when you say that maybe we don’t know, so why do scientists and those that represent them continually make out that they do know?
AGU and Richard Alley, did the IPCC get its early projections right?
Sure did!
I took the average of 3 GCM outputs by entering their values in my desktop calculator.
Not only am I a modeler, I’m a computer modeler.
As long as they continue to fail then expect further choppings in their next report.
Oh but they have learned a lot about climate sensitivity – but won’t say. Therefore the range stays the same. 😉
Mr Mosher @ 10:05am
I’m pretty sure that no one was trying to create sweeping international policy based on the un-proven neutrino.
To achieve alignment of the HadCRUT4 reality with the IPCC models the following temperature corrections need to be applied: 1990 +0.5; 2001 -0.6; 2007 +0.6; 2014 -0.3. I cannot think of any good reason to continuously change the temperature datum other than to create a barrier to auditing the model results.
This would all be so much easier if we would simply align reality with the models.
Maybe Gavin will finish what Hansen started so they can check that one off the list.
Bob Tisdale: The HadCRUT4 datum is 1961 to 1990 according to this source:
http://www.cru.uea.ac.uk/cru/data/temperature/
CRU:UEA – whoever they may be 😉 It’s possible that rgbatduke has hit on the reason for this continuous changing at the IPCC, but it seems to me there is a ±0.6˚C fluctuation in the baseline of the models. There is no excuse for the IPCC not making succinct comments about their methodology in the summary reports – “readers may note that the baseline has changed by x˚C since the last report because…” – but they don’t do it. I suspect 1) they don’t know they are doing it, and 2) if they did know, they would not know why. I simply don’t have the time to dig through the thousands of pages of the main reports.
Steven Mosher: I have no problem with blue skies scientists hammering out data, theory and empiricism over decades or centuries. If it were not for energy policies being based on the findings of this fledgling imperfect science then I probably wouldn’t give a damn. But the politicization, the suspicion that politics may be directing scientific outcomes, and the consequences for society of misguided energy policies are what drive me. I know many feel the same: the ascendancy of the Green movement and its influence. In another post I say this:
blackadderthe4th – good vid, but doesn’t tell the whole story since they made a number of forecasts, only 1 was right, that with the low climate sensitivity. So the speaker is actually being disingenuous.
Dear Moderators,
It appears all these images are hosted on euanmearns-dot-com and were never copied over to the (free) WUWT WP account. As Energy Matters does have a donate button and does not appear to be hosted for free on WP or similar, I’m wondering if WUWT is blasting through Mearns’ bandwidth allotment and racking him up a significant bill.
[Don’t know right now, thank you for the heads up. .mod]
Just to add to the confusion about what model to use, I’ve never understood why the absolute global temperatures they produce should differ from each other and the historic record, and only the anomalies show some measure of agreement.
Except, of course, that we are using it for policy and being told that it is reliable as pretty much the sole basis for the many statements of “confidence” scattered throughout, say, AR5 especially in the SPM.
I won’t address your model of models nonsense, because that’s precisely what it is. Oh, wait, yes I will.
First, there is no “statistical ensemble of independent and identically distributed models each representing perfect physics”. Really, there isn’t for any problem, not just this one. The term ensemble, especially when used in physics, has a very precise meaning:
http://en.wikipedia.org/wiki/Statistical_ensemble_%28mathematical_physics%29
and it is this precise meaning that describing the CMIP5 collection as a “MultiModel Ensemble” is attempting to co-opt. Obviously calling this collection an “ensemble” is sheer nonsense and/or wishful thinking. And yes, there are indeed ensembles used in climate science, and even used for good, not evil. If you visit here:
http://en.wikipedia.org/wiki/Climate_ensemble
you will find that there are two marginally defensible uses of the term ensemble in climate science and two indefensible uses. The two defensible uses are:
* The perturbed physics ensemble, which attempts to “average” over our ignorance — which is basically what ensembles in statistical physics always do (and you can take me as a modest expert in this as I did Monte Carlo computations in statistical physics for maybe 15 years, many of them gigaflop-years of total computation back when this was expensive). These basically jiggle the not-precisely-known parameters to see what happens within the plausible phase space of their presumed values.
* The initial condition ensemble, which tries to average over the chaotic nature of the simulations (weather prediction is where Lorenz discovered deterministic chaos in the first place). Unfortunately, the whole point of chaos is the divergence of future trajectories with some sort of Lyapunov exponent describing how fast even tiny perturbations within this “ensemble” fill the phase space of possible futures, along with the fact that this phase space itself is structured by attractors in high numbers of (fractally distributed) dimensions. Hence weather prediction runs out of gas in a matter of weeks, and no amount of additional computation can keep up with the growth in the size of the phase space integrated over even fairly tightly constrained initial conditions.
The two questionable uses are:
* The “forcing ensemble”. Do even the people who coined this name know what it means? Seriously. This basically means that they take CO_2 and make it go up at different schedules, and, with an entire mountain of additional assumptions on how it works to force the climate, claim that they can extract a “warming signal” that isn’t just built into the assumptions in the first place but now it is backed by an “ensemble” of computations and hence has some statistical relevance. Nonsense! In any event, they aren’t statistically sampling a space of forcings in any meaningful way, because there is no such thing. To the extent that this is either reliable or confirmable by comparison with reality, it is already implicit in the perturbed parameter and initial condition ensembles. The only reason it even has a name is to sell people on the danger of “forcings”.
* The “Grand Ensemble”, or an ensemble of ensembles. This is what the MME is pretending to be, but note well the diagram — it is not, not even remotely, a grand ensemble, which is basically a layering of the two valid ensembles above, perturbed physics and perturbed initial conditions. Indeed, there isn’t any particular need for the layering — one can perturb physics and initial conditions in a single computation and in chaotic problems. Furthermore, we know perfectly well that perturbing both the physics and the initial conditions in a single computation in nonlinear chaotic dynamical open systems does not produce the same distribution of outcomes as perturbing the physics and initial conditions independently as if the two problems are in some sense separable. Or rather, it had better not — because the only meaningful “ensemble” average is one that averages over our appropriately distributed, unbiased ignorance. Since we are simultaneously ignorant of physics and initial conditions and do not know the distribution or bias of our ignorance, the sole point of using ensemble methods at all is to compare a perturbed parameter ensemble (where both physics and initial conditions are sampled) and then compare the predictions to reality, one model at a time, to discover if our models are sampling the correct ranges of either one, with the correct distribution of future outcomes.
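The initial-condition divergence described above is easy to demonstrate with something far simpler than a GCM. The logistic map at r = 4 is a standard chaotic toy system, standing in here for a weather model: two trajectories started a hair apart separate exponentially fast.

```python
# Logistic map x -> r*x*(1-x) at r = 4, in its fully chaotic regime.
r = 4.0
x, y = 0.400000, 0.400001          # initial conditions 1e-6 apart
seps = []
for _ in range(40):
    x, y = r * x * (1 - x), r * y * (1 - y)
    seps.append(abs(x - y))

print(f"separation after 1 step:   {seps[0]:.2e}")
print(f"separation after 10 steps: {seps[9]:.2e}")
print(f"largest separation in 40 steps: {max(seps):.2f}")
```

Within a few dozen steps the two runs are as different as two random states of the system, which is why no amount of averaging over initial conditions recovers a deterministic long-range forecast.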
The default assumption in all uses of statistical mechanics in physics is that suitably averaged reality does the most probable thing, not the least probable thing, nearly all of the time. There are lots of reasons for this, but the heart of them all is the Central Limit Theorem. Once one averages over the details of a correct ensemble, those details cease to matter as the CLT kicks in and the sample means start to be normally distributed around the true mean. And I’d be embarrassed if I told you how old I was when I finally had this epiphanic realization, in spite of taking courses that attempted to convey it to me on numerous occasions.
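The CLT behaviour described here is easy to reproduce numerically. The demonstration below draws from a deliberately skewed distribution, and it works only because the draws really are independent and identically distributed — the property the multi-model “ensemble” lacks.

```python
import numpy as np

rng = np.random.default_rng(42)

# 10,000 sample means, each the average of 100 i.i.d. draws from a skewed
# exponential distribution with true mean 1.0.
true_mean = 1.0
sample_means = rng.exponential(true_mean, size=(10_000, 100)).mean(axis=1)

# The means cluster normally around the true mean with spread sigma/sqrt(n).
print(f"mean of sample means: {sample_means.mean():.3f}")
print(f"spread of sample means (~ 1/sqrt(100) = 0.1): {sample_means.std():.3f}")
```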
In AR5, the PPE is defined and used — per model — for good. Or it would be good if the outcomes of the PPE runs were individually compared to reality with an eye to rejecting bad models, which never seems to happen. In AR5, the MME is defined and used — collectively — for pure evil! Even the authors of chapter 9 acknowledge that there is no statistically defensible reason for flat averaging a bunch of PPE means from many non-independent models and then asserting that the result somehow is a normal distribution around a true mean expected behavior.
rgb
Steven Mosher says:
June 12, 2014 at 10:05 a
//////////////
Whilst there is some merit in your point, you conveniently overlook the substantial difference.
How much money has been thrown at climate science in all its guises? How much on research into neutrinos? Many more orders of magnitude have been spent on climate science, and we are not as far forward as we were in the 1970s. Climate science has regressed, not taken a step forward.
Eustace Cranch:
This is the crux of the matter. If climate science were some esoteric branch of physics that might not deliver an answer for 200 years, no one would care. But it’s not. It lies at the heart of global politics, the credibility of science today, and the welfare of human populations. And so we do care.
And it’s not that “we don’t know”. It’s that the evidence points strongly in one direction, and that is (IMO) that CO2 has marginal impact on global surface temperatures — anywhere from zero up to 1.5˚C per doubling of CO2.
To lay my cards on the table, I’m more concerned about deforestation and overfishing. There have to be limits somewhere to what we can safely do to Earth’s ecosystems. The fact that we don’t know where these safe limits lie comes down to a lack of scientific rigour among those doing the work.
Joel O’Bryan says:
June 12, 2014 at 11:10 am [ … ]
That’s my concern. If martial law is ever declared, we know what’s coming next.
+++++++++++++++++++++
blackadderthe4th,
Please. Richard Alley knows where his bread is buttered. Just because he falsely asserts that global warming is continuing, that doesn’t make it so.
Global warming has stopped. That’s what the real world is clearly telling us.
Even more clearly, global warming has been declining for the last 9,000+ years. We are seeing the jagged bumps up and down while the moving staircase is definitely going downwards relentlessly.
Mosher are you looking for a new career in Climastrological modeling? Pisces says there is no future in that. That post of yours is full of garbage.
More garbage. Leave the models alone for a second and look at the temperature. You don’t need math for that.
How did we get to the stage when the obvious failure of the models turns into who can waffle best in the English language?
Sheesh! I forgot that Mosher is actually good in English. Sorry Mosh.
In my last comment I actually forgot that Mosh is good in English. Honestly. I did not mean to hint at credentials at all. My subconscious insight was not intended. Maybe Mosh can learn something from this. I have.
analyse this. Bloomberg trying to reinforce the ridiculous ABC/WaPo poll claiming Americans want CC action, even if the cost to them is significant:
11 June: Bloomberg: Lisa Lerer: Americans by 2 to 1 Would Pay More to Curb Climate Change
Americans are willing to bear the costs of combating climate change, and most are more likely to support a candidate seeking to address the issue.
By an almost two-to-one margin, 62 percent to 33 percent, Americans say they would pay more for energy if it would mean a reduction in pollution from carbon emissions, according to the Bloomberg National Poll.
While Republicans were split, with 46 percent willing to pay more and 49 percent opposed to it, 82 percent of Democrats and 60 percent of independents say they’d accept higher bills…
The EPA proposal is likely to be modified during a public comment period, and a bipartisan coalition of coal-state lawmakers have vowed to pass legislation to block them…
Obama’s proposal has divided his party along regional lines. While Democratic Senate candidates in Iowa and Colorado back the emission limits, others in coal-states such as West Virginia and Kentucky have distanced themselves from them…
http://www.bloomberg.com/news/2014-06-10/americans-by-2-to-1-would-pay-more-to-curb-climate-change.html
the sample is the equivalent of approx 72 Australians being polled; only 5% really concerned about CC, yet Bloomberg/Selzer get an alleged huge majority willing to pay for action in the two pertinent questions! CAGW figures are always suspect.
Bloomberg News National Poll – SELZER & COMPANY
June 10 (Bloomberg) — The Bloomberg News National Poll, conducted June 6-9 for Bloomberg News by Selzer & Co. of Des Moines, IA, is based on interviews with 1,005 U.S. adults ages 18 or older…
http://media.bloomberg.com/bb/avfile/rg._mQ264POU
Why all the intricacies of language?
THE MODELS FAILED! That is all that matters. Emergency over. Climate sensitivity not as bad as we previously thought! The jig is almost over. Please stop the details, look at the fail, it’s called the embarrassed naked elephant in the room.
http://www.euanmearns.com/wp-content/uploads/2014/06/CMIP5-90-models-global-Tsfc-vs-obs-thru-2013.png
In any other science things would have moved on by now. What keeps this particular con going is the possibility of losing a LOT OF MONEY. Hey, I didn’t tell the BBC to invest huge chunks of their pensions into climate schemes. There are many other examples from individuals.
@mosher
Steven
The model of models approach is fatally flawed unless the models that are shown to be wrong are incrementally excluded from the analysis. Nobody has ever gotten closer to an answer by averaging a correct value with a wildly incorrect value. The objective is to converge upon a model that is representative of temperature change over time. To do that, models that have failed must be excluded from the analysis; instead we see the same motley ensemble with no predictive value, because at best only one of them is correct and at worst none of them are correct.
Any individual model whose temperature falls outside the 2-sigma envelope for any significant cumulative period — let’s say more than 8% of 30 years (about 2.4 years) — should be dropped forthwith. If it turns out that all the models are excluded, then clearly it’s back to the drawing board for climate science, because they have something very wrong. Instead we tenaciously grasp onto models that we know don’t work.
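The exclusion rule proposed above is mechanical enough to sketch directly. Everything below is synthetic — invented observations, an invented uncertainty, and two invented model runs — purely to show the bookkeeping.

```python
import numpy as np

rng = np.random.default_rng(1)
n_years = 30
t = np.arange(n_years)
obs = 0.01 * t + rng.normal(0, 0.05, n_years)   # fake observed anomalies, degC
sigma = 0.1                                     # fake observational 1-sigma, degC

# Two invented model runs: one that tracks observations, one that drifts warm.
models = {
    "tracks_obs": obs + rng.normal(0, 0.02, n_years),
    "runs_hot": obs + 0.02 * t,
}

def keep_model(run, obs, sigma, max_fraction=0.08):
    """Keep a model unless it sits outside the 2-sigma envelope of the
    observations for more than max_fraction of the evaluation window."""
    outside = np.abs(run - obs) > 2 * sigma
    return bool(outside.mean() <= max_fraction)

for name, run in models.items():
    print(name, "kept" if keep_model(run, obs, sigma) else "dropped")
```

With a rule like this in place, a shrinking surviving ensemble — rather than a flat average over everything — would be the measure of what the models have actually learned.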