The real IPCC AR5 draft bombshell – plus a poll

Take a look at Figure 1.4 from the AR5 draft (shown below). The gray bands in Fig. 1.4 are irrelevant (because the authors flubbed their definition); the colored bands are the ones that matter, because they bound all current and previous IPCC model forecasts: FAR, SAR, TAR, and AR4.

Look for the surprise in the graph. 

[Figure: IPCC AR5 draft Figure 1.4 – observed global temperatures vs. FAR, SAR, TAR and AR4 model projections]

Here is the caption for this figure from the AR5 draft:

Estimated changes in the observed globally and annually averaged surface temperature (in °C) since 1990 compared with the range of projections from the previous IPCC assessments. Values are aligned to match the average observed value at 1990. Observed global annual temperature change, relative to 1961–1990, is shown as black squares (NASA (updated from Hansen et al., 2010; data available at http://data.giss.nasa.gov/gistemp/); NOAA (updated from Smith et al., 2008; data available at http://www.ncdc.noaa.gov/cmb-faq/anomalies.html#grid); and the UK Hadley Centre (Morice et al., 2012; data available at http://www.metoffice.gov.uk/hadobs/hadcrut4/) reanalyses). Whiskers indicate the 90% uncertainty range of the Morice et al. (2012) dataset from measurement and sampling, bias and coverage (see Appendix for methods). The coloured shading shows the projected range of global annual mean near surface temperature change from 1990 to 2015 for models used in FAR (Scenario D and business-as-usual), SAR (IS92c/1.5 and IS92e/4.5), TAR (full range of TAR Figure 9.13(b) based on the GFDL_R15_a and DOE PCM parameter settings), and AR4 (A1B and A1T). The 90% uncertainty estimate due to observational uncertainty and internal variability based on the HadCRUT4 temperature data for 1951–1980 is depicted by the grey shading. Moreover, the publication years of the assessment reports and the scenario design are shown.
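The caption's note that "values are aligned to match the average observed value at 1990" is easy to skim past. Here is a minimal sketch of that alignment step, assuming annual series as NumPy arrays; the 1990 anomaly and the band endpoints are hypothetical placeholders, not values read off the figure.

```python
# Sketch of the caption's "values are aligned to match the average observed
# value at 1990": shift each series so that all coincide at the 1990 baseline.
import numpy as np

years = np.arange(1990, 2016)          # 1990-2015, the span of the colored bands
obs_1990 = 0.25                        # hypothetical observed 1990 anomaly (degC)

def align_to_1990(series):
    """Shift a series so its 1990 value equals the observed 1990 value."""
    return series - series[years == 1990][0] + obs_1990

# Hypothetical envelope of one assessment's projections, degC relative to 1990.
band_low  = align_to_1990(np.linspace(0.0, 0.3, years.size))
band_high = align_to_1990(np.linspace(0.0, 0.8, years.size))
print(band_low[0], band_high[0])       # both print 0.25: the series touch at 1990
```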

So let’s see how readers see this figure – remember to ignore the gray bands, as they aren’t part of the model scenarios.

I’ll have a follow-up with the results later, plus an essay on what else was found in the IPCC AR5 draft report related to this.

372 Comments
Kelvin Vaughan
December 20, 2012 2:56 am

HenryP says:
December 20, 2012 at 2:43 am
how stupid can you be…
smoking causes cancer, but does cancer also cause smoking?…
Eventually, at the Crematorium!

December 20, 2012 3:34 am

Yes, 6 annual observations are below the projections, and 5 are above.
The projections tend to be linear, whereas the observations have a definite fluctuation to them.
These fluctuations correspond pretty well with the Pacific Decadal Oscillation, but not so well with the solar cycle: http://greenerblog.blogspot.co.uk/2012/11/climate-models-are-they-any-good.html
There is a fair match between the outliers in the graph above and the behaviour of the PDO.
Therefore it looks as if the modellers need to turn up the gain on the PDO, in order to get a better match with observations.
However, since the PDO is an oscillation, it impacts only upon the short term correspondence between models and observations, not on the long term trends, and it is these long term trends which are important to policy makers.
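A minimal sketch of the comparison being proposed, assuming the observed anomalies and a PDO index are available as aligned annual series in two-column text files (the file names and layout are hypothetical):

```python
# Sketch: detrend observed annual temperatures, then correlate the residual
# fluctuations with the PDO index. Inputs are hypothetical two-column files
# (year, value) covering the same span of years as Fig 1.4.
import numpy as np

years, temps = np.loadtxt("obs_annual.txt", unpack=True)   # hypothetical file
_,     pdo   = np.loadtxt("pdo_annual.txt", unpack=True)   # hypothetical file

# Remove the linear trend so only the short-term fluctuations remain.
resid = temps - np.polyval(np.polyfit(years, temps, 1), years)

# Pearson correlation between the detrended observations and the PDO.
r = np.corrcoef(resid, pdo)[0, 1]
print(f"correlation of detrended temperatures with PDO: r = {r:+.2f}")
```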

richardscourtney
December 20, 2012 3:51 am

Terry Oldberg:
re your post December 19, 2012 at 6:41 pm
We need to return climate science to being real science before it damages the reputation of ALL science.
The “inmates” ARE “in charge of the asylum”. That is the problem.
Your non sequitur merely demonstrates you don’t understand the problem. Please read my post at December 19, 2012 at 5:59 pm (which you claim to be answering) again because you have failed to understand it.
Since 1999 I have been reporting that the models are bunkum and for much more fundamental reasons than those which you claim. But being right is not the issue here. Getting climate science right is the issue. And your ideas cannot be – and will not be – given any proper consideration until climate science is returned to being real science and not pseudoscience.
You are trying to ensure the “inmates” remain “in charge of the asylum”.
The climate modelers set their criterion for assessing their work. That provides an opportunity to hold them to account. You are helping them to again ‘move the goalposts’ of their accountability. And your work will be ignored until they are held to account.
Richard

Reply to  richardscourtney
December 20, 2012 8:55 am

richardscourtney:
We are in agreement that: a) the inmates (IPCC climatologists) are in charge of the asylum, b) presently, global warming climatology is a pseudoscience and, c) getting climate science right is the prime issue. Thus, it seems to me that it would be well for us to explore only the areas where our views are apparently not congruent.
Over a period of 13 years, my job was to design and manage a long succession of scientific studies. While in this job, I learned how to design a study whose methodology was without logical error. In the large research institute for which I worked, this ability was unique. It resulted from a command of advanced information theory that I acquired by hiring the person who had made these advances. Among researchers and academics, advanced information theory did not catch on. For this reason, my background in it remains highly unusual.
In viewing IPCC climatology, one facet of it that stands out for derision is that it lacks a statistical population. Without a population, it is impossible for the IPCC climate models to convey information to policy makers about the outcomes from their policy decisions; thus, though policy makers are currently going through the motions of regulating the climate, they do so while uninformed of the consequences of their actions. Also impossible is for the models to be statistically validated; the claims that are made by the models are non-falsifiable.
Though these truths are evident to me, they are apparently not evident to IPCC climatologists for they continue to go along the path they have trod for decades. In their failure to reform, a contributing factor seems to be the widespread notion that existing climate models can be tested by comparison of the time rate of change of the global temperature in a selected projection with the time rate of change of the global temperature in a selected global temperature time series.
To make this comparison is to perform an IPCC-style “evaluation.” In IPCC climatology, the idea of “projection” replaces the idea in legitimate science of “prediction,” while the idea of “evaluation” replaces the idea in legitimate science of “validation.” In lieu of the existence of a statistical population, it is impossible to perform a validation. It is nonetheless possible to perform an evaluation. In doing so, one must swallow unsupported and unsupportable assumptions of linearity, normality and statistical independence in the elements of a non-existent statistical population. Swallowing these assumptions is something which thus far you have been willing and even eager to do. To do so, however, is to contribute to keeping the inmates in charge of the asylum.

December 20, 2012 6:30 am

grahamw says
So if more investigation into the balance was to take place, more reliable projections could perhaps be made?
henry says
true. but I already did the whole job myself, to satisfy my own curiosity. Key for me was to figure out the amount of heat coming through the atmosphere. If current warming (or part thereof) were due to human release of CO2 or more GHG, one way or another, one would have to see minima rising, pushing up means.
I found the opposite
http://blogs.24.com/henryp/2011/05/06/henrys-pool-table-on-global-warming/
I found it was the maxima pushing up the means and the minima, not the other way around.
There were a few stations where minima were found moving up faster, like in Las Vegas, but here I found that a desert was changed into a luscious green city within a few decades. Obviously, you will find more heat trapped here by the increase in greenery (due to water being pumped in from afar?).
Next, after finding out that it is natural warming pushing up the temps, I determined the rates of warming/cooling over time:
http://blogs.24.com/henryp/2012/04/23/global-cooling-is-here/#comment-211
which ultimately led me to this graph:
http://blogs.24.com/henryp/2012/10/02/best-sine-wave-fit-for-the-drop-in-global-maximum-temperatures/
Therefore, I figure that my own projection is probably the most reliable:
There is no global warming anymore. No matter how much CO2 we will pump up:
We will cool,
by about 0.3 degrees K on the maxima and the means (because earth’s energy store is depleted)
in the next 8 years.
Be aware of it.
Prepare for it. (Buy some extra warm clothes.)

mpainter
December 20, 2012 9:53 am

Glad to have your thoughts. Actually, these last sixteen years do suffice to refute AGW theory. It is true that the models were invalid to begin with, having made false assumptions and misapplied principles of physics. But theoretical disputes are inconclusive, and theory stands or falls according to observations. This fundamental aspect of science, i.e., testing theory by observations, has been ignored by the AGW crowd. It is high time to put an end to this perversion of the principles of science. Do you not agree?

Reply to  mpainter
December 20, 2012 6:15 pm

mpainter:
Sometimes it’s possible to resolve a theoretical dispute through citation of the pertinent logical rule or rules. One of the disputes that can be resolved in this way is over the claim that in a recent period global warming was statistically insignificant. This claim violates logical rules. I’d be pleased to take this matter up with you in detail if you would like to do so.

Graham W
December 20, 2012 10:26 am

Terry, could you explain what you mean by IPCC science lacking a statistical population? Excuse my ignorance! Thanks.

Reply to  Graham W
December 20, 2012 7:24 pm

Graham W:
Thanks for taking the time to reply. I’ll initiate my response with a caution. In making an argument, one should avoid use of polysemic terms (terms with multiple meanings), for to use one or more of them is to foster improper inferences. “Science” is a polysemic term, hence should be avoided.
As I use the term “statistical population,” it references a set of statistically independent events. For example, it references a sequence of flips of a coin.
Events are of two types. One of these types is describable by its outcome. The other is describable by its condition and its outcome. The outcome is observable. The condition, if any, is also observable. Conditions and outcomes are both examples of states of nature.
A “prediction” is an extrapolation from an observed condition of an event to an unobserved outcome of the same event in which the outcome is inferred. In this way, the notion of “predictions” references the notion of events in a statistical population.
A “sample” is a subset of the events in a statistical population in which the outcome and the condition (if any) of each event have been observed. In this way, the notion of a “sample” references the notion of events in a statistical population.
That a model has been “validated” signifies that a match has been observed between the predicted and the observed relative frequencies of the various possible outcomes in a sample. In this way, the notion of “validation” references the notion of events in a statistical population.
That a model conveys “information” to makers of policy on CO2 emissions on the outcomes from their policy decisions implies the existence of the events in a statistical population that have these outcomes as properties. In this way, the notion of “information” references the notion of events in a statistical population.
As I’ve demonstrated, under the scientific method of inquiry the underlying statistical population plays a central role. Absent the underlying statistical population, the methodology of a study is not “scientific” under the disambiguation of this term by the courts of the United States. After a diligent four-year search, I have discovered no evidence of a statistical population underlying the inquiry by the climatological community into global warming. In the same search, I have uncovered strong evidence of the non-existence of this population.
If, as seems to be the case, the inquiry into global warming has not had a statistical population, this inquiry was not “scientific” under the above referenced disambiguation of “scientific.” If this conclusion surprises you, I suspect this surprise is a consequence from incorporation by climatologists of the equivocation fallacy into their arguments. Under the equivocation fallacy, polysemic terms change their meanings in the midst of an argument with the consequence that improper inferences are drawn.
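Oldberg's coin-flip example can be made concrete. A minimal sketch of "validation" as he defines it, comparing predicted with observed relative frequencies of outcomes in a sample; this illustrates the definition only, not his actual procedure:

```python
# Sketch of "validation" in the sense defined above, for a coin-flip model:
# the population is a set of independent flips, a sample is the observed
# flips, and validation checks predicted against observed frequencies.
import random

random.seed(1)
predicted_freq = 0.5                                   # the model's claim
sample = [random.random() < 0.5 for _ in range(1000)]  # 1000 observed events
observed_freq = sum(sample) / len(sample)

# Crude test of the match: within two standard errors of the prediction.
se = (predicted_freq * (1 - predicted_freq) / len(sample)) ** 0.5
ok = abs(observed_freq - predicted_freq) < 2 * se
print(f"predicted {predicted_freq}, observed {observed_freq:.3f}, validated: {ok}")
```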

maxberan
December 20, 2012 4:59 pm

When you (Terry) write “In lieu of the existence of a statistical population, it is impossible to perform a validation” as you have done more than once, are you using “in lieu of” to mean “in the absence of” or does it have some other technical meaning? I have more to say but need this cleared up in case I’m barking up the wrong tree.

Reply to  maxberan
December 20, 2012 8:20 pm

maxberan:
Thanks for giving me the opportunity to clarify. By “in lieu of” I mean “in the absence of.”

Gail Combs
December 20, 2012 7:16 pm

Terry Oldberg says:
December 20, 2012 at 6:15 pm
mpainter:
Sometimes it’s possible to resolve a theoretical dispute through citation of the pertinent logical rule or rules. One of the disputes that can be resolved in this way is over the claim that in a recent period global warming was statistically insignificant. This claim violates logical rules. I’d be pleased to take this matter up with you in detail if you would like to do so.
>>>>>>>>>>>>>>>>>>>>>>>>>>>
Why do you say that?
Assuming the temperature data is valid and that it has error bars, if the temperature varies only within those error bars and never goes outside them, then there is no statistically significant warming (or cooling).
The estimate of error:

…The title of this graph indicates this is the CRU computed sampling (measurement) error in C for 1969. Note how large these sampling errors are. They start at 0.5°C, which is the mark where any indication of global warming is just statistical noise and not reality. Most of the data is in the +/- 1°C range, which means any attempt to claim a global increase below this threshold is mathematically false. Imagine the noise in the 1880 data! You cannot create detail (resolution) below what your sensor system can measure. CRU has proven my point already – they do not have the temperature data to detect a 0.8°C global warming trend since 1960, let alone 1880….
http://strata-sphere.com/blog/index.php/archives/11420
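Whether a trend can be significant when each value's error bar exceeds it is checkable directly. A minimal sketch with a synthetic series, taking the 0.5 °C per-value error quoted above as given:

```python
# Sketch: fit a trend to a synthetic 1960-2012 series carrying 0.8 C of
# warming plus 0.5 C per-value measurement noise, then compare the fitted
# slope with its standard error. Illustrative only; the series is synthetic.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1960, 2013)
sigma = 0.5                                    # per-value error (C), as quoted above
series = 0.8 * (years - 1960) / 52 + rng.normal(0.0, sigma, years.size)

slope = np.polyfit(years, series, 1)[0]
se_slope = sigma / np.sqrt(np.sum((years - years.mean()) ** 2))
print(f"trend: {slope * 10:+.3f} C/decade, 2-sigma: {2 * se_slope * 10:.3f} C/decade")
# Independent errors average down across many years (and stations), so a
# trend can be statistically significant even when each annual value's
# error bar is larger than the total change; correlated errors would not.
```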

mpainter
December 20, 2012 7:33 pm

I am glad for your kind offer to explain everything to me, but first I would like you to respond to Graham W and maxberan, because they raised some important points which I would also like to see addressed.
After that, there are a few comments you made upthread that I would like you to address, if you would, just as a matter of courtesy.
And then, I would be delighted to hear about logical rules.

December 20, 2012 8:22 pm

mpainter:
I’d be grateful if you would restate the upthread points you’d like me to address.

john robertson
December 20, 2012 9:19 pm

Oldberg 7:24: So the science of the IPCC team cannot be tested, because they have none?
And snickering about their failures (by their own claims) is irrelevant?
I noticed the disagreement; I feel most involved are saying the same thing in different words.
Boils down to: the IPCC is a deliberate scam to regulate people, under the cloak of pretending to gather science about CO2 affecting weather.
All statistical claims are moot, because there is no data as such. Nothing to work with.
I do agree many arguments can be resolved by stopping to define the terms. And that climatology is notable for shifting the meaning of terms. Arguing with the mist, in effect.
Or do I miss the thrust of your posting?

Reply to  john robertson
December 21, 2012 8:59 am

john robertson:
It sounds as though we see the situation similarly.

maxberan
December 21, 2012 2:53 am

Much of my professional life was spent at the boundary of hydrology and statistics and I am not unaccustomed to the pleas from academically trained statisticians telling me that my treatment of the data didn’t fulfil the strict requirements of the chosen statistical technique. It was often a case of the best being made an enemy of the good and an insufficient appreciation of the compromises required given the nature of the subject, the availability of the data, and the demands of the client.
The way I look at this population issue is like this. We envisage an all-knowing entity – the great climatologist in the sky – with a large sack containing all the annual global mean temperatures that ever were and will be. Every year she reaches into the sack and throws us the number for the year. Bad eyesight or fat finger intervene so different climatologists get the number a bit wrong but not so wrong that the average among them doesn’t reveal a pattern when plotted against the year number.
The contents of the sack are the population; the climatologists’ data constitute a sample (call it sample A) from the population. It may not be a sample in the “subset of the population” sense, but it’s a workable stab at it, especially given the questionable reality of the numbers in the sack.
The attention of the policymaker is drawn to some worrying features of the pattern, so she asks the climatologists what they think the numbers coming out of the sack will look like in 100 years’ time. Different climatologists come up with different ways of second-guessing this, but the group with the brightest mathematics and physics credentials (some claiming a personal hotline to she of the sack) reckon they know something about the mechanism underlying the numbers in the sack; not everything, but enough to be getting along with. Armed with this fuzzy knowledge they generate different versions of the future under different assumptions about the modelled process and its inputs. This is summarised as an ever-broadening fan of trajectories, somewhere within which the poor policymaker has to decide what her working supposition will be for deciding what to do.
To help further, the intellectual leap is then made that this fan comprises alternative “samples” B1, B2, B3 etc. from the population in the sack, and a cottage industry is then created comparing A with the various B’s within their common period, as a basis for projection out into the future using the “best” bits of sample B as judged from the comparison.
Obviously there is a mass of missing detail in this “toy story” – more variables in the sack, other methods yielding samples C, D etc and their role if any, conditionalities on policy decisions – but I address here the broad methodological principle so I hope we don’t get sidetracked into enumerating lacunae as I know them well enough.
Anyway, back to the broad methodological principle: for the life of me I can’t see what is so wrong with it as a pragmatic approach to the policymaker’s question as to render it meaningless. Indeed, how else would one proceed? Something along these lines must be repeated right across applied science and technology, where we don’t have the luxury of repeated experiments or manipulation. Okay, we’re not privy to the contents of the sack (i.e. we don’t know the population, which seems to preoccupy you so much) and the sampling might fail many of your definition tests, but hypothesis testing can cope with comparing samples through appropriate randomisation. In the context of the compromises and fuzzinesses inherent in environmental sciences and policymaking, I cannot conceive that substituting an information-theory-based approach for sampling-distribution-based tests in the choice between alternative projected futures will make an important difference.
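The "appropriate randomisation" mentioned above can be illustrated with a permutation test comparing observed sample A with one model trajectory B over their common period. A minimal sketch with synthetic stand-in series; note that a real test would have to account for autocorrelation in annual data, which plain permutation ignores:

```python
# Sketch of a permutation test: does model trajectory B run warmer than
# observed series A by more than relabeling chance would allow? The two
# series here are synthetic stand-ins for a common 23-year period.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(0.10, 0.10, 23)        # stand-in observed anomalies (sample A)
B = rng.normal(0.25, 0.10, 23)        # stand-in model trajectory (sample B)

observed_gap = B.mean() - A.mean()
pooled = np.concatenate([A, B])
exceed = 0
for _ in range(10_000):
    rng.shuffle(pooled)               # relabel under the null of no difference
    exceed += pooled[23:].mean() - pooled[:23].mean() >= observed_gap
print(f"permutation p-value for the model-minus-obs gap: {exceed / 10_000:.4f}")
```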

Reply to  maxberan
December 21, 2012 9:31 am

maxberan:
Thank you for providing us with an excellent overview of a way of thinking that may well be that of the climatologists who created the methodology of the inquiry into global warming reviewed in the IPCC’s periodic assessment reports. There is something wrong with this way of thinking: a consequence of it is that information gets fabricated. The fabricated information provides researchers with information about the distribution of the elements of the non-existent statistical population. It tells them, for example, that the sample mean of the global temperature varies linearly with the time.
In the worst case, these researchers’ propensity for fabricating information leads them to dispense entirely with identification of the statistical population that underlies those models which are a product of their research. Under this circumstance, it is not possible for the models to convey information to policy makers about the outcomes from their policy decisions, with the result that the research is a total flop. However, through clever use of the equivocation fallacy, researchers make it seem to the naive that policy makers are provided with information. This worst-case scenario is what we’ve gotten from the climatological community.
By the way, the models that have come to us from the several hundred billion US$ in climatological research are not predictive models. If they were, the elements of the underlying population would be observable and would not be global temperatures.

Graham W
December 21, 2012 6:49 am

Thanks for the clarification. I think I’m starting to understand…though there is a statistical population for each measurable variable of climate, e.g. temperature, humidity, etc., there is no statistical population for, say, the Greenhouse Effect. So if the models were basing projections of future climate change only on previous climate change, the results of the projections could be statistically validated or otherwise. But since the models incorporate an unknown, as yet unmeasurable effect (which has no statistical population) into their projections, there can be no statistical validation. In fact it is not a statistical process at all.
So by debating whether there has/hasn’t been a 16-year period of no warming, and whether this falsifies the models or not, in essence we are playing into the hands of the IPCC, by making the false assumption that the models CAN be statistically validated or falsified. What really needs to happen is for the entire process to be exposed as unscientific for the reasons you have stated; then all arguments of 16 years warming/no warming will be moot. Is this what you’re saying?

Reply to  Graham W
December 21, 2012 9:57 am

Graham W:
That’s not exactly what I’m trying to say. The statistical population that I have in mind would underlie the IPCC climate models if they were predictive models but they are not predictive models and there is no underlying population. Models that are not predictive models are useless for the purpose of making policy on CO2 emissions because they convey no information to policy makers about the outcomes from their policy decisions. The authors of the periodic IPCC assessment reports have covered up this state of affairs through a deceptive use of language that exploits the equivocation fallacy. A principle of logic states that a proper inference may not be drawn from an equivocation but that is exactly what these authors do. An equivocation is a misleading use of a term wherein the meaning of this term changes in the middle of an argument. “Sixteen years with no statistically significant warming” is an improper inference that is drawn from an equivocation.

December 21, 2012 7:37 am

I only needed a sample of 47 weather stations to establish that we are cooling,
http://blogs.24.com/henryp/2012/04/23/global-cooling-is-here/
which ultimately led me here
http://blogs.24.com/henryp/2012/10/02/best-sine-wave-fit-for-the-drop-in-global-maximum-temperatures/
we will continue to cool
until 2040
Live with it. Prepare for it.
Anyone who says otherwise will be proved wrong.
(you can see the binomial curve in the graph going up from 1992-1998 and now starting to curve down)
To prove we are cooling, see here:
http://www.woodfortrees.org/plot/hadcrut4gl/from:2002/to:2013/plot/hadcrut4gl/from:2002/to:2013/trend/plot/hadcrut3vgl/from:2002/to:2013/plot/hadcrut3vgl/from:2002/to:2013/trend/plot/rss/from:2002/to:2013/plot/rss/from:2002/to:2013/trend/plot/gistemp/from:2002/to:2013/plot/gistemp/from:2002/to:2013/trend/plot/hadsst2gl/from:2002/to:2013/plot/hadsst2gl/from:2002/to:2013/trend
all major data sets now say so, just as I had predicted.
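The trend claim in that link can be checked offline. A minimal sketch, assuming each dataset named in the URL has been exported to a local two-column text file (decimal year, anomaly); the file names are hypothetical stand-ins:

```python
# Sketch: fit the 2002-2013 least-squares trend for each dataset named in
# the link above. Assumes hypothetical local exports with two columns:
# decimal year, anomaly (C).
import numpy as np

for name in ["hadcrut4gl", "hadcrut3vgl", "rss", "gistemp", "hadsst2gl"]:
    t, anom = np.loadtxt(f"{name}.txt", unpack=True)   # hypothetical file
    m = (t >= 2002) & (t < 2013)
    slope = np.polyfit(t[m], anom[m], 1)[0]
    print(f"{name:12s} 2002-2013 trend: {slope * 10:+.3f} C/decade")
```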

Graham W
December 21, 2012 7:49 am

In fact you could even allege that the “16 years of no warming” meme may actually have come originally from the IPCC themselves; it almost seems too good to be true from their perspective…you now have all the “deniers” (their terminology, not mine) in essence SUPPORTING the notion that their models can be statistically “proven” or falsified (I know “proven” is probably the wrong word, but I didn’t know what else to use), giving credence to a methodology that was not scientific to begin with.

Graham W
December 21, 2012 7:58 am

P.S. In my opinion Henry P’s research falls into none of the logical “traps” that the IPCC’s has…seeing as Henry is using actual past measurements only, and using a logical process of looking at the temperature means, minima and maxima to incorporate the Greenhouse Effect into his predictions in a scientific way. Not by just saying “the Greenhouse Effect must be x amount because this is how much we warmed from y date to z date and we know we can ignore this, this and this”. Henry P’s conclusions are more valid than the IPCC’s in light of what we are discussing.

Graham W
December 21, 2012 8:19 am

The “16 years of no warming” meme…is not a battle the IPCC even needs to win. Even if forced to acknowledge (as they rightly should) that their models are over-estimating, they will simply claim the models just need adjusting and will be more reliable in future. If you accept a false means by which the effects of these “adjustments” can be statistically confirmed or rejected another 15 years down the line, their lies can be perpetuated indefinitely using more circular logic and faulty reasoning.

December 21, 2012 10:08 am

Gail Combs:
I may have addressed your question of Dec. 20, 2012 at 7:16 pm to your satisfaction in responding to other bloggers. If not, please respond with a description of the remaining issues and I’ll address them.

December 21, 2012 11:38 am

By putting the Gleissberg solar cycle into a chart, as I have done (and others can follow and copy?), I think it is possible for me to estimate that all observed warming is natural, or very nearly completely natural. Please correct me if you think I am wrong.
Consider the fact that we really do not have a global temp. record to speak of from before about 1925. In those days they just manufactured thermometers, never realizing that over time they need to be re-calibrated…I have challenged anyone to bring me the calibration certificates of thermometers used in weather stations from before that time, with no response.
This means that if we look at my chart, which is looking at energy-in
(not to be confused with energy-out)
http://blogs.24.com/henryp/2012/10/02/best-sine-wave-fit-for-the-drop-in-global-maximum-temperatures/
we must rather look at the absolute (positive) value of the increase in the heat coming through the top of the atmosphere from 1927 (85 years ago) until 1950. This means an increase of ca. 0.037/2 (roughly integrated) x 23 = 0.4 degrees K. In the next period, from 1950 to 1995, when records were firmly established, we are seeing the warming that everyone started to fear, namely 0.037/4 (roughly integrated) x 45 = 0.4 degrees K. From 1995 until 2012 it looks like we went down at ca. 0.037/2 x 17 = 0.3.
So I have 0.4 + 0.4 -0.3 = 0.5 degrees K up since 1927
now look here:
http://www.woodfortrees.org/plot/hadcrut4gl/from:1927/to:2013/plot/hadcrut4gl/from:1927/to:2013/trend/plot/hadcrut3vgl/from:1927/to:2013/plot/hadcrut3vgl/from:1927/to:2013/trend/plot/rss/from:1927/to:2013/plot/rss/from:1927/to:2013/trend/plot/gistemp/from:1927/to:2013/plot/gistemp/from:1927/to:2013/trend/plot/hadsst2gl/from:1927/to:2013/plot/hadsst2gl/from:1927/to:2013/trend
there is no “extra” man-made global warming.
But, please do correct me if you think my reasoning is wrong.
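As a quick check, henry's integration above can be reproduced exactly as stated, taking the comment's 0.037 figure and rough divisors at face value (the exact sum is about +0.53 °C, which he rounds to 0.5):

```python
# Reproducing the back-of-envelope integration above, using the rates and
# periods exactly as stated in the comment (0.037 is the comment's peak
# warming rate in C/annum; the divisors are its rough integration factors).
segments = [
    (+0.037 / 2, 23),   # 1927-1950: warming
    (+0.037 / 4, 45),   # 1950-1995: warming
    (-0.037 / 2, 17),   # 1995-2012: cooling
]
total = sum(rate * years for rate, years in segments)
print(f"net change since 1927: {total:+.2f} C")   # ~ +0.4 +0.4 -0.3 = +0.5 C
```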

Graham W
December 21, 2012 12:09 pm

Terry, by a lack of a statistical population are you referring to global temperatures…in that you cannot directly observe “a global temperature”, since it’s an average from measurements all around the world? Or is it the measure of a Greenhouse Effect that’s the problem, or is it both things? Or is it all measurements and variables? I understand what you are saying regarding the lack of a statistical population meaning that the reports offer no useful information to policy makers. I’m still just a little hazy on the particulars of what is lacking and why it is lacking.

maxberan
December 21, 2012 12:45 pm

I am afraid your firm would be in for a rude awakening if it were ever approached for advice about future climate (or even current unperturbed climate). I note from the bibliography that your background in climatology is limited, and I suspect strongly that you will discover huge difficulty in abstracting, from a system with such a vast state space – comprising multiple outputs and processes spanning atmosphere, hydrosphere and biosphere, with vastly differing characteristic time and space scales – some nice populations that suit the requirements of your treatment; as you put it, “a set of statistically independent events”.
And you demand that the “elements” of this population be “observable” – again you will be lucky; hardly anything in the natural environment is directly observable. It is almost invariably filtered through some instrument sensor or proxy measurement subject to bias and drift, and reparameterised to fit the calculation scheme involving temporal and/or 2- and 3-D spatial extension.
What you call “fabricating information” and perhaps I would recognise as exercising professional judgment, will be no less than what is applied for current climate modelling.
I did not at all recognise what you say about fabricated information telling us that
Expectation(Tbar(year)) = a + b*year.
I suppose it might be that, after the model runs, the coefficient of year^2 is found to be not significant; or maybe there is a strong predisposition for something like this because, despite superficial complexity, a GCM when it comes down to it behaves like a zero-dimension energy-balance equation driven by exponentially growing GHGs, implying linear growth in forcing. But you can’t be sure this is the way it will work out, and it wouldn’t under other scenarios for GHG growth.
I rather fear you fit closely the picture I presented in my previous posting where I charged my statistical theorist with “It was often a case of the best being made an enemy of the good and an insufficient appreciation of the compromises required given the nature of the subject, the availability of the data, and the demands of the client.”

Reply to  maxberan
December 21, 2012 3:54 pm

maxberan:
Thanks for taking the time to respond. My bulleted responses to your comments follow.
*Using proper jargon, one is said to “extract a state-space from a ‘feature space’.” Using available technology, to extract a state-space from a feature space of vast dimensionality is no problem. In the construction of an information theoretically optimal long-range weather forecasting model, Ron Christensen and his colleagues at Entropy Limited extracted a state-space from a feature space that was the Cartesian product of 100,000 features. Each state in the extracted state-space was observable.
*In the practice that I call “fabricating information,” one makes up information. To call it “exercising professional judgement” is to cover up the unscrupulousness of the referenced professional.
*For the user of the resulting model, fabricating information has the downside that the model fails to validate when tested. Climatologists avoid this embarrassment via a dodge in which they “evaluate” their models rather than “validating” them. These models are insusceptible to being validated because they make no predictions and reference no statistical population. However, few taxpayers, journalists or politicians are aware of the fact that an “evaluation” differs from a “validation” and that only the latter is associated with the scientific method of inquiry.
Thus, taxpayer money continues to leave the pockets of taxpayers and enter the pockets of climatologists for purposes of pseudo-scientific research.
*The notion of “information” is defined in terms of observables. Thus, when a variable is not observable, one may not obtain information about its numerical value. The vast majority of global temperature values are not observable in the interval in which it is claimed there was no statistically significant global warming, because these values were neither measured nor recorded. Thus, these temperature values can only have been fabricated.
Your closing paragraph amounts to an ad hominem argument for continuing to pump public money into worthless research. This argument is faulty.

mpainter
December 21, 2012 9:49 pm

maxberan says: December 21, 2012 at 12:45 pm
Anyway, back to the broad methodological principle: for the life of me I can’t see what is so wrong with it as a pragmatic approach to the policymaker’s question as to render it meaningless.
========================================================
Do you advocate that policy decisions be formulated on the product of the GCMs?

maxberan
Reply to  mpainter
December 22, 2012 5:56 am

In principle mpainter, yes (asking about my attitude to GCMs), though with due regard to their faults and deficiencies (which amounts to a “no” to what’s actually happened). I don’t blame policy makers for wanting answers, I don’t blame climatologists for wanting to help; it’s what happens after that where the problems arise and blame begins.
To amplify, the greenhouse effect is real and some anthropogenic contribution due to ghgs and land-use change flows naturally from that. I’d be okay with policy makers making policy on the basis of models whose headline output on global temperature was between zero and half a degree change from 1990 to 2012, whose output on precipitation change was zero, and whose derived output on extreme events was zero change. This could be achieved with GCMs by inputting forcings relieved of their unobserved positive feedbacks and not trying to match model with observation with any fluctuations short of 20 years.
The advantage of a GCM (over lower dimensioned energy balance or single variable regression model for example) is that it provides internal consistency between places and across parameters and this property might be important for strategic policy making. However as the changes I can believe are so close to the current situation I suppose you could argue why bother with a model, just use the back data with some mild tweaking which would also preserve internal consistency.
One would hope that policy makers would react to a sane view of what climate science can reliably provide by way of forecast changes by re-directing their policy-making attention to known problems in the here and now, and leaving climate to those charged with adaptation to weather fluctuations: agronomists, flood-protection engineers, insurance companies, etc. I realise there is fat chance of this happening as climate science is such a minor component of what is currently driving their policies. To the shame of those involved, policy makers have no difficulty recruiting compliant climatologists to provide retrospective support for what they want to do for other reasons, nor agents from those weather-sensitive sectors latching on to the climate-change idea as a profit or greenwash opportunity.
Sorry, long answer to a short question!

mpainter
December 22, 2012 8:37 am

maxberan says: December 22, 2012 at 5:56 am
I realise there is fat chance of this happening as climate science is such a minor component of what is currently driving their policies.
=============================================
Welcome to politics. See what happens when a vehemently disputed science is used to engender public panic and is precipitated into the realm of public affairs, thanks to the AGW hype. What seems to escape you is that this was all deliberately done in a campaign of long endurance. In the political arena one wins big or loses big, and so it is no longer a search for understanding. Thus is science discredited, and climate science especially. Scientific certitude and integrity have been sacrificed to advance the agenda of a political combination, a most powerful and influential component of which is self-interest and profit-seeking. In short, it is the ugliest thing ever seen in science.
What are you willing to do to cure this malignancy?

maxberan
Reply to  mpainter
December 22, 2012 10:29 am

Sorry, can’t cope with all that mpainter. Too many flips between climatologists leading politicians by the nose and politicians leading climatologists by the nose even in the same sentence. Surprised you say scientific certitude is sacrificed – I’d have thought you would have said the opposite, that it had been invoked falsely.

December 22, 2012 8:43 am

maxberan and mpainter:
The prospects for building a GCM that is suitable for policy making purposes do not look good. Using available technology, it is possible to build a model that is maximally efficient in its use of information thus conveying the maximum possible information to a policy maker about the outcomes from his/her policy decisions. Using this hypothetical model as a benchmark, one can draw some conclusions about the possibility of controlling the climate through the predictions of a GCM.
An early step in the construction of such a model would be to identify the duration of an event. For avoidance of a waste of capital from premature retirement of power producing facilities, this duration should approximate the lifetime of a power producing facility. For the sake of illustration, I’ll assume that this lifetime is thirty years. Thirty years, then, is the duration of an event in the statistical population that underlies my hypothetical model and is the period over which this model predicts.
The period going back to the year 1850, when the hadcrut3 global temperature time series begins, contains between 5 and 6 statistically independent observed events of thirty-year duration each. Experience in building maximally efficient models suggests that the minimum number of events for construction of a predictive model for a complex system is 150. Thus, we are short on observed events by at least 145 events. We will have the minimal number of them in about 4350 years. To be relatively safe, I’ll assume that we’ll need 1500 observed events (a small sample by the standards of medical research). We’ll need another 44850 years to gather the 1495 observed events that we do not already have.
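The event-count arithmetic here checks out on the comment's own assumptions; a quick sketch, taking the 30-year event length and the 1850 start of the record as stated:

```python
# Checking the arithmetic above on its stated assumptions: 30-year events,
# instrumental record beginning in 1850 and read through roughly 2012.
event_len = 30
record_years = 2012 - 1850                    # 162 years of record
have = record_years // event_len              # 5 full events (between 5 and 6)
for needed in (150, 1500):
    more_events = needed - have
    more_years = more_events * event_len
    print(f"{needed} events: {more_events} more needed, ~{more_years} more years")
```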
It can be concluded that the temperature record does not presently support control of the climate. For policy making purposes, the outcomes of events vary randomly with respect to the CO2 concentration and all other independent variables. To cover the contingency that escalating CO2 concentrations will produce significant warming, the rational policy response is to prepare to mitigate effects from the warming, perhaps through geoengineering.
There is a possible alternative. If a reliable proxy for the global temperature were to be discovered then perhaps we could extract observed events in large numbers from cores gathered by drilling into geological strata. The resulting model could then be used in the control of the climate.

December 22, 2012 9:50 am

henry says
http://wattsupwiththat.com/2012/12/14/the-real-ipcc-ar5-draft-bombshell-plus-a-poll/#comment-1179335
henry says (to himself)
there were a couple of errors there,
thanks for pointing that out to me
I will have to think about that again,
in more detail.
mpainter says
….to advance the agenda of a political combination, a most powerful and influential component of which is self-interest and profit-seeking.
henry says
I was just thinking today of how God came into this world as a helpless little baby, completely dependent on us humans, even to survive the whims of powerful dictators (Herod),
compared to how great He must be to be able to create life, seemingly going on forever (at least ca. 500 million years and counting), by putting measures in the sky so as to control temperatures on earth, for life to be able to continue;
often we want to think of God as being like Superman, able to do anything, but we do not want to see Him as a helpless baby in a manger.
Perhaps it is helplessness that eventually makes us great?