A Clear Example of IPCC Ideology Trumping Fact

By Paul C. Knappenberger and Patrick J. Michaels

Center for the Study of Science, Cato Institute

Within the U.S. federal government (and governments around the world), the U.N.’s Intergovernmental Panel on Climate Change (IPCC) is treated as the authority on climate change.

This isn’t a good idea.

Here perhaps is the clearest example yet. By the time you get to the end of this post, we think you may be convinced that the IPCC does not seek to tell the truth—the truth being that it has overstated the case for climate worry in its previous reports. The “consensus of scientists” instead prefers to obfuscate.

In doing so, the IPCC is negatively impacting the public health and welfare of all of mankind as it influences governments to limit energy use, instead of seeking ways to help expand energy availability (or, just stay out of the way of the market).

Everyone knows that the pace of global warming (as represented by the rise in the earth’s average surface temperature) has slowed during the past decade and a half. Coming up with reasons why is the hottest topic in climate change science these days, with about a dozen different explanations being forwarded.

Climate model apologists are scrambling to try to save their models’ (and their own) reputations—because the one thing that they do not want to have to admit is perhaps the simplest and most obvious answer of all—that climate models exaggerate the amount that the earth’s average temperature will increase as a result of human greenhouse gas emissions. If the models are overheated, then so too are all the impacts that derive from the model projections, which is the death knell for all those proposed regulations limiting our use of fossil fuels for energy.

In the Summary for Policymakers (SPM) section of its Fifth Assessment Report, even the IPCC recognizes the recent divergence of model simulations and real-world observations:

“There are, however, differences between simulated and observed trends over periods as short as 10 to 15 years (e.g., 1998 to 2012).”

But, lest this leads you to think that there may be some problem with the climate models, the IPCC clarifies:

“The long-term climate model simulations show a trend in global-mean surface temperature from 1951 to 2012 that agrees with the observed trend.”

Whew! For a minute there it seemed like the models were struggling to contain reality, but we can rest assured that over the long haul, say, since the middle of the 20th century, according to the IPCC, that model simulations and observations “agree” as to what is going on.

The IPCC references its “Box 9.2” in support of the statements quoted above.

In “Box 9.2” the IPCC helpfully places the observed trends in the context of the distribution of simulated trends from the collection of climate models it uses in its report. The highlights from Box 9.2 are reproduced below (as our Figure 1). In this Figure, the observed trend for different periods is in red and the distribution of model trends is in grey.


Figure 1. Distribution of the trend in the global average surface temperature from 114 model runs used by the IPCC (grey) and the observed temperatures as compiled by the U.K.’s Hadley Center (red). (Figure from the IPCC Fifth Assessment Report)

As can be readily seen in Panel (a), during the period 1998-2012, the observed trend lies below almost all the model trends. The IPCC describes this as:

…111 out of 114 realizations show a GMST [global mean surface temperature] trend over 1998–2012 that is higher than the entire HadCRUT4 trend ensemble

This gives rise to the IPCC SPM statement (quoted above) that “There are, however, differences between simulated and observed trends over periods as short as 10 to 15 years (e.g., 1998 to 2012).”

No kidding!

Now let’s turn our attention to the period 1951-2012, Panel (c) in Figure 1.

The IPCC describes the situation depicted there as:

Over the 62-year period 1951–2012, observed and CMIP5 [climate model] ensemble-mean trends agree to within 0.02°C per decade…

This sounds like the models are doing pretty well—only off by 0.02°C/decade. And this is the basis for the IPCC SPM statement (also quoted above):

The long-term climate model simulations show a trend in global-mean surface temperature from 1951 to 2012 that agrees with the observed trend.

Interestingly, the IPCC doesn’t explicitly tell you how many of the 114 climate models are greater than the observed trend for the period 1951-2012. And, it is basically impossible to figure that out for yourself based on their Panel (c) since some of the bars of the histogram go off the top of the chart and the x-axis scale is so large as to bunch up the trends such that there are only six populated bins representing the 114 model runs. Consequently, you really can’t assess how well the models are doing and how large a difference of 0.02°C/decade over 62 years really is. You are left to take the IPCC’s word for it.

We don’t.

The website Climate Explorer archives and makes available the large majority of the climate model output used by the IPCC. From there, you can assess 108 (of the 114) climate model runs incorporated into the IPCC graphic—a large enough majority to quite accurately reproduce the results.

We do this in our Figure 2. However, we adjust both axes of the graph such that all the data are shown and that you can ascertain the details of what is going on.

 


Figure 2. Distribution of the trend in the global average surface temperature from 108 model runs used by the IPCC (blue) and the observed temperatures as compiled by the U.K.’s Hadley Center (red) for the period 1951-2012 (the model trends are calculated from historical runs with the RCP4.5 results appended after 2006). This presents nearly the same data as Figure 1, Panel (c).
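For readers who want to check our arithmetic, here is a minimal sketch of the Figure 2 calculation. It is not our production code: the file name and column layout are hypothetical, and it assumes the annual global-mean series for each model run and for HadCRUT4 have already been downloaded from Climate Explorer.

```python
# Minimal sketch of the Figure 2 calculation (not our production code).
# It assumes the annual global-mean temperature series for each model run
# and for HadCRUT4 have been downloaded from Climate Explorer and saved as
# columns of a CSV file; the file name and layout here are hypothetical.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

def trend_per_decade(years, temps):
    """Least-squares linear trend, in degrees C per decade."""
    return np.polyfit(years, temps, 1)[0] * 10.0

data = pd.read_csv("annual_gmst.csv", index_col="year")   # hypothetical file
period = data.loc[1951:2012]
years = period.index.values

obs_trend = trend_per_decade(years, period["HadCRUT4"].values)
model_cols = [c for c in period.columns if c != "HadCRUT4"]
model_trends = np.array([trend_per_decade(years, period[c].values)
                         for c in model_cols])

plt.hist(model_trends, bins=20, color="steelblue", label="CMIP5 runs")
plt.axvline(obs_trend, color="red",
            label=f"HadCRUT4 ({obs_trend:.3f} C/decade)")
plt.xlabel("1951-2012 trend (C/decade)")
plt.ylabel("Number of model runs")
plt.legend()
plt.show()
```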

What we find is that there are 90 (of 108) model runs that simulate more global warming to have taken place from 1951-2012 than actually occurred and 18 model runs simulating less warming to have occurred. Which is another way of saying the observations fall at the 16th percentile of model runs (the 50th percentile being the median model trend value).
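The percentile arithmetic is easy to verify; a short sketch, reusing the `model_trends` and `obs_trend` arrays from the snippet above:

```python
# Percentile rank of the observed trend within the model-trend distribution.
# With 18 of 108 runs below the observation, this comes out near 16.7,
# i.e., roughly the 16th percentile quoted above.
import numpy as np

def percentile_rank(model_trends, obs_trend):
    model_trends = np.asarray(model_trends)
    return 100.0 * np.count_nonzero(model_trends < obs_trend) / model_trends.size
```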

So let us ask you this question, on a scale of 1 to 5, or rather, using these descriptors, “very low,” “low,” “medium,” “high,” or “very high,” how would you describe your “confidence” in this statement:

The long-term climate model simulations show a trend in global-mean surface temperature from 1951 to 2012 that agrees with the observed trend.

OK. You got your answer?

Our answer is, maybe, “medium.”

After all, there is plenty of room for improvement.

For example, the model range could be much tighter, indicating that the models were in better agreement with one another as to what the simulated trend should be. As it is now, the model range during the period 1951-2012 extends from 0.07°C/decade to 0.21°C/decade (note that the observed trend is 0.107°C/decade). And this is from models which were run largely with observed changes in climate forcings (such as greenhouse gas emissions, aerosol emissions, volcanoes, etc.) and for a period of time (62 years) during which short-term weather variations should all average out. In other words, they are all over the place.
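For reference, that spread can be read straight off the `model_trends` array from the first sketch (the exact endpoints will depend on which runs are pulled from Climate Explorer):

```python
# Spread of the 1951-2012 model trends; values are approximate and depend on
# the exact set of runs included.
print(model_trends.min(), model_trends.max())   # roughly 0.07 and 0.21 C/decade
print(model_trends.max() / model_trends.min())  # roughly a factor of three
```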

Another way the agreement between model simulations and real-world observations could be improved would be if the observed trend fell closer to the center of the distribution of model projections. For instance, the agreement would be better if, say, 58 model runs produced more warming and the other 50 produced less warming.

What would lower our confidence?

The opposite set of tendencies. The model distribution could be even wider than it is currently, indicating that the models agreed with each other even less than they do now as to how the earth’s surface temperature should evolve in the real world (or that natural variability was very large over the period of trend analysis). Or the observed trend could move further from the center point of the model trend distribution. This would indicate an increased mismatch between observations and models (more similar to that which has taken place over the 1998-2012 period).

In fact, the latter situation is ongoing—that is, the observed trend is moving steadily leftward in the distribution of model simulated trends.

Figure 3 shows at which percentile the observed trend falls for each period of time starting from 1951 and ending each year from 1980 through 2013.


Figure 3. The percentile rank of the observed trend in the global average surface temperature beginning in the year 1951 and ending in the year indicated on the x-axis within the distribution of 108 climate model simulated trends for the same period. The 50th percentile is the median trend simulated by the collection of climate models.

After peaking at the 42nd percentile (still below the median model simulation, which is the 50th percentile) during the period 1951-1998, the observed trend has steadily fallen in percentile rank, and currently (for the period 1951-2013) is at its lowest point ever (14th percentile) and is continuing to drop. Clearly, this is looking bad for the models, as the level of agreement with observations is steadily decreasing with time.
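The sliding rank shown in Figure 3 is just the same calculation repeated for each end year; a sketch, under the same hypothetical data layout used above (the annual series must extend through 2013):

```python
# Figure 3 sketch: for each end year from 1980 to 2013, recompute the
# 1951-to-end-year trends and the percentile rank of the observed trend.
# Reuses `data`, `trend_per_decade`, and `percentile_rank` from the
# earlier snippets.
ranks = {}
for end_year in range(1980, 2014):
    seg = data.loc[1951:end_year]
    yrs = seg.index.values
    obs = trend_per_decade(yrs, seg["HadCRUT4"].values)
    runs = [trend_per_decade(yrs, seg[c].values)
            for c in seg.columns if c != "HadCRUT4"]
    ranks[end_year] = percentile_rank(runs, obs)
```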

In statistical parlance, if the observed trend drops beneath the 2.5th percentile, the evidence would widely be considered strong enough to indicate that the observations were not drawn from the population of model results. In other words, a statistician would describe that situation as one in which the models disagree with the observations with “very high confidence.” Some researchers use a laxer standard and would consider falling below the 5th percentile enough to conclude that the observations are not in agreement with the models. That case could be described as “high confidence” that the models and observations disagree with one another.

So, just how far away are we from either of these situations?

It all depends on how the earth’s average surface temperature evolves in the near future.

We explore three different possibilities (scenarios) between now and the year 2030.

Scenario 1: The earth’s average temperature during each year of the period 2014-2030 remains the same as the average temperature observed during the first 13 years of this century (2001-2013). This scenario represents a continuation of the ongoing “pause” in the rise of global temperatures.

Scenario 2: The earth’s temperature increases year-over-year at a rate equal to that observed during the period 1951-2012 (a rate of 0.107°C/decade). This represents a continuation of the observed trend.

Scenario 3: The earth’s temperature increases year-over-year during the period 2014-2030 at a rate equal to that observed during the period 1977-1998—the period often identified as the 2nd temperature rise of the 20th century. The rate of temperature increase during this period was 0.17°C/decade. This represents a scenario in which the temperature rises at the most rapid rate observed during the period often associated with an anthropogenic influence on the climate.

Figure 4 shows how the percentile rank of the observations evolves under all three scenarios from 2013 through 2030. Under Scenario 1, the observed trend would fall below the 5th percentile of the distribution of model simulations in the year 2018 and beneath the 2.5th percentile in 2023. Under Scenario 2, the years to reach the 5th and 2.5th percentiles are 2019 and 2026, respectively. And under Scenario 3, the observed trend (starting in 1951) would fall beneath the 5th percentile of model simulated trends in the year 2020 and beneath the 2.5th percentile in 2030.
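A rough sketch of how the scenarios can be spliced onto the observed record before re-running the percentile-rank loop above. Anchoring Scenarios 2 and 3 to the 2013 value is our simplifying assumption here, and the model series would likewise need to be extended with the RCP4.5 runs through 2030:

```python
# Extend the observed annual series to 2030 under one of the three scenarios.
# Rates are per year (0.107 and 0.17 C/decade from the text); anchoring the
# trend scenarios to the 2013 value is a simplifying assumption.
def extend_observations(obs, scenario, last_year=2030):
    """obs: pandas Series of annual anomalies indexed by year, ending in 2013."""
    extended = obs.copy()
    if scenario == "pause":                              # Scenario 1
        level = obs.loc[2001:2013].mean()
        for yr in range(2014, last_year + 1):
            extended.loc[yr] = level
    else:
        rate = {"trend_1951_2012": 0.0107,               # Scenario 2
                "trend_1977_1998": 0.0170}[scenario]     # Scenario 3
        for i, yr in enumerate(range(2014, last_year + 1), start=1):
            extended.loc[yr] = obs.loc[2013] + rate * i
    return extended
```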


Figure 4. Percent rank of the observed trend within the distribution of model simulations beginning in 1951 and ending at the year indicated on the x-axis under the application of the three scenarios of how the observed global average temperature will evolve between 2014 and 2030. The climate models are run with historical forcing from 1951 through 2006 and the RCP4.5 scenario thereafter.

It is clearly not a good situation for climate models when even a sustained temperature rise equal to the fastest observed (Scenario 3) still leads to complete model failure within two decades.

So let’s review.

1) Examining 108 climate model runs spanning the period from 1951-2012 shows that the model-simulated trends in the global average temperature vary by a factor of three—hardly a high level of agreement as to what should have taken place among models.

2) The observed trend during the period 1951-2012 falls at the 16th percentile of the model distribution, with 18 model runs producing a smaller trend and 90 climate model runs yielding a greater trend. Not particularly strong agreement.

3) The observed trend has been sliding farther and farther away from the model median and towards ever-lower percentiles for the past 15 years. The agreement between the observed trend and the modeled trends is steadily getting worse.

4) Within the next 5 to 15 years, the long-term observed trend (beginning in 1951) will more than likely fall so far below model simulations as to be statistically recognized as not belonging to the modeled population of outcomes. This disagreement between observed trends and model trends would be complete.

So with all this information in hand, we’ll give you a moment to revisit your initial response to this question:

On a scale of 1 to 5, or rather, using these descriptors, “very low,” “low,” “medium,” “high,” or “very high,” how would you describe your “confidence” in this statement:

The long-term climate model simulations show a trend in global-mean surface temperature from 1951 to 2012 that agrees with the observed trend.

Got your final answer?

OK, let’s compare that to the IPCC’s assessment of the agreement between models and observations.

The IPCC gave it “very high confidence”—the highest level of confidence that they assign.

Do we hear stunned silence?

This in a nutshell sums up the IPCC process. The facts show that the agreement between models and observations is tenuous and steadily eroding and will be statistically unacceptable in about a decade, and yet the IPCC assigns its highest confidence level to the current agreement between models and observations.

If the models are wrong (predict too much warming) then all the impacts from climate change and the urgency to “do something” about it are lessened. The “crisis” dissipates.

This is politically unacceptable.

So the IPCC does not seek to tell the truth, but instead to further the “climate change is bad” narrative. After all, governments around the world have spent a lot of effort in trying to combat climate change based upon previous IPCC assessments. The IPCC can’t very well go back and say, oops, we were wrong, sorry about that! So they continue to perpetuate the myth and lead policymakers astray.

125 Comments
April 17, 2014 2:45 pm

Very good analysis.
It also seems the climate models are fundamentally flawed in their bottom-up, detailed-over-many-years approach. This propagates small errors. This is obvious because the models vary so much. They should all be rejected on this basis alone. If any one turns out correct, it will be only a coincidence. Whoever thought of this approach in the first place had very little common sense, it seems. On the other hand, they are so complicated few people understand them. Good job security. And the large variations let the IPCC say the temperature rise might be 50% higher than their expected value, which is also too high.

April 17, 2014 3:18 pm

Mr. Knappenberger provides us with a link ( http://judithcurry.com/2013/01/19/peer-review-the-skeptic-filter/ ) to “model forecasts.” As “forecast” is synonymous with “prediction,” this sounds like a response to my request for one or more citations of model predictions. However, upon following this link to a blog post by Dr. Curry and a paper by Michaels et al., I find references to model “projections” but not to model “predictions” (aka forecasts). There is no question that the models make projections. At issue is whether they make predictions. I don’t think so.

Farmer Gez
April 18, 2014 3:02 am

Liken the IPCC approach to a sporting contest: a team leads by a good margin in the first two quarters but then gradually loses ground in the second half and is finally beaten on the siren. Statistically you could point to a good result for the losing team. Go “Team IPCC”.

April 18, 2014 7:44 am

It shocks me that the authors of the IPCC report would use the 1951-2012 period as the basis for their claim of model accuracy. This is a prima facie invalid argument. The data from 1951-2000 has been incorporated into the models. Obviously they jigger the models to fit the data or they wouldn’t exist. So, saying you did a decent fit is not worthy of a first-year college graduate. The only data that counts is the data after you fit the curve. Obviously that’s the problem. They fit to the existing data and then immediately the data fails to confirm their fit. This is basically PROOF that their fit is wrong or at least very likely wrong. Even they have to admit this basic fact of the way science works. I just don’t understand how anyone at all swallows such a stupid argument that they did a decent (albeit incredibly expensive) fit to the data and now it’s just the recent data that doesn’t fit. That’s all that matters!!!

April 18, 2014 7:49 am

Terry Oldberg (April 17, 2014 at 3:18 pm):
Climate models make predictions of the future climate given an input of forcings. How forcings may change in the distant future is hard to say. But there are a range of scenarios to cover that. Models predict the climate outcome of any of these scenarios. Model apologists prefer the term “projections” so when the models are wrong, they can claim “well, we never said they were predictions.” That’s BS in my book. If they are not predictions, then they are worthless from the outset.
-Chip

Reply to  Chip Knappenberger
April 18, 2014 8:47 am

Chip Knappenberger:
Thank you for taking the time to respond.
A model that makes predictions has a different mathematical and logical structure than that of the IPCC climate models. A “prediction” is an extrapolation to the outcome of an event. A count of events of a particular description is a “frequency.” A ratio of two frequencies is a “relative frequency.” In testing a predictive model, one compares the predicted to the observed relative frequencies of the outcomes of events. If there is not a match, the model is falsified by the evidence. Otherwise, it is validated.
The IPCC climate models are insusceptible to being falsified or validated. In the parlance of the IPCC, they are “evaluated.” In an evaluation, projected global temperatures are plotted on X-Y coordinates together with a selected global temperature time series. An evaluation establishes the magnitudes of the errors of the various projections. However, it neither falsifies nor validates the associated model.
It follows from the lack of falsifiability that the research referenced by the IPCC assessment reports has not had a scientific methodology. One of the consequences is that the models fail to deliver information to policy makers about the outcomes of their policy decisions. Thus, this research has failed to meet its objective of guiding policy. Policy makers have been led by the IPCC to think that they have information when they have none.
That this is true has been obscured by widespread application of the equivocation fallacy in making global warming arguments. An “equivocation” is an argument in which a term changes meanings in the midst of this argument. By logical rule, one cannot legitimately draw a conclusion from an equivocation. To draw an illegitimate conclusion from an equivocation is the equivocation fallacy. The equivocation fallacy is invoked when the terms “prediction” and “projection” are treated as synonyms in making global warming arguments, for the two words have differing meanings. Further information on this topic is available at http://wmbriggs.com/blog/?p=7923 .

April 18, 2014 8:22 am

Thanks, Paul and Patrick. Good article.
The IPCC should have very little credibility, based on past performance.

April 18, 2014 9:52 am

Chip, “Models predict the climate outcome of any of these [various forcing] scenarios.”
Chip, models can’t predict the outcome of any forcing scenario, whatever. They haven’t the physical accuracy to do so. That is, the theory they deploy is a poor theory of climate. Models can’t accurately simulate the terrestrial climate, they can’t make predictions at all, and the scenarios they do produce have no physical meaning.
Terry Oldberg is right about them. The IPCC use equivocal language to give themselves a back door out of failure.

April 18, 2014 10:13 am

Pat Frank (April 18, 2014 at 9:52 am):
“Chip, models can’t predict the outcome of any forcing scenario, whatever.”
Sure they can and they do. Physical accuracy has nothing whatsoever to do with making a prediction. It helps when trying to make a good prediction, though!
-Chip

Reply to  Chip Knappenberger
April 18, 2014 10:36 am

It seems to me that the only basis for the argument between Knappenberger and Frank is use by Knappenberger of the polysemic form of “predict” and the use by Frank of the monosemic form of the same word. Use of the polysemic form makes of Knappenberger’s argument an equivocation invalidating Knappenberger’s conclusion. Use of the monosemic form makes of Knappenberger’s argument a syllogism whose conclusion is therefore true.

April 18, 2014 11:25 am

Chip, the meaning of a prediction in science is that the statement be derived from theory and be single-valued and unique, so as to pose a threat of theory falsification.
Climate model expectation values do not meet either criterion. They are not single-valued and they are not unique. The reason is that climate theory is incomplete and the boundary conditions are poorly constrained.
That means any single set of forcing conditions, applied within any single model, will produce multiple model expectation values. Climate models are unable to produce unique solutions to any forcing scenario. They do not make falsifiable predictions.
Any given model expectation value, e.g. the global T anomaly at 2050 = +1 C, will be accompanied by a confidence interval that reflects the lack of accuracy in the model; the high multiplicity of model solutions. The confidence interval is so large — in this case a minimum of about (+/-)5 C — that the expectation value (+1 C) has no real physical meaning. It imparts no information about the state of the future climate.
A model expectation value of 1(+/-)5 C is not a prediction. Virtually any air temperature at 2050 will fall within that range. Because the model doesn’t make unique predictions, it cannot be falsified. All the climate models are equally unreliable in that sense. It will never be possible to choose among their varied solutions because all of their expectation values will be subsumed within their huge confidence intervals.
The short of it is that models cannot reproduce the behavior of the terrestrial climate. They are unable to resolve the response of the climate to emitted GHGs. We presently can’t know, therefore, whether these GHGs are having any effect at all on the climate.

April 18, 2014 11:32 am

By the way, Chip, the reason it looks like models make predictions is that climate modelers never include confidence intervals from propagated error with their scenario trends. You get the lines, you don’t get the error bars.
That makes the lines look visually like a prediction. Everyone goes for the visual impact, reacts to that, and concludes that they’ve seen a prediction. But they haven’t. They’ve seen an instance of incompetent presentation.

Reply to  Pat Frank
April 18, 2014 12:48 pm

Pat Frank:
Right on. With error bars, a falsifiable conclusion is reached that the value of the variable will be found to lie within the range of the error bars when observed. Without error bars, a non-falsifiable conclusion is reached which in effect states that the value of the variable is “about” the stated value where “about” is a polysemic term whose meaning varies dependent upon the conclusion that one wishes to reach.

Andrejs Vanags
April 18, 2014 1:23 pm

I had a chance to participate in the review, and I wish that at the time I had had more time and done more. I only contributed a few comments on the summary. I objected to the high level of confidence given to an expected increase in temperatures over the next decade and pointed out that the expectation was that temperatures would drop instead, supported by the drop in sunspot numbers.
For sure I expected to be ‘black listed’ for those comments, but I noticed that they acknowledge me as a reviewer in one of the report addendums. Good for them.
[Thank you for the courtesy of your reply. Mod]

gofigure560
April 18, 2014 5:21 pm

Keep in mind that these models were written long after 1951, so it’s rather dubious (to put it politely) for the IPCC to include a period in their analysis which was used during the model building process. That proves absolutely nothing. The test is what happens AFTER that.

April 18, 2014 9:15 pm

Pat and Terry,
Oh, I see. You guys have eschewed the common usage of the term in order to carry on some esoteric conversation. At this point, I am content to leave it to you two to carry on.
-Chip

Reply to  Chip Knappenberger
April 18, 2014 9:29 pm

Chip:
That sounds like a grudging capitulation. Are you capitulating? If not, what is your argument?

April 18, 2014 11:30 pm

Chip, I speak as 30 years an experimental scientist. Nothing I wrote is scientifically exotic. It’s a description of the standard way models and physical results are evaluated in science: unique predictions, accurate observables, and error analysis. It’s not mysterious. And climate modeling has failed that standard.
Hmm. Your bio shows you’ve got the training, Chip. Nothing I wrote should be a mystery to you.
On the other hand, not one single climate modeler I’ve encountered has shown the slightest familiarity with propagation of error. Not one has displayed any understanding of the meaning of a confidence interval derived from physical error. I have some of that evidenced in black-and-white. It’s been as though they had never encountered the concepts until I discussed them; concepts that are basic to an education in Physics or Chemistry.
It’s a very peculiar thing that climate modelers apparently have no idea how to evaluate the physical reliability of their own projections, but that seems to be the case. So, tell me, just out of curiosity: did your education include physical error analysis, and propagation of error through a calculation?

4TimesAYear
April 19, 2014 1:09 am

Since when did they start trying to model the past?

April 19, 2014 4:52 am

“The IPCC still thinks it might be possible to hit the emissions target by tripling, to 80%, the share of low-carbon energy sources, such as solar, wind and nuclear power, used in electricity generation. It reckons this would require investment in such energy to go up by $147 billion a year until 2030 (and for investment in conventional carbon-producing power generation to be cut by $30 billion a year). In total, the panel says, the world could keep carbon concentrations to the requisite level by actions that would reduce annual economic growth by a mere 0.06 percentage points in 2100.”
It is just possible they might do that with nuclear, but never with intermittent renewables, because without storage the cost of this per kg CO2 saved starts to get exponential as more and more output is discarded at the peaks of the generation to allow the average level of generation to rise.
And in fact the EROI starts to drop below unity after about 70% intermittent renewable generation. That is, you are using more energy to construct the renewable generation than you will ever get out of it. Simply because, without storage, you must adopt a policy of “more renewable generation than we can use, so that worst cases of low wind/sun/wave/tide are fully allowed for.”
Currently the storage we have to co-operate with intermittent renewables consists of some hydro where the geography is favourable, but that is already built. Plus massive use of stored energy in fossil fuels.
But even if more is built, that takes us into a regime where the cost, financial and energy, of the renewable source PLUS whatever is used to co-operate with it to provide dispatch also rises to a very high level, and possibly into less than unity overall EROI.
It’s the same for the ‘diversity’ solutions – e.g. building a pan-global grid to allow, say, sunlight on one side of the earth to generate power for the other side. The cost of the (undersea) link exceeds the cost of a (nuclear?) power station at distances around the 1,000km mark. And it uses a lot of copper, and anyway you need more than one in case one goes down.
In theory any and all of these renewable scheme ‘fixes’ could work, but as with climate change, that is not what is under dispute. What is under dispute is whether they could ever be cost-effective, and even more cogently, whether they would actually produce more energy over a service lifetime than they took to build. A renewable solution that doesn’t pay back its own energy cost is unsustainable, under any definition of that term.
With nuclear the situation is fundamentally different, as nuclear power represents a stored energy source, and therefore needs no fixes to allow it to fully supply a grid 24×7. France generates overall around 75% of its national needs with nuclear power, and a large part of the rest with hydro. Like Switzerland, the combination of the two allows an almost completely carbon free grid.
http://gridwatch.templar.co.uk/france/
They can even throw in a bit of cosmetic solar and wind using the existing hydro to balance that, too.
France built its nuclear reactors in 15 years from start to finish.
I am no great supporter of the AGW concept, but cheap fossil fuels are becoming rarer by the decade. We sustain such populations as we have by massive per capita generation of energy over and above what is required to sustain life in isolation, simply because of population density. A rural man may drink from a stream, pick fruit from the trees, hunt the odd deer and shit in the woods, living in a hut made of local wood and thatched with local straw. Nature will, at low population densities, replenish and recycle all that. At higher densities we are forced to farm crops, carry out animal husbandry, and arrange to transport that into the cities, built from similarly transported materials, along with clean water and sewage pumping.
At considerably higher energy cost per capita.
In short CIVILisation, that is living in cities, begins with agriculture, and cheap slave labour, and ends with access to cheap energy. As does the current population level.
And renewable energy will not mark the survival of civilisation, but its death. Renewable energy is absolutely unsustainable in the short to medium term.
Whereas nuclear is sustainable in the medium to long term. Even if we have to thank the egregious de Gaulle and his dreams of an independent French nuclear deterrent and French electricity grid, to demonstrate it.
I write this to warn those who may not totally believe in AGW, but who still think that ‘renewable energy’ is a Good Idea. It’s not. It represents a greater and more immediate threat to mankind’s current population levels than AGW ever did.
As Germany is busy finding out.

April 19, 2014 8:35 am

Over a period of 13 years, my job was to design and manage a succession of scientific studies. While in this job, I learned that the first order of business in the design of a study was to ensure the falsifiability of the claims that would come from the model serving as the point of delivery for the information conveyed to decision makers. The property of falsifiability was lent to these claims by the statistical population underlying the model.
For global warming climatology, applications of the equivocation fallacy have replaced falsifiability. A result is for policy makers to base their policies upon a pseudo-science that appears to them to be a science. Applications of the equivocation fallacy make it seem to these policy makers as though they have information about the outcomes of the events of the future when they have no such information!

April 21, 2014 8:38 am

Terry and Frank,
I am bristling at you guys trying to co-opt the word “prediction.” There is a much more common usage of the word that is perfectly applicable to pedestrian conversations. If you want to discuss whether or not climate model output fits your definition of the term, go right ahead. But, the results of that conversation will not impact my usage of the term. Or at least that is my prediction.
-Chip

April 21, 2014 10:07 am

Chip, we’re discussing science, not pedestrianism.
In science, prediction has one and only one meaning: use of a physical model to describe a future observable. To be useful, the prediction and the observable must be single-valued (have tight error bars). Error propagated through the model tells us the resolution of the theory — the magnitude of the observable the theory can reliably predict.
Climate models do not have the resolution to reliably predict the effects of GHGs. There isn’t any question about that.
You can use prediction any way you like. You’ll be discussing science, however, only when you use it correctly.
