A Clear Example of IPCC Ideology Trumping Fact

By Paul C. Knappenberger and Patrick J. Michaels

Center for the Study of Science, Cato Institute

Within the U.S. federal government (and governments around the world), the U.N.’s Intergovernmental Panel on Climate Change (IPCC) is given authority when it comes to climate change opinion.

This isn’t a good idea.

Here perhaps is the clearest example yet. By the time you get to the end of this post, we think you may be convinced that the IPCC does not seek to tell the truth—the truth being that it has overstated the case for climate worry in its previous reports. The “consensus of scientists” instead prefers to obfuscate.

In doing so, the IPCC is negatively impacting the public health and welfare of all of mankind as it influences governments to limit energy use, instead of seeking ways to help expand energy availability (or, just stay out of the way of the market).

Everyone knows that the pace of global warming (as represented by the rise in the earth’s average surface temperature) has slowed during the past decade and a half. Coming up with reasons why is the hottest topic in climate change science these days, with about a dozen different explanations being put forward.

Climate model apologists are scrambling to try to save their models’ (and their own) reputations—because the one thing that they do not want to have to admit is perhaps the simplest and most obvious answer of all—that climate models exaggerate the amount that the earth’s average temperature will increase as a result of human greenhouse gas emissions. If the models are overheated, then so too are all the impacts that derive from the model projections, which is the death knell for all those proposed regulations limiting our use of fossil fuels for energy.

In the Summary for Policymakers (SPM) section of its Fifth Assessment Report, even the IPCC recognizes the recent divergence of model simulations and real-world observations:

“There are, however, differences between simulated and observed trends over periods as short as 10 to 15 years (e.g., 1998 to 2013).”

But, lest this lead you to think that there may be some problem with the climate models, the IPCC clarifies:

“The long-term climate model simulations show a trend in global-mean surface temperature from 1951 to 2012 that agrees with the observed trend.”

Whew! For a minute there it seemed like the models were struggling to contain reality, but, according to the IPCC, we can rest assured that over the long haul, say, since the middle of the 20th century, model simulations and observations “agree” as to what is going on.

The IPCC references its “Box 9.2” in support of the statements quoted above.

In “Box 9.2” the IPCC helpfully places the observed trends in the context of the distribution of simulated trends from the collection of climate models it uses in its report. The highlights from Box 9.2 are reproduced below (as our Figure 1). In this Figure, the observed trend for different periods is in red and the distribution of model trends is in grey.


Figure 1. Distribution of the trend in the global average surface temperature from 114 model runs used by the IPCC (grey) and the observed temperatures as compiled by the U.K.’s Hadley Center (red). (Figure from the IPCC Fifth Assessment Report)

As can be readily seen in Panel (a), during the period 1998-2012, the observed trend lies below almost all the model trends. The IPCC describes this as:

…111 out of 114 realizations show a GMST [global mean surface temperature] trend over 1998–2012 that is higher than the entire HadCRUT4 trend ensemble

This gives rise to the IPCC SPM statement (quoted above) that “There are, however, differences between simulated and observed trends over periods as short as 10 to 15 years (e.g., 1998 to 2013).”

No kidding!

Now let’s turn our attention to the period 1951-2012, Panel (c) in Figure 1.

The IPCC describes the situation depicted there as:

Over the 62-year period 1951–2012, observed and CMIP5 [climate model] ensemble-mean trends agree to within 0.02°C per decade…

This sounds like the models are doing pretty well—only off by 0.02°C/decade. And this is the basis for the IPCC SPM statement (also quoted above):

The long-term climate model simulations show a trend in global-mean surface temperature from 1951 to 2012 that agrees with the observed trend.

Interestingly, the IPCC doesn’t explicitly tell you how many of the 114 climate model runs produce a trend greater than the observed trend for the period 1951-2012. And it is basically impossible to figure that out for yourself from their Panel (c), since some of the bars of the histogram go off the top of the chart and the x-axis scale is so coarse that the 114 model runs are bunched into only six populated bins. Consequently, you really can’t assess how well the models are doing, or how large a difference of 0.02°C/decade over 62 years really is. You are left to take the IPCC’s word for it.

We don’t.

The website Climate Explorer archives and makes available the large majority of the climate model output used by the IPCC. From there, you can assess 108 (of the 114) climate model runs incorporated into the IPCC graphic—a large enough majority to quite accurately reproduce the results.

We do this in our Figure 2. However, we adjust both axes of the graph so that all the data are shown and you can ascertain the details of what is going on.
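For readers who want to check this themselves, here is a minimal sketch (not our actual processing script) of how such a histogram can be built. The placeholder arrays below are synthetic and stand in for the annual global-mean surface temperature series (108 model runs and the HadCRUT4 observations, 1951-2012) downloaded from Climate Explorer.

```python
# Minimal sketch of the Figure 2 calculation. Replace the synthetic placeholder
# data with annual global-mean temperature series downloaded from Climate
# Explorer (108 model runs) and the HadCRUT4 annual means, each covering 1951-2012.
import numpy as np
import matplotlib.pyplot as plt

years = np.arange(1951, 2013)                       # 62 years, 1951-2012 inclusive
rng = np.random.default_rng(0)
model_tas = 0.014 * (years - 1951) + rng.normal(0, 0.1, size=(108, years.size))  # placeholder
obs = 0.0107 * (years - 1951) + rng.normal(0, 0.1, size=years.size)              # placeholder

def decadal_trend(series):
    """Ordinary least-squares trend, converted from degC/yr to degC/decade."""
    return np.polyfit(years, series, 1)[0] * 10.0

model_trends = np.array([decadal_trend(run) for run in model_tas])
obs_trend = decadal_trend(obs)

plt.hist(model_trends, bins=np.arange(0.05, 0.23, 0.01), color="steelblue")
plt.axvline(obs_trend, color="red", label=f"Observed trend: {obs_trend:.3f} degC/decade")
plt.xlabel("1951-2012 trend (degC/decade)")
plt.ylabel("Number of model runs")
plt.legend()
plt.show()
```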

 


Figure 2. Distribution of the trend in the global average surface temperature from 108 model runs used by the IPCC (blue) and the observed temperatures as compiled by the U.K.’s Hadley Center (red) for the period 1951-2012 (the model trends are calculated from historical runs with the RCP4.5 results appended after 2006). This presents nearly the same data as Figure 1, Panel (c).

What we find is that there are 90 (of 108) model runs that simulate more global warming to have taken place from 1951-2012 than actually occurred and 18 model runs simulating less warming to have occurred. Which is another way of saying the observations fall at the 16th percentile of model runs (the 50th percentile being the median model trend value).
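The percentile bookkeeping itself is only a few lines; this sketch reuses the hypothetical `model_trends` and `obs_trend` variables from the snippet above.

```python
# Continuing from the sketch above: rank the observed trend within the model trends.
n_below = int((model_trends < obs_trend).sum())     # model runs warming less than observed
n_above = int((model_trends > obs_trend).sum())     # model runs warming more than observed
percentile = 100.0 * n_below / len(model_trends)    # ~16 when 18 of 108 fall below
print(n_above, "runs above,", n_below, "runs below; observed near the",
      round(percentile), "th percentile")
```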

So let us ask you this question: on a scale of 1 to 5, or rather, using these descriptors, “very low,” “low,” “medium,” “high,” or “very high,” how would you describe your “confidence” in this statement:

The long-term climate model simulations show a trend in global-mean surface temperature from 1951 to 2012 that agrees with the observed trend.

OK. You got your answer?

Our answer is, maybe, “medium.”

After all, there is plenty of room for improvement.

For example, the model range could be much tighter, indicating that the models were in better agreement with one another as to what the simulated trend should be. As it is now, the model range during the period 1951-2012 extends from 0.07°C/decade to 0.21°C/decade (note that the observed trend is 0.107°C/decade). And this is from models which were run largely with observed changes in climate forcings (such as greenhouse gas emissions, aerosol emissions, volcanoes, etc.) and for a period of time (62 years) during which short-term weather variations should all average out. In other words, they are all over the place.

Another way the agreement between model simulations and real-world observations could be improved would be if the observed trend fell closer to the center of the distribution of model projections. For instance, the agreement would be better if, say, 58 model runs produced more warming and the other 50 produced less warming.

What would lower our confidence?

The opposite set of tendencies. The model distribution could be even wider than it is currently, indicating that the models agreed with each other even less than they do now as to how the earth’s surface temperature should evolve in the real world (or that natural variability was very large over the period of trend analysis). Or the observed trend could move further from the center point of the model trend distribution. This would indicate an increased mismatch between observations and models (more similar to that which has taken place over the 1998-2012 period).

In fact, the latter situation is ongoing—that is, the observed trend is moving steadily leftward in the distribution of model simulated trends.

Figure 3 shows at which percentile the observed trend falls for each period of time starting from 1951 and ending each year from 1980 through 2013.


Figure 3. The percentile rank of the observed trend in the global average surface temperature beginning in the year 1951 and ending in the year indicated on the x-axis within the distribution of 108 climate model simulated trends for the same period. The 50th percentile is the median trend simulated by the collection of climate models.

After peaking at the 42nd percentile (still below the median model simulation which is the 50th percentile) during the period 1951-1998, the observed trend has steadily fallen in the percent rank, and currently (for the period 1951-2013) is at its lowest point ever (14th percentile) and is continuing to drop. Clearly, as anyone can see, this trend is looking bad for the models as the level of agreement with observations is steadily decreasing with time.
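A sketch of the Figure 3 calculation follows. It assumes hypothetical `model_tas` and `obs` arrays like those in the earlier snippet, but extended through 2013 (63 annual values beginning in 1951); only the end year of the trend changes, the start year stays fixed at 1951.

```python
# Sketch of the Figure 3 calculation: percentile rank of the observed trend
# (start fixed at 1951) as the end year advances from 1980 to 2013.
# `model_tas` (shape (108, 63)) and `obs` (length 63) are assumed to hold
# annual global-mean temperatures for 1951-2013.
import numpy as np

all_years = np.arange(1951, 2014)

def percentile_rank(end_year):
    sel = all_years <= end_year
    obs_tr = np.polyfit(all_years[sel], obs[sel], 1)[0]
    mod_tr = np.array([np.polyfit(all_years[sel], run[sel], 1)[0] for run in model_tas])
    return 100.0 * (mod_tr < obs_tr).sum() / mod_tr.size

ranks = {yr: percentile_rank(yr) for yr in range(1980, 2014)}
```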

In statistical parlance, if the observed trend drops beneath the 2.5th percentile, it would be widely considered that the evidence was strong enough to indicate that the observations were not drawn from the population of model results. In other words, a statistician would describe that situation as one in which the models disagree with the observations with “very high confidence.” Some researchers use a more lax standard and would consider that falling below the 5th percentile would be enough to consider the observations not to be in agreement with the models. We could describe that case as “high confidence” that the models and observations disagree with one another.

So, just how far away are we from either of these situations?

It all depends on how the earth’s average surface temperature evolves in the near future.

We explore three different possibilities (scenarios) between now and the year 2030.

Scenario 1: The earth’s average temperature during each year of the period 2014-2030 remains the same as the average temperature observed during the first 13 years of this century (2001-2013). This scenario represents a continuation of the ongoing “pause” in the rise of global temperatures.

Scenario 2: The earth’s temperature increases year-over-year during the period 2014-2030 at a rate equal to the rise observed during the period 1951-2012 (a rate of 0.107°C/decade). This represents a continuation of the observed trend.

Scenario 3: The earth’s temperature increases year-over-year during the period 2014-2030 at a rate equal to that observed during the period 1977-1998—the period often identified as the 2nd temperature rise of the 20th century. The rate of temperature increase during this period was 0.17°C/decade. This represents a scenario in which the temperature rises at the most rapid rate observed during the period often associated with an anthropogenic influence on the climate.
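A minimal sketch of how these three temperature paths can be constructed and the 1951-2030 trend recomputed is given below. It assumes a hypothetical `obs` array of annual global-mean anomalies for 1951-2013 and adds a constant increment to the previous year’s value; taking the last observed year as the starting point for Scenarios 2 and 3 is an assumption for illustration. Ranking each extended series against the models then proceeds exactly as in the earlier snippets, with the model runs extended using RCP4.5.

```python
# Sketch of the three scenario paths for 2014-2030. `obs` is a hypothetical
# array of annual global-mean anomalies for 1951-2013 (length 63). Each path
# adds a constant increment year over year; the starting value for Scenarios 2
# and 3 (taken here as the last observed year) is an assumption.
import numpy as np

obs_years = np.arange(1951, 2014)
future_years = np.arange(2014, 2031)

scenario_paths = {
    "Scenario 1 (pause)": np.full(future_years.size, obs[-13:].mean()),   # flat at 2001-2013 mean
    "Scenario 2 (0.107 degC/dec)": obs[-1] + 0.0107 * (future_years - 2013),
    "Scenario 3 (0.17 degC/dec)": obs[-1] + 0.017 * (future_years - 2013),
}

for name, future in scenario_paths.items():
    yrs = np.concatenate([obs_years, future_years])
    series = np.concatenate([obs, future])
    trend = np.polyfit(yrs, series, 1)[0] * 10.0        # 1951-2030 trend, degC/decade
    print(name, "->", round(trend, 3), "degC/decade over 1951-2030")
```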

Figure 4 shows how the percentile rank of the observations evolves under all three scenarios from 2013 through 2030. Under Scenario 1, the observed trend would fall below the 5th percentile of the distribution of model simulations in the year 2018 and beneath the 2.5th percentile in 2023. Under Scenario 2, the years to reach the 5th and 2.5th percentiles are 2019 and 2026, respectively. And under Scenario 3, the observed trend (starting in 1951) would fall beneath the 5th percentile of model simulated trends in the year 2020 and beneath the 2.5th percentile in 2030.


Figure 4. Percent rank of the observed trend within the distribution of model simulations beginning in 1951 and ending at the year indicated on the x-axis under the application of the three scenarios of how the observed global average temperature will evolve between 2014 and 2030. The climate models are run with historical forcing from 1951 through 2006 and the RCP4.5 scenario thereafter.

It is clearly not a good situation for climate models when even a sustained temperature rise equal to the fastest observed (Scenario 3) still leads to complete model failure within two decades.

So let’s review.

1) Examining 108 climate model runs spanning the period from 1951-2012 shows that the model-simulated trends in the global average temperature vary by a factor of three—hardly a high level of agreement among the models as to what should have taken place.

2) The observed trend during the period 1951-2012 falls at the 16th percentile of the model distribution, with 18 model runs producing a smaller trend and 90 climate model runs yielding a greater trend. Not particularly strong agreement.

3) The observed trend has been sliding farther and farther away from the model median and towards ever-lower percentiles for the past 15 years. The agreement between the observed trend and the modeled trends is steadily getting worse.

4) Within the next 5 to 15 years, the long-term observed trend (beginning in 1951) will more than likely fall so far below model simulations as to be statistically recognized as not belonging to the modeled population of outcomes. This disagreement between observed trends and model trends would be complete.

So with all this information in hand, we’ll give you a moment to revisit your initial response to this question:

On a scale of 1 to 5, or rather, using these descriptors, “very low,” “low,” “medium,” “high,” or “very high,” how would you describe your “confidence” in this statement:

The long-term climate model simulations show a trend in global-mean surface temperature from 1951 to 2012 that agrees with the observed trend.

Got your final answer?

OK, let’s compare that to the IPCC’s assessment of the agreement between models and observations.

The IPCC gave it “very high confidence”—the highest level of confidence that they assign.

Do we hear stunned silence?

This in a nutshell sums up the IPCC process. The facts show that the agreement between models and observations is tenuous and steadily eroding and will be statistically unacceptable in about a decade, and yet the IPCC assigns its highest confidence level to the current agreement between models and observations.

If the models are wrong (predict too much warming) then all the impacts from climate change and the urgency to “do something” about it are lessened. The “crisis” dissipates.

This is politically unacceptable.

So the IPCC does not seek to tell the truth, but instead to further the “climate change is bad” narrative. After all, governments around the world have spent a lot of effort in trying to combat climate change based upon previous IPCC assessments. The IPCC can’t very well go back and say, oops, we were wrong, sorry about that! So they continue to perpetuate the myth and lead policymakers astray.

125 Comments
April 17, 2014 7:15 am

This suggests with a high confidence that 97.3% (111/114 models) of climate scientists were grossly overpaid grant money for their “work.”

Rod Everson
April 17, 2014 7:47 am

evanmjones says:
April 16, 2014 at 8:18 pm
(a rate of 0.0107°C/decade).
Typo here. You mean 0.107, of course?

Mr. Jones points out what appears to be a typo. It should be corrected if that is what it is. It appears as the assumption for Scenario 2 near the end of the report. As stated, it’s little different than the zero assumption of Scenario 1, when it apparently should be more like mid-way between the assumptions for Scenarios 1 and 3.

ttn
April 17, 2014 7:55 am

In freshman physics lab, we teach students that for a collection of N independent determinations of some quantity, with only random error, the uncertainty is given by the standard deviation of the mean: SDOM = SD/sqrt(N).
This provides another trivially simple but illuminating perspective on Figure 2 here. Assume that the 108 model runs constitute independent predictions, with exclusively random errors. By eyeball, looking at Fig. 2, the mean of the runs is about .145 and the standard deviation is at most about .05. So the standard deviation of the mean (the standard deviation divided by the square root of the number N of independent determinations) is at most about .005. So the observations should be within about .005 of .145. But they aren’t. Not even close. The observations are instead about *seven* .005’s away.
In freshman physics lab, it’s more common to have a single unambiguous theoretical prediction, and then measure the thing N times independently. Here the roles of prediction and observation are reversed, but the statistics don’t care about that. If my students compare .11 to .145 in a situation where the SDOM is .005 and say there is excellent agreement, they lose points. To get full credit they should instead conclude that either the prediction is simply wrong, or the uncertainty has been grossly underestimated (probably because the assumption of “exclusively random errors” is wrong, i.e., probably because there are significant systematic errors in their measurements — or, here, in the models).
This isn’t rocket science.
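[To replicate the back-of-the-envelope check in the comment above, the arithmetic is a few lines; the mean, spread, and observed values below are the eyeballed approximations quoted in the comment, not values computed from the model archive.]

```python
# Quick numerical restatement of the comment's eyeballed SDOM check.
import math

mean_model, sd_model, n_runs, observed = 0.145, 0.05, 108, 0.11
sdom = sd_model / math.sqrt(n_runs)                   # standard deviation of the mean
print(round(sdom, 4))                                 # ~0.0048 degC/decade
print(round((mean_model - observed) / sdom, 1))       # observed is ~7 SDOMs below the mean
```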

Rod Everson
April 17, 2014 8:02 am

I’m surprised at the lack of comment (or questions) regarding the end result of the analysis, as presented in the last figure showing all three of the proposed scenarios falling below the 2.5 percentile mark over time, including the one that assumes a resumption of the .17C per decade increase of the 80’s and 90’s.
Surprised, because the result surprised me. When the scenarios were presented, I assumed that applying Scenario 3 would eventually make the models look more reasonable. Instead, the falling away that one would expect to occur under Scenario 1 occurs in all three scenarios, just at a somewhat slower pace.
Question: Is this because the models are, on average, forecasting an even greater than .17C increase over the next decade or so? That would seem to have to be the case, but the numbers in the paper’s figure 2 wouldn’t seem to support that.
Another question: Why the sudden divergence between the results for Scenarios 2 and 3 in the year 2019, only to converge again a year later? This would seem to be nearly impossible given the assumptions for the two scenarios. (Either that, or I don’t understand the assumptions–I’m assuming a steady annual increase in temps at the rates specified, nothing more.) Similarly, why does a similar divergence suddenly appear in 2024/25? This sort of result makes me skeptical of the underlying analysis, so it would be helpful to have it explained.
If the results presented are indeed accurate, however, the next few years should be quite interesting.

April 17, 2014 8:04 am

evanmjones (April 16, 2014 at 8:18 pm),
Yeah, typo. We gave the correct value earlier in the post. It should be 0.107 as you point out.
Sorry about that,
-Chip

April 17, 2014 8:09 am

Joe Born (April 17, 2014 at 3:53 am):
Your initial reading is correct. In each scenario, a constant increment is added to the previous year’s value. The curves in Figure 4 jump around a little because of the distribution of the model pdf (which is not smooth).
-Chip

April 17, 2014 8:11 am

markx (April 17, 2014 at 6:52 am);
The model data are forecasts post 2006. The forecast scenario is the RCP4.5 (which is generous to the models in this comparison).
-Chip

Robert W Turner
April 17, 2014 8:12 am

IPCC graphs are becoming as ridiculous as their statements. Learning how to adjust the axis on your charts to properly represent the data is elementary. IPCC scientists should start over in grade one, Billy Madison style.

April 17, 2014 8:40 am

Rod Everson (April 17, 2014 at 8:02 am):
Good questions!
The model trend continues to increase. For the period 1951-2030, the mean model trend is 0.175°C/dec. The observed trend is slow to respond to new data points, even when added at an incremental rate of 0.017°C/yr between now and 2030 (our scenario 3). Under scenario 3, the observed trend becomes 0.117°C/dec for the 1951-2030 period. It will take a much longer time before Scenario 3 starts to catch up with a model trend of 0.175, but in the meantime, the models are continuing to run away. So what is required to bring the observations back in line with model expectations is a fairly prolonged observed warming rate in excess of anything yet observed.
The details of Figure 4 (the behavior of the scenarios) are largely dependent on the details of the distribution of model runs—and where the observed trend falls within it (i.e., at what percentile). Since the model pdf is not smooth, the lines in Figure 4 jump around a bit.
-Chip

JJ
April 17, 2014 8:59 am

Pat,

We explore three different possibilities (scenarios) between now and the year 2030.

I think that evaluation of a fourth scenario would be very instructive:
4. How fast would the GAST have to rise between now and 2030 for the observed 1951-2030 trend to end up in the center of the distribution of the 108 model runs?

April 17, 2014 9:10 am

JJ says (April 17, 2014 at 8:59 am):
The observed temperatures would have to rise at a rate of 0.64°C/dec between 2014 and 2030 to reach a trend of 0.175°C/dec (the average model trend) for the period 1951-2030.
-Chip
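[A sketch of how a number like that can be backed out numerically, assuming a hypothetical `obs` array of annual anomalies for 1951-2013 and the same constant-increment construction used for the scenarios; the 0.64°C/decade figure itself is Chip’s, computed from the actual data.]

```python
# Solve for the 2014-2030 warming rate that brings the 1951-2030 trend up to
# the mean model trend (0.175 degC/decade). `obs` is a hypothetical array of
# annual global-mean anomalies for 1951-2013.
import numpy as np
from scipy.optimize import brentq

obs_years = np.arange(1951, 2014)
future_years = np.arange(2014, 2031)
target = 0.0175                                        # mean model trend, degC per year

def trend_gap(rate_per_year):
    future = obs[-1] + rate_per_year * (future_years - 2013)
    yrs = np.concatenate([obs_years, future_years])
    series = np.concatenate([obs, future])
    return np.polyfit(yrs, series, 1)[0] - target

required = brentq(trend_gap, 0.0, 0.5)                 # annual rate, degC/yr
print(round(required * 10.0, 2), "degC/decade required from 2014 to 2030")
```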

Matt
April 17, 2014 9:14 am

The most important part of all of this is that a large majority of the time period where the models are “consistent” with the observed is the time period where the models were calibrated (i.e. forced to match historical records by tweaking the forcings). From my understanding, that is how climate models are built. So, is it really surprising if you say, “Hey, my climate model, which I forced to equal an observed historical trend, is pretty close to that observed historical trend!”

Rod Everson
April 17, 2014 9:14 am

Thanks Chip, I think I understand it now. I do have a suggestion.
A good summary graph to add to the end of the presentation would be a reproduction of figure 2, but with three red lines indicating the 2030 endpoint of Scenarios 1, 2, and 3, and with the blue model results all shifted to their 2030 endpoints. I think that would make it clear how the divergence is occurring, and its extent. I would also suggest adding a fourth red line, centered at the 50% point of the model results along with the required temperature increase to get there (a derived Scenario 4, effectively.) Do you know what that number would be, by any chance?

Rod Everson
April 17, 2014 9:24 am

Thinking on this further, if Scenario 3 does occur the modelers will eventually claim success by simply shifting the beginning point to 2015 or so, after which the models would accurately reflect reality. And frankly, if we go through a couple of decades of real warming again, it would be difficult to fight the political momentum that would inevitably develop in that event. I realize that the models, having failed to this point, would still be worthless, but people are still paying attention to them today, even after having failed. After 15 years of actual warming in line with the models’ annual predictions, grant money would be flowing heavily again despite the analysis here.

April 17, 2014 9:26 am

You know, when they cool the past in their models, they would naturally balance out against reality over the entire period. However, if their models reflected reality (Panel b would line up and not be underestimating in the models), then the 62 year period – on average – would be hotter in the models than in reality. What is also funny is that Panel a shows statistically zero heating on Earth.
Face it, the IPCC is left to playing games with data to hide the decline. 15 years later and that is still their only trick.

Rod Everson
April 17, 2014 9:29 am

Chip, I see that you had already answered one of my questions while I was in the process of asking it, that being the 0.64C/decade increase required to get to the middle of the model distribution

William Astley
April 17, 2014 10:00 am

In chess, there are what are called forced moves, where the player must move (to defend against checking attacks) or must move to avoid a large loss of material. The planet is now cooling at both poles (initially more apparent in the Antarctic) in response to the sudden reduction in the solar magnetic cycle, which is not surprising, as there are in the paleo record cycles of warming (at high-latitude regions, both poles), in every case followed by cooling, that correlate with solar magnetic cycle changes. We will now have a chance by observation to decipher how solar magnetic cycle changes modulate planetary temperature. It appears, based on the underlying mechanisms and what has happened before, that the cooling will accelerate.
http://nsidc.org/data/seaice_index/images/daily_images/S_stddev_timeseries.png
The complication for the climategate cabal scientists, if the planet cools, will be that they will be forced to defend against the charge of what appears to be almost deliberate fabrication, or at least ignoring evidence that unequivocally shows their models are incorrect. For example (one of roughly a half dozen issues and observations that support the same assertion), the General Circulation models (GCMs) use a 2% increase in evaporation per degree Celsius of ocean surface warming (tropics) when theoretical calculations support 6.5% to 7% (no reduction in wind speed), and satellite measurements (tropics) indicate there is a 10% increase in precipitation per degree Celsius change in ocean surface temperature (which indicates wind speeds slightly increase, likely due to differential temperatures in the region in question caused by cloudy and clear sky). That one ‘error’ reduces the increase in warming due to doubling of atmospheric CO2 to less than a degree Celsius.
Discussions of the evaporation issue:
http://wattsupwiththat.com/2014/04/15/major-errors-apparent-in-climate-model-evaporation-estimates/
http://typhoon.atmos.colostate.edu/Includes/Documents/Publications/gray2012.pdf
The Physical Flaws of the Global Warming Theory and Deep Ocean Circulation Changes as the Primary Climate Driver
Implications
http://climateclash.com/files/2011/02/PetT1b.jpg
Observational support for the assertion that the GCMs are incorrect:
The following satellite observational data and analysis from Roy Spencer’s blog site supports the assertion that the increase in evaporation for a 1C rise in ocean surface temperature is at least 6% rather than the IPCC’s assumed 2%:
http://www.drroyspencer.com/2014/04/ssmi-global-ocean-product-update-increasing-clouds-with-a-chance-of-cooling/
http://www-eaps.mit.edu/faculty/lindzen/236-Lindzen-Choi-2011.pdf
On the Observational Determination of Climate Sensitivity and Its Implications

April 17, 2014 11:30 am

Thanks, Theo. My manuscript was submitted twice to JGR-Atmospheres, and twice rejected. The experience has been unexpectedly informative.
There were five separate reviewers, who were evidently climate modelers. The rejections were based pretty much entirely on two reviewer objections. The first one was that the “+/-” confidence intervals produced by propagated error are unphysical and amount to an assertion that the model is rapidly oscillating between a green-house and an ice-house climate.
The second was that projections are referenced to an 1850 base-state climate simulation. This base-state simulated climate already has all the errors. Subtracting the base-state climate from the projected climates removes all the error, leaving physically correct trends.
The first reviewer objection is a freshman undergraduate mistake; showing no understanding of the meaning of uncertainty. It became pretty clear that the reviewers were completely unfamiliar with propagated error. That seems to be a pretty basic hole in their education, but may explain a lot about the state of consensus climatology.
The second reviewer objection relies on linear perturbation theory (LPT). All of science, especially Physics, requires testing a theory against observation. But the case for a reference base-state climate cannot have been tested, much less verified, because there are no observables for the 1850 climate. So, no one knows what the base-state errors look like. How, then, is anyone able to say the errors remain constant and subtract away? The modelers are apparently putting a blind trust in LPT.
I pointed all this out in my responses, but the editor was apparently unmoved. So, now I’m on to journal #2. We’ll see how that goes.
Given the prior reviewer objections, the Auxiliary Material document submitted along with the manuscript now also explains the meaning of uncertainty and of confidence intervals for our climate modeler friends.

Reply to  Pat Frank
April 17, 2014 12:17 pm

Your experience suggests that modelers do not have to be statisticians? Surely they must test the robustness or sensitivity of their models by varying the starting conditions or starting year?

Marlo Lewis
April 17, 2014 12:35 pm

Bravo Chip and Pat. How does scenario #3 (resumption of warming at 0.17°C/decade) compare to Michael Mann’s prediction – http://www.scientificamerican.com/article/earth-will-cross-the-climate-danger-threshold-by-2036/ – that global mean surface temperature will reach 2°C above pre-industrial temperatures by 2036 or, at the latest, 2046?

April 17, 2014 12:42 pm

Some participants in this thread are under the impression that the IPCC climate models make predictions. I’m not aware of any of them. If anyone here is aware of some, I’d appreciate a citation to where they are described.

Popper
April 17, 2014 12:51 pm

The analysis of the IPCC modelling done here is all too generous when it includes historic time series. Making a model based on existing, historic data is a no-brainer for even the moderately skilled scientist; the strength of any model instead lies in its ability to predict future values. This is the only way to determine if a model is good or not, and the models the IPCC bases its hypothesis on are clearly not good at predicting the future (as in Panel (a) of Figure 1).

Colin.A
April 17, 2014 1:06 pm

Sorry if this has been covered before; the observed decadal trend for 1951-2012 as shown in Fig. 2 is between .108 and .113 deg, and in the text this value is given as .107. Which is correct? If it’s .107 then the graph (Fig. 2) would show that only 14, not 18, model runs had lower values than the observed rate of warming.

April 17, 2014 2:01 pm

Colin.A (April 17, 2014 at 1:06 pm):
Good question.
The histogram was done in Excel and the x-axis label refers to the upper bound of the bin it is under. So, the bin labeled .108 includes all model runs with a trend >0.103 and <=0.108. It turns out that all four of the members of this bin have a trend less than .1076.
So, the number stands at 18.
I tried to indicate that on the Figure by placing the observed trend on the x-axis as if it were continuous (rather than indicating bin values). Not perfect, I know, but I think it gets the point across.
Thanks,
-Chip

April 17, 2014 2:08 pm

Popper (April 17, 2014 at 12:51 pm):
See this post for our analysis that primarily examines model forecasts (instead of hindcasts):
http://judithcurry.com/2013/09/19/peer-review-the-skeptic-filter/
In the current post, we examine hindcasts because we were trying to show that even playing the IPCC’s own game, they are losing.
-Chip

April 17, 2014 2:14 pm

Marlo Lewis (April 17, 2014 at 12:35 pm):
Mann’s statement is referenced to the pre-industrial period, while our analysis starts in 1951. From the data at hand, your answer is not robustly attainable.
-Chip