Has the Met Office committed fraud?

Guest post by Christopher Monckton of Brenchley

The truth is out. No amount of hand-wringing or numerical prestidigitation on the part of the usual suspects can any longer conceal from the world the fact that global warming has been statistically indistinguishable from zero for at least 18 years. The wretched models did not predict that.

When I told the December 2012 UN climate summit in Doha that there had been no warming for at least 16 years, the furious delegates howled me down.

The UN later edited the videotape to remove the howling. The delegates were furious not because I was speaking out of turn (they did not know that at the time) but because the truth was inconvenient.

The Guardian carried a sneer-story about my intervention. When a reader sent in a politely-worded comment to the effect that, objectively speaking, it was true that over the relevant period the least-squares linear-regression trend on the Hadley/CRU global surface temperature data was as near flat as makes no statistical difference, within two minutes The Guardian deleted the comment from its misleadingly-titled “Comment Is Free” website.

The determined reader resubmitted the comment. This time it was gone in 45 seconds, and – what is more – the stub indicating that he had commented disappeared as well. Just 28 years after George Orwell’s 1984, the hard Left are still dumping the inconvenient truth down the memory-hole.

The Met Office, as WattsUpWithThat revealed recently, has noticeably downshifted its lurid warming prediction for the rest of this decade.

When it predicted a “barbecue summer” (wrong: that summer was exceptionally cold and wet); then a record warm winter (wrong: that was the second-coldest December in central England since records began in 1659); and then, this spring, a record dry summer for the UK (wrong again: 2012 proved to be the second-wettest on record: not for nothing is it now known as the “Wet Office”), it trumpeted its predictions of impending global-warming-driven climate disaster from the rooftops.

And the scientifically-illiterate politicians threw money at it.

If the Met Office’s new prediction is right, by 2017 the global warming rate will have been statistically indistinguishable from zero for two full decades.

So, did the bureaucrats call a giant press conference to announce the good news? Er, no. They put up their new prediction on an obscure corner of their website, on Christmas Day, and hoped that everyone would be too full of Christmas cheer to notice.

That raises – again – a question that Britain can no longer afford to ignore. Has the Wet Office committed serious fraud against taxpayers?

Let us examine just one disfiguring episode. When David Rose of the Mail on Sunday wrote two pieces last year, several months apart, saying there had been no global warming for 15 years, the Met Office responded to each article with Met Office in the Media blog postings that, between them, made the following assertions:

1. “… [F]or Mr. Rose to suggest that the latest global temperatures available show no warming in the last 15 years is entirely misleading.”

2. “What is absolutely clear is that we have continued to see a trend of warming …”.

3. “The linear trend from August 1997 (in the middle of an exceptionally strong El Niño) to August 2012 (coming at the tail end of a double-dip La Niña) is about 0.03 C°/decade …”.

4. “Each of the top ten warmest years have occurred in the last decade.”

5. “The models exhibit large variations in the rate of warming … so … such a period [15 years without warming] is not unexpected. It is not uncommon in the simulations for these periods to last up to 15 years, but longer periods are unlikely.”

Each of the assertions enumerated above was calculated to deceive. Each assertion is a lie. It is a lie told for financial advantage. M’lud, let me take each assertion in turn and briefly outline the evidence.

1. The assertion that Mr Rose was “entirely misleading” to say there had been no global warming for 15 years is not just entirely misleading: it is entirely false. The least-squares linear-regression trend on the global temperature data is statistically indistinguishable from zero for 18 years (HadCRUT4), or 19 years (HadCRUT3), or even 23 years (RSS).
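[For readers who want to see what “statistically indistinguishable from zero” means in practice, here is a minimal sketch in Python. The data are made-up illustrative numbers, not the HadCRUT series: fit the least-squares trend, then ask whether zero lies inside an approximate 95% confidence interval on the slope.]

```python
import math

# Synthetic 18 "years" of annual anomalies: a flat series plus a small
# deterministic zig-zag standing in for weather noise (illustrative
# only; this is NOT real temperature data).
n = 18
ys = [0.4 + 0.05 * (-1) ** t for t in range(n)]
xs = list(range(n))

mx, my = sum(xs) / n, sum(ys) / n
sxx = sum((x - mx) ** 2 for x in xs)
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx

# Standard error of the slope, from the regression residuals
resid = [y - (my + slope * (x - mx)) for x, y in zip(xs, ys)]
se = math.sqrt(sum(r * r for r in resid) / (n - 2) / sxx)

lo, hi = slope - 2 * se, slope + 2 * se  # approximate 95% interval
print(lo <= 0 <= hi)  # True: this trend is indistinguishable from zero
```

[Swap in a real anomaly series and the same check applies; with autocorrelated data the naive standard error understates the uncertainty, so the real interval would be wider still.]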

2. What is absolutely clear is that the assertion that “it is absolutely clear that we have continued to see a trend of warming” is absolutely, clearly false. The assertion is timescale-dependent. The Met Office justified it by noting that each of the last n decades was warmer than the decade that preceded it. A simple heuristic will demonstrate the dishonesty of this argument. Take a two-decade period. In each of years 1-2, the world warms by 0.05 Cº. In each of years 3-20, the world does not warm at all. Sure, the second decade will be warmer than the first. But global warming will still have stopped for 18 years. By making comparisons on timescales longer than the 18 years without warming, what we are seeing is long-past warming, not a continuing “trend of warming”.
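[The heuristic above can be verified in a few lines of Python; the numbers are the text's own illustrative figures, expressed in hundredths of a degree so the arithmetic is exact, not real data.]

```python
# Twenty annual anomalies in hundredths of a degree: the world warms
# 0.05 C° in each of years 1-2, then stays flat for years 3-20
# (the text's illustrative numbers, not real data).
temps = [5, 10] + [10] * 18

decade1 = sum(temps[:10]) / 10   # mean of years 1-10
decade2 = sum(temps[10:]) / 10   # mean of years 11-20
print(decade2 > decade1)  # True: the second decade IS warmer

# ...yet the least-squares trend over the final 18 flat years is zero.
ys = temps[2:]
n = len(ys)
mx, my = (n - 1) / 2, sum(ys) / n
slope = (sum((t - mx) * (y - my) for t, y in enumerate(ys))
         / sum((t - mx) ** 2 for t in range(n)))
print(slope)  # 0.0: no warming for 18 years despite the warmer decade
```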

3. In August 1997 global temperatures were not “in the middle of an exceptionally strong El Niño”: they were in transition, about halfway between La Niña (cooler than normal) and El Niño (warmer than normal) conditions. Likewise, temperatures in August 2012 were not “at the tail-end of a double-dip La Niña”: they were plainly again in transition between the La Niña of 2011/12 and the El Niño due in a year or two.

4. The Met Office’s assertion that each of the top ten warmest years occurred in the last decade is dataset-dependent. On most datasets, 1998 was the warmest year on the global instrumental record (which only began some 160 years ago). On those datasets, therefore, the warmest year of all falls outside the last decade, so the top ten warmest years cannot all have occurred within it.

5. Finally, the Met Office shoots itself in the foot by implicitly admitting that there has been a 15-year period without warming, saying that such a period is “not unexpected”. Yet that period was not “expected” by any of the dozens of lavishly-funded computer models that have been enriching their operators – including the Met Office, whose new computer cost gazillions and has the carbon footprint of a small town every time it is switched on. The NOAA’s State of the Climate report in 2008 said this: “Near-zero and even negative trends are common for intervals of a decade or less in the simulations, due to the model’s internal climate variability. The simulations rule out (at the 95% level) zero trends for intervals of 15 years or more, suggesting that an observed absence of warming of this duration is needed to create a discrepancy with the expected present-day warming rate.”

In short, the Met Office lied repeatedly to do down a journalist who had uttered the inconvenient truth that there had been no global warming for at least 15 years.

The Fraud Act 2006 defines the serious imprisonable offence of fraud as dishonestly making an express or implied representation that the offender knows is or may be untrue or misleading, intending to gain money or other property (here, grant funding) or to cause loss or risk of loss to another ($30 billion a year of unnecessary “green” taxes, fees and charges to the British public).

So I reported the Met Office to the Serious Fraud Office, which has a specific remit to deal with frauds that involve large sums (here, tens of billions) and organized crime (here, that appreciable fraction of the academic and scientific community that has been telling similar porkies).

Of course, there is one law for us (do the crime, do the time) and quite another for Them (do the crime, make a mint, have a Nobel Peace Prize). The Serious Fraud Office is not interested in investigating Serious Fraud – not if it might involve a publicly-funded body making up stuff to please the corrupt politicians who pay not only its own salaries but also those of the Serious Fraud Office.

The Met Office’s fraud will not be investigated. “Why not try your local police?” said the Serious Fraud Office.

So here is my question. In the specific instance I have sketched out above, where a journalist was publicly named and wrongly shamed by a powerful taxpayer-funded official body telling lies, has that body committed a serious fraud that forms part of a pattern of connected frauds right across the governing class worldwide?

Or am I going too far in calling a fraud a fraud?

348 Comments
mpainter
January 16, 2013 9:37 am

joel shore says 8:06am
This is also false. Because Tiljander noted that they believed their proxies to be corrupted in the 20th century, in the supplementary materials of the paper, Mann noted this potential issue with the proxies and repeated the analysis leaving out these proxies.
=============================
Mann published with the corrupted data. He has not retracted his paper.
Joel Shore, James Hansen needs you. He’s starting to backslide.

joeldshore
January 16, 2013 11:23 am

Graham W: Most of your comment is simply a tirade. (E.g., you are unhappy because I try to anticipate objections that you might have and address them proactively.)
I’ll try to comment on what little substance I can find.

You assume your interpretation of the NOAA’s statement MUST be correct, since you have said it. You are essentially arguing from your own authority! Lol.

No…I have explained why I think it is correct. I have also explained how there is no possible way to interpret the words that they wrote to mean what Richard Courtney and Monckton are interpreting them to mean. I have also noted that interpreting them in this way leads to a ridiculous conclusion regarding empirical trends that would be in contradiction to the models.

1) It is extremely unlikely that you will “just compute the trend by linear regression” and get a result of exactly zero.

I think it is pretty obvious that if a trend of zero lies outside the 95% confidence window for the models then a negative trend does too. Perhaps they could have said “trends of zero or less” instead of “zero trends” to be most clear but I think this is really picking nits.

“We’ve found this exact zero trend, but ignore the whole 95% confidence thing – that’s not important according to Sir Joel D Shore’s interpretation of your statement…yeah, yeah I know that it’s standard scientific practice to include it, but don’t…because Joel said so”.

They tell you to look at one number: the trend. They don’t talk about the uncertainty in the trend. You are not justified in replacing the one number you get when you calculate a trend with the value zero just because the trend observed does not rule out an underlying trend of zero (plus weather noise) with 95% confidence. There are lots of underlying trends that are not ruled out with 95% confidence. I could say the trend is 0.15 C per decade because that is not ruled out with 95% confidence either, but that would not be correct. The least squares trend is what you compute it to be…and the uncertainty in the trend is a different issue.
Let me explain to you the reason why they chose to define their criterion in this way: Getting the uncertainty in the trend line for the actual temperature data is not trivial. You have to assume some model for the correlations in the data and will get different uncertainties with different models. There is still some argument about what is the best model. However, getting the uncertainty (or even the whole distribution of trends) that the models produce is easy: You just run the model many times with slightly different initial conditions and you see the distribution of different trends that you get over a 15-year period. That is why they chose to compare the actual trend one gets from the empirical data to the distribution of trends one gets from the models: If 19 out of 20 of the model runs get a trend over 15-years that is greater than zero, then an observed empirical trend of zero or less falls outside of the 95% confidence range of the models.
And, of course, as they explain, all of this (both the empirical trend and the trend in the models) is measured AFTER one corrects for ENSO in the way discussed in the paper referenced.
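[The procedure Joel describes, running many simulations and looking at the spread of 15-year trends, can be mimicked with a toy Monte Carlo. The trend, noise level, and run count below are made-up illustrative values, not GCM output.]

```python
import random

random.seed(0)

def ls_slope(ys):
    """Least-squares slope of ys regressed against 0..n-1."""
    n = len(ys)
    mx = (n - 1) / 2
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in enumerate(ys))
    den = sum((x - mx) ** 2 for x in range(n))
    return num / den

# Toy ensemble: an assumed underlying trend of 0.02 C/yr (0.2 C/decade)
# plus Gaussian "weather" noise; 1000 independent 15-year runs.
TREND, NOISE_SD, YEARS, RUNS = 0.02, 0.1, 15, 1000

slopes = [ls_slope([TREND * t + random.gauss(0, NOISE_SD)
                    for t in range(YEARS)])
          for _ in range(RUNS)]

frac_nonpositive = sum(s <= 0 for s in slopes) / RUNS
print(f"runs with 15-year trend <= 0: {frac_nonpositive:.1%}")
# If fewer than 5% of runs show a non-positive trend, the ensemble
# "rules out (at the 95% level)" an observed trend of zero or less.
print(frac_nonpositive < 0.05)  # True with these toy numbers
```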

joeldshore
January 16, 2013 11:25 am

mpainter says:

Mann published with the corrupted data. He has not retracted his paper.

He did what one does when there is controversy over a particular piece of data: He showed the results both ways, including and excluding that piece of data.

mpainter
January 16, 2013 12:39 pm

joeldshore says: January 16, 2013 at 11:25 am
mpainter says:
Mann published with the corrupted data. He has not retracted his paper.
He did what one does when there is controversy over a particular piece of data: He showed the results both ways, including and excluding that piece of data.
============================
Wrong again. For those who wish to learn more about “Upside-down Tiljander”, the whole episode is chronicled at Climate Audit. Steve McIntyre and others strip Mann’s pretensions to science down to the bare bones of fabrication.
Michael Mann’s “Upside-down Tiljander” is one reason why the hockey stick has disappeared as an icon of the global warners.

Editor
January 16, 2013 1:35 pm

joeldshore says:
January 16, 2013 at 11:25 am

mpainter says:

Mann published with the corrupted data. He has not retracted his paper.

He did what one does when there is controversy over a particular piece of data: He showed the results both ways, including and excluding that piece of data.

Joel, thanks as always for your thoughts. Actually, what Mann did was a bit more subtle. You’ve got to watch the pea under the shell very carefully. I’ve analyzed his proxies using cluster analysis. This clearly shows from whence his hockeystick ariseth. I repeat my Figure 2 here:

Figure 2. Left column shows average signals of the clusters of proxies shown in Figure 1, from the year 1001 to 1980. Averages are of the cluster to which each is connected by a short black line.
This is much more valuable than Mann’s “leave one out” kind of analysis. The hockeystick is contained in three groups of proxies. The first and by far the largest group of proxies is the bristlecone and closely related “strip bark” pines from the western US. As you can see, they are wildly overweighted already in the proxy selection. Mann’s work features these bristlecones heavily, despite numerous authors and authorities having warned against their use.
In addition, the fact that the main bristlecone group is at the top of the correlogram means that it is the most dissimilar of all the groups.
The second group is a few Asian tree rings, particularly the problematic “Tornetrask” series.
The third group is the upside-down Tiljander series. Garbage.
Finally, the method Mann uses mines for hockeystick shapes. So as long as upside-down Tiljander or any of these are in the mix, they will be overweighted.
What Mann has done is to establish (as you point out) that his hockeystick mining algorithm works whether or not the upside-down Tiljander data is included … yawn. That means nothing. That’s why you have to watch the pea. As long as any hockeystick-shaped clusters of data are present, Mann’s mining method will find them and produce a hockeystick.
Joel, you’re a good scientist who does good work. But in this case, you are supporting the very poor work of a very bad scientist. Look at the results of the cluster analysis above. The majority of the proxy clusters show no hockeystick shape. Some, like the speleothems and lake sediments, actually go down.
And despite that, Mann’s patented hockeystick mining algorithm successfully comes up with a hockeystick. Can you truly tell me with a straight face that bristlecones (known bad) and Tornetrask (known unrepresentative) and Tiljander (known upside-down) plus a hockeystick-mining algorithm creates a valid historical temperature reconstruction?
Mann is such a bad scientist that when people pointed out that he had used the Tiljander series upside-down, his response was “it doesn’t matter” … but that wasn’t the capper. The capper was that he used it upside-down a second time.
So no, Joel. As you know, I’ve supported you often, so this is not a knee-jerk reaction. I’m saying, you’re betting on the wrong horse. Mann is not only a bad scientist, he is a crooked scientist, a man who knowingly destroyed evidence under an FOI request and advised his co-conspirators to do likewise. As his re-use of the upside-down Tiljander data demonstrates, he is monomaniacally focused on showing that his fatally flawed and long discredited original Hockeystick paper is only “pining for the fjords”.
So in addition to his proxy reconstructions being laughable, I warn you in friendship, you soil yourself by any association with him.
w.

Werner Brozek
January 16, 2013 2:13 pm

joeldshore says:
January 16, 2013 at 11:23 am
If 19 out of 20 of the model runs get a trend over 15-years that is greater than zero, then an observed empirical trend of zero or less falls outside of the 95% confidence range of the models.
And, of course, as they explain, all of this (both the empirical trend and the trend in the models) is measured AFTER one corrects for ENSO in the way discussed in the paper referenced.

And what if there is a 50% chance of warming and a 50% chance of cooling over more than 15 years? See the details on the graph below. A combination of the two satellite data sets shows a slope of 0 for over 15 years.
As for your objection about ENSO, the 1998 El Nino was cancelled out by the La Ninas that followed since the slope is also 0 for more than 12 years after the ENSO events, as shown below.
http://www.woodfortrees.org/plot/rss/from:1997/plot/rss/from:1997.9/trend/plot/uah/from:1997/plot/uah/from:1997.9/trend/plot/rss/from:1997.9/trend/detrend:-0.0735/offset:-0.080/plot/esrl-co2/from:1997.9/normalise/offset:0.68/plot/esrl-co2/from:1997.9/normalise/offset:0.68/trend/plot/rss/from:2000.9/trend/plot/uah/from:2000.9/trend

richardscourtney
January 16, 2013 2:14 pm

joeldshore:
I write to thank you for the laugh you gave me when you wrote at January 16, 2013 at 11:23 am to Graham W saying

I have also explained how there is no possible way to interpret the words that they wrote to mean what Richard Courtney and Monckton are interpreting them to mean.

The words of NOAA are clear. The only person “interpreting” them is you!
Clearly, you have not done as I suggested in my post at January 16, 2013 at 7:13 am where I wrote
Try saying this to yourself 100 times, “RULE OUT, RULE OUT, RULE OUT”, and then reality may manage to penetrate your skull.
It may help if you read the NOAA statement as you do it. I remind that it says
The simulations rule out (at the 95% level) zero trends for intervals of 15 yr or more, suggesting that an observed absence of warming of this duration is needed to create a discrepancy with the expected present-day warming rate.
The “95% confidence” applies to whether a trend differs from zero: it does NOT apply to the proportion of model runs as you have managed to delude yourself.
I repeat
If the models showed such 15 year periods for 5% of the model runs then they would not “rule out” such 15 year periods. The simulations would indicate that 1 in 20 runs provided such periods. Even you should be able to understand that if you try.
So, I again advise you to try. Give it a go. Let me help you get started.
Say with me, ““RULE OUT, RULE OUT, RULE OUT”.
OK, now keep going.
Richard

Graham W
January 16, 2013 2:48 pm

Joel: OK let’s examine what we have here.
“Graham W: Most of your comment is simply a tirade. (E.g., you are unhappy because I try to anticipate objections that you might have and address them proactively.)”
Not true. In fact you addressed me, and continue to address me, as though I’m a simpleton and you are my superior. I found and still find this offensive. OK, I don’t put across my ideas as accurately as you do. I’m not a scientist, so I don’t have that continued requirement to be precise with my words. I think I get my points across OK though. I think it was acceptable for me to note the things you were doing that were offensive, and that was the first two paragraphs.
“I’ll try to comment on what little substance I can find.”
Thanks. Super-condescending as ever.
“No…I have explained why I think it is correct. I have also explained how there is no possible way to interpret the words that they wrote to mean what Richard Courtney and Monckton are interpreting them to mean. I have also noted that interpreting them in this way leads to a ridiculous conclusion regarding empirical trends that would be in contradiction to the models.”
In your opinion. In Richard Courtney’s and Monckton’s view, there is clearly a possible way to interpret the words as they have done. If not, they would not have done so. If you disagree, take it up with them. I note that you haven’t yet responded to Richard Courtney’s last comment.
I think we can both agree that the statement is ambiguous; if not, why would so many people be discussing different interpretations of the meaning? Actually, you will disagree with me here: the statement is not ambiguous at all in your opinion. It can only mean what you want it to mean and nothing else.
“I think it is pretty obvious [there’s that condescension again] that if a trend of zero lies outside the 95% confidence window for the models then a negative trend does too. Perhaps they could have said “trends of zero or less” instead of “zero trends” to be most clear but I think this is really picking nits.”
I don’t think it is nit-picking at all. It’s fundamental to your interpretation of their statement. Instead of thinking that perhaps the fact they have specified a “zero trend” instead of “zero trends and trends less than zero” might indicate that your interpretation may be incorrect, you have simply assumed again that you must be right, so they “must have meant” zero trends and trends less than zero. But it is not what was said. Why didn’t they specify? Perhaps it is because they are not saying that at all, and what they are actually saying is that a trend that is statistically indistinguishable from zero, a zero trend, is required to create a discrepancy. NOT a trend that just happens to be absolutely perfectly at zero, meanwhile let’s just completely ignore the margin of error.
“They tell you to look at one number: the trend. They don’t talk about the uncertainty in the trend. You are not justified in replacing the one number you get when you calculate a trend with the value zero just because the trend observed does not rule out an underlying trend of zero (plus weather noise) with 95% confidence. There are lots of underlying trends that are not ruled out with 95% confidence. I could say the trend is 0.15 C per decade because that is not ruled out with 95% confidence either, but that would not be correct. The least squares trend is what you compute it to be…and the uncertainty in the trend is a different issue.”
Once more you are explaining yourself again as if I don’t understand what your point is. You are only correct if your interpretation of their statement is the right one. They do talk about 95% confidence in the brackets preceding the words “zero trends”. Your understanding of how the sentences “ought” to be constructed is that the bracketed words should apply to the words prior to the brackets and not after. However, others disagree. Your justification for choosing your interpretation is full of implied “it has to be this”, “it’s obvious that they must mean this” etc etc. In other words you think you’re right and that’s it.
“Let me explain to you the reason why they chose to define their criterion in this way: Getting the uncertainty in the trend line for the actual temperature data is not trivial. You have to assume some model for the correlations in the data and will get different uncertainties with different models. There is still some argument about what is the best model. However, getting the uncertainty (or even the whole distribution of trends) that the models produce is easy: You just run the model many times with slightly different initial conditions and you see the distribution of different trends that you get over a 15-year period. That is why they chose to compare the actual trend one gets from the empirical data to the distribution of trends one gets from the models: If 19 out of 20 of the model runs get a trend over 15-years that is greater than zero, then an observed empirical trend of zero or less falls outside of the 95% confidence range of the models.
And, of course, as they explain, all of this (both the empirical trend and the trend in the models) is measured AFTER one corrects for ENSO in the way discussed in the paper referenced”.
Fine, if you say so. If you say so it must be true. Is there something I’m missing here? Do you KNOW why they define their criterion this way or that way? Do you represent the NOAA? If so I’ll believe what you’re saying. Now if you’re going to respond, please can you do it in some way that does not imply you are talking to a lesser human being. Thanks.

richardscourtney
January 16, 2013 3:51 pm

Graham W:
re the final paragraph in your post at January 16, 2013 at 2:48 pm
Please be assured that your posts demonstrate you are both a better man and the intellectual superior of Joel.
Richard

mpainter
January 16, 2013 5:03 pm

Well, Joel, looks like you have a lot on your plate. If you wish to take a recess, you will be excused, I’m sure.

Gail Combs
January 17, 2013 3:20 am

How can Joel interpret this as other than stated?
The NOAA’s State of the Climate report in 2008 said this:

“Near-zero and even negative trends are common for intervals of a decade or less in the simulations, due to the model’s internal climate variability. The simulations rule out (at the 95% level) zero trends for intervals of 15 years or more, suggesting that an observed absence of warming of this duration is needed to create a discrepancy with the expected present-day warming rate.”

“Near-zero and even negative trends are common for intervals of a decade or less in the simulations, due to the model’s internal climate variability.”

That seems easy to understand. The models will produce near-zero or even negative trends over intervals of ten years or fewer. This is due to “the model’s internal climate variability.”
So they are saying they have climate variability built in to the model (negative and positive forcings) that will cause near zero or negative trends.

“The simulations rule out (at the 95% level) zero trends for intervals of 15 years or more,”

Since the first sentence goes to the trouble of using the words ‘near zero’ and then ‘even negative’ trends, one must conclude
near zero trends = zero trends at the 95% level
‘Even negative’ does not need such a qualifier because it is not implying some exact number, whereas a trend of zero does.

“suggesting that an observed absence of warming of this duration is needed to create a discrepancy with the expected present-day warming rate.”

So that would be zero trends at the 95% level, or even negative trends = an observed absence of warming.
The other method of looking at it is from the statistical point of view. If we make measurements (observations) we are taking ESTIMATES of the true value. There is an error associated with those measurements, so a true scientist, knowing this, always includes that estimate of error in his thinking about measurements if he has been educated properly. (NOT using/thinking with error bars is a sure sign the person is not a scientist.)
This NOAA graph shows, with bars, their estimate of error in the global mean temperature.
This CRU graph shows the computed sampling (measurement) error in °C; it indicates the error starts at ±0.5°C and, for most of the data, is in the ±1°C range.
To put it bluntly, ALL, and I do mean ALL, measurement data have error bars. The fact those error bars are not usually conveyed to the public by reporters (most of whom are not scientists) does not mean those error bars are not there.
All Monckton is doing is using the 95% confidence error estimate, as he should, since that is the convention in climate science.

richardscourtney
January 17, 2013 5:14 am

Gail Combs:
re your post at January 17, 2013 at 3:20 am.
Yes, you are right in all you say.
As I said to joeldshore at January 16, 2013 at 2:14 pm
The words of NOAA are clear. The only person “interpreting” them is you!
But in this case – as he always does – Joel is pretending the facts are what he wants them to be instead of what they are.
Richard

joeldshore
January 17, 2013 6:58 am

Graham W says:

I think we can both agree that the statement is ambiguous; if not why would so many people be discussing different interpretations of the meaning? Actually, you will disagree with me here, the statement is not ambiguous at all in your opinion. It can only mean what you want it to mean and nothing else.

richardscourtney says:

The words of NOAA are clear. The only person “interpreting” them is you!

So, Graham, if you think that there are multiple interpretations and one actually has to use coherent arguments to support which interpretation makes sense, you should be on my side, not Richard’s. Richard doesn’t even think his interpretation is an interpretation…and hence he doesn’t even justify it. He just claims the words are clear and that is the only way one could possibly interpret it.
Let me summarize the reasons why I believe my interpretation has much more support:
(1) The authors chose to put the parenthetical expression after “rule out” rather than after “zero trends”. Why would they do this if they wanted it to modify “zero trends”? I could actually see the reverse happening, i.e., I could see them writing “The simulations rule out zero trends (at the 95% level)” when they really meant it to modify “rule out”…This would be a little sloppy, but certainly reads more naturally than sticking the parenthetical expression in the middle. The fact that the authors explicitly avoided that more natural-sounding construction suggests they wanted to make it clear that the parenthetical expression modifies “rule out” and not “zero trends”.
(2) If you adopt my interpretation, there is no real ambiguity in the statement. We know what is meant by “rule out” and we don’t have to worry about questions of how one determines the uncertainty in the trend estimate on the empirical data because no such estimate is required. With Richard and Monckton’s interpretation, we don’t know what “ruled out” means (to what level of certainty?) and we don’t know what sort of model to use for the correlations in the empirical data in order to arrive at an uncertainty estimate for the trend.
(3) With my interpretation, there is a straightforward way to explain how they came to the conclusion that they did based on what they discuss regarding the simulations that they performed using the climate models: They looked at all the independent periods of a certain length in these multiple simulations and found that one had to make the length 15 years long in order that fewer than 5% of the simulations had trends less than zero. That is what it means to rule out at the 95% confidence level a zero trend. With Richard and Monckton’s interpretation, it is not clear what they did. How did they get from their simulations to their conclusion?
(4) As I noted previously, the SkepticalScience trend calculator (the only one that I know of that is available online) shows 15-year trends of temperature data to have an uncertainty of about 0.14 C per decade. (As I have noted, this is using some model for the correlations in the data, as this is necessary to get such an estimate.) That means that the “borderline” case of a trend that would not rule out a zero trend at 95% confidence is a trend of 0.14 C per decade, which would be compatible with the underlying trend lying anywhere between 0 and 0.28 C per decade with 95% confidence. Does it really make sense whatsoever that the models, which predict trends on average of about 0.20 C per decade, would rule out an empirical trend whose 95% confidence interval goes from 0 to 0.28 C per decade?
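[Point (4) is simple interval arithmetic. Taking the 0.14 C/decade figure Joel quotes at face value, his number and not independently verified here:]

```python
# Quoted ~95% uncertainty on a 15-year empirical trend, C per decade
# (taken from the comment above; an assumed input, not recomputed).
sigma = 0.14

# The borderline observed trend that only just fails to rule out zero
# sits one whole uncertainty above zero:
borderline = sigma
low, high = borderline - sigma, borderline + sigma
print(f"95% interval: {low:.2f} to {high:.2f} C/decade")
# An observed 0.14 C/decade is consistent with anything from 0.00 to
# 0.28 C/decade, an interval that contains the ~0.20 C/decade the
# models predict on average.
```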

joeldshore
January 17, 2013 7:08 am

Graham W says:

In your opinion. In Richard Courtney’s and Monckton’s view, there is clearly a possible way to interpret the words as they have done. If not, they would not have done so.

Yes…That is what Monckton is truly gifted in…and I mean that seriously and sincerely. The man has a gift for creatively misinterpreting statements, graphs (such as the IPCC graph showing the temperature structure of the warming due to various forcings), and other such things. And, interestingly, that interpretation always seems to allow him to reach conclusions that scientists in the field have not reached and that just happen to correspond to what he wants to believe.
Don’t even get me started on Richard. The man is agnostic on the question of whether the rise ***in CO2*** is due to humans. That is sort of like being agnostic on whether the Earth is more like 6000 or 4.6 billion years old.

joeldshore
January 17, 2013 7:30 am

Willis: Thanks for your comment. I always wade into any discussion of the proxy temperature reconstructions with trepidation because it is not something I have been particularly interested in (not much physics in it), nor what I believe to be one of the more compelling pieces of evidence in support of AGW (partly because of the issues regarding the proxies, and partly because, at the end of the day, even if the temperatures are unusual over the past one or two millennia, that is only circumstantial evidence in support of AGW), and hence something that I have not studied in much detail.
However, my point is that it is not right to throw around accusations of fraud on the basis of “facts” that one has read elsewhere and clearly has heard only one side of (as Chris Wright did). So, the tree rings from the Western U.S. play an uncomfortably large role in the reconstructions? Who might have thunk this? Perhaps the scientists who said: “Positive calibration / variance scores for the NH series cannot be obtained if this indicator is removed from the network (in contrast to post-AD 1400 reconstructions for which a variety of indicators are available which correlate against the instrumental record). Though, as discussed earlier, ITRDB PC#1 represents a vital region for resolving hemispheric temperature trends, the assumption that this relationship hold up over time nonetheless demand circumspection. Clearly, a more widespread network of quality millenial proxy indicators will be required for more confident inferences.” ( http://www.ltrr.arizona.edu/webhome/aprilc/data/my%20stuff/MBH1999.pdf )
Since you warn me of something in friendship, let me return the favor: if you think that Mann’s work is bad, then criticize it; that is fine. Have a spirited disagreement with him. However, when a scientist is respected in the field (and has won multiple honors from his peers), and has been investigated by fellow scientists and academics and found innocent of the accusations made against him, you probably don’t want to go around calling him a fraud and a crook if you want to gain any sort of foothold for your points of view within the scientific community. When I see things like this, it just makes me believe that the anti-AGW movement has no real desire to earn the respect of the scientific community at large…that it is rather out there just to win over the non-scientist public. And that is sad, because I respect you for actually taking on some of the nonsense one sees in the movement (there’s no greenhouse effect, Nikolov and Zeller have a reasonable alternative theory of climate, the observed rise in CO2 is not anthropogenic, …) that is preventing the anti-AGW community from being taken seriously by the broader scientific community.

Editor
January 17, 2013 8:50 am

joeldshore says:
January 17, 2013 at 7:30 am

… Since you warn me of something in friendship, let me return the favor: if you think that Mann’s work is bad, then criticize it; that is fine. Have a spirited disagreement with him. However, when a scientist is respected in the field (and has won multiple honors from his peers), and has been investigated by fellow scientists and academics and found innocent of the accusations made against him, you probably don’t want to go around calling him a fraud and a crook if you want to gain any sort of foothold for your points of view within the scientific community.

Truly, Joel, I had expected better of you. If you think that Mann has been investigated, then I officially take away your scientist’s badge. He has not been investigated in any sense of the word, except by the same people and with the same thoroughness as when they investigated Jerry Sandusky and found nothing … and if you think Penn State actually investigated either of them, I will describe you as naive beyond belief and credulous beyond measure.
Mann also flat out admitted that he asked his friends to destroy emails subject to an FOI request. So your claim that he is somehow lily-white and pure fails from his own testimony. Now, on my planet, when someone does that, I call them crooked and a fraud. He has to hide his actions, just as he hid the fact that he knew the pre-1400 data would throw his original claims in the trash. Oh, yes, he later admitted it, as you point out … you obviously think that means something.
And did Mann himself also destroy emails, the ones he recommended that his fellow conspirators destroy? Hey, the Joel D. Shore Memorial Investigation didn’t get an answer to that question. Care to guess why? Well … they never asked him, because they were friends of his.
Never even asked him one of the central questions regarding his actions … and you call that an investigation? That is pathetic. My grandma could have done a better investigation, and she’s been dead for decades.
Hell, Joel, Mann flat-out lied to the Senate Committee when he said he had not calculated the R^2 of the reconstruction, which was a major point. Flat. Out. Lied. Right there in print. And you think that he is not a fraud and a cheat? What rock are you living under?
Finally, Joel, you threaten, and I believe you are correct, that if I don’t stop calling Mann a fraud, my ideas will not gain credence among you and your fellow AGW supporters, whom you mistake for a scientific community. And I agree with you.
Now consider for a minute what that says about your vaunted “scientific” community, Joel. Your community is deciding the worth of my scientific ideas … on the basis of whether I think that Mann is a cheat and a fraud.
Is that scientific? I think not … Joel, the one losing support for their scientific ideas here is the community of AGW-supporting scientists. They’ve seen a huge drop in their poll numbers, to the point where some 60% of Americans think they are just making up the data. (I don’t think so; I’m just pointing out who is losing support.)
In fact, the failure of the AGW supporters to do even the most simple of investigations, instead conducting pal-reviews, is well documented, and is one of the reasons for the precipitous drop in the credibility of you and your fellows, Joel. I could provide cites to the investigations, but the fact that you actually believe that Jerry Sandusky and Michael Mann were significantly investigated by Penn State kinda makes that a waste of time … if you believe that, you’re beyond hope.
I’m truly sorry to see this, Joel. I had actually thought of you as one of the few AGW supporters who was actually willing to look under the rocks and honestly report what you found. Foolish me. Instead, you’re babbling about how Mann was a good guy who was seriously investigated and cleared, and all of those claims are demonstrably false. Mann is a bad actor who has never been investigated, and, sorry to say, he is dragging good men like yourself down with him. There are not many clear things in climate science, but Michael Mann’s lack of ethics and morals is one of them. He has demonstrated that lack publicly, in his words, his science, and his actions, and he is self-confessed in his own words in the Climategate emails.
Curiously, Joel, if you read the Climategate emails, the other conspirators thought he’d gone off the rails. I didn’t have to say anything; they already consider his science a joke. It’s hilarious to read the stuff that they say about him … so if you think that the “scientific community” thinks Mann is a good, decent, honest scientist, think again, because at this rate you are one of the few people left who do. According to the Climategate emails, the position of the AGW movers and shakers on Mann is a hell of a lot closer to mine than to yours; they don’t like him one bit more than I do. So I doubt I’ll lose much cred by pointing out the obvious, that Mann is a charlatan … but you, they could exhibit as the last living man fooled by the Mann.
So I’m surprised, and not pleasantly. I had thought better of you. Jeez, Joel, he’s got you defending using data upside down, not just once, but a second time after he was notified of the error. You sure you want to go down that path? Because you’ve just started traversing it, and you already look like an idiot.
Look, you are smart enough to start your most recent post by saying you tread carefully around paleoclimate reconstructions because it is something you haven’t studied in detail. Obviously, the same is true in spades of your knowledge of Michael Mann, and it is costing you dearly. I call Mann a charlatan and a crook because he is one, and as Climategate showed, his fellow scientists (present company excepted) are well aware of that fact. It’s not news to anyone save, apparently, you, so clearly you haven’t studied it in detail.
Quit while you are behind, my friend. You support Mann as you are doing and you will end up covered in mud, and sadly, it’s already happening to you. Quit while you are behind.
w.

Graham W
January 17, 2013 9:51 am

Joel: Genuine thanks for the polite response. Now we are talking. Unfortunately I think I’ve already said most of what I have to say on the matter, and that was mostly when I was kind of ticked off, so apologies for the tone. I have a tendency to over-react if I feel I am being talked down to. Well there you go.
Anyway, I hear all that you are saying and you make a good argument; I just think the bottom line is that they made a bit of a sloppy statement. As such, all interpretations are just that: interpretations. They should have been more precise with what they said. I get the feeling that if you or Richard or Gail or many others who post here had written the statement, we wouldn’t be having this debate, since you would have done a better job! But there it is. And though you make a good argument, I think your interpretation is stretching things a bit, and I personally find what Gail and Richard have said more compelling, but that’s just my opinion. I think the meaning of “rule out” is perfectly clear: it can only mean 100% just that, ruling something out. It is certainly not a term you would expect to see in such an important statement (is it even very scientific to RULE OUT things?) and that’s part of what makes me think the writing is sloppy.
My point really is that they have made the statement and now they have to live with it, basically. You can’t really argue on their behalf after the event, the statement is the statement. It’s all there in black and white. As you will know only too well I’m sure, this is the reason for the need for absolute precision in scientific statements.

joeldshore
January 17, 2013 12:08 pm

Graham W says:

They should have been more precise with what they said. I get the feeling if you or Richard or Gail or many others who post here had written the statement we wouldn’t be having this debate, since you would have done a better job!

I certainly appreciate your vote of confidence, but I am not sure I could have done a much better job than they did. If one accepts my interpretation, they were actually pretty careful in their writing: they were sure to put the parenthetical phrase right after what it modified, they explained the simulations, … It is only if you reject my interpretation and accept Richard and Monckton’s instead that you end up concluding they were sloppy, because then you conclude that they didn’t define what they meant by “ruled out” (even though they had a parenthetical phrase immediately after it!) and didn’t define how they determined the error bars on the trend in the empirical data.
I really think it is difficult to write (or speak) in a way in which you can’t possibly be misinterpreted. I am always impressed at how well my students can misinterpret what I have said or what their lab manual says…and this is a case where the students really want to interpret it correctly. Now, you add in the fact that Monckton and Richard want to interpret it not necessarily in the way it was meant, but rather in the way that is most favorable to their argument, and I think you have a pretty tall order to try to write so that no such interpretation is possible! Scientific articles are not legal documents…and I think they would suffer if we had to write them all as if they were.
In particular, I don’t agree that “rule out (at the 95% level)” is a particularly sloppy statement. Here is a challenge for you: If you think they expressed things sloppily, could you provide me with alternate wording, i.e., what they could have said if they wanted to say what I interpret them as saying, that you would find to be more clear? I would honestly be curious to see it.
[By the way, writing scientific papers in a way that is clear and complete is unfortunately not as common as you might expect. I have a colleague who recently gave a talk and was emphasizing how great another physicist’s papers were (in the field of modeling of fluids on surfaces and other such things) because he said (roughly), “You do exactly what she says she did and you get the result that she got, which is really not something that can be said for many papers.”]

Graham W
January 17, 2013 12:38 pm

I’ll just say one more thing, though it’s been said already: strictly speaking “..rules out (at the 95% level)” is an oxymoron. Whereas “…rules out (at the 95% level) zero trends…” is not an oxymoron so long as you take it to mean that the bracketed phrase applies to “zero trends”. Now I said that they were sloppy in their statement, but not so sloppy as to write an oxymoron into it.
On a side note, I’m not sure why brackets were even necessary: there were much clearer ways to write it, full stop.

joeldshore
January 17, 2013 12:49 pm

Graham: I don’t think the usage of “rules out” with a 95% confidence level is all that unusual. Here is a blog post on CERN’s announcement of the Higgs boson ( http://blog.vixra.org/2011/12/13/the-higgs-boson-live-from-cern/ ):

When the black line descends below the red horizontal line at 1.0 on the vertical axis, people sometimes say that the Higgs Boson has been ruled out at 95% confidence level at this mass.

(He goes on to say that this statement is not strictly true, but the problem that he discusses is not with using such a statement in principle but rather technical issues with determining the confidence level.)

Graham W
January 17, 2013 1:07 pm

Joel: Didn’t see your response there, I think I must have been writing my “one last point”. So OK actually it will be one more one last point! Sorry a bit pushed for time so I will have to be brief. Re your point at the end of your first paragraph, I assumed that the way the error bars for a trend in empirical data were calculated was a standard procedure, so wouldn’t need to be explicitly stated. That’s what I assumed, I’ll be honest I don’t really know because I’m not a scientist! But for the models it might require explanation, but then if you take Richard’s and Monckton’s and Gail’s interpretation, then it is implied that the 95% level does not apply to the models anyway, so no explanation required.
Sorry again got to go, I will certainly give your challenge some thought, but right now I must dash. Thanks for your time and thoughts everybody.

richardscourtney
January 17, 2013 1:32 pm

joeldshore:
You have sunk to the level of being silly.
The NOAA statement is clear and makes sense.
Your misinterpretation of it makes no sense and disagrees with what was written.
Please try to think about it. The NOAA statement says,
“The simulations rule out (at the 95% level) zero trends for intervals of 15 years or more, suggesting that an observed absence of warming of this duration is needed to create a discrepancy with the expected present-day warming rate.”
How could
“observed absence of warming of this duration”
create
“a discrepancy with the expected present-day warming rate”
if the simulations show “observed absence of warming of this duration” during 1 in 20 simulations?
But there is no problem understanding that if you accept the statement literally;
i.e. the 95% confidence applies to “zero trends” and not “intervals of 15 years or more”.
If they had meant (at the 95% level) to apply to the “intervals” then they would have applied it to the “intervals”: they did not, they put “(at the 95% level)” adjacent to “zero trends”.
And that is all I will say on the matter because I agree with what the statement says and you choose to “interpret” it in a manner which makes no sense and disagrees with what was written.
Richard

joeldshore
January 17, 2013 6:11 pm

richardscourtney says:

How could
“observed absence of warming of this duration”
create
“a discrepancy with the expected present-day warming rate”
if the simulations show “observed absence of warming of this duration” during 1 in 20 simulations?

Ah…Because that is what the 95% confidence level means. It means that the observed thing happens infrequently enough that it is unlikely to occur by chance. You could make the same argument about your interpretation…If the 95% confidence interval does not include a trend of zero, that does not strictly mean that it is impossible that the underlying trend is zero. It just means it is unlikely…That there is less than 5% chance that the underlying trend is zero when the observed trend is what it is.

But there is no problem understanding that if you accept the statement literally;
i.e. the 95% confidence applies to “zero trends” and not “intervals of 15 years or more”.
If they had meant (at the 95% level) to apply to the “intervals” then they would have applied it to the “intervals”: they did not, they put “(at the 95% level)” adjacent to “zero trends”.

You’ve created and then demolished a strawman. Nobody is arguing that their parenthetical statement modifies “intervals”. It modifies “rule out”, which just happens to be the words right before the parenthetical expression. Its use is exactly the same as in this sentence, “When the black line descends below the red horizontal line at 1.0 on the vertical axis, people sometimes say that the Higgs Boson has been ruled out at 95% confidence level at this mass” in this blog post: http://blog.vixra.org/2011/12/13/the-higgs-boson-live-from-cern/

joeldshore
January 17, 2013 6:27 pm

Graham W says:

Re your point at the end of your first paragraph, I assumed that the way the error bars for a trend in empirical data were calculated was a standard procedure, so wouldn’t need to be explicitly stated.

If the data are uncorrelated, so each data point can be considered an observation that is completely independent of the previous one, then the procedure is standard. However, temperature data are strongly correlated…If last month was unusually warm (due, e.g., to an El Nino) then this month is likely to be so too. In that case the number of “degrees of freedom” is less than what you would calculate based on the number of data points, and that increases the uncertainty in the trend over what you would expect if you assumed the data to be uncorrelated. Without knowing exactly how the data are correlated, one doesn’t know exactly how the uncertainty is affected…So, you have to use some model for the correlations that hopefully represents how the correlations actually behave. I am not exactly sure how much additional uncertainty this introduces into the trend estimate, but my impression is that it is at least enough that people argue about which model best represents the correlations.
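The effect of correlation on the error bars can be sketched numerically. This is a hypothetical illustration using a simple lag-1 effective-sample-size correction on synthetic AR(1) noise; the phi and sigma values are assumptions, and real analyses argue over more sophisticated correlation models than this one.

```python
import numpy as np

rng = np.random.default_rng(1)

# 15 years of monthly anomalies: pure AR(1) noise, no underlying trend.
# phi and sigma are illustrative, not fitted to any real dataset.
n, phi, sigma = 180, 0.8, 0.12
eps = rng.normal(0.0, sigma, n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi * y[t - 1] + eps[t]

x = np.arange(n)
slope, intercept = np.polyfit(x, y, 1)
resid = y - (slope * x + intercept)

# Naive standard error of the slope, treating every month as independent:
se_naive = np.sqrt(np.sum(resid**2) / (n - 2) / np.sum((x - x.mean())**2))

# Lag-1 autocorrelation of the residuals gives an effective sample size
# n_eff = n * (1 - r1) / (1 + r1), which inflates the uncertainty:
r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
n_eff = n * (1 - r1) / (1 + r1)
se_adjusted = se_naive * np.sqrt((n - 2) / (n_eff - 2))

print(se_naive, se_adjusted, se_adjusted / se_naive)
```

With strong month-to-month correlation, the adjusted error bar comes out substantially larger than the naive one, which is exactly why the choice of correlation model matters so much to the size of the quoted trend uncertainty.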

But for the models it might require explanation, but then if you take Richard’s and Monckton’s and Gail’s interpretation, then it is implied that the 95% level does not apply to the models anyway, so no explanation required.

No…You have it backwards. For the climate models, one can figure out the distribution of trends just by running the models again and again (a technique called “Monte Carlo” in many contexts) and seeing what distribution of trends you get. For the empirical data you can’t do that. So, it is actually more difficult to get the uncertainty in trends for the data than for the climate models.
Note that I have used the term “model” in two contexts. In answering your previous point, when I used “model”, I was talking not about climate models but about having to have a model of the correlations in the actual data. If you don’t understand how the data are correlated, then you can’t get the uncertainty in the trend line. So, in other words, it is impossible to get the uncertainty in the trend of empirical data without a model (of the correlations). Whereas, for the climate models themselves, you don’t need a model of the correlations, because you can always use the “brute force” technique of just running the model many times (or running many different models, or some combination of these), as they did. Or, in other words, there is only one Earth to experiment on in the real world, but there are infinitely many to experiment on in the climate-model world.
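The “many model worlds” idea can be illustrated with a toy ensemble (again with assumed, not real, numbers): generate many runs that share the same forced trend but differ in their realization of internal variability, and read the trend distribution straight off the sample, with no correlation model required.

```python
import numpy as np

rng = np.random.default_rng(2)

def one_run(n=180, trend=0.2, phi=0.8, sigma=0.12):
    """One toy 'model world': forced trend (C/decade) plus AR(1) variability.
    All parameter values here are illustrative assumptions."""
    noise = np.zeros(n)
    eps = rng.normal(0.0, sigma, n)
    for t in range(1, n):
        noise[t] = phi * noise[t - 1] + eps[t]
    return trend / 120.0 * np.arange(n) + noise

# Brute force ("Monte Carlo"): the ensemble of runs IS the distribution,
# so percentiles come straight from the sample -- no correlation model needed.
x = np.arange(180)
slopes = np.array([np.polyfit(x, one_run(), 1)[0] * 120.0  # C/decade
                   for _ in range(2000)])

print(np.percentile(slopes, 5), slopes.mean(), np.percentile(slopes, 95))
```

If the 5th percentile of the ensemble trends sits above zero, a zero trend is “ruled out at the 95% level” in the counting sense discussed earlier in the thread. For the one real Earth there is only a single realization, which is why the data side of the comparison has to fall back on an assumed correlation model.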

richardscourtney
January 18, 2013 2:15 am

joeldshore:
I have read your post addressed to me at January 17, 2013 at 6:11 pm.
It only says you are choosing to be an idiot and nothing else.
Richard