Has the Met Office committed fraud?

Guest post by Christopher Monckton of Brenchley

The truth is out. No amount of hand-wringing or numerical prestidigitation on the part of the usual suspects can any longer conceal from the world the fact that global warming has been statistically indistinguishable from zero for at least 18 years. The wretched models did not predict that.

When I told the December 2012 UN climate summit in Doha that there had been no warming for at least 16 years, the furious delegates howled me down.

The UN later edited the videotape to remove the howling. The delegates were furious not because I was speaking out of turn (they did not know that at the time) but because the truth was inconvenient.

The Guardian carried a sneer-story about my intervention. When a reader sent in a politely-worded comment to the effect that, objectively speaking, it was true that over the relevant period the least-squares linear-regression trend on the Hadley/CRU global surface temperature data was as near flat as makes no statistical difference, within two minutes The Guardian deleted the comment from its misleadingly-titled “Comment Is Free” website.

The determined reader resubmitted the comment. This time it was gone in 45 seconds, and – what is more – the stub indicating that he had commented disappeared as well. Just 28 years after George Orwell’s 1984, the hard Left are still dumping the inconvenient truth down the memory-hole.

The Met Office, as WattsUpWithThat revealed recently, has noticeably downshifted its lurid warming prediction for the rest of this decade.

When it predicted a “barbecue summer” (wrong: that summer was exceptionally cold and wet), and then a record warm winter (wrong: that was the second-coldest December in central England since records began in 1659); and then, this spring, a record dry summer for the UK (wrong again: 2012 proved to be the second-wettest on record: not for nothing is it now known as the “Wet Office”), it trumpeted its predictions of impending global-warming-driven climate disaster from the rooftops.

And the scientifically-illiterate politicians threw money at it.

If the Met Office’s new prediction is right, by 2017 the global warming rate will have been statistically indistinguishable from zero for two full decades.

So, did the bureaucrats call a giant press conference to announce the good news? Er, no. They put up their new prediction on an obscure corner of their website, on Christmas Day, and hoped that everyone would be too full of Christmas cheer to notice.

That raises – again – a question that Britain can no longer afford to ignore. Has the Wet Office committed serious fraud against taxpayers?

Let us examine just one disfiguring episode. When David Rose of the Mail on Sunday wrote two pieces last year, several months apart, saying there had been no global warming for 15 years, the Met Office responded to each article with Met Office in the Media blog postings that, between them, made the following assertions:

1. “… [F]or Mr. Rose to suggest that the latest global temperatures available show no warming in the last 15 years is entirely misleading.”

2. “What is absolutely clear is that we have continued to see a trend of warming …”.

3. “The linear trend from August 1997 (in the middle of an exceptionally strong El Niño) to August 2012 (coming at the tail end of a double-dip La Niña) is about 0.03 C°/decade …”.

4. “Each of the top ten warmest years have occurred in the last decade.”

5. “The models exhibit large variations in the rate of warming … so … such a period [15 years without warming] is not unexpected. It is not uncommon in the simulations for these periods to last up to 15 years, but longer periods are unlikely.”

Each of the assertions enumerated above was calculated to deceive. Each assertion is a lie. It is a lie told for financial advantage. M’lud, let me take each assertion in turn and briefly outline the evidence.

1. The assertion that Mr Rose was “entirely misleading” to say there had been no global warming for 15 years is not just entirely misleading: it is entirely false. The least-squares linear-regression trend on the global temperature data is statistically indistinguishable from zero for 18 years (HadCRUt4), or 19 years (HadCRUt3), or even 23 years (RSS).
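What "statistically indistinguishable from zero" means in practice is that the 95% confidence interval around the least-squares slope includes zero. A minimal Python sketch of that computation follows; it uses synthetic monthly anomalies with assumed values, not the actual HadCRUt or RSS series, and the naive standard error below also ignores the autocorrelation real temperature data exhibit, so treat it purely as an illustration of the test, not a reproduction of anyone's figures:

```python
# Illustrative sketch only: synthetic 18-year monthly series, not real data.
import numpy as np

rng = np.random.default_rng(0)
n_months = 18 * 12
t = np.arange(n_months) / 12.0                      # time in years
# Assumed values: a tiny 0.002 degC/yr trend buried in 0.12 degC noise
y = 0.002 * t + rng.normal(0.0, 0.12, n_months)

# Least-squares slope and its (naive) standard error
X = np.column_stack([np.ones(n_months), t])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
slope = beta[1]
resid = y - X @ beta
s2 = resid @ resid / (n_months - 2)                 # residual variance
se = np.sqrt(s2 / np.sum((t - t.mean()) ** 2))      # standard error of slope

ci_low, ci_high = slope - 1.96 * se, slope + 1.96 * se
print(f"trend = {slope:.4f} +/- {1.96 * se:.4f} degC/yr (95% CI)")
print("CI includes zero:", ci_low <= 0.0 <= ci_high)
```

A trend passes the "indistinguishable from zero" test whenever that interval straddles zero; with real monthly data the error bars widen further once autocorrelation is accounted for, which is why short-window trends so rarely reach significance.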

2. What is absolutely clear is that the assertion that “it is absolutely clear that we have continued to see a trend of warming” is absolutely, clearly false. The assertion is timescale-dependent. The Met Office justified it by noting that each of the last n decades was warmer than the decade that preceded it. A simple heuristic will demonstrate the dishonesty of this argument. Take a two-decade period. In each of years 1-2, the world warms by 0.05 Cº. In each of years 3-20, the world does not warm at all. Sure, the second decade will be warmer than the first. But global warming will still have stopped for 18 years. By making comparisons on timescales longer than the 18 years without warming, what we are seeing is long-past warming, not a continuing “trend of warming”.
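The two-decade heuristic in the paragraph above is easy to check numerically; the figures below are the ones given in the text (0.05 Cº of warming in each of years 1 and 2, nothing thereafter):

```python
# Heuristic from the text: warming only in years 1-2, flat for years 3-20.
import numpy as np

temps = np.array([0.05, 0.10] + [0.10] * 18)   # one anomaly per year, 20 years

decade1_mean = temps[:10].mean()               # 0.095
decade2_mean = temps[10:].mean()               # 0.100
last18 = temps[2:]                             # years 3-20
trend = np.polyfit(np.arange(len(last18)), last18, 1)[0]

print(decade2_mean > decade1_mean)             # True: decade 2 is warmer
print(abs(trend) < 1e-12)                      # True: yet the 18-year trend is zero
```

So a warmer-decade-than-the-last comparison and a flat multi-year trend can both be true at once, which is the point of the heuristic.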

3. In August 1997 global temperatures were not “in the middle of an exceptionally strong El Niño”: they were in transition, about halfway between La Niña (cooler than normal) and El Niño (warmer than normal) conditions. Likewise, temperatures in August 2012 were not “at the tail-end of a double-dip La Niña”: they were plainly again in transition between the La Niña of 2011/12 and the El Niño due in a year or two.

4. The Met Office’s assertion that each of the past ten years has been in the top ten is dataset-dependent. On most datasets, 1998 was the warmest year on the global instrumental record (which only began 160-odd years ago). Since 1998 falls outside the past ten years, on these datasets it cannot have been possible for each of the last ten years to be among the ten warmest on record.

5. Finally, the Met Office shoots itself in the foot by implicitly admitting that there has been a 15-year period without warming, saying that such a period is “not unexpected”. Yet that period was not “expected” by any of the dozens of lavishly-funded computer models that have been enriching their operators – including the Met Office, whose new computer cost gazillions and has the carbon footprint of a small town every time it is switched on. NOAA’s State of the Climate report in 2008 said this: “Near-zero and even negative trends are common for intervals of a decade or less in the simulations, due to the model’s internal climate variability. The simulations rule out (at the 95% level) zero trends for intervals of 15 years or more, suggesting that an observed absence of warming of this duration is needed to create a discrepancy with the expected present-day warming rate.”
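The NOAA test quoted above amounts to comparing an observed trend against the spread of trends the simulations produce. Here is a sketch of that style of check with an entirely invented model-trend distribution; the mean and spread below are assumptions for illustration, not NOAA's numbers:

```python
# Hypothetical model-trend distribution: NOT NOAA's actual simulation output.
import numpy as np

rng = np.random.default_rng(1)
model_trends = rng.normal(0.20, 0.10, 5000)    # assumed 15-yr trends, degC/decade

lo, hi = np.percentile(model_trends, [2.5, 97.5])   # central 95% of model trends
observed = 0.03                                # the 0.03 C/decade figure quoted earlier

print(f"models' 95% range: [{lo:.2f}, {hi:.2f}] degC/decade")
print("observed trend ruled out at the 95% level:", not (lo <= observed <= hi))
```

Under the report's criterion, an observed trend is "ruled out" only when it falls outside the simulations' central 95% range, so everything turns on the assumed spread of the simulated trends.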

In short, the Met Office lied repeatedly to do down a journalist who had uttered the inconvenient truth that there had been no global warming for at least 15 years.

The Fraud Act 2006 defines the serious imprisonable offence of fraud as dishonestly making an express or implied representation that the offender knows is or may be untrue or misleading, intending to gain money or other property (here, grant funding) or to cause loss or risk of loss to another (£30 billion a year of unnecessary “green” taxes, fees and charges to the British public).

So I reported the Met Office to the Serious Fraud Office, which has a specific remit to deal with frauds that involve large sums (here, tens of billions) and organized crime (here, that appreciable fraction of the academic and scientific community that has been telling similar porkies).

Of course, there is one law for us (do the crime, do the time) and quite another for Them (do the crime, make a mint, have a Nobel Peace Prize). The Serious Fraud Office is not interested in investigating Serious Fraud – not if it might involve a publicly-funded body making up stuff to please the corrupt politicians who pay not only its own salaries but also those of the Serious Fraud Office.

The Met Office’s fraud will not be investigated. “Why not try your local police?” said the Serious Fraud Office.

So here is my question. In the specific instance I have sketched out above, where a journalist was publicly named and wrongly shamed by a powerful taxpayer-funded official body telling lies, has that body committed a serious fraud that forms part of a pattern of connected frauds right across the governing class worldwide?

Or am I going too far in calling a fraud a fraud?

Graham W
January 18, 2013 11:22 am

Joel: Thanks for the detailed explanation. However there are many temperature trends reported in the 2008 NOAA state of the climate report as far as I can see, with their error bars reported (just look at the pages preceding the statement we’re discussing) and there is no reference accompanying them to explain which model was used to calculate them in any instance. Where would you normally find a statement describing the model used to calculate the confidence cone for a trend?

joeldshore
January 18, 2013 12:00 pm

Graham: I don’t know if there is anywhere in that report where they say how those uncertainties in the trends were calculated (and even if they are 1 sigma or 2 sigma uncertainties). One would hope…But, on the other hand, it was not as important to be specific in that case as it was for specifying their criterion for when the trend could be considered to have a statistically-significant deviation from what the models predict.
A discussion of the issues of calculating the uncertainties in the trends of the empirical temperature record is given in the Appendix of this paper: http://iopscience.iop.org/1748-9326/6/4/044022/pdf/1748-9326_6_4_044022.pdf

mpainter
January 18, 2013 12:20 pm

Everyone: From Monckton’s post, quoting the Met:
1. “… [F]or Mr. Rose to suggest that the latest global temperatures available show no warming in the last 15 years is entirely misleading.”
2. “What is absolutely clear is that we have continued to see a trend of warming …”.
=============================
These statements are unequivocal, and they are unequivocally wrong. No amount of argument can reverse the cooling trend of the last ten years. No amount of argument can turn the flat trend of the past sixteen years into a warming trend.

Graham W
January 18, 2013 4:36 pm

Joel, it will take me a while to read through that link and digest everything written. Thanks for the info! All I can say is, regarding this ongoing debate – like it or not, the NOAA report doesn’t seem to define how the uncertainty in the trends in empirical data is calculated, nor does it specifically state how they get the uncertainty that the models produce. You may think that you can infer how it must be done (with the models – predictive computer models that is) but they have not stated it directly in the text. Therefore all that is left for anyone to do is to take the statement as written. In doing so I can only conclude that the most logical interpretation is the one that excludes the oxymoron, and the one which doesn’t imply the requirement to observe a trend without its error bars (the zero or negative trend as discussed in previous comments). There doesn’t appear to be any direct evidence within the report to support your interpretation.
I conclude, with all due respect, that since the interpretation of Richard and Monckton et al requires no additional evidence (they are arguing only from what is directly expressed in the report), and since yours requires additional evidence that is lacking, that their argument is the more compelling.

Techno
January 19, 2013 4:49 pm

Has anyone here actually thought to check the claim that the met said “Each of the top ten warmest years have occurred in the last decade”? Could I really be the only person who’s thought to check, in the week that this post has been up?
Because that’s not what the Met’s blog site says. It says “EIGHT of the top ten warmest years have occurred in the last decade”.
So … did CM just misread it? Or did the met change the text? It’s been up a while, since October. The claim of 8 years (previously 7) is a fairly common one, but nobody that I can find (online) has ever claimed that all 10 of the last years are in the top 10. If the Met DID post that, I’d be inclined to think it was a typo. But right now, that’s not what the blog site says at all.
Sheesh. Awkward.

Techno
January 19, 2013 5:52 pm

According to the google cache, the met’s blog was saying “eight of the top ten …” on the 10th of January.

joeldshore
January 20, 2013 4:23 pm

Graham: Thanks for the message. But, with all due respect, your statements for why you have concluded what you have might be more rationalizations on your part to believe what you want to believe. Let’s look at what you say in a little more detail:

You may think that you can infer how it must be done (with the models – predictive computer models that is) but they have not stated it directly in the text. Therefore all that is left for anyone to do is to take the statement as written.

I am the only one who has at least tried to infer how they concluded what they did. Monckton and Richard haven’t, because frankly it is unlikely that they would be able to provide any such explanation.

In doing so I can only conclude that the most logical interpretation is the one that excludes the oxymoron,

As I showed you, it is not an oxymoron. The idea of ruling out something at some level of confidence is a statement made about lots of things relying on statistical testing, including the Higgs Boson as the quote I gave you showed. Ruling out something of a statistical nature with 100% confidence is impossible, which is why Richard and Monckton’s interpretation does not make any sense whatsoever.

and the one which doesn’t imply the requirement to observe a trend without its error bars (the zero or negative trend as discussed in previous comments).

The error bars issue is a red herring. It is common practice to compare an observed trend with the distribution of trends observed by the models since it is easier to determine the distribution for the models than the real world. What is absolutely not common practice because it is ridiculous is to say that something is completely ruled out rather than saying it is ruled out with some degree of confidence. (Not to say that people will not be sloppy sometimes…but even if they are, that does not mean that when they said “ruled out” that they meant with 100% certainty because that is impossible except I suppose for very special cases like rolling a “7” on a 6-sided die.)

I conclude, with all due respect, that since the interpretation of Richard and Monckton et al requires no additional evidence (they are arguing only from what is directly expressed in the report)

No they aren’t. They are ignoring all of the surrounding discussion about models and correcting for ENSO. They are assuming that a parenthetical statement modifies what comes after it rather than before it even though it is unlikely anybody would write it that way if they meant to modify the word “trend”. And, they can’t even explain how the authors could have possibly made that statement on the basis of the work that they did that they describe. Their interpretation is utterly without foundation…and Richard hasn’t even made any serious attempt to logically defend it, preferring instead to attack strawman arguments or ad hominem attacks on me. Can you honestly look at the last couple of posts that Richard has written and take anything he says seriously?

Graham W
January 20, 2013 5:48 pm

Hi Joel. In the section of the report we’re discussing, in the paragraphs before the infamous statement of controversy, they are mentioning that the models project a rate of continued temp increase of 0.2C per decade, which is also clarified further on when they state that the models show on average 2C warming for the 21st century. This leads directly into the debated quote.
Using the Skeptical Science trend calculator, you get (at 2008) a trend for the preceding 15 years (so 1993 – 2008) of 0.231 + or – 0.143 (HADCRUT 4). Compare that to the trend you get now, in the last fifteen years: 0.043 + or – 0.140. My point is, a lot has changed in the last five years, since this report came out. In 2008 the data for the preceding 15 years showed a trend over and above the models projections (0.2C per decade) at the 95% level. Now, only five years later, the trend is only 0.043, and due to the error bars, statistically indistinguishable from zero.
What I’m suggesting is, at the time it may not have seemed so ridiculous as you find it now to make their “falsification criterion”, or whatever you want to call it, be a trend that was statistically indistinguishable from zero. If you put it into that context and include those previous paragraphs when reading through to the end of the quote we’ve been discussing it does seem a lot clearer in my opinion.

joeldshore
January 20, 2013 7:59 pm

Graham: Yes, it would have been ridiculous. Because the notion that a trend of 0.13 +/- 0.14 C per decade would somehow be completely incompatible with the model predictions of 0.2 C per decade is patently ridiculous. I don’t care what five more years of data shows…It does not change the ridiculousness of such a claim, which is independent of the data.
Furthermore, five more years of data does not change the fact that the authors inserted the parenthetical phrase after “rule out”, not after “zero trends”. And, it does not change the fact that Richard and Monckton have yet to come up with a credible picture of how their interpretation of the claim made in that report could plausibly follow from any of the discussion in the report.
[And, by the way, the fact that 5 years of data has changed the entire storyline of the 15 year trend from being on the high side of the predictions of the models to the models supposedly absolutely ruling out the result ought to set off alarm bells for anybody familiar enough with data analysis to know that 5 years of data in a system with considerable variability is never going to be enough to make that dramatic a change in the storyline. It just ain’t going to happen.]

Graham W
January 21, 2013 12:39 am

I’m not suggesting that the trend for the last five years has anything to do with anything. Yes of course there is too much noise in the data for a five year period to be meaningful. What I’m saying is that within the last five years the trends for preceding 15 year periods have utterly changed from significantly as projected to less than expected.
At the time of the report they were acknowledging that the rate of warming has decreased over the preceding ten years. This is acknowledged in the opening paragraphs of the section we’re discussing. Hence the need to define the criterion for falsification. It reads like they were confident enough in the projection of a steady increase of 0.2C per decade and hence an overall increase of 2C over the whole 21st century that a fifteen year period where the trend dropped below the level of noise for that period was a significant deviation from their projection.
It was an over-confident and short-sighted statement but it appears to be what they were saying, nonetheless. If the trend drops below the noise over 15 years (and in fact now 18 years in the same data set) then it has a long way to go to “catch up” with where it should be to get back to the projected 2C warming for the decade. When you look at it like that it’s not really so silly as you claim. 15 years is 15% of a century after all.

Graham W
January 21, 2013 12:46 am

Should read as 2C for the century in that last section. Sorry, in a hurry.

Graham W
January 21, 2013 5:09 am

P.S: With this quote here:
“Because the notion that a trend of 0.13 +/- 0.14 C per decade would somehow completely incompatible with the model predictions of 0.2 C per decade is patently ridiculous.”
I’m not sure where you’re getting the idea that the trend could be 0.13 + 0.14 i.e. 0.27C per decade and still be statistically indistinguishable from zero. I think this is the nail-on-the-head moment of where you may be going wrong. The fact that you can make the statement “the trend is statistically indistinguishable from zero” means that it cannot be distinguished from the underlying noise in the data. Since earlier on in our discussion you yourself said:
“15-year trends of temperature data [tend] to have an uncertainty of about 0.14 C per decade”
Then it follows that when the trend is said to be statistically indistinguishable from zero it cannot possibly be greater than this 0.14C per decade (or whatever the exact figure is for the uncertainty over the time period you’re analysing); since if it were then the trend would be “visible” from the noise and you would in fact register there is a positive trend.
With this understood then my and everybody else’s argument becomes clearer still.

joeldshore
January 21, 2013 7:06 am

Graham W says:

If the trend drops below the noise over 15 years (and in fact now 18 years in the same data set) then it has a long way to go to “catch up” with where it should be to get back to the projected 2C warming for the decade.

It does not have a long way to go to catch up. The trend over the last 38 years (from 1975) follows almost exactly the same line as the trend from 1975 to 1997: http://www.woodfortrees.org/plot/hadcrut4gl/from:1975/plot/hadcrut4gl/from:1975/trend/plot/hadcrut4gl/from:1975/to:1997.5/plot/hadcrut4gl/from:1975/to:1997.5/trend [And, note that now that we have been around ENSO-neutral-conditions, the temperatures are back very close to the trendline.]

I’m not sure where you’re getting the idea that the trend could be 0.13 + 0.14 i.e. 0.27C per decade and still be statistically indistinguishable from zero.

A trend of 0.13 +/- 0.14 C per decade is statistically-indistinguishable from a zero trend at the 95% confidence level since 0.13 – 0.14 = -0.01: The 95% confidence interval includes zero. It is also statistically-indistinguishable from a trend of, say, 0.26 C per decade.
Hence my point: A 15-year trend that is statistically-indistinguishable from zero (at the 95% confidence level) can also be statistically-indistinguishable from trends that are even larger than what the models on average predict. Hence, it is preposterous to suggest that a trend that is statistically-indistinguishable from zero over 15 years is ruled out by the models.
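The interval arithmetic in that argument can be verified in a few lines; the numbers are taken from the 0.13 +/- 0.14 example above (a standalone sketch, not anyone's actual analysis code):

```python
# Check: a 0.13 +/- 0.14 degC/decade trend (95% CI) is consistent both with
# zero and with values above the 0.2 degC/decade model-mean prediction.
trend, half_width = 0.13, 0.14
ci = (trend - half_width, trend + half_width)   # approx (-0.01, 0.27)

def consistent_with(value, interval):
    """True if `value` lies inside the confidence interval."""
    return interval[0] <= value <= interval[1]

print(consistent_with(0.00, ci))   # True: indistinguishable from zero
print(consistent_with(0.26, ci))   # True: also consistent with 0.26/decade
print(consistent_with(0.30, ci))   # False: outside the interval
```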

With this understood then my and everybody else’s argument becomes clearer still.

No…You have just managed to run yourself in circles.

Graham W
January 21, 2013 9:19 am

No Joel I think you’re missing the point of what I’m saying. OK let’s take your example of 0.13 +/- 0.14. Yes it is statistically indistinguishable from zero and yes that means exactly what I said it means. You can’t consider the trends above the level of noise over the 15 years, 0.14 C per decade, as being part of the 95% confidence cone. Since if it was any one of the trends greater than 0.14 and up to 0.27 C per decade then it would no longer be statistically indistinguishable from zero, it would be a positive trend with 95% confidence.
This is why for example with the trend I pointed out that existed in 2008 for the preceding 15 years, it is clearly a positive trend since even deducting the full amount of the negative error bar this potential trend is still above zero.
It is you that has been arguing in circles and you have been from the very beginning.

January 21, 2013 10:14 am

Graham W:
With genuine respect to you, I think you are wasting your time.
Joel is clearly spouting nonsense. He often does. But he has convinced himself that the nonsense is reality. He often does that, too.
Many, many past examples show there is nothing anybody can do to correct his misunderstanding when – as in this case – he convinces himself of something which is obviously plain wrong. He builds a shield in his mind which protects his mistaken notion from reason, logic and evidence. And when he has done that then all one can do is to benefit onlookers by explaining the truth – as you have repeatedly in this case – and to leave him the ‘last word’. Others will assess his nonsense for themselves.
Richard

Graham W
January 21, 2013 10:57 am

Richard, thanks for the advice. Given that you have been contributing to this blog far longer than I have, and from what I’ve seen just from this relatively short debate with Joel, I’m inclined to agree with you. With respect to Joel, who is clearly an intelligent man and educated to a much higher level than myself, the only possible reason I can suggest for his continued ignorance of reason and logic is that he has been brainwashed in some way. I’m being genuine here, I’m not suggesting any “crazy conspiracy” but there is literally no other explanation that I can think of that makes any sense.
Anyway, there is little point taking this any further as I don’t think I can explain anything clearer than I have done, and others will decide for themselves, like you say, so that’s that for me on this issue…and I really do mean it this time!

January 21, 2013 11:04 am

richardcourtney,
You have joelshore pegged. And you said it better than I could have.

joeldshore
January 21, 2013 11:30 am

Graham W says:

You can’t consider the trends above the level of noise over the 15 years, 0.14 C per decade, as being part of the 95% confidence cone. Since if it was any one of the trends greater than 0.14 and up to 0.27 C per decade then it would no longer be statistically indistinguishable from zero, it would be a positive trend with 95% confidence.

That makes no sense. If I measure an empirical trend of 0.13 +/- 0.14 C per decade where the error bars are the 95% confidence (2-sigma) error bars, then this measurement tells me that with 95% confidence, the underlying trend in the data is somewhere between -0.01 and +0.27 C per decade. By your, Richard, and Monckton’s bogus interpretation of the NOAA report, this means that this result has been “ruled out” by the models because it is statistically-indistinguishable from zero at the 95% confidence level.
However, it is also statistically-indistinguishable from a trend of, say, 0.25 C per decade, which is larger than the models predict. Hence, you are telling me that a result that is compatible with such a trend is nonetheless ruled out by the models.
As for the comments from the peanut gallery that you have echoed, you should realize that this is a very insular place. Richard Courtney and D B Stealey may represent a widespread opinion here and myself a minority opinion…but if we had this discussion in the scientific community, the situation would be entirely reversed. Basically, Courtney and Monckton and Stealey have either consciously or unconsciously decided that they don’t care what scientists think about what they say and are focusing their attention (either consciously or unconsciously) on confusing a wider audience. As such, they are as much as admitting that they are never going to win over the scientific community…which indeed they won’t with such sophistry as presented by them here.
Basically, the fact that the “skeptic community” is unwilling to listen to scientific critiques of their nonsense and just keeps repeating the same tired arguments shows how little it has to do with real science.

January 21, 2013 12:12 pm

Graham W:
Following my post addressed to you at January 21, 2013 at 10:57 am, joeldshore provided his post at January 21, 2013 at 11:30 am.
Darn! I’m now in the position of ‘I told you so’ and nobody gets forgiven for that.
Richard

Graham W
January 23, 2013 6:00 am

Actually I will just say this, though it’s not in response to anyone in particular, it’s just because I think it’s an important point generally:
Joel D Shore states that with an observed result of 0.13 C per decade and an upper and lower limit to the 95% confidence interval of 0.14 C per decade, the total range of potential “trends” is -0.01 to 0.27 C per decade. He then states that since the range of potential “trends” includes zero the result is statistically indistinguishable from zero (N.B: nothing below the level of noise over the period is actually a trend, it is in fact the statistical absence of a trend, hence all the quotation marks I’m using). However, by stating that the range of potential “trends” includes trends from 0.14 – 0.27 C per decade, the observed result of 0.13 C per decade is also statistically significant under this (false) interpretation; since any potential trend above 0.14 C per decade would be discernible from the noise in the data. So he is actually stating that the observed result of 0.13 C per decade is both statistically indistinguishable from zero and statistically distinguishable from zero simultaneously, which is clearly impossible. In fact it is far simpler (and correct) to say that since the observed result of 0.13 C per decade is less than the noise in the data it is not a trend at all and therefore has no accompanying error bars.
The above describes the mistake that Joel and everyone who proposes similar ideas to him appear to be making in their analysis of trends. Whereas, in fact, the entire purpose of analysis of this kind is to determine whether or not there is a statistical trend in the data examined in the first place. If the result of the analysis is above the level of noise, i.e. greater than 0.14 C per decade, then you can say that there is a trend. If the observed trend is a positive value greater than 0.14 C per decade then it is a positive trend and if it is of an equivalent negative value then it is a negative trend. For example, if the observed trend is -0.15 C per decade then the total range of potential trends is from -0.01 to -0.29 C per decade, none of the potential trends include zero, and hence the trend is statistically distinguishable from zero (statistically significant).
One possible source for this confusion that I can think of which might apply to Joel is the Skeptical Science Trend Calculator (and those who were wondering why I’m posting after saying that I would not post again may now realise the point of this comment). This tool always shows error bars even if the result is statistically indistinguishable from zero. Is this the mechanism by which Joel and others are being “brainwashed” as I suggested earlier?

joeldshore
January 23, 2013 7:53 am

Graham,
I appreciate the parody in your last post. It is indeed amusing to contemplate how “AGW skeptics” might create a whole new way of doing statistical analysis solely to get the result that they desire! But frankly, I think it is a little over the top…Surely even such folks would not go that far into nonsense just because they want a certain answer.
[Hint: If you try to write up your novel ideas on statistical analysis for publication, you may find a few statisticians who balk at the idea that a trend of 0.139 +/- 0.140 is simply zero while a trend of 0.141 +/- 0.140 is compatible with any trend from 0.001 to 0.281.]

Graham W
January 23, 2013 10:04 am

No Joel the “trend” is not zero, it is never and can never be exactly zero. The “trend” could be anything up to the level of the noise in the data but never beyond it. That’s all you can say when the “trend” (result) is statistically indistinguishable from zero.
Let’s try again: The existence of a trend is entirely dependent on it being discernible from the noise in the data.
This is not a novel interpretation of statistical analysis it is the correct one. Thanks for mocking what you seem incapable of comprehending.

joeldshore
January 23, 2013 11:31 am

Okay, I stand corrected…And will amend my statement: If you try to write up your novel ideas on statistical analysis for publication, you may find a few statisticians who balk at the idea that a trend of 0.139 +/- 0.140 is simply really just 0 +/- 0.140 while a trend of 0.141 +/- 0.140 is compatible with any trend from 0.001 to 0.281. So, basically, there is this bizarre jump that occurs in the upper bound as you pass through a trend of 0.140.
And, there is this bizarre special significance attached to a trend of zero, whereby trends within the noise of it jump to being zero plus or minus the noise. And, why this special case? Why not say that a trend whose uncertainty includes the long term trend that one has seen can be considered to simply be at that long term trend to within noise? Oh, I know why not…Because that doesn’t lead to the desired answer!
Science is not built-up by affording some special privilege to the answer that you want to get.

Graham W
January 23, 2013 12:35 pm

Oh Joel, you are a funny one. Why would that be the desired answer? Surely if I were arguing as an “AGW skeptic” I would say “a result that is statistically indistinguishable from zero is exactly that – zero”…but I’m not saying that. So get over your preconceptions of what you think my argument is and try reading what I’m writing.
With your example you believe you have observed a “trend” of 0.139 +/- 0.140, but you are still missing the crucial point of what I’m saying. Your observed “trend” is simply not a trend, because it is below the level of the noise in the data you are analysing. Therefore it cannot be discerned as a trend from the background noise in the data, statistically speaking.
There can only be said to be “a trend” once the observed trend is greater than the noise in the data. At that point you can recognise the trend as a trend, i.e. a trend with confidence intervals – which is the same thing. Below the noise in the data it is NOT a trend.
Whereas with the way you look at it a “trend” which is actually not discernible from the background noise has error bars which include the possibility of the trend being statistically significant! So a trend that is statistically indistinguishable from zero suddenly has this amazing potential to be as high as 0.27 C per decade. So in your opinion there is no difference between zero and 0.27 C per decade. Great – then statistical analysis is fundamentally useless.

January 23, 2013 2:44 pm

Joeldshore and Graham W:
It seems you may be talking past each other and, therefore, I am writing in hope of breaking the impasse. It seems the confusion arises from the phrase “indistinguishable from zero”.
I offer the following.
A measured datum can have a determined statistical significance.
95% confidence limits indicate odds of 19:1 (i.e. a 95% probability) that the determined value lies within the stated range of the limits. This is true for any statistically determined datum; e.g. a sample mean, or a trend, or …
For example, a random sample of pebbles may be collected from a beach, each weighed and their total weight divided by the number of weighed samples. The resulting datum is the sample mean for the pebbles. But it is very unlikely that any individual pebble has a weight equal to the sample mean. However, the sample standard deviation at a given confidence will provide a range of weight values within which (at the given confidence) any one of the pebbles will probably be.
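For what it is worth, the pebble example can be sketched numerically. The weights below are invented (drawn from an assumed normal distribution), and the 1.96 multiplier is the usual normal approximation for a 95% range:

```python
import math
import random

# Hypothetical pebble weights in grams -- illustrative data only,
# drawn from an assumed normal distribution for the sake of the sketch.
random.seed(42)
weights = [random.gauss(50.0, 8.0) for _ in range(30)]

n = len(weights)
mean = sum(weights) / n

# Sample standard deviation (with Bessel's correction).
sd = math.sqrt(sum((w - mean) ** 2 for w in weights) / (n - 1))

# Approximate 95% range for an individual pebble (normal assumption):
lo, hi = mean - 1.96 * sd, mean + 1.96 * sd
print(f"sample mean = {mean:.1f} g; an individual pebble "
      f"probably lies in [{lo:.1f}, {hi:.1f}] g")
```

As the comment says, it is unlikely that any individual pebble weighs exactly the sample mean, but at the stated confidence each pebble probably falls within that range.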
So, if a trend is measured to be [X +a -b] at 95% confidence, then there is a 95% probability that the true trend lies somewhere between (X+a) and (X-b). No value within that range can be rejected at that confidence: the data cannot distinguish any value within the range from any other – including X – whereas a value outside that range does differ from X (at 95% confidence). This is true whether or not X is equal to zero.
Hence, if the error bars include zero then the observed trend is not distinguishable from zero at the stated confidence. This is because the trend is too near to zero for it to be discerned as being different from zero at that confidence level.
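The same logic applies to a least-squares trend. Here is a minimal Python sketch, using synthetic data (not the HadCRUT record), that fits an ordinary least-squares trend, computes the slope's 95% confidence interval, and asks whether zero lies inside it:

```python
import math
import random

# Synthetic "temperature anomaly" series: a small assumed trend of
# 0.01 per year buried in Gaussian noise. Illustrative data only.
random.seed(1)
years = list(range(16))                       # 16 "years" of data
temps = [0.01 * t + random.gauss(0.0, 0.1) for t in years]

# Ordinary least-squares slope and intercept.
n = len(years)
mx = sum(years) / n
my = sum(temps) / n
sxx = sum((x - mx) ** 2 for x in years)
slope = sum((x - mx) * (y - my) for x, y in zip(years, temps)) / sxx
intercept = my - slope * mx

# Standard error of the slope from the residuals.
resid = [y - (intercept + slope * x) for x, y in zip(years, temps)]
se = math.sqrt(sum(r ** 2 for r in resid) / (n - 2) / sxx)

# Two-sided 95% t critical value for n - 2 = 14 degrees of freedom.
half_width = 2.145 * se
lo, hi = slope - half_width, slope + half_width
print(f"trend = {slope:+.4f} +/- {half_width:.4f} per year; "
      f"indistinguishable from zero: {lo <= 0.0 <= hi}")
```

If the printed interval straddles zero, the fitted trend cannot be distinguished from zero at 95% confidence, which is exactly the situation described above.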
This is important to the present discussion because as Gail Combs says at January 17, 2013 at 3:20 am

The NOAA’s State of the Climate report in 2008 said this:

Near-zero and even negative trends are common for intervals of a decade or less in the simulations, due to the model’s internal climate variability. The simulations rule out (at the 95% level) zero trends for intervals of 15 years or more, suggesting that an observed absence of warming of this duration is needed to create a discrepancy with the expected present-day warming rate.

“Near-zero and even negative trends are common for intervals of a decade or less in the simulations, due to the model’s internal climate variability.”
That seems easy to understand. The models will produce near-zero or even negative trends for a decade, or for any run of years shorter than that. This is due to “the model’s internal climate variability.”

But (at 95% confidence) over the last 16 years there has been a global temperature trend so near to zero that it cannot be distinguished from zero. This creates a “discrepancy” which shows reality has done what the models “rule out”: i.e. the models are falsified.
Richard