Guest post by Christopher Monckton of Brenchley
The truth is out. No amount of hand-wringing or numerical prestidigitation on the part of the usual suspects can any longer conceal from the world the fact that global warming has been statistically indistinguishable from zero for at least 18 years. The wretched models did not predict that.
When I told the December 2012 UN climate summit in Doha that there had been no warming for at least 16 years, the furious delegates howled me down.
The UN later edited the videotape to remove the howling. The delegates were furious not because I was speaking out of turn (they did not know that at the time) but because the truth was inconvenient.
The Guardian carried a sneer-story about my intervention. When a reader sent in a politely-worded comment to the effect that, objectively speaking, it was true that over the relevant period the least-squares linear-regression trend on the Hadley/CRU global surface temperature data was as near flat as makes no statistical difference, within two minutes The Guardian deleted the comment from its misleadingly-titled “Comment Is Free” website.
The determined reader resubmitted the comment. This time it was gone in 45 seconds, and – what is more – the stub indicating that he had commented disappeared as well. Just 28 years after George Orwell’s 1984, the hard Left are still dumping the inconvenient truth down the memory-hole.
The Met Office, as WattsUpWithThat revealed recently, has noticeably downshifted its lurid warming prediction for the rest of this decade.
When it predicted a “barbecue summer” (wrong: that summer was exceptionally cold and wet), and then a record warm winter (wrong: that was the second-coldest December in central England since records began in 1659); and then, this spring, a record dry summer for the UK (wrong again: 2012 proved to be the second-wettest on record: not for nothing is it now known as the “Wet Office”), it trumpeted its predictions of impending global-warming-driven climate disaster from the rooftops.
And the scientifically-illiterate politicians threw money at it.
If the Met Office’s new prediction is right, by 2017 the global warming rate will have been statistically indistinguishable from zero for two full decades.
So, did the bureaucrats call a giant press conference to announce the good news? Er, no. They put up their new prediction on an obscure corner of their website, on Christmas Day, and hoped that everyone would be too full of Christmas cheer to notice.
That raises – again – a question that Britain can no longer afford to ignore. Has the Wet Office committed serious fraud against taxpayers?
Let us examine just one disfiguring episode. When David Rose of the Mail on Sunday wrote two pieces last year, several months apart, saying there had been no global warming for 15 years, the Met Office responded to each article with Met Office in the Media blog postings that, between them, made the following assertions:
1. “… [F]or Mr. Rose to suggest that the latest global temperatures available show no warming in the last 15 years is entirely misleading.”
2. “What is absolutely clear is that we have continued to see a trend of warming …”.
3. “The linear trend from August 1997 (in the middle of an exceptionally strong El Niño) to August 2012 (coming at the tail end of a double-dip La Niña) is about 0.03 C°/decade …”.
4. “Each of the top ten warmest years have occurred in the last decade.”
5. “The models exhibit large variations in the rate of warming … so … such a period [15 years without warming] is not unexpected. It is not uncommon in the simulations for these periods to last up to 15 years, but longer periods are unlikely.”
Each of the assertions enumerated above was calculated to deceive. Each assertion is a lie. It is a lie told for financial advantage. M’lud, let me take each assertion in turn and briefly outline the evidence.
1. The assertion that Mr Rose was “entirely misleading” to say there had been no global warming for 15 years is not just entirely misleading: it is entirely false. The least-squares linear-regression trend on the global temperature data is statistically indistinguishable from zero for 18 years (HadCRUt4), or 19 years (HadCRUt3), or even 23 years (RSS).
2. What is absolutely clear is that the assertion that “it is absolutely clear that we have continued to see a trend of warming” is absolutely, clearly false. The assertion is timescale-dependent. The Met Office justified it by noting that each of the last n decades was warmer than the decade that preceded it. A simple heuristic will demonstrate the dishonesty of this argument. Take a two-decade period. In each of years 1-2, the world warms by 0.05 C°. In each of years 3-20, the world does not warm at all. Sure, the second decade will be warmer than the first. But global warming will still have stopped for 18 years. By making comparisons on timescales longer than the 18 years without warming, what we are seeing is long-past warming, not a continuing “trend of warming”.
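The heuristic can be checked in a few lines; a minimal sketch, using exactly the numbers in the paragraph above (0.05 C° of warming in each of years 1-2, none thereafter):

```python
import numpy as np

# Minimal sketch of the two-decade heuristic: the world warms by
# 0.05 C in each of years 1-2, then does not warm at all in years 3-20.
years = np.arange(1, 21)
temps = np.where(years <= 2, 0.05 * years, 0.10)  # flat at 0.10 C from year 3

decade1 = temps[:10].mean()                 # years 1-10
decade2 = temps[10:].mean()                 # years 11-20
trend_last18 = np.polyfit(years[2:], temps[2:], 1)[0]  # slope over years 3-20

print(decade1, decade2)   # the second decade IS warmer than the first...
print(trend_last18)       # ...yet the trend over the last 18 years is zero
```

Decade-on-decade comparison and the recent trend answer different questions, which is the point of the heuristic.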
3. In August 1997 global temperatures were not “in the middle of an exceptionally strong El Niño”: they were in transition, about halfway between La Niña (cooler than normal) and El Niño (warmer than normal) conditions. Likewise, temperatures in August 2012 were not “at the tail-end of a double-dip La Niña”: they were plainly again in transition between the La Niña of 2011/12 and the El Niño due in a year or two.
4. The Met Office’s assertion that each of the top ten warmest years occurred in the last decade is dataset-dependent. On most datasets, 1998 was the warmest year on the global instrumental record (which only began 160-odd years ago). On those datasets, the warmest year of all falls outside the last decade, so the top ten warmest years cannot all have occurred within it.
5. Finally, the Met Office shoots itself in the foot by implicitly admitting that there has been a 15-year period without warming, saying that such a period is “not unexpected”. Yet that period was not “expected” by any of the dozens of lavishly-funded computer models that have been enriching their operators – including the Met Office, whose new computer cost gazillions and has the carbon footprint of a small town every time it is switched on. The NOAA’s State of the Climate report in 2008 said this: “Near-zero and even negative trends are common for intervals of a decade or less in the simulations, due to the model’s internal climate variability. The simulations rule out (at the 95% level) zero trends for intervals of 15 years or more, suggesting that an observed absence of warming of this duration is needed to create a discrepancy with the expected present-day warming rate.”
In short, the Met Office lied repeatedly to do down a journalist who had uttered the inconvenient truth that there had been no global warming for at least 15 years.
The Fraud Act 2006 defines the serious imprisonable offence of fraud as dishonestly making an express or implied representation that the offender knows is or may be untrue or misleading, intending to gain money or other property (here, grant funding) or to cause loss or risk of loss to another ($30 billion a year of unnecessary “green” taxes, fees and charges to the British public).
So I reported the Met Office to the Serious Fraud Office, which has a specific remit to deal with frauds that involve large sums (here, tens of billions) and organized crime (here, that appreciable fraction of the academic and scientific community that has been telling similar porkies).
Of course, there is one law for us (do the crime, do the time) and quite another for Them (do the crime, make a mint, have a Nobel Peace Prize). The Serious Fraud Office is not interested in investigating Serious Fraud – not if it might involve a publicly-funded body making up stuff to please the corrupt politicians who pay not only its own salaries but also those of the Serious Fraud Office.
The Met Office’s fraud will not be investigated. “Why not try your local police?” said the Serious Fraud Office.
So here is my question. In the specific instance I have sketched out above, where a journalist was publicly named and wrongly shamed by a powerful taxpayer-funded official body telling lies, has that body committed a serious fraud that forms part of a pattern of connected frauds right across the governing class worldwide?
Or am I going too far in calling a fraud a fraud?
Thanks Richard. So regarding your quote:
“A trend is measured to be [X + a – b] at 95% confidence”.
1) So your result will be a 95% confidence that a trend lies between (X + a) and (X – b).
[The problem for me is not here, nor is it in any part of the statistical process. Statisticians need not balk at any ideas, as Joel would have it. The problem, in my view, is in the interpretation of the result, which is of course down to the scientist interpreting the result; whether they have found it for themselves, or they are reading it in another’s paper, or they are checking the output from the Skeptical Science Trend Calculator, etc]
2) If the existence of a trend (X) is determined by whether or not it is above the value of a or b then:
3) When the result of the statistical analysis is such that X < a or b, X cannot be said to exist at all.
4) Without X, you have only (+ a) or (- b) – the noise in the data.
Still, you know more than you would know interpreting this result (a result where X < a or b) any other way. You know that there is not a positive or negative trend greater than a or b over the specific period of time analysed.
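For concreteness, here is one way to compute the sort of interval being discussed: fit a least-squares trend X and form the roughly 2-sigma (95%) confidence interval around it. This is a sketch with synthetic data; the 16-year window, trend value, and noise level are invented for illustration.

```python
import numpy as np

# Fit a least-squares trend X to a noisy synthetic series and form the
# 95% (roughly 2-sigma) confidence interval around the fitted slope.
rng = np.random.default_rng(0)
t = np.arange(192) / 12.0                      # 16 years of monthly samples
y = 0.003 * t + rng.normal(0.0, 0.15, t.size)  # small trend buried in noise

A = np.vstack([t, np.ones_like(t)]).T
coef, residuals, *_ = np.linalg.lstsq(A, y, rcond=None)
slope = coef[0]

# standard error of the slope from the residual variance
sigma2 = residuals[0] / (t.size - 2)
se = np.sqrt(sigma2 / np.sum((t - t.mean()) ** 2))
lo, hi = slope - 1.96 * se, slope + 1.96 * se

print(f"X = {slope:+.4f} C/yr, 95% interval = [{lo:+.4f}, {hi:+.4f}]")
print("trend distinguishable from zero:", not (lo < 0.0 < hi))
```

Whether zero lies inside [lo, hi] is precisely the “statistically indistinguishable from zero” test being argued over in this thread.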
Graham W:
I am responding to your post at January 24, 2013 at 5:38 am only for clarity, because I agree.
You say
Yes. That is what I meant when I wrote
And, as you say
Indeed so. That is why in my post I returned the discussion to understanding of the NOAA falsification criterion. An isolated and esoteric debate about “noise”, trend analysis, probabilities and statistics has no value: it would be better to refresh one’s memory by re-reading a statistics textbook.
The purpose of this discussion was to evaluate Joel’s mistaken claim that the NOAA falsification criterion was not applicable.
I hope my interruption of your discussion with Joel has been helpful to him, or to you, or to onlookers.
Richard
richardscourtney: Most of your post ( http://wattsupwiththat.com/2013/01/14/has-the-met-office-committed-fraud/#comment-1207095 ) is not wrong until you get to the last paragraph, which does not follow from anything that has come before. The models do not rule out the possibility of a trend over 15 years or more whose 95% confidence cone includes zero. That claim comes by a clear misreading of what they have said. Rather, the models rule out a trend that is actually MEASURED to be zero over 15 years (and that is only after correcting for El Nino in the way referenced in that report).
Yes, the trend cannot be distinguished from zero at the 95% confidence level. It also can’t be distinguished from the post-1975 long term trend of ~0.17 C per decade. That does not mean that I can simply say that the trend is 0.17 C per decade any more than I can simply say that the trend is zero.
[There is another error in your post when you say, “And there is equal probability that the trend is any value within that range.” There is no reason that I can see why the distribution of probable values has to be perfectly flat…and, in fact, we know it can’t be flat forever. It just has SOME probability distribution where 95% of the probability distribution lies within the range.]
Yes sorry Richard, I could see you were trying to get the conversation back on track, but I couldn’t resist. I’m guilty of over-explaining a point there and overall have just generally gone off on one a bit. Whereas I think a while back we were getting somewhere.
Now, anyway…the NOAA. So for the sake of argument let’s use Joel’s interpretation of their statement, even though it’s quite specific and none of the specifics are mentioned in the report; we have this trend here from the RSS dataset (the result is statistically indistinguishable from zero but I did say “for the sake of argument” so let’s ignore all that for now) which is over 16 years (1 extra year for luck) and is negative:
-0.003 +/- 0.229 C per decade (source: Skeptical Science Trend Calculator)
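That figure can be checked arithmetically; since the quoted interval straddles zero, the trend is statistically indistinguishable from zero at this confidence level:

```python
# The quoted RSS figure: a trend of -0.003 +/- 0.229 C per decade.
trend, half_width = -0.003, 0.229
lo, hi = trend - half_width, trend + half_width

print(f"95% interval: [{lo:.3f}, {hi:.3f}] C/decade")
print("zero inside the interval:", lo < 0.0 < hi)  # True: indistinguishable from zero
```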
So there you go. By his own interpretation of their criterion, Joel must admit that something has happened which the models didn’t project. If Joel says “look at the size of the error bars, the trend could be as high as x”, well, that goes against what he specified in earlier posts, where he said that in his opinion they must have meant a zero or negative trend without its error bars. If Joel accuses me of “cherry-picking” a dataset, well, the NOAA never specified which dataset had to be used in their criterion.
If Joel says “but you have to correct for ENSO first” then…well, I’ll leave that to people who know more about ENSO (which to be frank is most people on here other than me). Many people have already made arguments relating to that issue though, throughout the whole thread.
Plus this is all just pandering to a fairly selective and specific interpretation of the statement which as I pointed out before requires additional evidence from within the report to back it up (and the evidence is lacking).
I don’t think most rational-minded people on either side of the argument doubt that something has changed in the rate of warming, yet you still get some people (deniers?) claiming that it’s business as usual and that the gradual descent of the Earth into a hellish inferno (with fire-breathing dragons, and demons poking only the “AGW skeptics” with their tridents) is continuing. There are even some contending that the rate of warming is increasing… and at such a rate that you can only conclude the Earth will eventually simply explode from heat. The amount going into the oceans is clearly unparalleled, and even though there’s no reason why the heat would be going into the oceans now any more than before (as opposed to raising surface temperatures), and even though the technology to record this deep ocean-bound hell-heat hasn’t been around for very long in the grand scheme of things, the IPCC can still claim with 1000% certainty that everything is worse than everyone could have ever imagined before in the history of mankind – squared.
Which they can say with 100,000% scientific certainty. So listen up, Barack Obama. If you don’t change the fate of the world by influencing your own personal 1% of its surface to crucify this satanic CO2 molecule, everyone on Earth and the surrounding solar system’s grandchildren will die – and it’s all the deniers’ fault. China? Shh, don’t be so racist. How dare you mention the Chinese? 20 billion climate scientists’ personal beliefs can’t be wrong. We proved it with a questionnaire.
joeldshore:
In your post to me at January 24, 2013 at 7:55 am you say
Ummmm. No.
Let me remind you of what NOAA wrote.
The first quoted sentence mentions “Near-zero and even negative trends”.
That is what they are talking about; i.e. “Near-zero and even negative trends”.
The next sentence mentions “(at the 95% level) zero trends”; i.e.
trends so near to zero that they cannot be discerned as differing from zero at 95% confidence.
NOAA does NOT talk about “a trend that is actually MEASURED to be zero”. Indeed, it would be ridiculous to do that, because when the inherent measurement error is +/- 0.01 then a trend of zero has only a one in two hundred chance of being “actually MEASURED to be zero”.
In other words, what NOAA wrote makes sense, and what you claim they intended to write is preposterous.
Then there is the issue of “and that is only after correcting for El Nino in the way referenced in that report”.
There is no agreed method to correct for El Nino but if one extrapolates back across or interpolates across the 1998 peak then the trend is still indiscernibly different from zero at 95% confidence.
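A sketch of the adjustment Richard describes, interpolating across the 1998 peak and re-fitting the trend; the series here is synthetic (the noise level and the 0.4 C° spike are invented for illustration, not taken from any dataset):

```python
import numpy as np

# Interpolate across a synthetic "1998" El Nino spike, then re-fit.
rng = np.random.default_rng(1)
years = np.arange(1990, 2013)
temps = rng.normal(0.0, 0.05, years.size)
temps[years == 1998] += 0.4           # inject an El Nino-like spike

# replace 1997-1999 with a straight line between 1996 and 2000
mask = (years >= 1997) & (years <= 1999)
adjusted = temps.copy()
adjusted[mask] = np.interp(years[mask], years[~mask], temps[~mask])

slope_raw = np.polyfit(years, temps, 1)[0]
slope_adj = np.polyfit(years, adjusted, 1)[0]
print(f"trend with spike: {slope_raw:+.5f} C/yr, spike removed: {slope_adj:+.5f} C/yr")
```

Whether one interpolates, extrapolates, or regresses against an ENSO index, the question is only how much the fitted slope moves once the spike no longer dominates the fit.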
Please provide the trend value you think exists “after correcting for El Nino in the way referenced in that report”.
Richard
Graham:
Let’s consider something more neutral than temperature trends. Suppose that I give you a cube (die) and tell you to write H or T on each face (for heads or tails) and that you can choose to make the number of faces with H and T equal (3 each) or unequal as you want.
We then roll this die several times and my goal is to determine if there is a “trend” or bias in the die (number of faces with H and T unequal) or whether there is none (i.e., you put H and T on 3 faces each). Let’s say that after a certain number of rolls, the results are that heads has come up 60% of the time…But given the number of rolls I have done, I can also compute an uncertainty associated with this. Let’s say that the 2-sigma uncertainty is 12%. So, my result is that the die comes up heads 60% +/- 12% of the time.
Conventional statistics would say that the result is compatible (at the 95% confidence) with both the possibility that there is no bias or trend (i.e., the die has H on 3 sides and T on 3 sides) AND also with the possibility that there is a bias… namely that the die has H on 4 sides and T on 2 sides (which means heads should come up about 67% of the time).
However, it seems to me that your novel way of doing statistical analysis would say that, since there is no statistically-significant bias, we are forced to conclude that the die is unbiased and since the level of uncertainty of 12% when taken about the zero bias of 50% does not include the possibility that heads comes up 67% of the time, these experiments rule out the possibility (at least at the 95% confidence level) that the die has H on 4 sides and T on 2 sides.
Is this a correct interpretation of your statistical analysis? If not, why not? If yes, does that really seem sensible to you… particularly given that the experimental result of 60% heads is somewhat closer to what would be expected if the die has an H on 4 sides than it is to what would be expected if the die has an H on 3 sides?
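Joel’s numbers can be checked directly; note that the implied roll count at the end is a back-of-envelope addition for illustration, not something from his comment:

```python
# 60% heads observed, with a stated 2-sigma uncertainty of 12 points.
obs, half_width = 0.60, 0.12
lo, hi = obs - half_width, obs + half_width        # [0.48, 0.72]

fair = 3 / 6      # 3 H, 3 T: heads 50% of the time
biased = 4 / 6    # 4 H, 2 T: heads ~66.7% of the time

print("fair die inside the interval:  ", lo <= fair <= hi)    # True
print("biased die inside the interval:", lo <= biased <= hi)  # True

# roughly how many rolls give a 2-sigma width of 12%?  n = p(1-p)/sigma^2
n_implied = obs * (1 - obs) / (half_width / 2) ** 2
print("implied number of rolls:", round(n_implied))
```

Both candidate dice sit inside the 95% interval, which is exactly why conventional statistics says neither is ruled out by the data.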
No Joel, because the trend is greater than the uncertainty.
Given that you will probably want more of an explanation, having not once understood the point I’ve been making: if I have written 6 Hs and 0 Ts then there will be a 100% chance of rolling an H (barring some unfortunate incident where the die flies out of an open window or something). If I have written 6 Ts and 0 Hs then there will be a 0% chance of getting an H. Then there’s everything in between: 1 H and 5 Ts, 2 Hs and 4 Ts, 3 of each, and so on.
So the trend (X, see my comment to Richard a few comments up from this one) is greater than the noise (a or b, see same comment). So of course this means that the trend can be anywhere between 48% and 72% as you would know I accept if you had understood a word I’ve been saying. If we had observed a trend of 10% then I would (rightly) rule out the possibility of there being 0 Hs. Understand now?
And if you’re not sure why I would rule out 0 Hs then it’s because, exactly according to what I’ve been saying all along, if the trend is less than the noise as it is in my example of this 10% +/- 12%, by my understanding the trend of 10% cannot exist since it is below the level of noise in the data. This means the result you are left with is simply the noise in the data, in this example 12% (and not -12% since in the example you have chosen negative trends are impossible). A result of 12% rules out the possibility of there being 0 Hs.
Then if this result is not satisfactory and you want a statistically significant result (trend greater than uncertainty) then simply roll the die more.
Right, now that’s out of the way, can we focus on the actual discussion, there are many points raised by many people still to address. Richard, Gail, Terry Oldberg, Werner Brozek…I’m sure there’s people I’ve forgotten. Loads of questions unanswered. Or just accept you’re wrong (about this NOAA thing, please no more arguing about statistics).
People will vote for candied poop if they get a free cell phone with it. Banks figured this out years ago with toasters, and flour companies 50 years ago with a pretty goblet in each sack of brand name flour. The trouble now is that the very same people buying the trip down the river to get the freebie are being sold up the creek without a paddle.
Graham: Your claim that the trend exceeds the uncertainty for my example depends on how you define the trend. As I tried to imply in the discussion, a zero trend corresponds to an equal number of heads and tails. In particular, imagine that each time you roll a heads you move 1 meter up, and each time you roll a tails you move 1 meter down. Then, if there are 3 H and 3 T on the die, the expected trend will be zero. (You move down as often as up.) And the uncertainty in the trend will be greater than the trend itself. (The trend when you roll 60% H and 40% T will be 0.2 meters per roll… and the uncertainty of +/- 12% will correspond to 0.24 meters per roll.)
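The arithmetic in this mapping can be written out in a couple of lines (a sketch of the numbers in the paragraph above):

```python
# Joel's random-walk mapping: H moves +1 m, T moves -1 m, so the
# expected drift per roll is 2p - 1 for heads-probability p.
p_obs, p_err = 0.60, 0.12      # observed heads fraction and 2-sigma width

drift = 2 * p_obs - 1          # 0.2 m per roll
drift_err = 2 * p_err          # 0.24 m per roll

print(f"drift = {drift:.2f} +/- {drift_err:.2f} m per roll")
print("uncertainty exceeds the drift:", drift_err > abs(drift))   # True
```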
The point is that your statistical reasoning is nonsensical and leads to all sorts of contradictions, which is why statistics is done the way it is and not the way you want it to be.
As regards the RSS data set trend, there are at least two problems with that:
(1) The temperature fluctuations due to ENSO are even more pronounced in the satellite temperature record of the lower troposphere than they are in the surface record. So, it is even more important to correct for El Nino.
(2) You have cherrypicked the one analysis of the satellite temperature record that shows this. If one cherrypicks enough, something that is ruled out at a 95% confidence level in one record is going to be more common once you have more records to choose from. Something that has a 1 in 20 chance of occurring becomes more and more likely to occur, the more trials I try. It is also amusing to see the “skeptics” flock to the RSS analysis now that it shows something that they like when they spent many years deriding it when the UAH analysis was more to their liking.
No…The sentence says “The simulations rule out (at the 95% level) zero trends…” What would be the purpose of putting the modifier “at the 95% level” before what it was modifying rather than after? It is a more awkward construction, but a construction that they chose exactly so people could not possibly interpret “at the 95% level” to modify the wrong thing. (They apparently underestimated the ability and will of people to misinterpret!)
The point that they are making is that for 15-year trends, a trend of zero is on or slightly outside the 95% confidence interval of trends seen in the models. This is a perfectly reasonable thing for them to say. What would not be reasonable is for them to say something is “ruled out” and not define what they mean by “ruled out”, which can’t be done with 100% confidence in a system where things are inherently statistical.
…Which is irrelevant because that is not the correct criterion to use. And, the agreed upon method for correcting for El Nino is the one that they define (with reference to a paper in the literature) in their discussion. While it may not be perfect, it was the one that they used in formulating their criterion so, like it or not, it is the one that you have to use if you want to use their criterion.
I have no idea. I am not the one seeking to apply their criterion to see if it has been violated or not…You are. You are the one who has to do the work to apply it.
Nope Joel, I disagree… we’re looking at the number of times you roll an H and trying to determine the bias in the die. So if you roll the die 1000 times and never get an H, you know there are 6 Ts and 0 Hs. This is the “zero trend”. If you roll the die 1000 times and always get an H, you know there are 6 Hs and 0 Ts. This is a trend of 100%. Every other permutation of Hs and Ts in between you can attempt to work out using statistical analysis and the 12% 2-sigma confidence (or whatever it would be for 1000 rolls). There are no contradictions in what I’m saying because I’m not talking about “doing statistics” in any remotely different way to how it’s always been done. All I’m talking about is *how you interpret the result of the analysis*. Now please stop going on about it.
As for the comments on the actual subject at hand… I’ve never personally gone on about how RSS was “bad” and UAH was “good” before, with the reverse now being true, or whatever it is you’re trying to imply. I’ve literally never been aware of this discussion. I’m not representative of every “denier”, Joel; I have my own mind, and I’ve come to this discussion relatively late, so others may remember this debate about RSS but personally I don’t. I don’t know what you are talking about. Didn’t I say I would be accused of cherry-picking!? And didn’t I say that nowhere in the report do they actually specify which data set must be used!? And didn’t I say all the stuff about not knowing enough about ENSO to argue the point but that many others here do?
joeldshore:
Your post at January 24, 2013 at 6:21 pm evades my reasonable request to you; viz.
by replying
No! Absolutely not! How dare you!?
I have clearly and repeatedly stated that
You – ONLY YOU – are claiming there is some unspecified method which you – ONLY YOU – say must be applied. And you are making your claim in support of your preposterous assertion that the NOAA falsification criterion means something other than what NOAA says it does.
If you think the application of the unspecified method would result in the models not being falsified, then provide the result of the correction method which you – ONLY YOU – say needs to be applied.
In other words, put up or shut up.
Richard
Joel: Actually it’s not fair to ask you to stop mentioning it. I brought it up in the first place and then kept on about it myself. I apologise. I’m not going to carry on talking about that side of things though because ultimately it’s not getting us anywhere.
About this:
Richard: The next sentence mentions “(at the 95% level) zero trends”; i.e.
trends so near to zero that they cannot be discerned as differing from zero at 95% confidence.
Joel: No…The sentence says “The simulations rule out (at the 95% level) zero trends…” What would be the purpose of putting the modifier “at the 95% level” before what it was modifying rather than after? It is a more awkward construction, but a construction that they chose exactly so people could not possibly interpret “at the 95% level” to modify the wrong thing. (They apparently underestimated the ability and will of people to misinterpret!)
I will offer this: They would not have written “The simulations rule out zero trends for intervals of 15 years or more (at the 95% level)” because then the “at the 95% level” could apply to the simulations, the zero trends, or the intervals of 15 years or more. Far too vague. They would not have written “The simulations rule out zero trends (at the 95% level) for intervals of 15 years or more” because then it could still apply to zero trends or to the simulations – still not clear enough. If they had written “The simulations (at the 95% level) rule out zero trends for intervals of 15 years or more” then we’d all be as sure as Joel that they were talking about the simulations and not the zero trends. But instead they wrote “the simulations rule out (at the 95% level) zero trends for intervals of 15 years or more”. So what else is left for that to apply to but the zero trends, in light of everything I’ve just said?
Graham W:
Your post at January 25, 2013 at 5:50 am concludes asking
The answer, of course, is nothing.
Importantly, in his attempt to obfuscate, Joel also disputes the meaning of that sentence by ignoring its context. As I said to him at January 24, 2013 at 10:41 am
Frankly, in this thread Joel is (again) making an utter fool of himself.
Richard
richardscourtney says:
No…It is not only me. It is right there in the NOAA report:
and then in the part where they spell out the criterion you are so interested in:
They are talking only about ENSO-adjusted data sets. See that phrase “ENSO-adjusted” modifying both the data sets and the temperature record from the simulations? If you don’t like how ENSO is adjusted for, then fine, go and find some other criterion to falsify the models, because you can’t use this one. You can’t just pick and choose what parts of the spelled-out criterion you are going to incorporate and what parts you are going to ignore and then make the claim that the models are falsified by the NOAA folks’ criterion. I would have thought you would understand such basic issues of scientific integrity.
Graham W says:
But…You are creating a “strawman”. I am not saying that the parenthetical phrase modifies “The simulations”. I am saying it modifies “rule out”. Your phrasing is definitely not optimal because I don’t know what “The simulations (at the 95% level)” means by itself. It is a meaningless statement. I suppose people could probably manage to figure out what they are saying, but it would be by reading it and saying, “What does that mean? Oh, I guess, they must mean ‘at the 95% level’ to apply to ‘rule out’.”
Ah…The words immediately before the parenthetical phrase, namely “rule out”! In fact, as I have noted before, without a qualifier, the phrase “rule out” is nonsense when discussing something that is statistical in nature.
Graham W says:
I have explained to you a process by which those die rolls are used to drive a random walk. And the trend in that random walk will be zero if the die has 3 H’s and 3 T’s on it, and non-zero otherwise. Are you saying that the result depends on which description of the experiment is used: if you are talking about just the die rolls, then you agree that 4 H’s and 2 T’s is not ruled out at a 95% confidence level, but if you are talking about a random walk based on that same process, then the trend that would occur if there were 4 H’s and 2 T’s would be ruled out?
Fine. Then it is your interpretation of statistics that is wrong.
Anticipating objections to your argument is great…but if you don’t actually explain why those objections are incorrect, then we are still left with the objections. It is not like anticipating them makes them null and void.
Ha! joelshore says: “I would have thought you would understand such basic issues of scientific integrity.” And the Devil quotes Scripture.
joeldshore:
re your post at January 25, 2013 at 4:55 pm.
Congratulations!
You have managed to find a reference to the method you say is the only method applicable for removing effect of ENSO.
We are getting somewhere!
Unfortunately, the quotation you provide – as seems to be usual with you – doesn’t say what you claim. It does NOT say that trends can ONLY have ENSO contributions deleted by the method of Thompson et al. (2008) (Fig. 2.8a). It says ENSO and non-ENSO contributions can be separated by that method. It is the only method mentioned in the report and you are claiming it is the only acceptable method.
I choose to use extrapolation and interpolation to remove the 1998 peak. And my method is as good as any other when effects of ENSO are not understood. I stand by that point.
However, for the sake of argument, let us decide if you have a point at all.
You claim to have a point so to make it now all you need to do is apply the method of Thompson et al. You will then have shown that either
(a) applying that method increases the global temperature trend to be different from zero (at 95% confidence) over the last 16 years
or
(b) it doesn’t.
You claim the recent stasis does not “rule out” the models as being valid.
OK. Do the compensation you say is the only acceptable compensation and see if you are ‘blowing smoke’ according to your claim.
If (a) then we can discuss the method you applied. If (b) then you never had a point.
I await your chagrin with eager anticipation.
Richard
I’m not trying to demonstrate that the “method increases the global temperature trend to be different from zero (at 95% confidence) over the last 16 years” because that is not the correct criterion.
Let me help you with the thread of the logic here:
Joel: The clear statement of the NOAA folks is that one has to adjust for ENSO and then, if the resulting trend is zero or less for 15 years, you are outside of the 95% confidence window of the models.
Graham: But, the RSS data shows a very slightly negative trend [over a carefully cherrypicked] interval of about 16 years, so even under this interpretation, it would be outside what the models predict.
Joel: But (among other issues), that is not ENSO-adjusted and ENSO is even a bigger factor in the satellite temperatures of the lower troposphere than it is for the surface temperature record.
Richard: Even if you corrected for ENSO, the empirical trend would still have a 95% confidence window that included zero.
Do you see the problem with your logic or do I have to spell it out for you?
Joel: If I were still going to talk about the statistics, I would probably mention that first you would need to prove to me that a result of 60% +/- 12%, or 0.2 m per roll +/- 0.24 m per roll (if we do choose to look at it in this unnecessary way), is mathematically possible given the laws of probability and the fact that we are discussing determining which of six discrete probabilities it is over a number of rolls. You would need to state the number of rolls, show how you calculated the 12% (0.24 m per roll) 2-sigma confidence interval, and show the raw data the trend line of 60% (0.2 m per roll) comes from, instead of arbitrarily picking results out of thin air and saying “let’s say the 2-sigma confidence interval is 12%”. But I’m not going to talk about that any more, so I won’t.
The sentence fragment “the simulations (at the 95% level)” doesn’t make sense on its own because it’s a sentence fragment. You have to look at entire sentences to determine their meaning. For instance, if I say “Joel is being” you have no idea what is meant until I conclude with “completely ridiculous”.
I anticipated the arguments you may have against the RSS trend I offered and in doing so also explained why your arguments (other than the ENSO objection which I said I don’t know enough about to argue) were not valid, but you just refused to process that information in your brain. That’s not my problem. Are we done here, because I would like to get on with my life? Thanks.
P.S: Joel, you will need to successfully rebut (you haven’t so far) the points made at the end of my comment of January 25th 5:50 am before we even need to look at whether you “have to have an exact zero or negative trend”. The fact is that as my comment clearly shows, the 95% qualifier can only apply to the zero trends. If you disagree, please show me where in the sentence you would put the bracketed phrase for it to clearly apply to zero trends AND ONLY zero trends as I had the courtesy to do for you (answering an earlier challenge you put to me, by the way); showing you where it would need to go to suggest what you have been implying. Instead of thanking me for completing your challenge you instead insulted me by suggesting I had created a straw man argument. I await your apology for this and many other things.
It’s just that in your last reply to Richard you seem to be implying that I am agreeing with your absurd interpretation of the NOAA’s statement when in fact I only offered the RSS trend *for the sake of argument* as I made perfectly clear at the time. I do NOT think it is even necessary for this exactly zero or negative trend to exist.