
This article in the January/February edition of WIREs Climate Change doesn’t surprise me at all. With the uncertainty of the surface temperature record in question, the Met Office’s Peter Thorne and NCDC’s Tom Peterson, who once wrote a ghost-authored attack against the surfacestations project, set out to elicit controversy (their word) over Christy and Spencer’s satellite-derived temperature record.
Personally, I have little trust in NCDC’s motives, and especially Peterson’s, after his ghost-authored attack on me and the surfacestations project. A true scientist doesn’t need to write ghost articles to discredit the work of others. I’ve put my name on every criticism I ever made of the surface temperature record and NCDC. I thought it was the ultimate cheap shot that Peterson and NCDC didn’t, and then posted the piece to the NCDC main web page. Remember, this is the same NCDC that used Photoshopped flooded houses in government reports. But I digress.
I’ve posted a figure below, along with the abstract and concluding remarks from the article; it is well worth a read.

Tropospheric temperature trends: history of an ongoing controversy
Peter W. Thorne, John R. Lanzante, Thomas C. Peterson, Dian J. Seidel and Keith P. Shine
Changes in atmospheric temperature have a particular importance in climate research because climate models consistently predict a distinctive vertical profile of trends. With increasing greenhouse gas concentrations, the surface and troposphere are consistently projected to warm, with an enhancement of that warming in the tropical upper troposphere. Hence, attempts to detect this distinct ‘fingerprint’ have been a focus for observational studies. The topic acquired heightened importance following the 1990 publication of an analysis of satellite data which challenged the reality of the projected tropospheric warming. This review documents the evolution over the last four decades of understanding of tropospheric temperature trends and their likely causes. Particular focus is given to the difficulty of producing homogenized datasets, with which to derive trends, from both radiosonde and satellite observing systems, because of the many systematic changes over time. The value of multiple independent analyses is demonstrated. Paralleling developments in observational datasets, increased computer power and improved understanding of climate forcing mechanisms have led to refined estimates of temperature trends from a wide range of climate models and a better understanding of internal variability. It is concluded that there is no reasonable evidence of a fundamental disagreement between tropospheric temperature trends from models and observations when uncertainties in both are treated comprehensively.
…
CONCLUDING REMARKS
There is an old saying that a person with one watch always knows what time it is, but with two watches one is never sure. The controversy over surface and tropospheric temperature trends started in 1990 when the first satellite upper air ‘watch’ was produced and it was naïvely assumed that it told the correct time. Over the subsequent years, with the advent of not just two but multiple watches from different ‘manufacturers’ and using two distinct ‘technologies’, a more accurate measure of the structural uncertainty inherent in estimating what the ‘time’ truly is has emerged.
The state of the observational and model science has progressed considerably since 1990. The uncertainty of both models and observations is currently wide enough, and the agreement in trends close enough, to support a finding of no fundamental discrepancy between the observations and model estimates throughout the tropospheric column. However, the controversy will undoubtedly continue because some estimates of tropospheric warming since 1979 are less than estimates of surface warming, or fall outside of the range of analogous model estimates (e.g., Figure 8).
There are several key lessons for the future:
1. No matter how august the responsible research group, one version of a dataset cannot give a measure of the structural uncertainty inherent in the information.
2. A full measure of both observational uncertainty and model uncertainty must be taken into consideration when assessing whether there is agreement or disagreement between theory (as represented by models) and reality (as represented by observations).
3. In addition to better routine observations, underpinning reference observations are required to allow analysts to calibrate the data and unambiguously extract the true climate signal from the inevitable nonclimatic influences inherent in the routine observations.
================================================================
#3 What? The “true climate signal” hasn’t been extracted? And “inevitable nonclimatic influences”? What, noise and uncertainty? What a concept! I agree, though, that better routine and reference observations are needed. The problem is, we don’t have much of either extending back 100+ years. The Climate Reference Network in the USA was only recently completed, and many countries have no equivalent. We really have very little surface data that is free of “inevitable nonclimatic influences inherent in the routine observations”. Are we getting better at pulling the signal from the noise? Yes. Have we got it right yet? I’m doubtful.
I also find the “observational uncertainty” in lesson #2 quite interesting, given that we’ve just shown the high level of “observational uncertainty” in the US Historical Climatology Network with Fall et al. 2011. We all need to get a better handle on this, as well as the “observational uncertainty” of the Global Historical Climatology Network, which NCDC’s Tom Peterson just happens to manage.
The full article is here: Thorne_etal_2011. H/t to Dallas Staley.
“It is concluded that there is no reasonable evidence of a fundamental disagreement between tropospheric temperature trends from models and observations when uncertainties in both are treated comprehensively.”
=====================================================
Our uncertainty bars are so large for both that we can’t tell squat…
Still can’t find the hot spot, but we can justify not finding it because the uncertainty bars are larger than the gap between the computer games and the measured temperatures…
But we can program the computer games to show it, justify it, and make weather predictions with it.
Theo Goodwin says:
May 13, 2011 at 7:14 am
Their final sentence and conclusion assert that there is no “fundamental disagreement” between models and observations.
=========================================================
nope
They said their error bars were so large they overpowered the trends…
“The uncertainty of both models and observations is currently wide enough, and the agreement in trends close enough, to support a finding of no fundamental discrepancy between the observations and model estimates throughout the tropospheric column. “
“there is no reasonable evidence of a fundamental disagreement between tropospheric temperature trends from models and observations when uncertainties in both are treated comprehensively”
This is a wonderful example of sciency bafflegab. Note the weasel words “reasonable,” “fundamental,” and “comprehensively.” Note the ambiguities “uncertainties in both” and “treated.”
Model “uncertainties” are of a very different kind from temperature measurement uncertainties. I hope some statistically educated person will take a very close look at the “treatment” they have given the apples of model “uncertainties” in order to blend them with the oranges of temperature measurement uncertainties into their “comprehensively,” “fundamentally,” “reasonable” fruit punch, and comment here.
Two days, two papers with statements to the effect that ‘we don’t know what we are doing and don’t know what to do about it,’ but we still feel all warm and fuzzy about our zillion-dollar climate science.
That chart shows no temperature trend from 1979 to 1998.
The “true climate signal” is code-speak for what they filter out of the noise using their highly secret data processing methodologies. The basic tools consist of Principal Component Analysis (PCA) and Regularized Expectation Maximization (RegEM) infilling to deal with noisy data plagued with data dropouts, data transmission/logging errors, sensor drift, transcription errors, calibration errors, superimposed environmental signals (think UHI effect), a low signal-to-noise ratio born of instrumentation that is itself noisy, homogenization, and a host of other data-biasing problems (the artifacts on top of the real signal).
Let us not forget that the hockey stick was produced from PCA tuned to favor noise components that had the signature of a hockey stick – a signature that random noise will certainly contain. In noise you can find anything you want if you filter for that one thing.
So the “true climate signal” is what they will find as a result of devising data processing algorithms that filter according to the models. The result will necessarily agree with the models. The models are always right. They are the created reality around which we must construct all inquiry and the standard against which all things will be compared. Observation and empirical evidence are such old-school science. /sarc
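To make the mechanics of that point concrete, here is a minimal sketch (written for this post in Python; it is not anyone’s actual reconstruction or RegEM code) of how a decentered PCA step can pull a hockey-stick-shaped leading component out of pure red noise. The series length, the AR(1) coefficient of 0.7, and the 100-step centering window are arbitrary choices made solely for the demonstration.

    import numpy as np

    rng = np.random.default_rng(0)
    n_steps, n_proxies, rho = 600, 70, 0.7

    # Pure AR(1) "red noise" proxies -- no climate signal in them at all.
    proxies = np.zeros((n_steps, n_proxies))
    for t in range(1, n_steps):
        proxies[t] = rho * proxies[t - 1] + rng.normal(size=n_proxies)

    # Decentered step: subtract the mean of the final 100 steps only,
    # instead of the mean of the full record.
    decentered = proxies - proxies[-100:].mean(axis=0)

    # Leading principal component of the decentered matrix.
    u, s, _ = np.linalg.svd(decentered, full_matrices=False)
    pc1 = u[:, 0] * s[0]

    # Crude "hockey stick index": how far the late-period mean of PC1 sits
    # from its full-period mean, in standard deviations.
    hsi = abs(pc1[-100:].mean() - pc1.mean()) / pc1.std()
    print(f"hockey-stick index of PC1 from pure noise: {hsi:.2f}")

Re-running the same sketch with the full-record mean subtracted instead gives a sense of how much of that late-period shape comes from the centering choice alone.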
The null hypothesis is (despite what Trenberth claims) “Increasing greenhouse gas concentrations do not impact tropospheric temperatures” versus the alternative hypothesis that “With increasing greenhouse gas concentrations, the surface and troposphere are consistently projected to warm, with an enhancement of that warming in the tropical upper troposphere.”
What they are saying is that they failed to reject the null hypothesis. It does not matter whether they blame this on poor measurement quality or other such issues. The scientific method demands they continue to accept the null hypothesis.
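Stated as arithmetic, the “failure to reject” point above comes down to whether the estimated trend is large relative to its uncertainty. The sketch below (Python) uses made-up numbers for the trend and its standard error; they are assumptions for the illustration, not values from Thorne et al. or from any satellite dataset.

    # Illustrative only: an assumed lower-troposphere trend and an assumed
    # standard error (already inflated for autocorrelation); neither number
    # comes from the paper under discussion.
    trend = 0.14      # degC per decade (assumed)
    stderr = 0.09     # degC per decade (assumed)

    t_stat = trend / stderr
    ci_low, ci_high = trend - 1.96 * stderr, trend + 1.96 * stderr

    print(f"t = {t_stat:.2f}, 95% CI = [{ci_low:.2f}, {ci_high:.2f}] degC/decade")
    # With these numbers |t| < 1.96 and the interval straddles zero, so the
    # "no tropospheric warming" null is not rejected at the 5% level; with a
    # smaller standard error the same 0.14 degC/decade trend would reject it.
    print("reject the null at the 5% level:", abs(t_stat) > 1.96)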
They are Earthers. Their mind is made up. Nothing can change their mind.
All these comments demonstrate that the “evidence” is not clear and that models and computers are required to tease out the signal from noise and natural variations. Which is to say that there is no “evidence” but suggestions. Despite warnings of tomorrow’s destruction since 1988 (or earlier) we are no closer to detecting the approaching end-time than we were 23 years ago.
Good grief. Consensus, 97% certainty among people about a 95% certainty … about what? That it is the opinion of a bunch of people whose careers are based in CAGW.
What a great weight of responsibility that paper by Santer et al. bears. Thorne’s review goes through all the hemming and hawing and chin-stroking and then, predictably, puts it all down to Santer. Just look at the money quote from page 79:
Footnote 2 refers to the CCSP report that concluded there was a “potentially serious inconsistency” between models and observations in the tropical troposphere. No joy there for Thorne, despite the CCSP attempt to spin the conclusion by suggesting the models are right and the data are wrong.
Footnote 191 refers to Santer et al. (IJoC, 2008), which responded to Douglass et al. (2007) by arguing that even though the trends are different, if you deal with autocorrelation using the bang up-to-date 1930s-era statistical methodology popular in climatology, then the error bars between models and observations overlap a wee bit, so the discrepancy is not statistically significant.
The Santer et al. paper was one of the fig leaves used by the US EPA in its endangerment finding, and you can bet that whole sections of the next IPCC report will be printed on the same fig leaf.
Trouble is, Santer et al. used data ending in 1999, just after the big El Niño, and their testing methodology is not robust to the persistence known to exist in the temperature data. Using updated data, robust methods and the full suite of models, the discrepancy between models and observations in the tropical troposphere is statistically significant. Steve, Chad and I showed that in our ASL paper last year. The data sets we used have been updated since then, and in every case (including RICH and RSS) the observed trends have been reduced, making the discrepancy even bigger than what we reported.
I guess Thorne et al. didn’t see our paper, but I wait with fascination to see how the IPCC gets around it. This time it will be someone else’s job to “redefine what the peer-reviewed literature is”.
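For readers who want to see what this kind of trend comparison looks like in code, here is a simplified sketch (Python) using AR(1)-adjusted standard errors and a difference-of-trends statistic, i.e. roughly the style of test described above; it is not the panel/HAC method of the McKitrick, McIntyre and Herman paper. Both synthetic series, their trend magnitudes, and the noise parameters are invented solely so the example is self-contained.

    import numpy as np

    def ar1_trend(y):
        """OLS trend per step, with std. error adjusted for AR(1) residuals."""
        n = y.size
        x = np.arange(n) - (n - 1) / 2.0
        slope = np.sum(x * (y - y.mean())) / np.sum(x**2)
        resid = y - y.mean() - slope * x
        r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]      # lag-1 autocorrelation
        n_eff = max(n * (1 - r1) / (1 + r1), 3)            # effective sample size
        s2 = np.sum(resid**2) / (n_eff - 2)
        return slope, np.sqrt(s2 / np.sum(x**2))

    rng = np.random.default_rng(2)
    n = 384  # monthly values, 1979-2010, chosen only for illustration

    # "Observed" series: weak trend plus persistent (AR(1)) noise.
    eps_obs = np.zeros(n)
    for t in range(1, n):
        eps_obs[t] = 0.7 * eps_obs[t - 1] + rng.normal(0, 0.1)
    obs = 0.0010 * np.arange(n) + eps_obs                  # ~0.12 degC/decade, assumed

    # "Model" series: stronger trend plus modest white noise.
    mod = 0.0025 * np.arange(n) + rng.normal(0, 0.05, n)   # ~0.30 degC/decade, assumed

    b_obs, se_obs = ar1_trend(obs)
    b_mod, se_mod = ar1_trend(mod)
    d = (b_mod - b_obs) / np.hypot(se_mod, se_obs)
    print(f"obs trend {b_obs*120:.2f}, model trend {b_mod*120:.2f} degC/decade, d = {d:.2f}")

With the invented numbers above the model trend is deliberately set well above the “observed” one and the noise is modest, so the statistic should come out large; the argument in this thread is about what happens when real data, realistic persistence, and the full model suite are used instead.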
Latitude says:
May 13, 2011 at 8:10 am
“They said their error bars were so large they overpowered the trends…”
OK, but I reject their analytical framework. Given their view of matters, all they have to do is bring down (together) the error bars. Wrong! Without physical hypotheses to explain the temperature measurements, it does not matter where the error bars are or where the trends are. Just ask Briffa.
Warning, tin-foil-hat response:
Is Rome finally falling, as these gentlemen show the corruption(?) of the government and scientific communities?
When Rome fell, was the final stage of corruption the gladiatorial arena?
http://www.skysports.com/story/0,19528,19494_6926440,00.html
Tin-foil hat off, the question is: what can a man or woman do?
“1. No matter how august the responsible research group, one version of a dataset cannot give a measure of the structural uncertainty inherent in the information.”
————————————————————————————
I hate to be cynical – but this sounds like mid/high-level management-speak for “please don’t cut my funding, we really do need all these seemingly redundant measurement systems”.
I call to the WUWT readers’ collective attention Tom Peterson and his personal contribution to the discussion of the Climategate e-mails on WUWT at:
http://wattsupwiththat.com/2011/01/17/ncdcs-dr-tom-peterson-responds/
Sorry, nikfromnyc (says: May 13, 2011 at 4:37 am), I don’t recall any “shrill press releases” coming out of Anthony Watts or the co-authors of the surfacestations report. That picture of the active Stevenson screen with the antelope horns and rusty rake head on top was a real screen. So was the shot of a different screen mounted over the tombstone. Anthony did get a bit loud when folks grabbed the data from the server and wrote a pre-emptive paper…
nikfromnyc is simply repeating inaccurate talking points he read at one of the thinly-trafficked alarmist blogs. He couldn’t even spell Hansen’s name correctly, so it’s apparent that he’s not up to speed on the issue.
Here’s a quote that puts the entire debate in perspective:
“When you look closely at the climate change issue it is remarkable that the only actual evidence ever cited for a relationship between CO2 emissions and global warming is climate models.” [source]
Climate models can’t predict their way out of a wet paper bag. Empirical observations are the only valid metric. UAH shows declining temperatures even as harmless, beneficial CO2 rises.
Climate model projections diverge from real world measurements. So which should we accept? The computer models? Or the raw data? Because they can’t both be right.
Of course, after 4 years of arguing that the ISSUE is the UNCERTAINTY, and not the BIAS, I’ll welcome the support for my position.
REPLY: The bias still figures in for absolute readings, such as high and low record temperatures. There are a number of people who argue that increased frequency of temperature records is an indicator of global warming. The problem is, records are determined by COOP stations and airports, not the Climate Reference Network. For example, see this train wreck in Honolulu due to a defective ASOS station, where the biased record remains:
http://wattsupwiththat.com/2009/06/19/more-on-noaas-fubar-honolulu-record-highs-asos-debacle-plus-finding-a-long-lost-giss-station/
My view is that equipment differences, UHI, and heat sinks help elevate night-time lows. I agree though we need to get a good handle on the uncertainty too. – Anthony.
“My view is that equipment differences, UHI, and heat sinks help elevate night-time lows. I agree though we need to get a good handle on the uncertainty too. – Anthony.”
I don’t mean to foist my views on Anthony, but I do believe our reasoning is parallel. When Anthony points out the absurdity of having a measurement station close to a window air conditioner, in my terminology he is pointing to the absurdity of the assumption that the thermometers are located in some “natural environment” where all the physical regularities are understood and can be taken for granted. If climate science is to have a measurement regime that is not worthless, climate scientists must identify some aspects of the natural environment that would be important to measure and whose natural regularities are genuinely understood and can be taken as nature’s baseline. The same applies to all temperature measurements, including satellites, water-borne, whatever. Without serious agreement about the natural regularities that make up the environment in which thermometers are located, the readings from those thermometers are as worthless as the readings from Briffa’s tree rings after the as-yet-unexplained decline began in 1960.
My view is that equipment differences, UHI, and heat sinks help elevate night-time lows. I agree though we need to get a good handle on the uncertainty too. – Anthony.
==================================================
As you’ve pointed out, it’s the night-time lows that have been increasing, and that has been well known for decades.
And, as you’ve also pointed out, only 1 in 10 stations is decent, and those good stations are not even spread evenly across the map. Once the 9 bad stations are averaged/homogenized/blended and beaten into submission along with that one good station…
…what was the point of that one good station in the first place?
NCDC and GISS — which share the same data source — are outliers relative to both UAH and RSS, and even CRU. RSS is a commercial outfit and has no ‘dog in the fight.’ The fact that RSS and UAH are quite close suggests that UAH has little if any bias. And CRU, despite all the ClimateGate business, is not that far off from either of the satellite data sets. There may be some upward bias in CRU but not the wholesale manipulation we see in NCDC and GISS. See Klotzbach, P. J., R. A. Pielke, Sr., R. A. Pielke, Jr., J. R. Christy, and R. T. McNider (2009), An alternative explanation for differential temperature trends at the surface and in the lower troposphere, J. Geophys. Res., 114, D21102, doi:10.1029/2009JD011841.
Ross McKitrick says on May 13, 2011 at 9:42 am:
“I guess Thorne et al. didn’t see our paper, but I wait with fascination to see how the IPCC gets around it.”
Send them a reprint. That way they can’t claim they were unaware of the paper. You should have a mailing list of the important honchos and always send them reprints.
Unfortunately, postage is so expensive these days. Nevertheless, it is money well spent.
Thorne appears to have been one of the adversarial reviewers of our submission to IJC, as he emailed Jones gloating about its rejection before this was reported at CA. Thorne as reviewer did not contest our findings; Thorne thus knows of the failings of Santer et al.
Thorne and Peterson, both referred to here, both used the defamatory term “fraudit” in connection with Climate Audit. It’s rather cheeky that Peterson complains about alleged “shrill”-ness at WUWT.
How did this paper pass peer-review? It includes no discussion of Ross McKitrick, Stephen McIntyre and Chad Herman’s paper,
Panel and multivariate methods for tests of trend equivalence in climate data series (PDF)
(Atmospheric Science Letters, Volume 11, Issue 4, pp. 270–277, October/December 2010)
– Ross McKitrick, Stephen McIntyre, Chad Herman
Is this a joke?
I have a theory that some of the problems we are seeing with the ‘quality’ (or lack thereof) of the analysis, and therefore the final figures, is due to increased computational resources.
Basically it goes like this: in the good old days, when computers took up whole rooms and you would have to wait hours if not days for a result, running a program was actually an expensive thing on a case-by-case basis. The slowness of the overall processing cycle would also, by its nature, leave you more time to think about and ensure correctness in your processing. And given how ‘inaccurate’ computers were then (in terms of decimal precision), you were very aware of cumulative rounding errors etc., and designed your code to minimize them.
As time has gone on, the amount of CPU power available at a given cost point has increased dramatically, and the underlying mathematical precision has increased too. So you can do more processing in the same time frame, and they do so; but there are a few problems:
1) The error rate in the underlying data cannot be ‘processed away’. In fact, as you do more processing on it, it becomes increasingly important to keep track of its impact (see the toy numeric sketch at the end of this comment).
2) The availability of more CPU power has, to my mind, made it very attractive to ‘tune’ to a target result; the fact that you can fast-cycle your processing means you can easily pick the processing steps that lead you towards the result you want (whether consciously or subconsciously).
3) This fast cycle also means you can easily apply new ways of processing not tried before, i.e., borrow from other domains, which can be a good or bad thing depending on whether you fully understand the other domain.
Basically, I’m very skeptical of any ‘over’ processing of climatic data at the moment: the amount of due diligence applied has not kept pace with the amount of processing being performed, and just because you can do more processing does not make it safe to do so.
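As a footnote to point (1) above, here is a toy numeric sketch (Python) of why the error in the underlying data cannot simply be processed away: averaging many readings beats down independent random scatter roughly as 1/sqrt(N), but does nothing to a bias the readings share. The 0.5 degC shared bias and the 1 degC noise level are arbitrary numbers chosen for the illustration.

    import numpy as np

    rng = np.random.default_rng(3)
    true_value = 15.0      # degC, the quantity we are trying to measure
    shared_bias = 0.5      # degC, e.g. a siting/instrument bias common to all stations
    noise_sd = 1.0         # degC, independent random error per station

    for n_stations in (10, 100, 10000):
        readings = true_value + shared_bias + rng.normal(0, noise_sd, n_stations)
        err = readings.mean() - true_value
        print(f"{n_stations:6d} stations: error of the average = {err:+.3f} degC")
    # The random part of the error shrinks as stations are added; the +0.5 degC
    # shared bias does not, no matter how much processing power is thrown at it.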
Steve McIntyre says:
May 13, 2011 at 3:01 pm
“Thorne appears to have been one of the adversarial reviewers of our submission to IJC, as he emailed Jones gloating about its rejection before this was reported at CA. Thorne as reviewer did not contest our findings; Thorne thus knows of the failings of Santer et al.”
Another of Jones’ gossipy teenage clique. There was a time when men outgrew this kind of thing.
“Thorne and Peterson, both referred to here, both used the defamatory term “fraudit” in connection with Climate Audit. It’s rather cheeky that Peterson complains about alleged “shrill”-ness at WUWT.”
How kind you are, Mr. McIntyre. There is not a sentence you have written that either of them could criticize successfully.
Bill Illis says:
May 13, 2011 at 5:50 am
You make an astute observation about the rate of change (ROC) of LT vs. surface temps in the satellite era. Every region for which unbiased estimates of surface temps can be made shows them rising at a considerably GREATER rate during 1980–2000 than the LT. The boys in Asheville plainly refuse to learn about physical reality, preferring to adjust the data to match model expectations.