
This article in the January/February edition of WIREs Climate Change doesn’t surprise me at all. With the uncertainty of the surface temperature record in question, the Met Office’s Peter Thorne and NCDC’s Tom Peterson, who once wrote a ghost-authored attack against the surfacestations project, take aim at stirring up “controversy” (their word) over Christy and Spencer’s satellite-derived temperature record.
Personally, I have little trust in NCDC’s motives, and especially Peterson’s, after his ghost-authored attack on me and the surfacestations project. A true scientist doesn’t need to write ghost articles to discredit the work of others. I’ve put my name on every criticism I’ve ever made of the surface temperature record and NCDC. I thought it was the ultimate cheap shot that Peterson and NCDC didn’t put their names on theirs, and then posted it to the NCDC main web page. Remember, this is the same NCDC that used photoshopped flooded houses in government reports. But I digress.
I’ve posted a figure below, along with the abstract and concluding remarks from the article; it is well worth a read.

Tropospheric temperature trends: history of an ongoing controversy
Peter W. Thorne, John R. Lanzante, Thomas C. Peterson, Dian J. Seidel and Keith P. Shine
Changes in atmospheric temperature have a particular importance in climate research because climate models consistently predict a distinctive vertical profile of trends. With increasing greenhouse gas concentrations, the surface and troposphere are consistently projected to warm, with an enhancement of that warming in the tropical upper troposphere. Hence, attempts to detect this distinct ‘fingerprint’ have been a focus for observational studies. The topic acquired heightened importance following the 1990 publication of an analysis of satellite data which challenged the reality of the projected tropospheric warming. This review documents the evolution over the last four decades of understanding of tropospheric temperature trends and their likely causes. Particular focus is given to the difficulty of producing homogenized datasets, with which to derive trends, from both radiosonde and satellite observing systems, because of the many systematic changes over time. The value of multiple independent analyses is demonstrated. Paralleling developments in observational datasets, increased computer power and improved understanding of climate forcing mechanisms have led to refined estimates of temperature trends from a wide range of climate models and a better understanding of internal variability. It is concluded that there is no reasonable evidence of a fundamental disagreement between tropospheric temperature trends from models and observations when uncertainties in both are treated comprehensively.
…
CONCLUDING REMARKS
There is an old saying that a person with one watch always knows what time it is, but with two watches one is never sure. The controversy over surface and tropospheric temperature trends started in 1990 when the first satellite upper air ‘watch’ was produced and it was naïvely assumed that it told the correct time. Over the subsequent years, with the advent of not just two but multiple watches from different ‘manufacturers’ and using two distinct ‘technologies’, a more accurate measure of the structural uncertainty inherent in estimating what the ‘time’ truly is has emerged.
The state of the observational and model science has progressed considerably since 1990. The uncertainty of both models and observations is currently wide enough, and the agreement in trends close enough, to support a finding of no fundamental discrepancy between the observations and model estimates throughout the tropospheric column. However, the controversy will undoubtedly continue because some estimates of tropospheric warming since 1979 are less than estimates of surface warming, or fall outside of the range of analogous model estimates (e.g., Figure 8).
There are several key lessons for the future:
1. No matter how august the responsible research group, one version of a dataset cannot give a measure of the structural uncertainty inherent in the information.
2. A full measure of both observational uncertainty and model uncertainty must be taken into consideration when assessing whether there is agreement or disagreement between theory (as represented by models) and reality (as represented by observations).
3. In addition to better routine observations, underpinning reference observations are required to allow analysts to calibrate the data and unambiguously extract the true climate signal from the inevitable nonclimatic influences inherent in the routine observations.
================================================================
#3 What? The “true climate signal” hasn’t been extracted? And “inevitable nonclimatic influences”? What, noise and uncertainty? What a concept! I agree, though, that better routine and reference observations are needed. Problem is, we don’t have much of either that extends back 100+ years. The Climate Reference Network in the USA was only recently completed, and many countries have no equivalent. We really have very little surface data that is free of “inevitable nonclimatic influences inherent in the routine observations”. Are we getting better at pulling the signal from the noise? Yes. Have we got it right yet? I’m doubtful.
I also find lesson #2, “observational uncertainty”, quite interesting, given that we’ve just shown the high level of “observational uncertainty” in the US Historical Climatology Network with Fall et al. 2011. We all need to get a better handle on this, as well as the “observational uncertainty” of the Global Historical Climatology Network, which NCDC’s Tom Peterson just happens to manage.
The full article is here: Thorne_etal_2011 (h/t to Dallas Staley).
“What? The “true climate signal” hasn’t been extracted?”
Perhaps there isn’t one to extract; perhaps there is only ‘Noise’ after all.
1. No matter how august the responsible research group
Let me be the first one to welcome Mr Peterson, his co-authors and NCDC to the scientific method.
“Personally, I have little trust of NCDC’s motives, and especially Peterson, after his ghost authored attack on me and the surfacestations project.”
Gee, why *might* the surfacestations project be subject to antagonism?
Could it be due to your issuing what amounted to shrill press releases prior to actually publishing, just as was the case with Cold Fusion?
You spent the last two years *implying* that mere pictures of a BBQ next to a temperature station or two proved that all evident warming in the global average was due to the urban heat island effect, even though averages are computed using anomalies instead of absolute values, so once a site is in place the BBQ would be mostly a constant error that would not show up in an anomaly, unless of course the city doubled in size in the last decade.
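To make the anomaly point concrete, here is a minimal sketch with made-up numbers (no real station data involved):

```python
import numpy as np

# Hypothetical illustration: a constant siting bias, e.g. a nearby heat source
# adding 1.5 C to every reading, shifts the absolute temperatures but cancels
# out of anomalies, so by itself it does not create a spurious trend.
years = np.arange(1980, 2011)
true_temp = 14.0 + 0.02 * (years - 1980)    # underlying series with a 0.02 C/yr trend
biased_temp = true_temp + 1.5               # same series plus a constant warm bias

baseline = slice(0, 10)                     # 1980-1989 reference period
true_anom = true_temp - true_temp[baseline].mean()
biased_anom = biased_temp - biased_temp[baseline].mean()

print(np.allclose(true_anom, biased_anom))  # True: the constant offset drops out

# Only a bias that changes over time (e.g. growing urbanization around the
# site) would survive the anomaly calculation and leak into the trend.
```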
When someone took your preliminary data, which was online, and did some Saturday afternoon math on it, no difference was found between a sample of your top-rated and lowest-rated stations in terms of warming trend. So then you suddenly HID YOUR DATA for over a year, with the thin-soup, weak-tea excuse that it was only 80% instead of 100% complete. Something like that. Some dude posted a video of a little kid “proving” that urban vs. rural was a big deal. Fine. But where’s a primary journal article that shows that urban heating is a big deal that cancels out all of Hansen’s work? The confusion between good/bad siting and rural/urban has not been addressed very well.
Basically you found NO overall effect of station siting, despite higher highs and lower lows (or the other way around), which cancelled out. You must have known that even after 20% of sites were rated. A sample size of 20% is surely enough to draw strong conclusions. It’s as if you wanted to test every cancer patient in the world to see if your drug cured cancer, since only testing 1 in 5 of them didn’t prove that it did.
You have done skepticism no favor with your handling of the Surface Stations Project.
REPLY: It is certainly easy to criticize; anyone can do it without expending much effort. You certainly have. Explain how you can get a valid sample at 30%, which is where the project stood when people said I should drop it and just do the analysis, when there were so few good stations. One of the critics using that argument at the time was John Nielsen-Gammon, now a co-author. Finding the best stations was the goal, because the bad ones are everywhere. Only 1 in 10 are acceptable.
Bear in mind we’ve just started drilling into the data; another paper is coming. Sure, there will be a tendency for some to shout “case closed” because it is convenient. That’s fine. We don’t think so.
And, there is a second paper looking at the data differently.
Some points:
If NOAA had paid attention to their own simplest siting requirement, the 100 foot rule, I’d have no argument at all. They didn’t.
If NCDC had not closed the metadata system access when I first started the project, I’d have no initial reason to distrust them. They boobed instead.
If NCDC had done this survey themselves, I would have had no traction. They didn’t.
If NCDC had published the talking points memo like any other scientific rebuttal, I’d have no reason to distrust them a second time. They didn’t.
NCDC is the one that used preliminary, non-quality-controlled data to write a paper to pre-empt mine. They could have waited for the full data set so that their rebuttal was even stronger; they didn’t. There’s quite a backstory to that, actually.
NCDC had a chance to help me when I asked for help when they invited me to present my preliminary findings at NCDC headquarters in April 2008. They chose not to, and took the low road instead.
The criticisms I’ve levied are valid. The USHCN and COOP network is a mess, and NCDC has proven as much by their own actions of creating the Climate Reference Network. While that’s something I support, it shows they recognize the problem and take it seriously.
So forgive me if I don’t have a lot of trust in NCDC. Be as upset as you wish, but holding the data until publication is my right. The SI will be published today. You and other armchair critics can have at it.
Anthony
That’s a very odd way of saying that no useful conclusion could be drawn. Absence of proof is not a “finding of no fundamental discrepancy”.
When I once crossed swords with a greenie who trolls the Independent newspaper in the UK, I suggested he check out some real information on WUWT. He came back to me with a personal attack on Anthony: namely, that he was in the pay of a Pacific Island Property Company!!! and that the Surfacestations project was an abandoned, shambolic failure!!! I realized that the troll in question was not intelligent or widely informed, and therefore concluded that this was an attack line he had picked up, pre-scripted, from one of the pro-AGW sites.
Not once did he address the information from NASA, NOAA, JAXA, Illinois, etc.
Sometimes, rarely in my case, it feels nice just being on the moral high ground, so to speak.
If I may summarize the Concluding Remarks:
1) The data are wrong.
2) The models are wrong.
3) We need noise-free measurements from a noisy system.
4) [Bonus] We don’t know what we’re doing either.
A full measure of both observational uncertainty and model uncertainty must be taken into consideration when assessing whether there is agreement or disagreement between theory (as represented by models) and reality (as represented by observations).
‘The science is settled’ was by all accounts, then, a tad ambitious! Uncertainty rules the climate as ever and pushes the science back to maybe not its infancy, but more its formative years.
The usual obfuscation from Thorne and Peterson (the guys who are really in charge of the world-wide temperature record).
But there is something interesting in the paper which finally puts a number to something which has not been clear before – a measure for the Tropical Troposphere hotspot.
… in Figure 7, they show that all the climate models they used expect/predict that the Tropical 2LT Troposphere (the UAH and RSS measure) should be increasing at 1.272 times the Surface.
So we have a number for this now: 1.272 times.
It’s actually increasing at about 0.55 times the Surface in the Tropics, so there is actually more of a Tropical Coolspot than a hotspot, but that is reality versus theory again.
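For anyone who wants to see how such an amplification factor is arrived at, here is a minimal sketch using synthetic series rather than the actual UAH/RSS or surface data:

```python
import numpy as np

# Illustrative only: an "amplification factor" like the 1.272 quoted above is
# the ratio of the tropospheric trend to the surface trend. The two series
# below are synthetic stand-ins, not real observations.
months = np.arange(360)                                    # 30 years of monthly values
rng = np.random.default_rng(42)
surface = 0.0150 / 12 * months + 0.1 * rng.standard_normal(months.size)
tlt = 0.0082 / 12 * months + 0.1 * rng.standard_normal(months.size)

def trend_per_decade(series):
    """Least-squares linear trend, converted from per-month to per-decade."""
    return np.polyfit(months, series, 1)[0] * 120

ratio = trend_per_decade(tlt) / trend_per_decade(surface)
print(f"amplification factor: {ratio:.2f}")   # roughly 0.55 for these synthetic inputs
```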
“Since the earliest attempts to mathematically model the climate system’s response to human-induced increases in greenhouse gases, a consistent picture of resulting atmospheric temperature trends has emerged.”
If you program in the same forcing, you get the same result. GIGO.
NCDC has provided a textbook example of the use of FUD (fear, uncertainty, doubt) to try to shore up a weakening position.
Our institutions have all been taken over by teenagers. Our government, our media, our universities, our science institutions: the adults have all gone missing.
For those at the top, everyone can see your brittle defensiveness. Stop it. If you feel some strong emotion when you see something contrary to your publicly-stated opinion, let that emotion subside before you consider your response.
A publicly stated opinion is extremely difficult to back away from, and we always feel the need to jump in quickly when publicly challenged. Don’t do that. It’s much easier to adjust your opinion if you find out early that it needs adjustment. If you automatically dismiss any and all criticism, that’s childish; you will dig yourself in deeper, and it will be much harder to back out later if you find you were wrong. And none of us are 100% right all the time.
I wonder about this “true climate signal”, the white whale of climate science. It appears to mean “if I think it’s the real variation in climate, then no amount of criticism will convince me otherwise”.
“Paralleling developments in observational datasets, increased computer power and improved understanding of climate forcing mechanisms have led to refined estimates of temperature trends from a wide range of climate models and a better understanding of internal variability. It is concluded that there is no reasonable evidence of a fundamental disagreement between tropospheric temperature trends from models and observations when uncertainties in both are treated comprehensively.”
This sounds like another “The Models are right” quote.
If over 90% of the surface station data is badly contaminated, how on earth can they formulate a model that matches the contaminated data?
GPlant
“Hence, attempts to detect this distinct ‘fingerprint’ have been a focus for observational studies.”
Holy cow, put the cart before the horse. Model says it should happen, so let’s find supporting evidence. At all costs. It will be a travesty if we can’t. That is just so blatantly wrong. FAIL.
Well, just eyeballing the graph, two things stand out to me: 1) the UAH dataset trendline and published estimates converge over time, suggesting the data/analysis is getting better, as one would expect; 2) after a “bump” around 2004, rates are decreasing despite the increase in CO2.
Perhaps Mr. Peterson can explain that one….
I think the AGW idea that the stratosphere should cool with increasing CO2 is poorly reasoned. See section 2 of the following link, specifically figure 2.12 and the related formula for an N-layer atmosphere: http://www.geo.utexas.edu/courses/387H/Lectures/chap2.pdf
Again referring to figure 2.12, what the CAGWers are assuming is a static atmosphere with a new layer of CO2 greenhouse gas suddenly being dumped just above layer 1.
If that were to happen, the atmosphere above layer 1, presumably the stratosphere, WOULD temporarily cool, and layer 1 WOULD TEMPORARILY warm; but the atmosphere would gradually adjust to an N+1 layer model, where the radiation at the top of the troposphere was restored to its initial value.
In the real world, we’re not suddenly dumping a new layer of CO2 into the atmosphere. CO2 is increasing very slightly each year, giving the atmosphere plenty of time to adjust. What we’d get in real life is NO measurable increase in troposphere temperature and NO measurable decrease in stratosphere temperature due to increased CO2. There could be changes due to ozone, which could be affected by the solar magnetic cycle, but that’s a completely different matter.
If we had two Earths, a control Earth with constant CO2 and one with increasing CO2, I suppose the Earth with the increasing CO2 would be minusculely warmer than the control Earth, and its stratosphere minusculely cooler, due to slight delays in reaching equilibrium, but no trend could be detected.
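For reference, here is a minimal sketch of the textbook N-layer gray-atmosphere result I am referring to (my own summary of the standard formula, not a reproduction of figure 2.12 from the linked notes):

```python
# In the idealized N-layer gray atmosphere, equilibrium requires that the flux
# leaving the top of the atmosphere always balances absorbed sunlight, while
# the surface temperature scales as (N + 1) ** 0.25 times the effective
# emission temperature.
SIGMA = 5.670e-8                         # Stefan-Boltzmann constant, W m^-2 K^-4
S0, ALBEDO = 1361.0, 0.3                 # solar constant and planetary albedo
absorbed = S0 * (1 - ALBEDO) / 4.0       # ~238 W m^-2 absorbed, globally averaged

t_eff = (absorbed / SIGMA) ** 0.25       # effective emission temperature, ~255 K

for n_layers in (0, 1, 2):
    t_surf = (n_layers + 1) ** 0.25 * t_eff
    toa_flux = SIGMA * t_eff ** 4        # unchanged by N: equals absorbed sunlight
    print(f"N={n_layers}: surface {t_surf:.0f} K, outgoing TOA flux {toa_flux:.0f} W/m^2")
```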
Obviously, they are on a path where the “true climate signal” will ultimately be revealed only by a climate model tuned to support their preconceived notions.
“What? The “true climate signal” hasn’t been extracted?”
Yet they are certain that huge taxes, Emissions Trading Schemes, and handing veto power to the EPA under the guise of sustainability can fix everything.
I feel your pain. Frustrating.
Politicians, bureaucrats, and paid government scientists, along with grant seekers, are the real problem with our society today. They all seem to seek only money or fame, or both. Maybe someday we will reach a tipping point. The coming inflation because of government printing of money may be the start.
“It is concluded that there is no reasonable evidence of a fundamental disagreement between tropospheric temperature trends from models and observations when uncertainties in both are treated comprehensively.”
Such severely misguided proclamations – based on fundamentally untenable assumptions – preclude the possibility of sensible discourse.
so these clowns see the work of real scientists and point to the fact that they admit to uncertainty … wow … try looking in the mirror fools …
It’s the “true signal” VS the actual signal.
Confirmation bias.
The science is settled until the data doesn’t match. Then uncertainties grow and grow until they can be made to overlap.
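In case it isn’t obvious what “made to overlap” means in practice, here is a minimal sketch with made-up numbers (not values from the paper):

```python
import numpy as np

# Hypothetical illustration of the consistency test: the observed trend range
# and the model trend range are judged "not fundamentally discrepant" if they
# overlap at all.
obs_trend, obs_uncert = 0.14, 0.06                        # C/decade, hypothetical
model_trends = np.array([0.12, 0.18, 0.21, 0.25, 0.30])   # hypothetical model ensemble

obs_lo, obs_hi = obs_trend - obs_uncert, obs_trend + obs_uncert
mod_lo, mod_hi = model_trends.min(), model_trends.max()

# The wider either range is, the easier this test becomes to pass, which is
# exactly the complaint above.
overlap = (obs_lo <= mod_hi) and (mod_lo <= obs_hi)
print(f"obs {obs_lo:.2f}..{obs_hi:.2f} C/decade, models {mod_lo:.2f}..{mod_hi:.2f}, overlap: {overlap}")
```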
The abstract states:
“Paralleling developments in observational datasets, increased computer power and improved understanding of climate forcing mechanisms have led to refined estimates of temperature trends from a wide range of climate models and a better understanding of internal variability. It is concluded that there is no reasonable evidence of a fundamental disagreement between tropospheric temperature trends from models and observations when uncertainties in both are treated comprehensively.”
The first sentence claims an “improved understanding of climate forcing mechanisms.” If the claim is true, then the authors have created some physical hypotheses that describe some natural regularities that govern the temperature changes they are studying. If the claim is true, then they can use these physical hypotheses to specify temperature data and make predictions about temperature changes. More important, the physical hypotheses will explain the data; that is, they will explain the natural regularities that caused particular data points to have their particular temperature readings. If the authors have these things, then their paper is ground-breaking. However, there is a big reason to doubt that they have them, namely, that they continue to talk about models and the uncertainties with models. If you have physical hypotheses which explain forcings, you now have something real and will lose all interest in models. As soon as possible, I will read their paper to see if at least one physical hypothesis can be found there.
Their final sentence and conclusion asserts that there is no “fundamental disagreement” between models and observations. This is likely to prove to be a tautology; that is, something that is trivially true. Until the publication of this paper, no one has been able to create reasonably well-confirmed hypotheses that explain forcings. Without such hypotheses, the temperature data collected amounts to a series of numbers whose relationships to the real world are quite unknown. This is trivially easy to demonstrate. If you install thermometers in each room of a building and collect temperature readings from those thermometers for years, you learn nothing about what causes temperature changes in that building. For the readings to come alive, to be meaningful, you must know what system of heating and cooling is found in each room, how each room is ventilated, how each room is used, and so on. Without such physical hypotheses, just looking at diverging sets of temperature readings cannot tell you one thing about the rooms. So, divergent sets of temperature readings cannot disagree with one another. Only physical hypotheses which explain one or the other data set can disagree with one another. Roy Spencer wrote a book titled “The Great Global Warming Blunder” which explained quite clearly that, at this time, there are no physical hypotheses that explain forcings. So, he is aware of what he does not have. I will read the article under discussion here to see if they have advanced to the same level of awareness. I doubt it. If they had one or more such physical hypotheses, the news would dominate the news cycle for days and President Obama would address the world.
Come on Anthony, you and your cohorts just need to get with the program, sell your souls, and become one of the “enlightened.” You know, just like one of the Stepford “scientists”.
Keep up the good work, the only man-made warming they are feeling is the heat from you, the Mc’s, etc.
Ric Werme says:
May 13, 2011 at 5:03 am
You nailed it.