NCDC cites "controversy" with the UAH temperature record, and the search for a "true climate signal"

[Dilbert comic, embedded with permission from dilbert.com]

This article in the January/February edition of WIREs Climate Change doesn’t surprise me at all. With the uncertainty of the surface temperature record in question, the Met Office’s Peter Thorne and NCDC’s Tom Peterson, who once wrote a ghost-authored attack against the surfacestations project, set out to stir up controversy (their word) over Christy and Spencer’s satellite-derived temperature record.

Personally, I have little trust in NCDC’s motives, and especially Peterson’s, after his ghost-authored attack on me and the surfacestations project. A true scientist doesn’t need to write ghost articles to discredit the work of others. I’ve put my name on every criticism I ever made of the surface temperature record and NCDC. I thought it the ultimate cheap shot that Peterson and NCDC didn’t do the same, and then posted the piece to the NCDC main web page. Remember, this is the same NCDC that used photoshopped flooded houses in government reports. But I digress.

I’ve posted a figure below, along with the abstract and concluding remarks from the article; it is well worth a read.

FIGURE 10 | Evolution of estimates of observed trends in global-mean MT and surface temperatures during the satellite era (since 1979), based on satellite (blue), radiosonde (red) and land/SST (green) observations. Symbols show trends for 1979 to the year plotted, as reported in the literature, except for 1979–2008 trends which were calculated for this study (by Carl Mears or current authors). Blue line shows trends from the September 2009 version of UAH for each year. Differences between this line and the UAH published estimates (blue circles) illustrate the degree of change in the different versions of this dataset.

Tropospheric temperature trends: history of an ongoing controversy

Peter W. Thorne, John R. Lanzante, Thomas C. Peterson, Dian J. Seidel and Keith P. Shine

Changes in atmospheric temperature have a particular importance in climate research because climate models consistently predict a distinctive vertical profile of trends. With increasing greenhouse gas concentrations, the surface and troposphere are consistently projected to warm, with an enhancement of that warming in the tropical upper troposphere. Hence, attempts to detect this distinct ‘fingerprint’ have been a focus for observational studies. The topic acquired heightened importance following the 1990 publication of an analysis of satellite data which challenged the reality of the projected tropospheric warming. This review documents the evolution over the last four decades of understanding of tropospheric temperature trends and their likely causes. Particular focus is given to the difficulty of producing homogenized datasets, with which to derive trends, from both radiosonde and satellite observing systems, because of the many systematic changes over time. The value of multiple independent analyses is demonstrated. Paralleling developments in observational datasets, increased computer power and improved understanding of climate forcing mechanisms have led to refined estimates of temperature trends from a wide range of climate models and a better understanding of internal variability. It is concluded that there is no reasonable evidence of a fundamental disagreement between tropospheric temperature trends from models and observations when uncertainties in both are treated comprehensively.

CONCLUDING REMARKS

There is an old saying that a person with one watch always knows what time it is, but with two watches one is never sure. The controversy over surface and tropospheric temperature trends started in 1990 when the first satellite upper air ‘watch’ was produced and it was naïvely assumed that it told the correct time. Over the subsequent years, with the advent of not just two but multiple watches from different ‘manufacturers’ and using two distinct ‘technologies’, a more accurate measure of the structural uncertainty inherent in estimating what the ‘time’ truly is has emerged.

The state of the observational and model science has progressed considerably since 1990. The uncertainty of both models and observations is currently wide enough, and the agreement in trends close enough, to support a finding of no fundamental discrepancy between the observations and model estimates throughout the tropospheric column. However, the controversy will undoubtedly continue because some estimates of tropospheric warming since 1979 are less than estimates of surface warming, or fall outside of the range of analogous model estimates (e.g., Figure 8).

There are several key lessons for the future:

1. No matter how august the responsible research group, one version of a dataset cannot give a measure of the structural uncertainty inherent in the information.

2. A full measure of both observational uncertainty and model uncertainty must be taken into consideration when assessing whether there is agreement or disagreement between theory (as represented by models) and reality (as represented by observations).

3. In addition to better routine observations, underpinning reference observations are required to allow analysts to calibrate the data and unambiguously extract the true climate signal from the inevitable nonclimatic influences inherent in the routine observations.

================================================================

Regarding lesson #3: What? The “true climate signal” hasn’t been extracted? And “inevitable nonclimatic influences”? What, noise and uncertainty? What a concept! I agree, though, that better routine and reference observations are needed. The problem is, we don’t have much of either that extends back 100+ years. The Climate Reference Network in the USA was only recently completed, and many countries have no equivalent. We really have very little surface data that is free of “inevitable nonclimatic influences inherent in the routine observations”. Are we getting better at pulling the signal from the noise? Yes. Have we got it right yet? I’m doubtful.

I also find lesson #2, on “observational uncertainty”, quite interesting, given that we’ve just shown the high level of “observational uncertainty” in the US Historical Climatology Network with Fall et al. 2011. We all need to get a better handle on this, as well as the “observational uncertainty” of the Global Historical Climatology Network, which NCDC’s Tom Peterson just happens to manage.

The full article is here: Thorne_etal_2011. H/t to Dallas Staley.

May 13, 2011 4:13 am

“What? The “true climate signal” hasn’t been extracted?”
Perhaps there isn’t one to extract; perhaps there is only ‘Noise’ after all.

May 13, 2011 4:27 am

1. No matter how august the responsible research group
Let me be the first one to welcome Mr Peterson, his co-authors and NCDC to the scientific method.

nikfromnyc
May 13, 2011 4:37 am

“Personally, I have little trust of NCDC’s motives, and especially Peterson, after his ghost authored attack on me and the surfacestations project.”
Gee, why *might* the surfacestations project be subject to antagonism?
Could it be due to your issuing of what amounted to shrill press releases prior to actually publishing, just as was the case for Cold Fusion?
You spent the last two years *implying* that mere pictures of a BBQ next to a temperature station or two proved that all evident warming in the global average was due to urban heat island effect, even though averages are computed using anomalies instead of absolute values, so once a site is in place the BBQ would be mostly a constant error that would not show up in an anomaly, unless of course the city doubled in size in the last decade.
When someone took your preliminary data which was online and did some Saturday afternoon math on it, no difference was found between a sample of your top rated and lowest rated stations in terms of warming trend. So then you suddenly HID YOUR DATA for over a year, with the thin soup weak tea excuse that it was only 80% instead of 180% complete. Something like that. Some dude posted a video of a little kid “proving” that urban vs. rural was a big deal. Fine. But where’s a primary journal article that shows that urban heating is a big deal that cancels out all of Hanson’s work? The confusion between good/bad siting and rural/urban has not been addressed very well.
Basically you found NO overall effect of station siting, despite higher highs and lower lows (or the other way around), which cancelled out. You must have known that even after 20% of sites were rated. A sample size of 20% is surely enough to draw strong conclusions. It’s as if you wanted to test every cancer patient in the world to see if your drug cured cancer, since only testing 1 in 5 of them didn’t prove that it did.
You have done skepticism no favor with your handling of the Surface Stations Project.
REPLY: It is certainly easy to criticize; anyone can do it without expending much effort. You certainly have. Explain how you can get a valid sample at 30%, which is when people said I should drop the project and do the analysis, when there were so few good stations. One of the critics using that argument at the time was John Nielsen-Gammon, now a co-author. Finding the best stations was the goal, because the bad ones are everywhere. Only 1 in 10 are acceptable.
Bear in mind we’ve just started drilling into the data; another paper is coming. Sure there will be a tendency for some to shout “case closed” because it is convenient. That’s fine. We don’t think so.
And, there is a second paper looking at the data differently.
Some points:
If NOAA had paid attention to their own simplest siting requirement, the 100-foot rule, I’d have no argument at all. They didn’t.
If NCDC had not closed the metadata system access when I first started the project, I’d have no initial reason to distrust them. They boobed instead.
If NCDC had done this survey themselves, I would have had no traction. They didn’t.
If NCDC had published the talking points memo like any other scientific rebuttal, I’d have no reason to distrust them a second time. They didn’t.
NCDC is the one that used preliminary, non-quality-controlled data to write a paper to pre-empt mine. They could have waited for the full data set so that their rebuttal was even stronger; they didn’t. There’s quite a backstory to that, actually.
NCDC had a chance to help me when I asked for help after they invited me to present my preliminary findings at NCDC headquarters in April 2008. They chose not to, and took the low road instead.
The criticisms I’ve levied are valid. The USHCN and COOP network is a mess, and NCDC has proven as much by their own actions of creating the Climate Reference Network. While that’s something I support, it shows they recognize the problem and take it seriously.
So forgive me if I don’t have a lot of trust in NCDC. Be as upset as you wish, but holding the data until publication is my right. The SI will be published today. You and other armchair critics can have at it.
Anthony

Ulf
May 13, 2011 4:39 am

That’s a very odd way of saying that no useful conclusion could be drawn. Absence of proof is not a “finding of no fundamental discrepancy”.

charles nelson
May 13, 2011 4:57 am

When I once crossed swords with a greenie who trolls the Independent newspaper in the UK, I suggested he check out some real information on WUWT…he came back to me with a personal attack on Anthony. Namely, that he was in the pay of a Pacific Island property company!!! and that the Surfacestations project was an abandoned, shambolic failure!!! I realized that the troll in question was not intelligent or widely informed, and therefore concluded that this was an attack line he had picked up, pre-scripted, from one of the pro-AGW sites.
Not once did he address the information from NASA NOAA JAXA Illinois etc etc.
Sometimes, rarely in my case, it feels nice just being on the moral high ground, so to speak.

Editor
May 13, 2011 5:03 am

If I may summarize the Concluding Remarks:
1) The data are wrong.
2) The models are wrong.
3) We need noise-free measurements from a noisy system.
4) [Bonus] We don’t know what we’re doing either.

Nigel Brereton
May 13, 2011 5:09 am

A full measure of both observational uncertainty and model uncertainty must be taken into consideration when assessing whether there is agreement or disagreement between theory (as represented by models) and reality (as represented by observations).
‘The science is settled’ was by all accounts then a tad ambitious! Uncertainty rules the climate as ever, and pushes the science back to maybe not infancy, more its formative years.

Bill Illis
May 13, 2011 5:50 am

The usual obfuscation from Thorne and Peterson (the guys who are really in charge of the world-wide temperature record).
But there is something interesting in the paper which finally puts a number to something which has not been clear before – a measure for the Tropical Troposphere hotspot.
… in Figure 7, they show that all the climate models they used expect/predict that the Tropical 2LT Troposphere (the UAH and RSS measure) should be increasing at 1.272 times the Surface.
So we have a number for this now: 1.272 times.
It’s actually increasing at about 0.55 times the surface in the tropics, so there is actually more a tropical coolspot than a hotspot, but that is reality versus theory again.
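For readers who want the arithmetic spelled out, here is a minimal Python sketch of the amplification comparison described above. The 1.272 and 0.55 ratios are the commenter’s numbers from the Figure 7 discussion; the tropical surface trend used as input is a hypothetical value, for illustration only.

```python
# Sketch of the amplification arithmetic in the comment above. The two
# ratios are the commenter's numbers from Figure 7 of the paper; the
# 0.13 C/decade tropical surface trend is a hypothetical input chosen
# purely for illustration.
model_amplification = 1.272  # modeled tropical 2LT trend / surface trend
observed_ratio = 0.55        # observed tropical 2LT trend / surface trend

surface_trend = 0.13         # C/decade, assumed tropical surface trend

print(f"model-predicted 2LT trend: {model_amplification * surface_trend:.3f} C/decade")
print(f"observation-implied trend: {observed_ratio * surface_trend:.3f} C/decade")
print(f"mismatch: {model_amplification / observed_ratio:.2f}x")
```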

Steve Keohane
May 13, 2011 5:50 am

“Since the earliest attempts to mathematically model the climate system’s response to human-induced increases in greenhouse gases, a consistent picture of resulting atmospheric temperature trends has emerged.”

If you program in the same forcing, you get the same result. GIGO.

May 13, 2011 5:59 am

NCDC has provided a textbook example of the use of FUD (fear, uncertainty, doubt) to try to shore up a weakening position.
Our institutions have all been taken over by teenagers. Our government, our media, our universities, our science institutions: the adults have all gone missing.
For those at the top, everyone can see your brittle defensiveness. Stop it. If you feel some strong emotion when you see something contrary to your publicly-stated opinion, let that emotion subside before you consider your response.
A publicly stated opinion is extremely difficult to back away from, and we always feel the need to jump in quickly when publicly challenged. Don’t do that. It’s much easier to adjust your opinion if you find out early that it needs adjustment. If you automatically dismiss any and all criticism, that’s childish; you will dig yourself in deeper and it will be much harder to back out later if you find you were wrong. And none of us are 100% right all the time.

May 13, 2011 6:09 am

I wonder about this “true climate signal”, the white whale of climate science. It appears to mean “if I think it’s the real variation in climate, then no amount of criticism will convince me otherwise”.

GPlant
May 13, 2011 6:10 am

“Paralleling developments in observational datasets, increased computer power and improved understanding of climate forcing mechanisms have led to refined estimates of temperature trends from a wide range of climate models and a better understanding of internal variability. It is concluded that there is no reasonable evidence of a fundamental disagreement between tropospheric temperature trends from models and observations when uncertainties in both are treated comprehensively.”
This sounds like another “The Models are right” quote.
If over 90% of the surface station data is badly contaminated, how on earth can they formulate a model that matches the contaminated data?
GPlant

May 13, 2011 6:14 am

“Hence, attempts to detect this distinct ‘fingerprint’ have been a focus for observational studies.”
Holy cow, put the cart before the horse. Model says it should happen, so let’s find supporting evidence. At all costs. It will be a travesty if we can’t. That is just so blatantly wrong. FAIL.

bwanajohn
May 13, 2011 6:18 am

Well, just eyeballing the graph, two things stand out to me: 1) the UAH dataset trendline and published estimates converge over time, suggesting the data/analysis is getting better, as one would expect; 2) after a “bump” about 2004, rates are decreasing despite the increase in CO2.
Perhaps Mr. Peterson can explain that one….

Alan D McIntire
May 13, 2011 6:25 am

I think the AGW idea that the stratosphere should cool with increasing CO2 is poorly reasoned. See section 2 of the following link, specifically figure 2.12 and the related formula for an N-layer atmosphere.
http://www.geo.utexas.edu/courses/387H/Lectures/chap2.pdf
Again referring to figure 2.12, what the CAGWers are assuming is a static atmosphere with a new layer of CO2 greenhouse gas suddenly being dumped just above layer 1.
If that were to happen, radiation above layer 1, presumably the stratosphere, WOULD temporarily cool, and layer 1 WOULD TEMPORARILY warm, but the atmosphere would gradually adjust to an N+1 layer model, where the radiation at the top of the troposphere was restored to initial values.
In the real world, we’re not suddenly dumping a new layer of CO2 into the atmosphere. CO2 is increasing very slightly each year, giving the atmosphere plenty of time to adjust. What we’d get in real life is NO measurable increase in troposphere temperature and NO measurable decrease in stratosphere temperature due to increased CO2; there could be changes due to ozone, which could be affected by the solar magnetic cycle, but that’s a completely different matter.
If we had two Earths, a control Earth with constant CO2 and one with increasing CO2, I suppose the Earth with the increasing CO2 would be minusculely warmer than the control, and its stratosphere minusculely cooler, due to slight delays in reaching equilibrium, but no trend could be detected.

May 13, 2011 6:31 am

Obviously, they are on a path where the “true climate signal” will ultimately be revealed only by a climate model tuned to support their preconceived notions.

Jack
May 13, 2011 6:32 am

“What? The “true climate signal” hasn’t been extracted?”
Yet they are certain that huge taxes and Emissions Trading Schemes can fix everything, and they hand veto power to the EPA under the guise of sustainability.

jack morrow
May 13, 2011 6:40 am

I feel your pain. Frustrating.
Politicians, bureaucrats, and paid government scientists, along with grant seekers, are the real problem with our society today. They all seem to seek only money or fame, or both. Maybe someday we will reach a tipping point. The coming inflation caused by government printing of money may be the start.

Paul Vaughan
May 13, 2011 6:43 am

“It is concluded that there is no reasonable evidence of a fundamental disagreement between tropospheric temperature trends from models and observations when uncertainties in both are treated comprehensively.”
Such severely misguided proclamations – based on fundamentally untenable assumptions – preclude the possibility of sensible discourse.

Jeff Carlson
May 13, 2011 6:46 am

so these clowns see the work of real scientists and point to the fact that they admit to uncertainty … wow … try looking in the mirror fools …

Scott Covert
May 13, 2011 7:02 am

It’s the “true signal” VS the actual signal.
Confirmation bias.

stan
May 13, 2011 7:12 am

The science is settled until the data doesn’t match. Then uncertainties grow and grow until they can be made to overlap.

Theo Goodwin
May 13, 2011 7:14 am

The abstract states:
“Paralleling developments in observational datasets, increased computer power and improved understanding of climate forcing mechanisms have led to refined estimates of temperature trends from a wide range of climate models and a better understanding of internal variability. It is concluded that there is no reasonable evidence of a fundamental disagreement between tropospheric temperature trends from models and observations when uncertainties in both are treated comprehensively.”
The first sentence claims an “improved understanding of climate forcing mechanisms.” If the claim is true then the authors have created some physical hypotheses that describe some natural regularities that govern the temperature changes that they are studying. If the claim is true then they can use these physical hypotheses to specify temperature data and make predictions about temperature changes. More important, the physical hypotheses will explain the data; that is, they will explain the natural regularities that caused particular data points to have their particular temperature readings. If the authors have these things then their paper is groundbreaking. However, there is a big reason to doubt that they have them, namely, they continue to talk about models and the uncertainties with models. If you have physical hypotheses which explain forcings, you now have something real and will lose all interest in models. As soon as possible, I will read their paper to see if at least one physical hypothesis can be found there.
Their final sentence and conclusion asserts that there is no “fundamental disagreement” between models and observations. This is likely to prove to be a tautology; that is, something that is trivially true. Until the publication of this paper, no one has been able to create reasonably well-confirmed hypotheses that explain forcings. Without such hypotheses, the temperature data collected amounts to a series of numbers whose relationships to the real world are quite unknown.
This is trivially easy to demonstrate. If you install thermometers in each room of a building and collect temperature readings from those thermometers for years, you learn nothing about what causes temperature changes in that building. For the readings to come alive, to be meaningful, you must know what system of heating and cooling is found in each room, how each room is ventilated, how each room is used, and so on. Without such physical hypotheses, just looking at diverging sets of temperature readings cannot tell you one thing about the rooms. So, divergent sets of temperature readings cannot disagree with one another. Only physical hypotheses which explain one or the other data set can disagree with one another.
Roy Spencer wrote a book titled “The Great Global Warming Blunder” which explained quite clearly that, at this time, there are no physical hypotheses that explain forcings. So, he is aware of what he does not have. I will read the article under discussion here to see if they have advanced to the same level of awareness. I doubt it. If they had one or more such physical hypotheses, the news would dominate the news cycle for days and President Obama would address the world.

Kevin Schurig
May 13, 2011 7:18 am

Come on Anthony, you and your cohorts just need to get with the program, sell your souls, and become one of the “enlightened.” You know, just like one of the Stepford “scientists”.
Keep up the good work, the only man-made warming they are feeling is the heat from you, the Mc’s, etc.

Theo Goodwin
May 13, 2011 7:33 am

Ric Werme says:
May 13, 2011 at 5:03 am
You nailed it.

Latitude
May 13, 2011 7:45 am

…It is concluded that there is no reasonable evidence of a fundamental disagreement between tropospheric temperature trends from models and observations when uncertainties in both are treated comprehensively.
=====================================================
Our uncertainty bars are so large for both, that we can’t tell squat……………….
Still can’t find the hot spot, but we can justify not finding it because the uncertainty bars are larger than the computer games and the measured temperatures…………
But, we can program the computer games to show it and justify it, and make weather predictions with it

Latitude
May 13, 2011 8:10 am

Theo Goodwin says:
May 13, 2011 at 7:14 am
Their final sentence and conclusion asserts that there is no “fundamental disagreement” between models and observations.
=========================================================
nope
They said their error bars were so large they overpowered the trends…….
“The uncertainty of both models and observations is currently wide enough, and the agreement in trends close enough, to support a finding of no fundamental discrepancy between the observations and model estimates throughout the tropospheric column. “

JT
May 13, 2011 8:17 am

“there is no reasonable evidence of a fundamental disagreement
between tropospheric temperature trends from models and observations when
uncertainties in both are treated comprehensively”
This is a wonderful example of sciency bafflegab. Note the weasel words “reasonable” “fundamental” “comprehensively”. Note the ambiguities “uncertainties in both” “treated”.
Model “uncertainties” are of a very different kind from temperature measurement uncertainties. I hope some statistically educated person will take a very close look at the “treatment” they have given the apples of model “uncertainties” in order to blend them with the oranges of temperature measurement uncertainties to make their “comprehensively” “fundamentally” “reasonable” fruit punch, and comment here.

John F. Hultquist
May 13, 2011 8:22 am

Two days, two papers with statements to the effect that ‘we don’t know what we are doing and don’t know what to do about it’, but we still feel all warm and fuzzy about our zillion-dollar climate science.

MikeN
May 13, 2011 8:33 am

That chart shows no temperature trend from 1979 to 1998.

Hank Hancock
May 13, 2011 8:42 am

The “true climate signal” is code-speak for what they filter out of the noise using their highly secret data processing methodologies. The basic tools consist of Principal Component Analysis (PCA) and Regularized Expectation Maximization (RegEM) infilling to deal with noisy data plagued with data dropouts, data transmission/logging errors, sensor drift, transcription errors, calibration errors, superimposed environmental signals (think UHI effect), low signal-to-noise ratio born of instrumentation that is itself noisy, homogenization, and a host of other data-biasing problems (the artifacts on top of the real signal).
Let us not forget that the hockey stick was produced from PCA tuned to favor noise components that had the signature of a hockey stick, which random noise certainly would contain. In noise you can find anything you want if you filter for that one thing.
So the “true climate signal” is what they will find as a result of devising data processing algorithms that filter according to the models. The result will necessarily agree with the models. The models are always right. They are the created reality around which we must construct all inquiry and the standard by which all things will be compared. Observation and empirical evidence are such old-school science. /sarc
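To make the “mining noise” point concrete, here is a toy Python sketch. It is a caricature of screening bias, not a reconstruction of any published method: generate pure red noise, screen it for correlation against a hockey-stick-shaped target, and average the survivors. The screening step alone manufactures the target shape.

```python
import numpy as np

rng = np.random.default_rng(42)
n_series, n_years = 1000, 150

# Pure red noise "proxies": AR(1) series containing no climate signal at all.
proxies = np.zeros((n_series, n_years))
for t in range(1, n_years):
    proxies[:, t] = 0.7 * proxies[:, t - 1] + rng.normal(size=n_series)

# Target shape: flat "shaft" plus an upturned "blade" over the last 30 steps.
target = np.concatenate([np.zeros(n_years - 30), np.linspace(0.0, 2.0, 30)])

# Screen for noise series that happen to correlate with the target, flip
# signs to match, and average the survivors: a caricature of signal-mining.
corrs = np.array([np.corrcoef(p, target)[0, 1] for p in proxies])
keep = np.abs(corrs) > 0.3
recon = (proxies[keep] * np.sign(corrs[keep])[:, None]).mean(axis=0)

print(f"{keep.sum()} of {n_series} pure-noise series pass the screen")
print("blade mean:", recon[-30:].mean().round(2),
      "| shaft mean:", recon[:-30].mean().round(2))
```

The averaged “reconstruction” shows an upturned blade even though every input series was noise; that is the commenter’s point in miniature.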

tonyc
May 13, 2011 9:04 am

The null hypothesis is (despite what Trenberth claims) “Increasing greenhouse gas concentrations do not impact tropospheric temperatures”
versus the
alternative hypothesis that “With increasing greenhouse gas concentrations, the surface and troposphere are consistently projected to warm, with an enhancement of that warming in the tropical upper troposphere. ”
What they are saying is that they failed to reject the null hypothesis. It does not matter whether they blame this on poor measurement quality or other such issues. The scientific method demands they continue to accept the null hypothesis.

Toto
May 13, 2011 9:35 am

They are Earthers. Their minds are made up. Nothing can change their minds.

Doug Proctor
May 13, 2011 9:36 am

All these comments demonstrate that the “evidence” is not clear and that models and computers are required to tease out the signal from noise and natural variations. Which is to say that there is no “evidence” but suggestions. Despite warnings of tomorrow’s destruction since 1988 (or earlier) we are no closer to detecting the approaching end-time than we were 23 years ago.
Good grief. Consensus: 97% certainty among people about a 95% certainty … about what? That that is the opinion of a bunch of people whose careers are based in CAGW.

May 13, 2011 9:42 am

What a great weight of responsibility that paper by Santer et al. bears. Thorne’s review goes through all the hemming and hawing and chin-stroking and then, predictably, puts it all down to Santer. Just look at the money quote from page 79:

Overall, there is now no longer reasonable evidence of a fundamental disagreement between models and observations with regard to the vertical structure of temperature change from the surface through the troposphere.[2,191]

Footnote 2 refers to the CCSP report that concluded there was a “potentially serious inconsistency” between models and observations in the tropical troposphere. No joy there for Thorne, despite the CCSP attempt to spin the conclusion by suggesting the models are right and the data are wrong.
Footnote 191 refers to Santer et al. IJOC 2008, which responded to Douglass et al. (2007) by arguing that even though the trends are different, if you deal with autocorrelation using the bang up-to-date 1930s-era statistical methodology popular in climatology, then the error bars between models and observations overlap a wee bit, so the discrepancy is not statistically significant.
The Santer et al. paper was one of the fig leaves used by the US EPA in its endangerment finding, and you can bet that whole sections of the next IPCC report will be printed on the same fig leaf.
Trouble is, Santer et al. used data ending in 1999, just after the big El Nino, and their testing methodology is not robust to the persistency known to exist in the temperature data. Using updated data, robust methods and the full suite of models, the discrepancy between models and observations in the tropical troposphere is statistically significant. Steve, Chad and I showed that in our ASL paper last year. The data sets we used have been updated since then, and in every case (including RICH and RSS) the observed trends have been reduced, making the discrepancy even bigger than what we reported.
I guess Thorne et al. didn’t see our paper, but I wait with fascination to see how the IPCC gets around it. This time it will be someone else’s job to “redefine what the peer-reviewed literature is”.
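For context on the statistical issue raised here, the sketch below shows the common lag-1 autocorrelation (“effective sample size”) adjustment to a trend’s standard error, of the general kind employed in Santer et al. 2008. It is a textbook illustration on synthetic data, not the panel/multivariate methodology of the McKitrick, McIntyre and Herman paper.

```python
import numpy as np

def trend_with_ar1_se(y):
    """OLS trend whose standard error is inflated for lag-1 autocorrelation
    via the common effective-sample-size rule n_eff = n*(1-r1)/(1+r1)."""
    n = len(y)
    t = np.arange(n)
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]   # lag-1 autocorrelation
    n_eff = n * (1.0 - r1) / (1.0 + r1)             # effective sample size
    s2 = np.sum(resid ** 2) / (n_eff - 2.0)         # adjusted residual variance
    se = np.sqrt(s2 / np.sum((t - t.mean()) ** 2))  # trend standard error
    return slope, se

# Illustrative use on 30 years of synthetic, autocorrelated monthly anomalies.
rng = np.random.default_rng(0)
noise = np.zeros(360)
for i in range(1, 360):
    noise[i] = 0.6 * noise[i - 1] + rng.normal(scale=0.1)
y = 0.0001 * np.arange(360) + noise
slope, se = trend_with_ar1_se(y)
print(f"trend = {120 * slope:.3f} +/- {120 * 2 * se:.3f} C/decade (approx. 2-sigma)")
```

The more persistent the residuals, the smaller the effective sample size and the wider the error bars, which is exactly why the choice of autocorrelation treatment drives whether model-observation differences come out “significant”.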

Theo Goodwin
May 13, 2011 9:42 am

Latitude says:
May 13, 2011 at 8:10 am
“They said their error bars were so large they over powered the trends…….”
OK, but I reject their analytical framework. Given their view of matters, all they have to do is bring down (together) the error bars. Wrong! Without physical hypotheses to explain the temperature measurements, it does not matter where the error bars are or where the trends are. Just ask Briffa.

Shevva
May 13, 2011 9:49 am

Warning, tin foil hat response:
Is Rome finally falling, as these gentlemen show the corruption(?) of the government and scientific communities?
When Rome fell, was the final stage of corruption the gladiatorial arena?
http://www.skysports.com/story/0,19528,19494_6926440,00.html
Tin foil hat off. The question is: what can a man/woman do?

R.Connelly
May 13, 2011 10:18 am

“1. No matter how august the responsible research group, one version of a dataset cannot give a measure of the structural uncertainty inherent in the information.”
————————————————————————————
I hate to be cynical, but this sounds like mid/high-level management speak for “please don’t cut my funding, we really do need all these seemingly redundant measurement systems”.

R.S.Brown
May 13, 2011 10:45 am

I call to the WUWT readers’ collective attention Tom Peterson and his personal contribution to the discussion of the Climategate e-mails on WUWT at:
http://wattsupwiththat.com/2011/01/17/ncdcs-dr-tom-peterson-responds/
Sorry, nikfromnyc (May 13, 2011 at 4:37 am), I don’t recall any “shrill press releases” coming out of Anthony Watts or the co-authors of the surfacestations report.
That picture of the active Stevenson screen with the antelope horns and rusty rake head on top was a real screen. So was the shot of a different screen mounted over the tombstone.
Anthony did get a bit loud when folks grabbed the data from the server and wrote a pre-emptive paper…

May 13, 2011 10:46 am

nikfromnyc is simply repeating inaccurate talking points he read at one of the thinly-trafficked alarmist blogs. He couldn’t even spell Hansen’s name correctly, so it’s apparent that he’s not up to speed on the issue.
Here’s a quote that puts the entire debate in perspective:
“When you look closely at the climate change issue it is remarkable that the only actual evidence ever cited for a relationship between CO2 emissions and global warming is climate models.” [source]
Climate models can’t predict their way out of a wet paper bag. Empirical observations are the only valid metric. UAH shows declining temperatures even as harmless, beneficial CO2 rises.
Climate model projections diverge from real world measurements. So which should we accept? The computer models? Or the raw data? Because they can’t both be right.

May 13, 2011 10:58 am

Of course, after 4 years of arguing that the ISSUE is the UNCERTAINTY, and not the BIAS, I’ll welcome the support for my position.
REPLY: The bias still figures in for absolute readings, such as high and low record temperatures. There are a number of people who argue that increased frequency of temperature records is an indicator of global warming. Problem is, records are determined by COOP stations and airports, not the Climate Reference Network. For example, see this train wreck in Honolulu due to a defective ASOS station, where the biased record remains:
http://wattsupwiththat.com/2009/06/19/more-on-noaas-fubar-honolulu-record-highs-asos-debacle-plus-finding-a-long-lost-giss-station/
My view is that equipment differences, UHI, and heat sinks help elevate night-time lows. I agree though we need to get a good handle on the uncertainty too. – Anthony.

Theo Goodwin
May 13, 2011 12:30 pm

“My view is that equipment differences, UHI, and heat sinks help elevate night-time lows. I agree though we need to get a good handle on the uncertainty too. – Anthony.”
I don’t mean to foist my views on Anthony, but I do believe our reasoning is parallel. When Anthony points out the absurdity of having a measurement station close to a window air conditioner, in my terminology he is pointing to the absurdity of the assumption that the thermometers are located in some “natural environment” where all the physical regularities are understood and can be taken for granted. If climate science is to have a measurement regime that is not worthless, climate scientists must identify some aspects of the natural environment that would be important to measure and whose natural regularities are genuinely understood and can be taken as nature’s baseline. The same applies to all temperature measurements, including satellite, water-borne, whatever. Without serious agreement about the natural regularities that make up the environment in which thermometers are located, the readings from those thermometers are as worthless as the readings from Briffa’s tree rings after the yet-unexplained decline began in 1960.

Latitude
May 13, 2011 1:00 pm

My view is that equipment differences, UHI, and heat sinks help elevate night-time lows. I agree though we need to get a good handle on the uncertainty too. – Anthony.
==================================================
As you’ve pointed out, it’s the night-time lows that have been increasing, and that has been well known for decades.
And, as you’ve also pointed out, only 1 in 10 stations are decent, and those good stations are not spread evenly across the map. Once those 9 bad stations are averaged/homogenized/blended and beaten into submission with that one good station…
…what was the point of that one good station in the first place?

Publius
May 13, 2011 2:18 pm

NCDC and GISS — which share the same data source — are outliers relative to both UAH and RSS, and even CRU. RSS is a commercial outfit and has no ‘dog in the fight.’ The fact that RSS and UAH are quite close suggests that UAH has little if any bias. And CRU, despite all the ClimateGate business, is not that far off from either of the satellite data sets. There may be some upward bias in CRU but not the wholesale manipulation we see in NCDC and GISS. See Klotzbach, P. J., R. A. Pielke, Sr., R. A. Pielke, Jr., J. R. Christy, and R. T. McNider (2009), An alternative explanation for differential temperature trends at the surface and in the lower troposphere, J. Geophys. Res., 114, D21102, doi:10.1029/2009JD011841.

Harold Pierce Jr
May 13, 2011 2:39 pm

Ross McKitrick says on May 13, 2011 at 9:42 am:
“I guess Thorne et al. didn’t see our paper, but I wait with fascination to see how the IPCC gets around it. ”
Send them a reprint. That way they can’t claim they were unaware of the paper. You should have a mailing list of the important honchos and always send them reprints.
Unfortunately, postage is so expensive these days. Nevertheless, it is money well spent.

Steve McIntyre
May 13, 2011 3:01 pm

Thorne appears to have been one of the adversarial reviewers of our submission to IJC, as he emailed Jones gloating about its rejection before this was reported at CA. Thorne as reviewer did not contest our findings; Thorne thus knows of the failings of Santer et al.
Thorne and Peterson, both referred to here, both used the defamatory term “fraudit” in connection with Climate Audit. It’s rather cheeky that Peterson complains about alleged “shrill”-ness at WUWT.

May 13, 2011 3:49 pm

How did this paper pass peer-review? It includes no discussion of Ross McKitrick, Stephen McIntyre and Chad Herman’s paper,
Panel and multivariate methods for tests of trend equivalence in climate data series (PDF)
(Atmospheric Science Letters, Volume 11, Issue 4, pp. 270–277, October/December 2010)
– Ross McKitrick, Stephen McIntyre, Chad Herman

Is this a joke?

May 13, 2011 4:27 pm

I have a theory that some of the problems we are seeing with the ‘quality’ (or lack thereof) of the analysis, and therefore the final figures, are due to increased computational resources.
Basically it goes like this: in the good old days, when computers took up whole rooms and you would have to wait hours if not days for a result, running a program was actually an expensive thing on a case-by-case basis. The slowness of the overall cycle would by its nature leave you more time to think about and ensure correctness in your processing. Also, given how ‘inaccurate’ computers were then (in terms of decimal precision), you were very aware of accumulative rounding errors and the like, and designed your code to minimize them.
As time has gone on, the amount of CPU power available at a given cost point has increased dramatically, and the underlying mathematical precision has increased. So you can do more processing in the same time frame, and they do so; but there are a few problems:
1) the error rate in the underlying data cannot be ‘processed away’. In fact, as you do more processing on it, it becomes increasingly important to keep track of its impact.
2) the availability of more CPU power has, to me, made it very attractive to ‘tune’ to a target result; i.e., the fact that you can fast-cycle your processing means you can easily pick the processing steps that lead you towards the result you want (whether knowingly or subconsciously).
3) this fast cycle also means you can easily apply new ways of processing not tried before, i.e. borrow from other domains. Which can be a good or bad thing, depending on whether you fully understand the other domain.
Basically, I’m very skeptical of any ‘over’-processing of climatic data at the moment, as the amount of due diligence applied has not kept pace with the amount of processing being performed; just because you can do more processing does not make it safe to do so.
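A quick, hedged illustration of the rounding-error point above: summing one million values in single precision drifts visibly from the true total, while an error-compensated sum does not. A toy example, not a claim about any particular climate code.

```python
import math
import numpy as np

# One million additions of 0.1 in single precision vs. an error-compensated
# sum. A toy illustration of the point above: more processing cycles give
# rounding error more chances to accumulate unless the code is designed
# against it. Not a claim about any particular climate codebase.
x = np.float32(0.1)

naive = np.float32(0.0)
for _ in range(1_000_000):
    naive += x                             # each add rounds to 24-bit precision

accurate = math.fsum([0.1] * 1_000_000)    # compensated summation in double

print("naive float32 sum:", naive)         # drifts visibly from 100000
print("compensated sum:  ", accurate)      # very close to 100000
```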

Theo Goodwin
May 13, 2011 4:34 pm

Steve McIntyre says:
May 13, 2011 at 3:01 pm
“Thorne appears to have been one of the adversarial reviewers of our submission to IJC, as he emailed Jones gloating about its rejection before this was reported at CA. Thorne as reviewer did not contest our findings; Thorne thus knows of the failings of Santer et al.”
Another of Jones’ gossipy teenage clique. There was a time when men outgrew this kind of thing.
“Thorne and Peterson, both referred to here, both used the defamatory term “fraudit” in connection with Climate Audit. It’s rather cheeky that Peterson complains about alleged “shrill”-ness at WUWT.”
How kind you are, Mr. McIntyre. There is not a sentence you have written that either of them could criticize successfully.

sky
May 13, 2011 5:00 pm

Bill Illis says:
May 13, 2011 at 5:50 am
You make an astute observation about the ROC of LT vs. surface temps in the satellite era. Every region for which unbiased estimates of surface temps can be made shows them rising at a considerably GREATER rate during 1980-2000 than the LT. The boys in Asheville plainly refuse to learn about physical reality, preferring to adjust the data to match model expectations.

Gneiss
May 13, 2011 5:26 pm

No need for scare quotes around “controversy,” the UAH data have been a source of controversy among climate scientists for many years.
First, because they showed something different than other data showed, and Spencer & Christy used that difference to criticize the others. Second, because when inconsistencies in the UAH data were pointed out (by Tamino, among others), S&C ignored them. Eventually others showed that the UAH measurements were biased, and needed to be corrected. S&C finally agreed, but if they’d been more conscientious they would have admitted these problems when they first became obvious, and corrected their own data and erroneous conclusions without prompting.
Controversy about other aspects of the UAH data continues. Most recently, shifting their baseline up to make the anomalies go down, and subsequently writing about “negative anomalies,” may have impressed nonscientists but did nothing good for their reputation among scientists.

May 13, 2011 5:53 pm

Gneiss says:
“…if they’d been more conscientious they would have admitted these problems when they first became obvious, and corrected their own data and erroneous conclusions without prompting.”
Like Michael Mann et al. admitted their problems in MBH98/99, and corrected their own cherry-picked data and erroneous conclusions?
…Oh, that’s right, it’s thirteen years later and Mann still stonewalls.
Gneiss is worried about a speck in someone else’s eye, when he has a beam in his own.

May 13, 2011 6:11 pm

Gneiss, can you show me where Tamino (Grant Foster) points out to S&C that they needed to correct their diurnal drift adjustment?

rbateman
May 13, 2011 7:49 pm

How do you improve your understanding of climate forcing when you don’t yet have the climate modeled to a state where it can be forecast?
Either one or both are clearly not understood.

jorgekafkazar
May 13, 2011 8:45 pm

stan says: “The science is settled until the data doesn’t match. Then uncertainties grow and grow until they can be made to overlap.”
Well and succinctly put!
steven mosher says: “…after 4 years of arguing that the ISSUE is the UNCERTAINTY, and not the BIAS, I’ll welcome the support for my position.”
Given the AGW industry’s efforts to conceal uncertainty, you may have put your chips on the best number. But it’s not the only number, quite yet.

Roger Carr
May 13, 2011 8:47 pm

Road trip Update: Day 1 at NCDC
Posted on April 23, 2008 by Anthony Watts
I felt right at home when I walked into Dr. Bruce Baker’s office for the Climate Reference Network (CRN) …
I have never forgotten that story, Anthony, and how pleased I was to know I had donated towards it — and then how sour it became with their subsequent deplorable attitudes and actions.
Those attitudes and actions were contemptible, and I will never forget, nor forgive them.

Gary Hladik
May 13, 2011 9:54 pm

Ross McKitrick says (May 13, 2011 at 9:42 am): [snip]
Thanks for the references.

mike sphar
May 14, 2011 12:53 am

This is all part of the settled consensus I have so often heard about, I presume.

stephen richards
May 14, 2011 1:27 am

‘The science is settled’ was by all accounts then a tad ambitious! Uncertainty rules the climate as ever, and pushes the science back to maybe not infancy, more its formative years.
What about embryonic?

stephen richards
May 14, 2011 1:31 am

“It is concluded that there is no reasonable evidence of a fundamental disagreement between tropospheric temperature trends from models and observations when uncertainties in both are treated comprehensively.”
“After we have adjusted the data that needs adjusting to fit our preconceived results.”

stephen richards
May 14, 2011 1:44 am

Gneiss
Most recently, shifting their baseline up to make the anomalies go down, and subsequently writing about “negative anomalies,” may have impressed nonscientists but did nothing good for their reputation among scientists.
They brought their ‘climatic period length’ into line with the other datasets: 30 yrs. The satellite had only been operating for 29 years until recently.
Why do you trolls never ever read reports properly? Are you incapable of proofreading?

NikFromNYC
May 14, 2011 5:32 am

Thanks so much Anthony for a proper response. I’ll study it in due time. It was cathartic to get that out of me.

Gneiss
May 14, 2011 7:57 am

Poptech writes,
“Gneiss, can you show me where Tamino (Grant Foster) points out to S&C that they needed to correct their diurnal drift adjustment?”
The error in UAH shown by Tamino’s analysis involved a strong seasonal cycle in the UAH TLT, which had no physical explanation and was not present in RSS or other temperature records. Others also noticed and wrote about the problem with UAH well before the UAH team admitted or tried to correct it. Eli Rabett (Jan 26 2008) noted the increasing divergence between RSS and UAH, which seemed to occur in steps,
“The most interesting thing here … is that there was a systematically increasing difference between RSS and UAH (shown by the trend line btw 1979 and 2005), but that appears to have decreased starting in 2002 and disappeared in ~2005. ”
Atmoz took the analysis several steps farther in an Apr 21 2008 post,
“There are two main things that jumped out at me from this simple graph. …
First, the earlier years in the UAH data are warmer than the RSS data. This is the same finding as Eli found in January.
But what I found is that there doesn’t appear to be a relatively slow decrease in the difference, but instead there appears to be a jump at around 1992. I’ve highlighted this on my plot by using the colors red and blue. From eyeballing the graph, it appears that on average pre-1992 years were around 0.5C warmer than post-1992 years.
The main thing that shocked me was that for most of the post-1992 time, there is a definite interannual signal. Keep in mind that both of these datasets are supposed to be monthly anomalies. That is, they are anomalies from the monthly average. So there should be no interannual signal at all in either of the time series.”
Tamino saw this cycle in his own analysis. From Open Mind, Oct 30 2008:
“I find that the annual cycle shown in recent UAH TLT data is implausibly large, is implausibly very strong in the tropics, is implausibly larger over NH ocean than land, and is implausibly of roughly the same phase in both hemispheres. My conclusion is that the hypothesis that this cycle represents a real physical change in the annual cycle of temperature variations due to enhanced winter warming, is untenable.
My final conclusion from the previous post stands: there’s something wrong with UAH TLT data.”
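For readers following this exchange, here is a minimal sketch, on synthetic data, of what “anomalies from the monthly average” means: subtracting each calendar month’s long-term mean removes any fixed annual cycle by construction, which is why a residual annual cycle in an anomaly product, as Tamino reported for UAH TLT, points to a processing artifact.

```python
import numpy as np

# Synthetic 30-year monthly series: a fixed seasonal cycle plus noise.
rng = np.random.default_rng(1)
years, months = 30, 12
seasonal = 10.0 * np.sin(2.0 * np.pi * np.arange(months) / months)
temps = np.tile(seasonal, years) + rng.normal(0.0, 0.5, years * months)

# "Anomalies from the monthly average": subtract each calendar month's
# long-term mean. By construction this removes any fixed annual cycle.
climatology = temps.reshape(years, months).mean(axis=0)
anomalies = temps - np.tile(climatology, years)

residual_cycle = anomalies.reshape(years, months).mean(axis=0)
print("residual annual cycle by month:", np.round(residual_cycle, 6))  # ~zeros
```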

Gneiss
May 14, 2011 9:15 am

stephen richards writes,
“Why do you trolls never ever read reports properly? Are you incapable of proofreading?”
3 false assumptions:
1. I’m not a troll; I mean everything I write and state the facts as accurately as I can.
2. I did read the report and found the stated rationale unconvincing.
3. I’m capable of proofreading, and also of thinking, analyzing data, and looking up research for myself.
But away from stephen richards’s name-calling and back to the substance … I wrote that scientists were unimpressed by UAH shifting the baseline for their anomalies because that step, by itself, makes no statistical difference. Trends will remain the same. Its main impact, however, was to shift the UAH anomalies downwards, so they would appear to be lower and have a better chance of sometimes becoming negative. That can fool the many people who don’t understand anomalies. So when I read about the changeover I wondered if that might be why they did it. Sure enough, a few months later we see headlines that the UAH anomalies had “gone negative.”
REPLY: And the baseline argument in reverse is true for GISS and their 1951-1980 base period…it makes their anomalies appear higher. But you never complain about that. GISS graphs and “highest ever” pronouncements from their anomalies get probably 100x the press UAH gets. But it’s all good, because Dr. James Hansen is a “bias free” scientist /sarc – Anthony

Gneiss
May 14, 2011 10:11 am

Anthony writes,
“And the baseline argument in reverse is true for GISS and their 1951-1980 base period…it makes their anomalies appear higher. But you never complain about that. GISS graphs and “highest ever” pronouncements from their anomalies get probably 100x the press UAH gets. But it’s all good, because Dr. James Hansen is a “bias free” scientist /sarc – Anthony”
Baselines don’t work that way. A record high anomaly would be a record high regardless of baseline chosen. 1951-1980 was the standard “period of climatology” in wide use when GISTEMP started, and they’ve kept that to maintain data comparability. Scientists know it would make no difference to the trends or extremes if they changed baselines now, but shifting to an earlier, colder period to make the recent anomalies seem “more positive” for public consumption would be dishonest. They won’t do it.
Shifting UAH to a later, warmer baseline likewise makes no difference to the trend or extremes. It just allows the anomalies to sound lower in public announcements.
REPLY: You are putting words in my mouth. I never said anything about trends. Note the /sarc tag. My point, like yours, is about public consumption and choice of baseline. What’s good for the goose is good for the gander. GISS is using an outdated standard “period of climatology”; even NOAA/NCDC is moving to a modern one this year. It will be interesting to see. – Anthony
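The one point both sides of this exchange agree on can be shown in a few lines: re-baselining shifts every anomaly by a constant, so trends (and record rankings) are unchanged; only the quoted numbers look higher or lower. Synthetic data, for illustration only.

```python
import numpy as np

# Synthetic annual series with a small warming trend, for illustration only.
rng = np.random.default_rng(7)
years = np.arange(1950, 2011)
temps = 14.0 + 0.01 * (years - 1950) + rng.normal(0.0, 0.1, len(years))

base_a = temps[(years >= 1951) & (years <= 1980)].mean()  # earlier, cooler base
base_b = temps[(years >= 1981) & (years <= 2010)].mean()  # later, warmer base

anom_a = temps - base_a
anom_b = temps - base_b

# The two anomaly series differ by a constant offset; trends are identical.
print("constant offset:", round(float((anom_a - anom_b).mean()), 4))
print("trend, baseline A:", round(np.polyfit(years, anom_a, 1)[0], 5), "C/yr")
print("trend, baseline B:", round(np.polyfit(years, anom_b, 1)[0], 5), "C/yr")
```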

Matt G
May 14, 2011 3:34 pm

Based on the surface versus hot spot altitude comparison, the warming observed at the surface is around 2.3 times greater than should be expected via greenhouse gas theory. This implies that either the theory is wrong or the surface record has an error that much larger than should be observed with no other sources. Therefore, divide the observed warming by 2.3 to get the true theoretical CO2 temperature contribution towards climate trends. This would represent a global rise of only 0.07c per decade at most, not even taking natural variability into account.
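One reading of the arithmetic above, spelled out as a sketch. The 1.272 and 0.55 ratios come from the Figure 7 discussion earlier in the thread; the 0.16 C/decade observed surface trend is an assumed illustrative input, and the inference is the commenter’s, not the paper’s.

```python
# One reading of the arithmetic above, spelled out. The 1.272 and 0.55
# ratios are from the Figure 7 discussion earlier in this thread; the
# 0.16 C/decade surface trend is an assumed, illustrative input. The
# inference itself is the commenter's, not the paper's.
model_ratio = 1.272      # predicted tropical amplification (models)
observed_ratio = 0.55    # observed tropical amplification (per Bill Illis)

mismatch = model_ratio / observed_ratio       # ~2.3x
surface_trend = 0.16                          # C/decade, assumed observed
implied = surface_trend / mismatch            # ~0.07 C/decade

print(f"mismatch factor: {mismatch:.2f}")
print(f"implied GHG-only surface trend: {implied:.2f} C/decade")
```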

May 14, 2011 7:47 pm

The report reads like new, new bafflegab, trying to delete the Cold Spot and imply that this means the null-hypothetical Hot Spot is real after all. Another effort to tangle the web and continue weaving.