Guest post by Willis Eschenbach
Inspired by this thread on the lack of data in the Arctic Ocean, I looked into how GISS creates data when there is no data.
GISS is the Goddard Institute for Space Studies, a part of NASA. The Director of GISS is Dr. James Hansen. Dr. Hansen is an impartial scientist who thinks people who don’t believe in his apocalyptic visions of the future should be put on trial for “high crimes against humanity”. GISS produces a surface temperature record called GISTEMP. Here is their record of the temperature anomaly for Dec-Jan-Feb 2010:
Figure 1. GISS temperature anomalies DJF 2010. Grey areas are where there is no temperature data.
Now, what’s wrong with this picture?
The oddity about the picture is that we are given temperature data where none exists. We have very little temperature data for the Arctic Ocean, for example. Yet the GISS map shows radical heating in the Arctic Ocean. How do they do that?
The procedure is one that is laid out in a 1987 paper by Hansen and Lebedeff. In that paper, they note that annual temperature changes are well correlated over large distances, out to 1200 kilometres (~750 miles).
(“Correlation” is a mathematical measure of the similarity of two datasets. Its value ranges from zero, meaning not similar at all, to plus or minus one, indicating totally similar. A negative value means they are similar, but when one goes up the other goes down.)
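For readers who want to follow along at home, the correlation being described is the ordinary Pearson correlation coefficient. Here is a minimal sketch in Python (the function name is my own):

```python
import numpy as np

def pearson_r(x, y):
    # Pearson correlation: +1 = move together, -1 = move oppositely, ~0 = unrelated
    return float(np.corrcoef(x, y)[0, 1])

up = [1.0, 2.0, 3.0, 4.0]
down = [4.0, 3.0, 2.0, 1.0]
print(round(pearson_r(up, up), 6))    # 1.0 (totally similar)
print(round(pearson_r(up, down), 6))  # -1.0 (similar, but one rises as the other falls)
```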
Based on Hansen and Lebedeff’s finding of a good correlation (+0.5 or greater) out to 1200 km from a given temperature station, GISS shows us the presumed temperature trends within 1200 km of the coastline stations and 1200 km of the island stations. Areas outside of this are shown in gray. This 1200 km radius allows them to show the “temperature trend” of the entire Arctic Ocean, as shown in Figure 1. This gets around the problem of the very poor coverage in the Arctic Ocean. Here is a small part of the problem, the coverage of the section of the Arctic Ocean north of 80° North:
Figure 2. Temperature stations around 80° north. Circles around the stations are 250 km (~ 150 miles) in diameter. Note that the circle at 80°N is about 1200 km in radius, the size out to which Hansen says we can extrapolate temperature trends.
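To get a feel for how far 1200 km reaches at those latitudes, here is a quick great-circle calculation in Python (the coordinates are illustrative and the function name is mine; GISTEMP’s actual gridding code is of course more involved):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # great-circle distance in km, assuming a spherical Earth of radius 6371 km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))

# A single station at 80°N is close enough to "cover" the North Pole under the 1200 km rule:
d = haversine_km(80.0, 0.0, 90.0, 0.0)
print(round(d))  # ~1112 km, inside the 1200 km radius
```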
Can we really assume that a single station could be representative of such a large area? Look at Fig. 1: despite the lack of data, trends are given for all of the Arctic Ocean. Here is a bigger view, showing the entire Arctic Ocean.
Figure 3. Temperature stations around the Arctic Ocean. Circles around the stations are 250 km (~ 150 miles) in diameter. Note that the area north of 80°N (yellow circle) is about three times the land area of the state of Alaska.
What Drs. Hansen and Lebedeff didn’t notice in 1987, and no one seems to have noticed since then, is that there is a big problem with their finding about the correlation of widely separated stations. This is shown by the following graph:
Figure 4. Five pseudo temperature records. Note the differences in the shapes of the records, and the differences in the trends of the records.
Curiously, these pseudo temperature records, despite their obvious differences, are all very similar in one way — correlation. The correlation between each pseudo temperature record and every other pseudo temperature record is above 0.90.
Figure 5. Correlation between the pseudo temperature datasets shown in Fig. 4
The inescapable conclusion from this is that high correlations between datasets do not mean that their trends are similar.
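This is easy to demonstrate numerically. In the sketch below (my own made-up series, not the Figure 4 data), two 50-year records share the same year-to-year wiggles but have very different imposed trends, and their correlation is still high:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(50)
wiggles = rng.normal(0.0, 1.0, size=50)  # shared year-to-year variability

a = wiggles + 0.02 * years  # imposed trend: 1 degree per 50 years
b = wiggles + 0.06 * years  # imposed trend: 3 degrees per 50 years

r = np.corrcoef(a, b)[0, 1]
trend_a = np.polyfit(years, a, 1)[0] * 50  # fitted degrees per 50 years
trend_b = np.polyfit(years, b, 1)[0] * 50
# r comes out high (roughly 0.9), yet the fitted trends differ by 2 degrees by construction
```

The shared wiggles drive the correlation; the trends are free to disagree, which is the whole point.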
OK, I can hear you thinking, “Yeah, right, for some imaginary short 20-year pseudo temperature datasets you can find some wild data that will have different trends. But what about real 50-year-long temperature datasets like Hansen and Lebedeff used?”
Glad you asked … here are nineteen 50-year-long temperature datasets from Alaska. All of them have a correlation with Anchorage greater than 0.5 (max 0.94, min 0.51, avg 0.75). All are within about 500 miles of Anchorage. Figure 6 shows their trends:
Figure 6. Temperature trends of Alaskan stations. Photo is of Pioneer Park, Fairbanks.
As you can see, the trends range from about one degree in fifty years to nearly three degrees in fifty years. Despite this huge ~ 300% range in trends, all of them have a good correlation (greater than +0.5) with Anchorage. This clearly shows that good correlation between temperature datasets means nothing about their corresponding trends.
Finally, as far as I know, this extrapolation procedure is unique to James Hansen and GISTEMP. It is not used by the other creators of global or regional datasets, such as CRU, NCDC, or USHCN. As Kevin Trenberth stated in the CRU emails regarding the discrepancy between GISTEMP and the other datasets (emphasis mine):
My understanding is that the biggest source of this discrepancy [between global temperature datasets] is the way the Arctic is analyzed. We know that the sea ice was at record low values, 22% lower than the previous low in 2005. Some sea temperatures and air temperatures were as much as 7C above normal. But most places there is no conventional data. In NASA [GISTEMP] they extrapolate and build in the high temperatures in the Arctic. In the other records they do not. They use only the data available and the rest is missing.
No data available? No problem, just build in some high temperatures …
Conclusion?
Hansen and Lebedeff were correct that the annual temperature datasets of widely separated temperature stations tend to be well correlated. However, they were incorrect in thinking that this applies to the trends of the well correlated temperature datasets. Their trends may not be similar at all. As a result, extrapolating trends out to 1200 km from a given temperature station is an invalid procedure which does not have any mathematical foundation.
[Update 1] Fred N. pointed out below that GISS shows a polar view of the same data. Note the claimed coverage of the entirety of the Arctic Ocean. Thanks.
[Update 2] JAE pointed out below that Figure 1 did not show trends, but anomalies. boballab pointed me to the map of the actual trends. My thanks to both. Here’s the relevant map:
Interesting that the cold anomalies link the developed countries and the warmest anomalies link the places where either nobody lives or primitive peoples live.
This should be reasonably easy to prove.
It would be a publishable paper and could cause Hansen to abandon the 1200 km smoothing.
Hansen and Lebedeff 1987 did look at the issue and showed the correlation falls off to around 0.5 at 1200 km for the Arctic (and around 0.4 to 0.3 for the southern hemisphere).
But they were using data only up to 1985. With more years to look at and the idea that longer-term trends should also be looked at more closely, one could show the 1200 km smoothing algorithm is not “robust” (especially in the time period after 1985 when the records were “adjusted” more).
It seems to me that the ChiefIO, Anthony, Willis, and Pielke Sr. could write one hell of a paper on the surface instrumental temp record.
Willis – which Canadian Arctic station(s) were used in the GISS analysis?
May I ask a question about the baseline used in the GISS anomaly graphs?
Several commenters have mentioned that the selected period is unusually cool, rendering most comparisons “warming.”
We stammering idiots working in business management usually undertake sensitivity analyses before relying on the outputs of models to make consequential decisions. One of the ways we would check the robustness of the warming conclusion is to randomly vary the baseline period (length and window), and see how often the same result is reached.
My question: has anyone actually done this? If so, where are the results reported? If not, why not?
Thanks for any assistance.
@anu: I am curious to hear more about the point you may be making about tobacco companies compared to fossil fuel companies. Where should I go to see this articulated in some detail? [just leave a comment at my site]
Thanks
Dave Springer (16:46:32) :
“I think Pielke is right on the money in this article:
http://wattsupwiththat.com/2009/08/21/soot-and-the-arctic-ice-%E2%80%93-a-win-win-policy-based-on-chinese-coal-fired-power-plants%E2%80%9D/
No source of black soot can make it to the south pole which handily explains why the antarctic interior isn’t warming.”
An easier explanation: the runways for the airports in the Antarctic are either ice or snow.
The runways for the airports in Greenland and Alaska are generally paved and plowed.
Willis Eschenbach (17:40:17) :
“Missing. Not “average global anomaly”. Missing.”
What does it look like if you plot the Arctic temps using a smaller grid (250 km, say), leaving the missing areas grey?
(Sorry if I missed something here).
I looked into how GISS creates data when there is no data
Perhaps this surprises many people, but anybody who has worked in a state-owned institution or corporation knows this happens all the time. It is common practice where there are no personal responsibilities and wages/salaries are assigned without any relation to productivity or proficiency.
This has happened all over the world, wherever and whenever a state-owned corporation has operated. All, without exception, go broke. It isn’t the people, it’s the system: where there is no personal, individual commitment, things don’t go anywhere.
If you want a nearby example: just compare the amount spent on the research and sub-orbital flight of the X Prize-winning ship with the smallest research project at NASA. I am not American and I do not write from the US, but I can bet you can find, as an example, thousand-dollar half-inch screw bolts.
So no wonder Mr. “Coal Trains” Hansen invents data. I am sure he does not feel any remorse about it. Just dig all over the world and you will find thousands of examples. And it has nothing to do with ideology; it’s just reality.
(“Correlation” is a mathematical measure of the similarity of two datasets
That’s the same as throwing two decks of cards on a table and looking for coincidences between them. That was called SYNCHRONICITY by the psychiatrist C. G. Jung
http://en.wikipedia.org/wiki/Synchronicity
and it was VERY POPULAR among NEW AGE fans in the 1960s.
JAE (08:54:38) :
If you plot the Arctic at 250 km, you can see that GISS has almost no coverage there.
http://data.giss.nasa.gov/cgi-bin/gistemp/do_nmap.py?year_last=2010&month_last=2&sat=4&sst=0&type=anoms&mean_gen=02&year1=2010&year2=2010&base1=1951&base2=1980&radius=250&pol=reg
Willis,
Thank you for continuing to explain to people that Anomalies are not the enemy:
“In other words, an anomaly is just like a regular temperature, it just has a different baseline than the triple point of water or than absolute zero. This does not make it somehow second rate or not useful.”
Willis Eschenbach (02:07:17) :
Andrew P. (18:37:01)
Why can’t [they use] the data from the IABP buoys? http://psc.apl.washington.edu/northpole/ There’s always one not far from the pole every time I look…
Very good question, I fear I have no answer. Seems like a natural to me. Anyone?”
Willis,
I know Gavin has commented on this at RC. Somebody would have to look for the precise comment; I can’t recall the thread. It should be in one of his replies to somebody’s comment.
Re: Willis Eschenbach (Mar 26 03:22),
We use anomalies all of the time, without noticing it. Why? Because the base of the anomalies is zero, which seems “natural” to us. Consider two temperatures, say 20° Celsius (C), and 293.15 Kelvins (K). In fact, these two are exactly the same temperature.
The only difference between the two is that they are anomalies from a different baseline. The Kelvin scale has a baseline of absolute zero, and the Celsius scale has a baseline of 273.15K (the freezing point of water).
Wrong example.
All temperature scales differ by specific mathematical formulas, because they all depend on hard definitions: the boiling point of water at 1 atmosphere, and the ice/water temperature at the triple point, again at 1 atmosphere. It does not matter what sort of scale you use; it is a physical measure, and each measurement can be derived from the others.
Anomalies as used by by climatologists work this way:
Each thermometer in a different location gives a temperature; a time interval is defined, and for each thermometer the average over that interval is taken as a basis; the time variation of the temperature minus this average is called an anomaly.
The hypothesis is that, given one thermometer on top of a mountain, another in the valley, and the next by the sea, the temperatures will be different, but the variations from the average will be correlated and the same.
This presupposes that there is only one heat source creating these temperatures. When there is more than one heat source/sink, as is the case with climate, where there is convection, evaporation, precipitation, etc., heat/energy can be going every which way, and the thermometers will each be following a different drummer.
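The anomaly recipe described above can be sketched in a few lines of Python (the station readings are invented, and the function name is mine):

```python
import numpy as np

def anomaly(temps, base_slice):
    # subtract each station's own baseline-period mean from its record
    temps = np.asarray(temps, dtype=float)
    return temps - temps[base_slice].mean()

mountain = [5.0, 6.0, 4.0, 7.0]     # cold station
valley = [15.0, 16.0, 14.0, 17.0]   # warm station with the same variations
base = slice(0, 4)

print(anomaly(mountain, base))  # [-0.5  0.5 -1.5  1.5]
print(anomaly(valley, base))    # [-0.5  0.5 -1.5  1.5]  identical despite a 10-degree offset
```

This is the single-heat-source case: both stations march to the same drummer, so their anomalies coincide exactly.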
I have given the following gedanken experiment. Tell me where it is wrong:
Take an insulated room and put a heat source in one corner with time varying output, and several thermometers all over the place. Take the first ten measurements for each thermometer and call it the average. From then on compute anomalies. Yes, they will be correlated reflecting the variations of the source.
Then put a randomly moving heat source in the room (on one of those robots that change direction when they hit a wall) and do the same, i.e. take the first ten measurements of each thermometer as the average and use that average from then on to compute an anomaly.
My claim is that the anomaly of these thermometers will not be correlated with either source in a detectable form. They will be plus or minus depending on the vicinity of the moving heat source.
This is the situation in the atmosphere and the oceans, where huge masses of water and air are moved around carrying energy and modifying with various mechanisms the heat content of the atmosphere where the anomalies are measured at 2 meters.
Tell me where the gedanken experiment is wrong.
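Out of curiosity, here is a toy simulation of the two cases. The physics is my own gross simplification (reading proportional to source strength divided by 1 + squared distance), so treat it as a sketch of the thought experiment, not a model of a real room:

```python
import numpy as np

rng = np.random.default_rng(1)
n_steps = 500
# four thermometers at fixed positions in a 10x10 "room"
sensors = np.array([[1.0, 1.0], [9.0, 1.0], [1.0, 9.0], [9.0, 9.0]])
# time-varying source strength (the drummer every thermometer should follow)
output = 1.0 + 0.5 * np.sin(np.linspace(0.0, 20.0, n_steps))

def readings(source_pos):
    # toy physics: reading ~ source strength / (1 + squared distance to source)
    d2 = ((sensors[:, None, :] - source_pos[None, :, :]) ** 2).sum(axis=2)
    return output[None, :] / (1.0 + d2)

# Case 1: source parked in one corner
fixed = np.tile([[2.0, 2.0]], (n_steps, 1))
r_fixed = np.corrcoef(readings(fixed))[0, 1]

# Case 2: source wandering randomly around the room (random walk, wrapped at the walls)
walk = np.cumsum(rng.normal(0.0, 0.3, (n_steps, 2)), axis=0) % 10.0
r_moving = np.corrcoef(readings(walk))[0, 1]
```

With the fixed source, every sensor reads the same signal scaled by a constant, so the correlation between sensors is essentially 1. With the wandering source, distance swamps the source’s own variation and the correlation between the two corner sensors drops sharply, just as the gedanken experiment predicts.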
In other words, an anomaly is just like a regular temperature, it just has a different baseline than the triple point of water or than absolute zero. This does not make it somehow second rate or not useful.
No, it is not. It is taken over a varying baseline and when there are more than one heat sources/sinks, it makes no physical sense.
JAE (08:54:38) :
What does it look like if you plot the Arctic temps using a smaller grid (250 km, say), leaving the missing areas grey?
http://data.giss.nasa.gov/cgi-bin/gistemp/do_nmap.py?year_last=2010&month_last=2&sat=4&sst=0&type=anoms&mean_gen=1203&year1=2010&year2=2010&base1=1951&base2=1980&radius=250&pol=pol
There ya go
Willis, I wonder if Arctic UHI effects have been included in these calculations. Per the Barrow, Alaska study, an admittedly rural town (pop. 4,600) showed up with an average 3°C effect within a 150 km circle. http://www.geography.uc.edu/~kenhinke/uhi/
Seems the effect was mostly in the winter, corresponding to power/gas inputs to the town. How many of the other sites could have the same problem?
The situation vis-à-vis the Arctic temp history is far worse than some have depicted, what with extrapolations to 1500 km.
The extrapolations, as bad as they are, are based on raw station data that commonly sports great gaps of multiple years/months.
It’s an Empire State Theory built on a mudflat with sinkholes.
Let me make it perfectly clear: The sub-Arctic land station data is bad enough as it is.
Well, a lot of babies are born in the springtime, which in the case of humans is nine months after the winter, plus or minus a few months. It’s actually the same for other species, but who cares.
Since winters tend to be colder than spring, does this prove that cold causes pregnancy?
I would commend to the reader’s attention a classic “Far Side” cartoon by Gary Larson, which I can’t show you here because it is copyrighted; there is NO legal way for anyone to show, use, or display ANY Far Side cartoon, though you can buy any of them for your own personal use. So ANY public display of a Far Side cartoon is prima facie evidence of copyright infringement.
The panel in question is officially titled “What’s in my Front Yard?” The lady on the phone is asking her neighbor across the street to describe what is in her front yard. She can’t determine that herself, because her entire living-room front window is completely blocked by a ginormous eye.
To me that cartoon panel is a perfect demonstration of violation of the Nyquist Sampling Theorem. It’s the same principle as the blind man feeling around an elephant, and trying to determine what it is; having never seen or even heard of such a creature.
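For the signal-processing-minded, the Nyquist point is easy to demonstrate: sample a sine too sparsely and it becomes indistinguishable from a much slower sine. A Python illustration (the frequencies are arbitrary choices of mine):

```python
import numpy as np

fs = 10.0                         # sampling rate: 10 samples per second
t = np.arange(0.0, 2.0, 1 / fs)
fast = np.sin(2 * np.pi * 9 * t)  # a 9 Hz signal (Nyquist would demand > 18 Hz sampling)
slow = np.sin(2 * np.pi * 1 * t)  # a 1 Hz signal

# At these sample times the 9 Hz wave is the exact mirror image of the 1 Hz wave:
print(np.allclose(fast, -slow))  # True
```

A sparse thermometer network faces the analogous hazard in space: the samples cannot distinguish the true field from a smoother impostor.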
Comparing one thermometer with itself at some different time, isn’t going to yield an accurate value for the mean global temperature of the earth.
When was the last time you actually saw, on ANY official weather station report, the total surface area to which that specific location’s temperature or anomaly actually applies?
You can’t get a global average from a set of data points, none of which have an attached area element that applies to that temperature (or anomaly).
And even if you could determine the correct global mean surface temperature (you can’t), it has absolutely no scientific meaning at all, since thermal processes all over the world are not uniquely characterized by the local temperature.
Arid tropical deserts react differently from ocean, or from boreal forests.
So mean global surface (or lower troposphere) temperature has about as much scientific meaning as the mean of all the telephone numbers in the Manhattan phone directory; namely, none at all. Unless of course it happens to be your phone number, in which case you will get a lot of calls.
Well, it’s like tornado frequency; hardly relevant, unless it happens to be your house that gets hit.
Quiz: if we have three different summers:
Summer 1: avg. max. temp = 27°C
Summer 2: avg. max. temp = 30°C
Summer 3: avg. max. temp = 38°C
Which one was the hottest?
It was the first one, as its avg. minimum was 22°C.
So temperatures mean nothing.
Still no comment from E.M. Smith?
I checked The Chiefio’s site, he has a post dated yesterday showing his excellent work doing a detailed regional analysis of the temperature record and showing the interesting revealed trends. Brilliant stuff that should well be highlighted here on WUWT (IMHO). He’s currently waist-deep in it and appears very busy finishing it up.
Could someone please stop ’round his place and make sure he’s alright? With all the great work he’s currently doing, I’m worried he might have collapsed from exhaustion!
anna v (10:02:12)
Aaaah, you have found my secret weakness, I love thought experiments.
We’re not trying to determine the correlation of the temperature in some town in Colorado with a moving bunch of warm air caused by an El Nino. We can’t do that without additional information, specifically, the distance of the heat source from the thermometer.
Instead, we’re (generally) interested in the temperature in that town, and how it has varied over time (trend, gaussian average, difference with other local town temperatures, correlation with a local proxy, etc.).
Even in your example, there is no difference between using each thermometer’s average of the moving heat source as a baseline, and using the freezing point of water (or absolute zero) as your baseline. You do not lose or gain any information by changing the baseline. A temperature in K is just as informative as a temperature in C. There is no difference in the information content of
f(x)
and
f(x) - 3
Suppose, for example, two of your thermometers in your thought experiment measured in K, and two of them measured in C … despite the fact that they have different baselines, would you have less information than if they all measured in C?
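This point is easy to verify numerically: adding a constant to a series (switching from °C to K, say) changes neither its correlations nor its trend. A quick check with invented numbers:

```python
import numpy as np

celsius = np.array([20.0, 21.5, 19.0, 22.0])
kelvin = celsius + 273.15  # the same measurements on a shifted baseline
other = np.array([10.0, 11.0, 9.5, 12.0])

r_c = np.corrcoef(celsius, other)[0, 1]
r_k = np.corrcoef(kelvin, other)[0, 1]
slope_c = np.polyfit(np.arange(4), celsius, 1)[0]
slope_k = np.polyfit(np.arange(4), kelvin, 1)[0]
print(abs(r_c - r_k) < 1e-9, abs(slope_c - slope_k) < 1e-6)  # True True
```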
George E. Smith (10:51:53) :
Where I live, the fall (autumn) comes nine months after the winter, and spring comes either three or fifteen months after the winter … where do you live?
Willis Eschenbach, excellent work.
If not an outright debunking of Hansen’s assumptions, it comes very close.
It would seem that you can’t take any of the pro-AGW scientists’ work at face value — verify all work product — because, more likely than not, it will be questionable, whether in methodology, assumptions, or inferences.
DD More (10:20:11)
UHI effects are definitely present in Arctic temperature records. In particular, the colder the temperatures, the more effect there is from the burning of fuel to heat houses and other structures. If the average temperature is 70°F, keeping your house at 70°F doesn’t affect the temperature. But if the average temperature is -10°F, the heat from houses, buildings, and things like car exhausts can be a significant effect.
Jimbo (18:15:21) :
“Quick sip of Champagne and back to work!”
Cannot say I agree, Jimbo. I would love to, but here in Norway it works a bit differently.
They have already implemented CO2 taxation. And in Norway, if a tax is implemented, it will never go away again.
People just live on. The government grows and grows. New socio-xxx “students” are pouring out of the universities every year. Post-normal types. And they need work. In government.
And the more of them you get, the more votes for a larger government. An evil spiral, downwards.
And we have to pay tax to feed these types.