By Steve Goddard
We are all familiar with the GISS graph below, showing how the world has warmed since 1880.
http://data.giss.nasa.gov/gistemp/graphs/Fig.A2.lrg.gif
The GISS map below shows the geographic details of how they believe the planet has warmed. It uses 1200 km smoothing, a technique which allows them to generate data where they have none – based on the idea that temperatures don’t vary much over 1200 km. It seems “reasonable enough” to use the Monaco weather forecast to make picnic plans in Birmingham, England. Similarly we could assume that the weather and climate in Portland, Oregon can be inferred from that of Death Valley.
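For readers curious what 1200 km smoothing amounts to mechanically: the GISS scheme (Hansen & Lebedeff, 1987) weights each station’s anomaly linearly down to zero at the smoothing radius. The sketch below illustrates that idea only — the station distances and anomaly values are invented, and this is not the actual GISTEMP code.

```python
# Sketch of radius-limited, linearly distance-weighted anomaly smoothing,
# in the spirit of the GISS 1200 km scheme (Hansen & Lebedeff 1987).
# Station distances and anomaly values here are hypothetical.

def smoothed_anomaly(dists_km, anomalies, radius_km=1200.0):
    """Weighted mean of station anomalies; weight falls linearly
    from 1 at distance 0 to 0 at the smoothing radius."""
    num = den = 0.0
    for d, a in zip(dists_km, anomalies):
        w = max(0.0, 1.0 - d / radius_km)
        num += w * a
        den += w
    return num / den if den > 0 else None  # None: no station in range

# Hypothetical grid point with stations 100, 800 and 1500 km away:
print(smoothed_anomaly([100, 800, 1500], [0.5, 2.0, -1.0]))  # 1500 km station ignored
print(smoothed_anomaly([1300, 1500], [0.5, 2.0], radius_km=250))  # None: no data
```

With a 250 km radius the distant stations drop out and many grid cells end up with no data at all — the grey areas in the maps below.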
The map below uses 250 km smoothing, which allows us to see a little better where they actually have trend data from 1880-2009.
I took the two maps above, projected them on to a sphere representing the earth, and made them blink back and forth between 250 km and 1200 km smoothing. The Arctic is particularly impressive. GISS has determined that the Arctic is warming rapidly across vast distances where they have no 250 km data (pink.)
A way to prove there’s no data in the region for yourself is by using the GISTEMP Map locator at http://data.giss.nasa.gov/gistemp/station_data/
If we choose 90N 0E (North Pole) as the center point for finding nearby stations:
We find that the closest station to the North Pole is Alert, NWT, 834 km (518 miles) away. That’s about the distance from Montreal to Washington, DC. Is temperature data from Montreal valid for Washington, DC?
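That 834 km figure is easy to check with the haversine great-circle formula; Alert’s coordinates used below (roughly 82.5 N, 62.3 W) are approximate.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2, r_earth_km=6371.0):
    """Great-circle distance between two lat/lon points, in km."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r_earth_km * math.asin(math.sqrt(a))

# North Pole to Alert, NWT (approx. 82.5 N, 62.3 W)
print(round(haversine_km(90.0, 0.0, 82.5, -62.3)))  # 834
```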
Even worse, there’s no data in GISTEMP for Alert NWT since 1991. Funny though, you can get current data right now, today, from Weather Underground, right here. WUWT?
Here’s the METAR report for Alert, NWT from today
METAR CYLT 261900Z 31007KT 10SM OVC020 01/M00 A2967 RMK ST8 LAST OBS/NEXT 270600 UTC SLP051
The next closest GISTEMP station is Nord, ADS at 935 km (580 miles) away.
Most Arctic stations used in GISTEMP are 1000 km (621 miles) or more away from the North Pole. That is about the distance from Chicago to Atlanta. Again would you use climate records from Atlanta to gauge what is happening in Chicago?
Note the area between Svalbard and the North Pole in the globe below. There is no data in the 250 km 1880-2009 trend map indicating that region has warmed significantly, yet GISS 1200 km 1880-2009 has it warming 2-4° C. Same story for northern Greenland, the Beaufort Sea, etc. There are a lot of holes in the polar data that have been interpolated over.
The GISS Arctic (non) data has been widely misinterpreted. Below is a good example:
Monitoring Greenland’s melting
The ten warmest years since 1880 have all taken place within the 12-year period of 1997–2008, according to the NASA Goddard Institute for Space Studies (GISS) surface temperature analysis. The Arctic has been subject to exceptionally warm conditions and is showing an extraordinary response to increasing temperatures. The changes in polar ice have the potential to profoundly affect Earth’s climate; in 2007, sea-ice extent reached a historical minimum, as a consequence of warm and clear sky conditions.
If we look at the only two long-term stations which GISS does have in Greenland, it becomes clear that there has been nothing extraordinary or record breaking about the last 12 years (other than one probably errant data point.) The 1930s were warmer in Greenland.
Similarly, GISS has essentially no 250 km 1880-2009 data in the interior of Africa, yet has managed to generate a detailed profile across the entire continent for that same time period. In the process of doing this, they “disappeared” a cold spot in what is now Zimbabwe.
Same story for Asia.
Same story for South America. Note how they moved a cold area from Argentina to Bolivia, and created an imaginary hot spot in Brazil.
Pay no attention to that man behind the curtain.
Hi Steve,
I am in Portland, Oregon. It is now 70 F and at the time of your reply it was about 68 F at my house. Thank you for your clarification regarding ANOMALIES vs TEMPS.
Scott Ramsdell says:
July 26, 2010 at 9:39 pm
How do such systemic problems with data manipulation/insertion persist across so many administrations? The heads of NOAA and NASA are political appointees… It’s incredible to me that the status quo has existed for so long.
BINGO
because there is no difference between administrations. people need to wake up and realise there is only one party in this country: the debt and war party.
as long as folks continue to vote repuglicrat there will be only the continuing slide towards totalitarianism in this country.
Bob Tisdale says:
Nope. The 250km and 1200km smoothing has already been performed before the trends are analysed.
So I’m missing how they fill in the missing data. Ok, I think I see why they don’t use trends. Back to the drawing board:
Grid X has less than 66% data, so it reaches out 250km or 1200km looking for close by grid squares with greater than 66% data. Finds 1 or 2 (whatever) and performs an algorithm to fill in the missing data? Or more likely the algorithm doesn’t fill in individual datum, but just calculates an anomaly?
I will duck as is appropriate.
When you deal with anomalies, they are anomalous to a given baseline (normal, average, median) over a SELECTED time period. (As nice as they may be for modelling and other considerations.)
At least temperatures are absolutes that do not depend on a specific period of reference.
Zeke Hausfather wrote,
“If GISTemp (and others) are doing ‘all these manipulations and artificially bump temperatures up a few tenths of degree’, please suggest a way to calculate global temperature using the various raw station datasets available that you think would be ideal.”
There is no need to calculate a global temperature, nor any method for doing so. What we really wish to know is whether there is a global temperature trend, and its slope. We can do that via sampling — compute trends for those stations which have unbroken records of sufficient length. There is sufficient data for that in N America, parts of W Europe, Japan, Eastern Australia, and a few other places. If trends for those well-recorded areas agree closely, we can be confident there is a global trend.
(Appreciated your temp reconstruction, Zeke, and the others also. Good work. Now someone needs to construct some trends as above, from original (non-adjusted) CLIMAT and similar records).
frank says:
July 27, 2010 at 10:45 am
David Jay says: July 26, 2010 at 6:45 pm
I understand the loss of the cool spot in Africa, averaging (smearing?) should move temperatures away from extremes. However, the hot spot in Brazil is a winner. I want to hear an explanation of that methodology!
…. The question is will Steve Goddard present data for any period besides 1880-2010 so that we can see if the mysterious red color arises from real data or extrapolated data.
______________________________________________________________________
Steve does not have to, because ChiefIO already has:
http://chiefio.wordpress.com/2009/11/16/ghcn-south-america-andes-what-andes/
http://chiefio.wordpress.com/2010/01/08/ghcn-gistemp-interactions-the-bolivia-effect/
Steven Mosher says:
July 27, 2010 at 10:07 am
you should meet mr stratosphere. I don’t think you understand; using a % tells you very little. CO2 exists throughout the atmospheric column, as does H2O, at varying concentrations. Go say hello to mr stratosphere. Then go look at “line broadening”.
Then come back. Or go study an LBL radiative transfer model. Or, if you’re an engineer who has worked with radiative physics, just run the code you use every day to design sensors or missiles or airplanes that have IR stealth. We all know how important CO2 is to the transfer of radiation in the real world. Ya, it warms the planet. Definitely doesn’t COOL the planet. Warms it. The question is how much. Read your Lindzen, Spencer, Christy, Willis, McIntyre, Monckton, etc. etc. Yup. Warms the planet. How much? That’s the real question.
~~~~~
Please don’t bring up the “authority” bit, Stephen. I’m aware of broadening. I respect all of those men you list, as I do you, but I don’t have to agree with them on every statement they make, and I don’t. Dr. Spencer and Dr. Christy do some great work. Lindzen has his head planted firmly in reality and I’m thankful. Dr. Spencer’s work on population density is one of the best indications I have seen of what has happened and of its scale. I said nothing of COOLING. Do I rule out cooling, overridden by the sun’s increase over the decades? No — not in a bottle, but in the atmosphere system. Cool the stratosphere, yes, it seems it does, lacking much water vapor there. Cool or warm the troposphere, I haven’t seen anything conclusive either way. CO2 can absorb and radiate in bands, especially LW, sure. That still doesn’t guarantee its interaction in the atmosphere either way. Warm in the lab in a bottle, warm it does. Is MODTRAN a perfect model on the atmosphere scale? I don’t think so. But for you and me to have a serious discussion of the CO2 effect on the atmosphere as a whole and the climate system, you would have to get off your absolute belief without bounds.
But you can’t get away from the H2O to CO2 ratio. Explain that and I will listen. Just don’t go off on tangents to try to get me to ‘believe’. Most scientists have been wrong on gray subjects over history and I keep in mind that most scientists are probably wrong on this one too, maybe both sides in the details of the cause and effect.
Oh, using a rough % tells me wonders in physics many times. Maybe you can take a LBL radiative transfer model and explain how CO2 is not just a small percent in relation to water vapor concentration and effect in the real atmosphere. I see your point, if everything stays still, then…, but it doesn’t stay still, factors in the atmosphere are always changing and changing only for reasons planted in physics and always to maintain an equilibrium as much as the system can perform once again by physics. No mystics.
I’ll give Co2 2% of the warming plus a bit for the logarithmic nature of all GHGs, right now that is all. I seriously think the balance was the sun over these decades coinciding with Co2 rise. If all climates moderate due to the sun’s pause in the next two decades, if the pause continues, we will know.
Gail Combs says: “It is observed that there is a correlation between temperature anomalies at widely spaced locations….”
The existence of correlation does not suggest that extrapolation over 1200 km or over 250 km is legitimate. If trend A is increasing at 10% and trend B is increasing at 1%, you will get an extremely high correlation, but that does not imply that we can toss out trend B and say that the overall trend is 10%.
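A toy example of this point, with synthetic data: two series that differ in slope by a factor of ten are still perfectly correlated.

```python
# Two synthetic 'anomaly' series: A rises 10x faster than B,
# yet their Pearson correlation is essentially 1.
n = 50
a = [10.0 * t for t in range(n)]   # steep trend
b = [1.0 * t for t in range(n)]    # shallow trend

def pearson(x, y):
    """Pearson correlation coefficient, computed from scratch."""
    m = len(x)
    mx, my = sum(x) / m, sum(y) / m
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    return sxy / (sxx * syy) ** 0.5

print(pearson(a, b))  # essentially 1.0, despite a 10x slope difference
```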
Numerous studies have shown that temperature trends in urban areas are increasing faster than temperature trends in rural settings. (I would understand an argument that Arctic stations are not urban. However, we have seen some Arctic station experience local siting issues leading to warming bias.) Moreover, temperature trends on land are increasing faster than trends over oceans, and GISS is using land stations to project temperature trends over oceans.
And back to the question of positive correlation — it is not always positive. My hometown has had anomalies decrease by .033 from the 1930s to the 1990s while Minneapolis (150 km away) has had anomalies increase by .013 over the same time.
(BTW, I noted that you used January temperatures in your Montreal / Washington correlation. Perhaps January was the easiest to compute, but it is curious that you used January rather than the year.)
There is nothing linear about climate or weather.
Sometimes it is minus 20F in Boulder, fifty degrees warmer in the mountains, and seventy degrees warmer on the western slope.
stevengoddard replied, “ Your explanation is unpalatable. “
Not sure why. GISS presents how they prepare the trend maps on their mapmaking webpage. GISS writes, “Trends: Temperature change of a specified mean period over a specified time interval based on local linear trends.” And they further qualify it with, “’Trends’ are not reported unless >66% of the needed records are available.” Refer to:
http://i32.tinypic.com/jtkvpy.jpg
And that’s a screen cap from this page:
http://data.giss.nasa.gov/gistemp/maps/
You wrote, “The GISS maps say that grey areas represent missing data.”
Correct. They note this at the top of the map output page:
http://i25.tinypic.com/ngaky8.jpg
But for the trend maps you’ve presented in this post, it does not mean that monthly data does not exist in Africa, or Asia, or South America. It means those grey areas did not pass the requirement of 66% of the needed records in order to create a trend.
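A minimal sketch of the reporting rule quoted above — a trend is only returned when more than 66% of the needed records are present. The threshold comes from the GISS map page cited earlier; the least-squares trend and the sample records below are illustrative, not GISTEMP’s actual code.

```python
def report_trend(monthly_anomalies, min_fraction=0.66):
    """Return a simple least-squares trend per time step, or None
    when too much of the record is missing (None entries)."""
    present = [(i, v) for i, v in enumerate(monthly_anomalies) if v is not None]
    if len(present) <= min_fraction * len(monthly_anomalies):
        return None  # grey cell: not enough data to report a trend
    xs = [i for i, _ in present]
    ys = [v for _, v in present]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# A record with half its months missing yields no trend at all:
sparse = [0.1 * i if i % 2 == 0 else None for i in range(100)]
print(report_trend(sparse))  # None
full = [0.1 * i for i in range(100)]
print(report_trend(full))    # ~0.1 per step
```

So a cell can have plenty of monthly data and still come out grey on a trend map — the two statements are not in conflict.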
If you’re not aware, there is little 250km radius smoothing data in Africa, Asia, and South America for the year 1880:
http://i28.tinypic.com/o00wg1.jpg
But there’s considerably more on those continents in 1930:
http://i27.tinypic.com/70xogn.jpg
And there’s even more in 1980:
http://i26.tinypic.com/2iizfc3.jpg
In fact, in 1980, GISS doesn’t really need the 1200km smoothing to infill data in Africa, Asia, and South America.
And it’s well known that the number of stations has decreased in recent decades, so there is less data available for the 250km radius smoothing in Africa, Asia, and South America in 2009, for example:
http://i29.tinypic.com/2vbn689.jpg
So your statement in the post, “Similarly, GISS has essentially no 250 km 1880-2009 data in the interior of Africa, yet has managed to generate a detailed profile across the entire continent for that same time period,” is incorrect. 250km radius smoothing data does exist on a monthly basis between 1880 and 2009 in Africa, Asia, and South America.
In an earlier comment, you replied, “Why don’t you write up a separate article about the issues you find interesting? “
I do and sometimes I link them here at WUWT to posts where they are relevant, as my comments on this thread have been. Do you find this comment irrelevant, Steve?
http://wattsupwiththat.com/2010/07/26/giss-swiss-cheese/#comment-440748
You wrote, “The point of this article was to show that the GISS 1200 km smoothing is inconsistent with their 250 km smoothing…”
The GISS trends you presented in this post for 1200km radius smoothing are not inconsistent with the 250km radius smoothing. The trend maps present the available data in agreement with the GISS qualifications on their mapmaking webpage, which I have discussed previously.
You continued, “– and that they are claiming to know long term trends in places where there is little or no data.”
No. They aren’t claiming any such thing. They are presenting trends for areas that pass a specified threshold for data availability and the data availability varies depending on the smoothing radius. This is why you’re seeing different trends with the different smoothing.
Steven, in summary, I have shown that monthly data is available in Africa (and Asia and South America) in growing amounts until the mid-to-late 20th century, which contradicts your statement, “…GISS has essentially no 250 km 1880-2009 data in the interior of Africa…”
And I presented the threshold GISS uses to determine trends for the maps, which explains why “…they ‘disappeared’ a cold spot in what is now Zimbabwe…” when you presented the map with the greater smoothing radius.
Steve M. from TN says: “Grid X has less than 66% data, so it reaches out 250km or 1200km looking for close by grid squares with greater than 66% data. Finds 1 or 2 (whatever) and performs an algorithm to fill in the missing data? Or more likely the algorithm doesn’t fill in individual datum, but just calculates an anomaly?”
Nope. The 66% threshold of data availability only comes into play with the trends.
GISS provides a brief overview of the process they use to create their GISTEMP product under the heading Current Analysis Method here:
http://data.giss.nasa.gov/gistemp/
And they go into more technical detail here:
http://data.giss.nasa.gov/gistemp/sources/gistemp.html
Steven Mosher says:
July 27, 2010 at 8:28 am
‘If it warms by 1C over a century at location lat X, Lon Y, then the available data shows the following: It will also warm by 1C at position lat X2, Lon Y2. Go figure, heat moves.’
Gibberish.
Reductio ad absurdum – one thermometer would be enough. Where shall we put it?
“For example, if death valley temps were constant its anomaly would be ZERO.”
//No, the derivative of its anomaly curve would be zero.
Its anomaly would be the distance of that constant temperature from the defined baseline. If the defined baseline is a lower temperature than death valley, death valley would show a constant high anomaly.
That is, I would assume that if death valley hadn’t warmed then it is safe to assume that the arctic hadn’t warmed. (if I choose to make that assumption)”
//I would assume that NYC has warmed, therefore so too has northeast Greenland ( if I chose to make that assumption,) and as you seem to imply, both would be reasonably human conclusions which are concurrently logical fantasies.
When it comes to ‘infilling” I can
//Note: You’re talking about infilling locations which *contain (unmeasured) data that could alter the data with which they are infilled.*
*I went to a local high school during basketball season to test your method of estimating pure error. Unfortunately, the basketball team is away now and again.
1. NOT infill
//Subsequently making a conclusion about a total area from some of its points is obviously assumptive and depending on the dispersion of sampling locations possibly terrible.
*I measured the heights of 1 student from 9th, 3 students from 10th, 5 students from 11th and 27 students from 12th grade and found an accurate mean height for the whole school.
2. Infill the GLOBAL AVERAGE of all grids.
//Making a conclusion about the contents of any area from the average of other areas also incompletely measured is obviously assumptive
*We measured the heights of all students in grades 9 and 11 and now accurately know the average height of students in 10th and 12th.
3. Infill using the closest grids.
//Making a conclusion about the coupling between adjacent systems is obviously assumptive.
*Cindy Lou Who, a student at Sample HS, is obviously approximately as tall as that guy next to her, Tommy “Treetop” Thompson, because, you know, they’re dating.
4. Infill with the highest anomaly on the planet( worst case)
//Making a conclusion that highest measured anomaly from baseline gives you information about the limitations of unmeasured regions is obviously erroneous.
*Johnny “Checkmate” Chang is the tallest boy on the chess team. Therefore, the whole basketball team must be shorter than John.
5. Infill with the lowest anomaly on the planet (best case)
//Making a conclusion that lowest measured anomaly from baseline gives you information about the limitations of unmeasured regions is obviously erroneous.
Tommy Thompson is the smallest boy on the basketball team, therefore the rest of the boys in the school must be at least as tall as he.
Assuming unmeasured points could be outside of the scope of the measured points suggests no non-estimate method of error can be obtained for past data sets. Where did they get the error bar from??!?
To the scientists,
Atmospheric temperature is not a measure of atmospheric heat content. Water vapor plays a huge role. This afternoon in Orlando it was 95 deg. F with 44% relative humidity (RH) while at noon today in Phoenix it was 94 deg. F with 27% RH.
The temperature is nearly the same but the total heat content is quite different. Orlando had an enthalpy (total heat content) of 39.3 Btu/lb of air, while Phoenix had 32.7 Btu/lb. The air in Orlando was only 1 deg. F warmer but it had about 17% more heat per pound of atmospheric air than Phoenix.
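Those enthalpy figures can be roughly reproduced with a standard sea-level psychrometric calculation (Magnus formula for saturation vapor pressure; Phoenix’s elevation is ignored here, so the numbers only approximate the ones quoted above).

```python
import math

def enthalpy_btu_per_lb(temp_f, rh_percent, p_hpa=1013.25):
    """Approximate moist-air enthalpy, Btu per lb of dry air, at sea level."""
    t_c = (temp_f - 32.0) * 5.0 / 9.0
    # Magnus formula for saturation vapor pressure (hPa)
    e_sat = 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))
    e = rh_percent / 100.0 * e_sat
    w = 0.622 * e / (p_hpa - e)  # humidity ratio, lb water / lb dry air
    # Standard psychrometric enthalpy approximation (ASHRAE-style, IP units)
    return 0.240 * temp_f + w * (1061.0 + 0.444 * temp_f)

print(round(enthalpy_btu_per_lb(95, 44), 1))  # Orlando: ~40 Btu/lb
print(round(enthalpy_btu_per_lb(94, 27), 1))  # Phoenix: ~33 Btu/lb
```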
To put it another way, if Phoenix had the same absolute humidity as it did at noon today, the temperature would have to be increased to about 125 deg. F before that air would have the same energy content as the Orlando air.
Given the above, how is it that temperature tells us anything useful about the heat content of the planet? Sure, Phoenix is usually dry and Orlando is usually humid, but sometimes Phoenix is wetter than normal and sometimes Orlando is drier than normal. It is certainly not true that average temperature at a given location always corresponds with, or correlates to, average total heat content.
For example, the average temperature on 7/22 in Phoenix was 90 deg. F and the average enthalpy was 34.0 Btu/lb but on 7/26 the average temperature was 95 deg. F and the average enthalpy was 33.6 Btu/lb. Total heat content was greater when the temperature was 5 deg. F cooler … no correlation there.
The AGW hypothesis says that increased CO2 will increase total heat by driving the biosphere to a new and higher heat-equilibrium point. I don’t understand how anyone can accept historical records of temperature, alone, as telling us anything about whether the heat content of the biosphere has increased or decreased.
Hopefully one of you can enlighten me on the subject.
dT
Re: Steven Mosher says:
July 27, 2010 at 11:15 am
Steven, I think you are being just a bit disingenuous here. Heat-seeking missiles utilize near infrared wavelengths, not thermal infrared.
http://www.ausairpower.net/TE-IR-Guidance.html
These same near infrared wavelengths are those utilized by infrared astronomers.
http://www.ipac.caltech.edu/Outreach/Edu/Windows/irwindows.html
High sky transparency and low sky emission are only found in the near infrared, but if clouds enter the picture then all bets are off. 🙂
Bob Tisdale,
This video shows the GISS coverage holes for May 2010. Pink represents missing data. GISS has huge holes in Africa.
[youtube=http://www.youtube.com/watch?v=NDm4_NwRzVU]
When Hansen claims in December that 2010 was the warmest year on record – by 0.01 degrees, are you going to rush to his defense?
Bob,
The GISS 250km 1880-2009 trend map shows “missing data” for almost the entire African interior. The fact that they added a few stations 70 years later does not change the fact that they do not have any 1880-2009 trend data for most of Africa.
This article is about long term trend claims by GISS and their lack of data to support it.
Here are a couple of station records from my home state North Carolina
Norfolk NC
Fayetteville NC
North – Raleigh NC
Or better yet look at some of the Stations in the Tennessee area all within less than 200 km
Hendersonville
Copperhill
Knoxville
Dahlonega
Walhalla
Rome
Middlesboro
Valleyhead
Rogersville
Greenville
Scottsboro
Pennington Gap
My Statistics teacher always said to PLOT the data. Well, here are the plots, and I do not see any patterns that justify saying the data from one area can be used to determine the trend in another area in this general location. The patterns of these station records are all over the place: lines up, lines down, broken lines at two different levels, sine waves trending up, sine waves flat. I do not care what type of fancy statistics you use. These plots of the actual data show that the hypothesis that you can use temperature records from one area to predict the temperature data from another area is falsified.
I do not have the computer skills to do the plotting on one graph. I already addressed the use of Pearson’s correlation coefficient in another comment.
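Plotting aside, per-station least-squares trends make the same comparison numerically. The series below are entirely made up, as stand-ins for the real GHCN station records:

```python
# Least-squares trend (units per time step) for several hypothetical
# nearby-station records; real GHCN series would replace these lists.
def trend(series):
    n = len(series)
    mx, my = (n - 1) / 2.0, sum(series) / n
    num = sum((i - mx) * (y - my) for i, y in enumerate(series))
    den = sum((i - mx) ** 2 for i in range(n))
    return num / den

# Invented records mimicking the "all over the place" patterns described:
stations = {
    "rising":  [10 + 0.03 * y for y in range(100)],
    "falling": [12 - 0.02 * y for y in range(100)],
    "flat":    [11 + 0.0 * y for y in range(100)],
}
for name, series in stations.items():
    print(name, round(trend(series), 3))
```

If nearby stations really tracked each other, their trends would cluster; stations with opposite-signed slopes make the point at a glance, no plotting skills required.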
Billy Liar
You hit the nail on the head. The same folks who insist that hot temperatures in the 1930s were unique to the US, also want us to believe that a half dozen thermometers can accurately represent the whole planet.
Stephen Goddard wrote:
“It was almost 100 degrees in Colorado yesterday. San Diego was 65 degrees.
Therefore, Death Valley must have been about 70 degrees yesterday afternoon, and Chicago must have been about 125 degrees.”
No Stephen, you are still confused about the difference between temperatures and temperature anomalies.
According to Accuweather (not sure how accurate their figures are), Denver had a high of 97F yesterday, the climatological mean maximum is about 89F. That is an anomaly of +8F. San Diego was 68F yesterday, with the climatological mean maximum about 77F = -9F anomaly. Death Valley is located between the two and so you might expect the anomaly to sit somewhere in the middle.
In fact, Death Valley had a max of 118F which is an anomaly of about +2F (climate mean max = 116F), i.e. +2F is in between -9F and +8F.
Of course, this isn’t a sensible thing to do for a single day. What GISS do is calculate it for monthly or annual means which will result in a far more robust correlation.
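For anyone still tripping over anomalies versus temperatures, here is the arithmetic above in code form (all figures are the quoted Accuweather/climatology numbers, not authoritative values):

```python
# anomaly = observed temperature minus the climatological mean for that place
def anomaly_f(observed_f, climatology_f):
    return observed_f - climatology_f

denver       = anomaly_f(97, 89)     # hot absolute temp, +8 F anomaly
san_diego    = anomaly_f(68, 77)     # mild absolute temp, -9 F anomaly
death_valley = anomaly_f(118, 116)   # extreme absolute temp, only +2 F anomaly

# Death Valley's ANOMALY sits between its neighbors' even though its
# absolute TEMPERATURE is far above both.
print(denver, san_diego, death_valley)  # 8 -9 2
```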
Similarly, GISS has essentially no 250 km 1880-2009 data in the interior of Africa, yet has managed to generate a detailed profile across the entire continent for that same time period. In the process of doing this, they “disappeared” a cold spot in what is now Zimbabwe.
Same story for Asia.
Same story for South America. Note how they moved a cold area from Argentina to Bolivia, and created an imaginary hot spot in Brazil.
Steve, come on now – you should know better than that.
Just as climate does not have any relationship to weather, 1200km grid squares are not made up of 250km grid squares put together. They are completely independent. Why should spatial aggregation be different to temporal aggregation?
This is an interesting comment thread. I want to see more like it!
On weather in the San Francisco area: it can be foggy, cloudy and 55F in San Francisco, and with a 40 minute drive east to a city called Walnut Creek (on the other side of the Oakland Hills from San Francisco) it can be bright, sunny and 97F. This has literally happened.
Contrarian says: July 27, 2010 at 12:04 pm
Contrarian, the GHCN v2.mean data that most of us used does come directly from the CLIMAT forms, in modern practice. Historically other sources were used, but I believe there has been no attempt to adjust them.
But there has been another development – Ron Broberg has been processing the GSOD data, which comes directly from raw SYNOP data. He used that to calculate trends, and so did I.
“…based on the idea that temperatures don’t vary much over 1200 km. ”
Where does this idea come from? I was in Camogli, Italy one winter and the temp was +3. In Milan, 120km away, it was -5, and in Como, a further 50km away, it was -20.
MarkG: Comparing stations near Montreal – you need to use anomalies.
Here is a plot comparing Montreal Dorval and McGill (GHCN station data).
Here is Dorval, McGill plus Washington NA (GHCN data).
Here is a plot comparing the 5×5 grids containing Montreal and Washington DC (CRUTEM3 data).
Here is a plot comparing the 5×5 grids containing Birmingham and Monaco (CRUTEM3 data).
Regarding the 1200km smoothing – in many areas it will make no difference, but in others it will (as is clear from the main posted article).