We’ve covered this topic before, but it is always good to mention it again. Howard Goodall asks this on Twitter:
“Ever wondered why climate scientists use anomalies instead of temperatures? 100 years of catastrophic warming in central England has the answer.”
He provides a link to the Central England Temperature data at the Met Office and a plot made from that data, which just happens to be in absolute degrees C as opposed to the usual anomaly plot:
Now compare that to the anomaly based plot for the same data from the Met Office:
The CET anomaly data is here, format example here.
Goodall has a point: without using anomalies and magnified scales, it would be difficult to detect “climate change”.
For example, annual global mean NASA GISS temperature data displayed as an anomaly plot:
Source: https://data.giss.nasa.gov/gistemp/graphs/
Data: https://data.giss.nasa.gov/gistemp/tabledata_v3/GLB.Ts+dSST.txt
Now here is the same data, through 2016, plotted as absolute temperature, as you’d see it on a thermometer, without the scale amplification, on the scale of human experience with temperature on this planet:
h/t to “Suyts” for the plot. His method is to simply add the 1951-1980 baseline temperature declared by GISS (57.2 deg F) back to the anomaly temperature, to recover the absolute temperature, then plot it. GISS provides the method here:
Q. What do I do if I need absolute SATs, not anomalies?
A. In 99.9% of the cases you’ll find that anomalies are exactly what you need, not absolute temperatures. In the remaining cases, you have to pick one of the available climatologies and add the anomalies (with respect to the proper base period) to it. For the global mean, the most trusted models produce a value of roughly 14°C, i.e. 57.2°F, but it may easily be anywhere between 56 and 58°F and regionally, let alone locally, the situation is even worse.
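In code form, that recovery step is simple arithmetic. A minimal sketch (Python), assuming annual GISTEMP anomalies in °C and using the 57.2°F (14°C) 1951-1980 figure quoted above; the function name is mine, not GISS’s:

```python
# Sketch only: convert a GISTEMP-style anomaly (deg C) back to an absolute
# temperature in deg F by adding the quoted 1951-1980 baseline.
def anomaly_c_to_absolute_f(anomaly_c, baseline_f=57.2):
    # An anomaly of 1 C spans 9/5 degrees F
    return baseline_f + anomaly_c * 9.0 / 5.0

print(anomaly_c_to_absolute_f(1.0))  # a +1.0 C anomaly plots as 59.0 F
```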
What is even more interesting is the justification GISS gives for using anomalies:
The GISTEMP analysis concerns only temperature anomalies, not absolute temperature. Temperature anomalies are computed relative to the base period 1951-1980. The reason to work with anomalies, rather than absolute temperature is that absolute temperature varies markedly in short distances, while monthly or annual temperature anomalies are representative of a much larger region. Indeed, we have shown (Hansen and Lebedeff, 1987) that temperature anomalies are strongly correlated out to distances of the order of 1000 km.
And this is why, even though there are huge gaps in the data, as shown here: (note the poles)
Note: Gray areas signify missing data.
Note: Ocean data are not used over land nor within 100km of a reporting land station.
GISS can “fill in” (i.e. make up) data where there isn’t any, using 1200 kilometer smoothing: (note the poles, magically filled in, and how the cold stations on the perimeter of Antarctica in the graph above disappear in this plot)
Note: Gray areas signify missing data.
Note: Ocean data are not used over land nor within 100km of a reporting land station.
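For illustration, here is a rough sketch of that kind of distance-weighted infilling. The linear taper of weights from 1 at the station to 0 at 1200 km follows the Hansen and Lebedeff (1987) description, but this is an assumption-laden sketch, not GISS’s operational code:

```python
# Hedged sketch: estimate a grid-point anomaly from stations within 1200 km,
# weighting each station linearly from 1 (co-located) down to 0 (at 1200 km).
from math import radians, sin, cos, asin, sqrt

def km_between(lat1, lon1, lat2, lon2):
    # Great-circle (haversine) distance in kilometres
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def infilled_anomaly(grid_lat, grid_lon, stations, radius_km=1200.0):
    # stations: iterable of (lat, lon, anomaly) tuples (hypothetical input format)
    num = den = 0.0
    for lat, lon, anom in stations:
        d = km_between(grid_lat, grid_lon, lat, lon)
        if d < radius_km:
            w = 1.0 - d / radius_km
            num += w * anom
            den += w
    return num / den if den > 0.0 else None  # None = grey (no station in range)
```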
It’s interesting how they can make the South Pole red, as if it’s burning hot, when in reality the mean annual temperature there is approximately -48°C (–54.4°F):
The source: https://data.giss.nasa.gov/cgi-bin/gistemp/stdata_show.cgi?id=700890090008&dt=1&ds=5
Based on GHCN data from NOAA-NCEI and data from SCAR.
- GHCN-Unadjusted is the raw data as reported by the weather station.
- GHCN-adj is the data after the NCEI adjustment for station moves and breaks.
- GHCN-adj-cleaned is the adjusted data after removal of obvious outliers and less trusted duplicate records.
- GHCN-adj-homogenized is the adjusted, cleaned data with the GISTEMP removal of an urban-only trend.
ADDED: Here is a map of the GHCN stations in Antarctica:

Source: https://data.giss.nasa.gov/gistemp/stdata/
It’s all in the presentation.
NASA GISS helps us see red in Antarctica (while erasing the perimeter blues) at -48°C (–54.4°F), thanks to anomalies and 1200 kilometer smoothing (because “temperature anomalies are strongly correlated out to distances of the order of 1000 km”).
Now that’s what I call polar amplification.
Let the Stokes-Mosher caterwauling begin.

> GISS can “fill in” (i.e. make up) data where there isn’t any using 1200 kilometer smoothing
I live in Southern California where the climate is acknowledged to be moderate. Drawing even a 120km (never mind 1200km) circle around my location would routinely provide 20°C variations. Extrapolation is nonsense for temperature.
I can only repeat the wise words of our famous DOS boss: “What difference does it make?”
One time, when he was being honest, a climate scientist who frequents these parts admitted that there are big problems with the ground based temperature network, but in his words, it was the best they had, so they had to use it.
In other words, even though they know they can’t use the data they have to support the pronouncements they are making, they are using it anyway.
Why are the years 1951-1980 used as the baseline for global temperature anomalies? It has been well established that global cooling occurred from approximately 1948 to 1978. So why use that period for the baseline in 2018, and not, for instance, the years 1966 to 1995?
The data manipulations are confusing enough without changing the baseline. Thirty years is the traditional length of a climate comparison period. The research ramped up in the 80s, so the most recent period encompassing full decades was 1951-80. Basically, the reason is historical accident.
Good points! I actually have that GISS temp’ graph as my Twitter screen header (or whatever it’s called) for exactly the reasons stated in this piece. Do I get a gold star? 🙂
https://twitter.com/Cheshire__red
Oops. Mods!
There are several distinct issues here. Using anomalies allows intercomparisons across different climate zones (latitudes, elevations), but it exaggerates changes and obscures the fact that the changes in actual temperatures are essentially meaningless. Infilling (by whatever method) vast sections of land with no data is provably wrong on these scales. Homogenization provably contaminates better records with poorer ones. I guest posted on that, using all the Surface Stations Project CRN1 USHCN stations subdivided into urban, suburban, and rural. Homogenization did remove apparent UHI from the urban stations, but ‘heated up’ all the suburban stations and 9 of 10 rural stations.
ristvan, I come at the issue from a background in time series analysis. Some process generates the temperature data we observe. The claim is that the addition of CO2 by human endeavors has changed the underlying process, that is, that the time series process is nonstationary. The way I think of it is that the data for each station is generated by a potentially unique time series process. Without any human influences, stations vary in their seasonal component, altitude, wind currents, humidity, etc., all of which can affect the temperature-generating process. Human influences can also impact the measured temperature, whether through the urban heat island, CO2, etc. I would approach the issue station by station. The first step would be to take the raw monthly data for whatever time period is available and deseasonalize it by taking 12th differences. I would then test for stationarity (mean and variance). Unit root tests (Dickey-Fuller) would be a place to start. I believe there are something like 720 stations in the US with data back to the 1920s. I would be curious how many of those stations show a unit root. My bet is that a fair number would show stationarity. The anomaly approach is very ad hoc. It should be clear that the underlying assumption of the AGW crowd is that the temperature data is non-stationary. We can test for that!
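For concreteness, a rough sketch of that station-by-station test, assuming the monthly means are already loaded into a pandas Series indexed by date (statsmodels supplies the augmented Dickey-Fuller test); this illustrates the proposed approach, it is not a finished analysis:

```python
# Sketch only: deseasonalize a station's monthly means by 12th differencing,
# then run an Augmented Dickey-Fuller test (null hypothesis: a unit root).
import pandas as pd
from statsmodels.tsa.stattools import adfuller

def station_unit_root_pvalue(monthly_temps: pd.Series) -> float:
    deseasonalized = monthly_temps.diff(12).dropna()  # 12th (seasonal) differences
    stat, pvalue, *_ = adfuller(deseasonalized, autolag="AIC")
    return pvalue  # small p-value -> reject the unit root, i.e. evidence of stationarity
```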
Good bet! In fact, aside from UHI-corrupted urban stations, unadjusted century-long records tend to manifest weak-sense stationarity. The problem lies in the relative paucity of such long, non-urban records outside of USA, Scandinavia, and Australia.
” deseasonalize it by taking 12th differences. I would then test for stationary (mean and variance)”
An inadequate test. Stationary differences with positive mean would imply regular warming.
“The anomaly approach is very ad-hoc. It should be clear that the underlying assumption of the AGW crowd is that the temperature data is non-stationary.”
There is no assumption about stationarity. The purpose of anomaly is to make spatial averaging reasonable. With absolute temperatures, it isn’t, because the data is too inhomogeneous, and the sampling fixed. Subtracting something close to expected value, for the time and place, removes the main source of inhomogeneity, which is varying mean (altitude, latitude etc). That greatly diminishes the sampling issue.
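A toy illustration of that point, with made-up numbers rather than real station data: two nearby stations share an identical warming signal but sit at different altitudes, so an average of their absolute temperatures depends heavily on which of them happens to report, while an average of their anomalies does not:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1951, 2021)
signal = 0.01 * (years - 1951)                              # assumed common 0.01 C/yr drift
valley = 15.0 + signal + rng.normal(0, 0.3, years.size)     # warm, low-altitude station
mountain = 5.0 + signal + rng.normal(0, 0.3, years.size)    # cold, high-altitude station

# Averaging absolutes shifts by ~5 C whenever one station drops out;
# averaging anomalies (each relative to its own 1951-1980 mean) does not.
base = slice(0, 30)
valley_anom = valley - valley[base].mean()
mountain_anom = mountain - mountain[base].mean()
print(np.mean([valley, mountain], axis=0)[-1], np.mean([valley_anom, mountain_anom], axis=0)[-1])
```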
And there we go, Nick, that is totally unphysical, because two areas can have the same anomaly, but if they’re 20°F apart in temperature, the energy change behind that exact same anomaly is completely different, because radiated power goes as T^4.
It’s just either bad science, or dishonesty.
If you average 60°F with 70°F by their radiative (T^4) fluxes, you don’t get exactly 65°F; the effective temperature comes out slightly higher (roughly 65.1°F).
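A quick back-of-envelope check of that flux-averaging arithmetic, assuming equal emissivity for both surfaces and a straight Stefan-Boltzmann T^4 average:

```python
# Average two surfaces by radiated power (proportional to T^4), then convert
# the mean flux back to an effective temperature.
def f_to_k(f):
    return (f - 32.0) * 5.0 / 9.0 + 273.15

def k_to_f(k):
    return (k - 273.15) * 9.0 / 5.0 + 32.0

t1, t2 = f_to_k(60.0), f_to_k(70.0)
t_eff = ((t1 ** 4 + t2 ** 4) / 2.0) ** 0.25
print(k_to_f(t_eff))  # about 65.1 F, a little above the arithmetic mean of 65 F
```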
“Energy change” — this seems extremely important. Do anomalies account for this? I think not.
For example, doesn’t air at higher humidity have greater energy capacity than air at lower humidity, when both are at the same temperature?
Temperature, then, just does not seem to tell the whole story. Temperature does NOT tell how water deals with energy. If temperature does not tell the whole story, then how can a temperature anomaly tell the whole story more precisely?
“But think about what happens when we try and estimate the absolute global mean temperature for, say, 2016. The climatology for 1981-2010 is 287.4±0.5K, and the anomaly for 2016 is (from GISTEMP w.r.t. that baseline) 0.56±0.05ºC. So our estimate for the absolute value is (using the first rule shown above) is 287.96±0.502K, and then using the second, that reduces to 288.0±0.5K. The same approach for 2015 gives 287.8±0.5K, and for 2014 it is 287.7±0.5K. All of which appear to be the same within the uncertainty. Thus we lose the ability to judge which year was the warmest if we only look at the absolute numbers.”
– Gavin Schmidt, Director of the NASA Goddard Institute for Space Studies. Blog RealClimate, Observations, Reanalyses and the Elusive Absolute Global Mean Temperature (http://www.realclimate.org/index.php/archives/2017/08/observations-reanalyses-and-the-elusive-absolute-global-mean-temperature/).
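The “first rule” referred to appears to be adding independent uncertainties in quadrature, which reproduces the ±0.502 K figure in the quote; a short check with the quoted 2016 values:

```python
climatology, sigma_clim = 287.4, 0.5     # K, 1981-2010 baseline climatology
anomaly, sigma_anom = 0.56, 0.05         # K, 2016 GISTEMP anomaly on that baseline

absolute = climatology + anomaly                          # 287.96 K
sigma_abs = (sigma_clim ** 2 + sigma_anom ** 2) ** 0.5    # ~0.502 K
# The +/-0.5 K climatology uncertainty swamps the +/-0.05 K anomaly uncertainty,
# which is why successive years become indistinguishable in absolute terms.
print(absolute, sigma_abs)
```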
So, this man claims he found some way to create information, that is, destroy entropy.
Get a free pass out of 2nd law.
And people dare use the word “scientist” about him, and the like of him.
Says it all.
If the absolute numbers have an error as large as the differences, then what meaning can you get out of the stats? Gavin Schmidt is a mathematician who should go back and take Statistics 101, because he obviously doesn’t understand the basics. And he is the head of GISS. Very scary!
Why use Fahrenheit? Why not use the Kelvin scale? Then almost all temperature fluctuations will appear flat – even those that occurred during the LGM, when mile-high ice sheets forged their way across the US and Europe.
If increases of 2 to 3 degrees are (or are not) going to result in local environment changes, then we need to be able to analyse the temperature changes in detail. The use of anomalies allowed us to do this.
Fahrenheit is readily understandable for many. They can relate to that value. Show the average person a graph in Kelvin and it would be completely meaningless to them.
Really? Boiling point of water = 212 deg F; Freezing Point = 32 deg F. Hardly intuitive. The Celsius scale is far more widely used across Europe. However the Kelvin scale is really the only scale which is truly representative of temperature.
It’s not really relevant though. The main point is that there are good reasons for using anomalies – whichever temperature scale is used.
By many here in the US.
Since Celsius and Kelvin differences are exactly the same it doesn’t matter, but I agree: why not use Kelvin?
The problem is that almost all thermometers are in Celsius ……. or Fahrenheit in USA.
“allows” not “allowed”
I obviously live on those GISS temperature maps.
There is no way that was the temperature where I live. -8°C below normal is somehow red on these maps?
I’ll believe the 1200 km smoothing is useful when they can accurately tell me the temperature in Breckenridge Colorado based on the temperature in Las Vegas Nevada and points in between.
Until then, any distance smoothing/infilling is just another cherry pick.
I would think local humidity variations would be a huge factor in how well correlated anomalies would be over a given area. That makes me wonder a lot when I see these extrapolations are generally over desert areas.
Dew point temp is what Tmin follows. So as El Niños and the oceans move warm water around, areas downwind warm in response, until the dew point drops, and temps fall right along with it.
That’s not anomaly data. All the figures are for absolute daily temperature in tenths of a degree C.
Is -40 degrees a common human experience? Let alone an average annual value of -40 degrees?
Well, the recorded daily range of temperature over the earth is in the order of 90C to 130C (depending on season) between the coldest and warmest places, so -40C/F is within human experience. As at 0500UTC today, the range is +45C to -66C.
I miss the precedented ice myself… http://www.mlive.com/news/index.ssf/2018/04/mackinac_ferry_delays_opening.html
My question is, why 1951-1980? Why set that as a baseline, or as what is “normal”?
Why not 1931 to 1960? That’s 30 years. Or even out to 1980? That’s 60 years. Or even 1881 to 2018? That’s all we know, isn’t it?
(OK. Maybe we don’t know that anymore since the records have been fiddled with so much.)
Better yet, why not use the satellite records as a baseline? That’s the closest we have to “global” record.
Best yet, admit we don’t really know enough about what has changed (let alone the cause of what did or didn’t change) to base $$$$$$$$$$$$$ policy decisions on such “settled” uncertainty.
Perhaps it has to do with “The Cause”?
My question is, why 1951-1980?
Agenda + Funding = 1951 – 1980
” My question is, why 1951-1980?”
Simple. Gistemp started in 1988. The convention is to use calendar decades. It doesn’t matter what base you use, but changing is a pain, especially when you have a whole lot of data in print.
Nick – perhaps you could explain why 30-year periods are used to pick a baseline. I understand that this is a ‘climate’ rather than ‘weather’ period, but it just seems to me a further excuse to cherry pick the baseline. Had the baseline been picked as the average for the last 90 years, perhaps some of those yellow and red colourations on the map (using the same scale of course) would have turned to blue. There is a huge difference in the amount of negative or positive anomaly depending on which 30-year period is used, so if one’s findings are purely scientific, why use a period that is so open to cherry picking? The longer the time period used, the more representative the anomalies. The only real reason is that it can be manipulated to produce a better representation of one’s conjecture, is it not? Why stop at 1980? Surely, by your own reasoning here, the latest decade ended in 2010, so for accuracy each temperature series should be used as far back as it goes and then forward to 2010 to set the baseline as accurately as possible. It confuses me as a lurker why any statistician, or scientist for that matter, would seek to skew the represented data to try and load the result.
“perhaps you could explain why the 30 year periods are used to pick a baseline period”
Remember, anomalies are calculated for stations. You want a period where as many as possible stations will have data throughout. That rules out 90 years. In fact, the 30 years aspect doesn’t matter that much. Any consistently applied normal is very much better than no anomaly at all. The main reason for choosing something and sticking with it is that you don’t have confusions about which base is being used where. We get enough of that even when the providers are consistent.
The main reason for a period as long as 30 years is to ensure month-to-month consistency. There is a separate normal for each month, and you don’t want to build into the base a chance difference caused by, say, a run of warm Junes. Else you’ll be left wondering why modern Junes are so cool relative to May.
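A minimal sketch of those per-month normals, assuming the station’s monthly means are in a pandas Series with a DatetimeIndex (the data layout here is hypothetical):

```python
import pandas as pd

def monthly_anomalies(temps: pd.Series, base_start="1951", base_end="1980") -> pd.Series:
    base = temps.loc[base_start:base_end]
    # One normal per calendar month, computed over the base period only
    normals = base.groupby(base.index.month).mean()
    # Subtract each month's own normal from every observation of that month
    return temps - normals.reindex(temps.index.month).values
```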
Yes, but why 1951-1980? Why not 1950-1979? I see this often: (multi-)decadal ranges where the start point is a number ending in ‘1’, and the end point a number ending in ‘0’. These people probably think that “the sixties” is the period 1961-1970. Or they think that counting begins at ‘1’. Doesn’t say much for their numeric sophistication.
“Why not 1950-1979 ?”
Well, the first decade AD was 1-10. I guess they just continued from there.
“The reason to work with anomalies, rather than absolute temperature is that absolute temperature varies markedly in short distances, while monthly or annual temperature anomalies are representative of a much larger region.” what utter hand waving nonsense … anomalies can ONLY be calculated by starting with absolute temperatures … if the anomalies are representative then so are absolute temperatures …
Layman here and not understanding. Maybe someone could explain? What I get from the GISS quote is that anomalies by themselves, without smoothing, are representative of larger areas than are absolute temperatures. Is that true? If so, why?
L: Read one or both of the RealClimate posts I linked to in another of my comments.
There’s a more onerous aspect to the usual use of anomalies than is acknowledged here: it allows the severe mangling of low-frequency climate-signal components through the artifice of surreptitiously shuffling individual station data in and out of the manufacture of regional averages. In other words, the set of stations that determines the average in any particular region varies considerably over time.
Thus the “trends” evident in “regional anomalies” do NOT show the actual secular changes that may have taken place at a uniformly FIXED set of sample locations. They show a patently contrived statistic, akin to testing the long-term effect of a drug not by following a fixed set of subjects that have been given a particular dose, but by shuffling in ad libitum subjects that never took the drug. BTW, the “drug” is not CO2, but the antidote to UHI provided by stringent time-series vetting.
And it’s fine for math or statistics, but has no place in climate or weather.
Snap out of it.
Come on, we MUST have those colorful graphs. That’s how Mikey realized climate change/climate disruption/global warming was SERIOUS. Without colored graphs, we might all be laboring under the impression this is all insignificant.
It’s important to have anomalies so you can leap to conclusions-
“We’re not going to lose it tomorrow, but we’re at the point where if we don’t make some really dramatic changes to our emissions, then we are at risk.”
https://www.msn.com/en-au/news/australia/coral-on-the-great-barrier-reef-was-cooked-during-2016-marine-heatwave-study-finds/ar-AAw2bwH
“Associate Professor Webster said scientists were trying to develop computer models of how reefs grow and “ultimately try and make better projections about how reefs might respond in future”……
“We know that they can come back — they are resilient species and if we take the right measures they’ll be around for the next generation,”‘
http://www.abc.net.au/news/2017-01-15/ancient-samples-great-barrier-reef-recover-but-new-threats-study/8182822
Some scientific anomalies always have a definite similarity about them, namely their bought and paid for conclusions are never anomalous.
Since there aren’t any trees to fall and make noise down there, we need Polar Amplification.
It’s Kriging and modeling all the way down…
The anomalies map shows a grey area around Giles, WA in Australia (just SW of the centre). This station now has enough data (daily to 1956 and monthly to 1949) according to BOM (previously a lot was missing). 200 km to the SW is Warburton airfield, which is missing a lot of data since 1951, and further west is Carnegie, with nothing before 1999. There is a similar lack of data all around Giles, but they still managed to calculate an anomaly all around that grey spot.