By Dr. Roy Spencer
(See the graph and summary comparison below to RSS. – Anthony)
Our Version 5.5 global average lower tropospheric temperature (LT) anomaly for October, 2012 is +0.33 deg. C (click for large version):
The hemispheric and tropical LT anomalies from the 30-year (1981-2010) average for 2012 are:
YR MON GLOBAL NH SH TROPICS
2012 1 -0.134 -0.065 -0.203 -0.256
2012 2 -0.135 +0.018 -0.289 -0.320
2012 3 +0.051 +0.119 -0.017 -0.238
2012 4 +0.232 +0.351 +0.114 -0.242
2012 5 +0.179 +0.337 +0.021 -0.098
2012 6 +0.235 +0.370 +0.101 -0.019
2012 7 +0.130 +0.256 +0.003 +0.142
2012 8 +0.208 +0.214 +0.202 +0.062
2012 9 +0.339 +0.350 +0.327 +0.153
2012 10 +0.331 +0.302 +0.361 +0.106
Differences with RSS over the Last 2 Years
Many people don’t realize that the LT product produced by Carl Mears and Frank Wentz at Remote Sensing Systems has anomalies computed from a different base period for the average annual cycle (1979-1998) than we use (1981-2010). The two should not be compared unless they are computed about the same annual cycle.
If the anomalies for both datasets are computed using the same base period (1981-2010), the comparison between UAH and RSS over the last couple of years looks like this:
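The re-baselining step described above can be sketched in a few lines. This is a hypothetical illustration with made-up numbers, not actual UAH or RSS data: to put a series on a new base period, you subtract its own mean over that period, so it averages zero there.

```python
# Hypothetical sketch: re-referencing an anomaly series to a common base
# period before comparing it with another dataset. All numbers are made up.

def rebaseline(anomalies, years, base_start, base_end):
    """Shift an anomaly series so it averages zero over the new base period."""
    base = [a for a, y in zip(anomalies, years) if base_start <= y <= base_end]
    offset = sum(base) / len(base)
    return [round(a - offset, 3) for a in anomalies]

years = list(range(1995, 2005))
old_base = [0.10, 0.15, 0.30, 0.12, 0.08, 0.20, 0.25, 0.18, 0.22, 0.28]

# Express the same series about a 2000-2004 annual cycle instead:
common = rebaseline(old_base, years, 2000, 2004)
print(common)
```

After the shift, the series averages exactly zero over 2000-2004, so two records treated the same way can be compared directly.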
Note that the UAH anomalies have been running, on average, a little warmer than the RSS anomalies for the last couple of years.
Source: UAH v5.5 Global Temp Update for October 2012: +0.33 deg. C


In a cyclical world, linear extrapolation will eventually lead you to the wrong answer.
alex says:
November 6, 2012 at 11:56 pm
A nice steadily warming channel with noise. No sign of any cooling whatsoever.
_______________________________
Ye little gods, it only covers 33 yrs and starts in 1979 when it was COLD, at least here in the USA.
GRAPH
What is hard for warmists to hide is freezing weather. You can diddle the temperature records all you want but Mother Nature tells the truth with freezing weather. Here in the sunny south (NC) it reached freezing in OCTOBER! And it is not just here.
Snow And Heavy Rain Sweep Across UK
This is not something new, either. A massive snowstorm hit China in January 2008: “…China’s worst snowstorms in nearly 50 years have brought rain, sleet, wet snow and sharply colder temperatures to most of eastern and central China…” In October 2008 in Tibet, at least seven people were found dead after “the worst snowstorm on record in Tibet,” China’s state-run news agency reported. Meanwhile, London had its first October snow in over 70 years (2008).
Also in 2008 The worst frost damage in more than 30 years hit California’s vineyards the week of April 20, with temperatures dipping into the 20s for four nights in a row… the California Farm Bureau Federation says some growers are reporting that half their crop is gone.
In 2010 Deep freeze kills millions of fish in Florida and At least 70 percent of southwest Florida’s winter crop of vegetables, including tomatoes and peppers, were destroyed by freezing weather, said Gene McAvoy, the director of the Hendry County extension office for the University of Florida.
Why in heck do you think the rallying cry was changed from “Global Warming” to “Climate Change” and then “Climate Weirding”? You really do need to keep up to speed on the official propaganda. People are freezing their rumps off: this year the National Post reports “Cold weather kills more than 220 in Europe; Danube freezes over; France set to break power consumption records,” and even the Guardian reported last year, “Cold homes will kill up to 200 older people a day, warns Age UK… There were 26,156 excess winter deaths during 2009-10, with figures for 2010-11 to be published next month…” Pushing the words “Global Warming” while people are suffering from heating-bill sticker shock will get you laughed at.
There seem to be significant differences between this October chart and the August chart (still available on WUWT on the “global temperature page”). At least to an untrained eyeball the changes seem to be systematic, and go all the way back to 1979. It could possibly have something to do with the change of version to 5.5. Any comments?
Nonsense. Fitting trends to data is a basic statistical technique. It has meaning, and can reveal the significance of the data. That is why it is done. Everywhere. People only complained about it when Roy did it, because the results didn’t confirm their bias. The same people love throwing a linear trend on the same data, because it does confirm their bias. That’s how bias works.
Fitting data with meaningful functions is a basic statistical technique. Fitting data with arbitrary smooth functions is indeed meaningless — literally — and produces nothing more than “a guide to the eye” even if the fit works. In particular, it has little predictive power — many functions will interpolate or approximate a finite data set decently but have very different behavior outside of the fit interval, and the actual data (as it continues outside of the fit) cannot agree with all of them and may not agree with any of them.
Roy understands this perfectly well, because Roy actually understands mathematics and statistical methods pretty well. I personally am rather an expert on predictive modeling methodology, and wrestle with these problems all of the time. There are a few modeling techniques that effectively fit with a large basis to obtain some extrapolative success — neural networks, for example — but methods based on harmonic series e.g. Fourier analysis are notorious for their problems.
That isn’t even an issue here. Roy is (was) fitting a trivial smooth, nearly harmonic curve that we all know would not extrapolate on the far side to fit the existing past data, let alone future data. I’m guessing he left it out because its inclusion was too encouraging to folks like Henry P, who think that because they can achieve a crude approximate fit with a few hand-selected Fourier components they have discovered the secret of the ages and can thereby predict the future progression of global temperature, to the point where he is “certain” that his model is correct and therefore UAH’s actual data must be wrong. It also distracts the observer from doing the obvious — letting the data speak for itself.
In a noisy, chaotic series like this where there is little reason to select any given analytic basis as being “meaningful”, even linear fits are pretty meaningless, especially when performed on a remarkably short data series. You can see this lack of meaning by observing the rather large variation in the “best fit” slope of a linear trend resulting from “cherrypicking” the end points within the series. Shift the end point a few years, or even months, at either end and you can make the slope vary by a large factor, implying that even the linear trend in the data is highly uncertain.
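The endpoint-sensitivity point above is easy to demonstrate. The sketch below uses synthetic data (a weak trend plus noise, not the UAH record) and an ordinary least-squares slope; shifting the start of the fit window by a few “years” changes the fitted slope substantially.

```python
# Sketch of endpoint sensitivity with synthetic monthly data (not UAH data):
# the least-squares slope of a short noisy series moves around as the
# start point of the fit window is shifted.
import random

random.seed(1)
n = 34 * 12  # roughly 34 years of monthly values
series = [0.001 * t + random.gauss(0.0, 0.2) for t in range(n)]  # weak trend + noise

def slope(y):
    """Ordinary least-squares slope of y against its index."""
    m = len(y)
    xbar = (m - 1) / 2.0
    ybar = sum(y) / m
    num = sum((i - xbar) * (v - ybar) for i, v in enumerate(y))
    den = sum((i - xbar) ** 2 for i in range(m))
    return num / den

# Fit starting at year 0, 2, 4, 6 and compare the slopes:
for start_year in (0, 2, 4, 6):
    print(start_year, slope(series[start_year * 12:]))
```

The underlying trend is identical in every window; only the noise and the chosen endpoints differ, yet the fitted slopes do not agree.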
The Koutsoyiannis paper on stationarity vs nonstationarity hydrology — the one that caught my eye several years ago as a “playah” in this game — has one of the best discussions of this point that I’ve ever seen. It should be required reading not only for all climate scientists (although especially for them) but for all scientists, period. He presents a series of fits to successive windows on the same set of data to show how utterly misleading and meaningless they all are as to the actual (rather simple) functional behavior of a given data set generated from an analytic function plus noise. I commend it to you:
http://itia.ntua.gr/en/docinfo/673/
(click on the preprint PDF or download the possibly paywalled actual publication). Note also that this page has a number of his other papers on this general subject — which broadly speaking is the introduction of what he calls Hurst-Kolmogorov statistical analysis into climate science, a sort of punctuated equilibrium model that describes one particular aspect of global climate almost astonishingly well (I reiterate, Bob Tisdale needs to apply it to SSTs as they obviously are a “perfect fit”).
Henry P would also benefit tremendously from reading the introduction to this paper. Note well that Koutsoyiannis is keeping it simple and only illustrating three distinct windows onto the data. He is also keeping it functionally simple, as none of the functions that appear to fit in a window are unique even in that window — a linear fit or exponential fit would clearly work nearly as well as his parabolic fit, and a fifth-order polynomial would fit the entire sequence pretty well (and if not fifth, sixth or seventh — Weierstrass’ theorem, after all). Koutsoyiannis himself points out that if the series continued, his beautiful cosine fit could turn out to be fitting nothing but noise on top of an even longer-timescale meaningful trend!
The point of this — in case it eludes you — is that merely fitting a finite segment of data to a functional representation of any sort is right up there with Tarot and Tea Leaves as far as having predictive value is concerned. A fit that worked remarkably well — “convincingly” well to the uninitiated — in the first two windows proves to be completely and damningly wrong in the third, not only wrong but literally irrelevant — even as the third window seduces you to conclude that the cosine law is itself meaningful just because it works across this finite sample.
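A toy version of this windowed-fit failure is easy to reproduce. The sketch below (my own illustration, assuming numpy is available, with a made-up slow cycle rather than any real climate series) fits a fifth-order polynomial to the first window of a noisy signal; the in-window fit looks convincing, while outside the window the polynomial is wildly wrong.

```python
# Toy illustration of a windowed fit that works "convincingly" in-sample
# and fails badly out-of-sample. Synthetic data only.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(100, dtype=float)
truth = np.sin(2 * np.pi * t / 80.0)            # slow underlying cycle
data = truth + rng.normal(0.0, 0.1, t.size)     # plus observation noise

window = slice(0, 40)                            # fit only the first window
coeffs = np.polyfit(t[window], data[window], 5)  # fifth-order polynomial
fit = np.polyval(coeffs, t)

in_err = np.abs(fit[window] - data[window]).mean()    # small: looks great
out_err = np.abs(fit[60:] - data[60:]).mean()         # large: diverges
print(in_err, out_err)
```

The polynomial interpolates the window about as well as the noise allows, then blows up beyond it, exactly as with Koutsoyiannis’ successive windows.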
Fits like this have some degree of reliability and extrapolability under only two circumstances. One is when there is a sound physical argument to support the use of some particular fit function, one where the parameters of the fit themselves provide actual information about the physics or other dynamics of the system. The other is where, over time, empirical experience is that some particular fit scheme just works, at least so far, in a robust way over a very long series and works — so far (!) — to extrapolate the series as time continues to evolve. The latter is enormously dangerous — so much so that Nassim Nicholas Taleb wrote an entire book (The Black Swan) criticizing its widespread use in pseudoscientific modeling of essentially unpredictable series wherein “black swan events” are known to occur with some unknowable frequency. All too often they support some sort of Martingale system — doubling down to ride a comfortable linear trend. Sometimes, however, the model fit scheme does have meaning and correspond to physics, we just don’t know how (yet), or is so general that when built in a certain way the model itself can replace the human brain and make information-theoretic compressions that correspond to unknown but real internal dynamics that are empirically safely extrapolable.
The latter is one of my primary businesses — literally, as fits/predictive models of this sort are enormously valuable when you can build them — but they are not for tyros to undertake. I’ve often thought about building a really clever neural network to model the entire planet, but this is a ten million dollar five plus year sort of project even for me — the computer required to build and run and refine it would all by itself be pretty expensive — but if done very cleverly I think it could manage the integrodifferential evaluation back to timescales in the remote past, robustly, allowing for noise and missing information to give us perhaps the best possible model for global climate ever built (and one that by its nature would be utterly unbiased, as AFAIK it is literally impossible to introduce a meaningful and deliberate bias into a standard neural network build process).
rgb
And I’ve just posted a look at the October 2012 sea surface temperatures and anomalies along Sandy’s path, which supports my post from a few days ago:
http://bobtisdale.wordpress.com/2012/11/07/october-2012-sea-surface-temperatures-and-anomalies-along-sandys-path-were-not-unusual/
Regards
edbarbar said on November 6, 2012 at 7:25 pm:
Where does the “0” line come from?
The temperatures in the main graph are anomalies. Global values are found along with the regional breakdowns here:
http://vortex.nsstc.uah.edu/data/msu/t2lt/uahncdc.lt
The reference period for the annual cycle listed there is 1981-2010.
What it means:
The anomalies given are “deviations from monthly normals”. The monthly part is important: if they simply subtracted a single average of all temperatures in the reference range from each month’s temperature, a clear seasonal/monthly variation would remain.
So instead the average of individual months is used. For January, take the average of all January temperatures from 1981 to 2010. Then to compute a January’s anomaly, you subtract the average of the Januaries from that month’s temperature.
That removes the annual signal, as in the seasonal/monthly variation, leaving the deviations from monthly normals.
So the “0” line is those monthly averages. It’s sort of false to display them as a line, as the segments between months have no meaning. But a vertical bar chart, while technically a better presentation, is not well appreciated in climate science.
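The monthly-normals procedure described above can be sketched in a few lines. The numbers here are made up for illustration; the real record is the UAH file linked earlier.

```python
# Minimal sketch of "deviations from monthly normals", with fake data.

# temps[year] = twelve monthly mean temperatures (deg C) for that year.
temps = {
    1981: [2.0, 3.0, 6.0, 10.0, 14.0, 18.0, 20.0, 19.0, 16.0, 11.0, 6.0, 3.0],
    1982: [1.5, 3.5, 6.5, 10.5, 14.5, 18.5, 20.5, 19.5, 16.5, 11.5, 6.5, 3.5],
    1983: [2.5, 2.5, 5.5,  9.5, 13.5, 17.5, 19.5, 18.5, 15.5, 10.5, 5.5, 2.5],
}

# One normal per calendar month: average all the Januaries, all the
# Februaries, and so on, across the reference period.
normals = [sum(temps[y][m] for y in temps) / len(temps) for m in range(12)]

# Anomaly = that month's temperature minus its calendar-month normal.
# This removes the seasonal cycle, leaving deviations from monthly normals.
anomalies = {y: [round(temps[y][m] - normals[m], 3) for m in range(12)]
             for y in temps}
print(anomalies[1982][0])  # January 1982 anomaly
```

By construction, each calendar month’s anomalies average zero over the reference period, which is exactly what the “0” line represents.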
Just a follow up on my previous post. To see what I’m asking about, do the following:
Load the following two graphs in two different tabs of the same browser. Shift between the two tabs. This should show obvious visual differences between the two graphs. My question is: why is this so?
http://www.drroyspencer.com/wp-content/uploads/UAH_LT_1979_thru_Oct_2012_v5.5.png
http://www.drroyspencer.com/wp-content/uploads/UAH_LT_current.gif
Re mb on November 7, 2012 at 11:35 am:
As mentioned on WUWT before, as noted at the bottom of the relevant UAH data file, starting with October 2010 they went to Version 5.5.
From this dynamic page on Dr. Spencer’s site:
http://www.drroyspencer.com/latest-global-temperatures/
As of September 2012, the Advanced Microwave Sounding Unit (AMSU-A) flying on NASA’s Aqua satellite has been removed from the processing due to spurious warming and replaced by the average of the NOAA-15 and NOAA-18 AMSUs. The graph above represents the latest update; updates are usually made within the first week of every month.
Starting with October 2012, dang it!
Spurious warming of the data was planned apparently…
Record lows according to NOAA for Oct: Low min 857, Low max 1828, Ties 1039, Total 3724
rgbatduke says:
………
Hi Dr. Brown
I was intrigued by your suggestion of ‘integrodifferential evaluation’ in one of your past WUWT comments. In years gone by I was fairly familiar with both integration and differential equations, but after looking up a few web references I gave up.
However, some time ago, after reading sporadic comments linking volcanic eruptions to solar activity, I had a go, although it didn’t look plausible either data- or physics-wise.
Integrating eruption numbers on a decadal scale and then differentiating on an annual scale, a pattern appeared which may (or, more likely, may not) be of interest:
http://www.vukcevic.talktalk.net/Ap-VI.htm
Physics? Hmm ….
@kadaka November 7, 2012 at 5:07 pm: It appears to me that the mysterious change goes in the opposite direction from what you are suggesting. The effect is not that big, but it seems that in the newer graph the temperatures of the 80s are lower, and the temperatures of the 00s are higher. I haven’t done any serious analysis of this, though; I’ve just stared at the two different graphs. Also, I don’t see how the removal of the data from one satellite (launched in 2002) could influence the graph all the way back in the 80s.
My guess is that we have some change of algorithm here. Is there any documentation anywhere of how and why it has changed?
From mb on November 8, 2012 at 8:40 am:
Sure.
http://wattsupwiththat.com/2012/10/09/uah-global-temperature-up-slightly-in-september/
Which references this post getting deeper into the details:
http://www.drroyspencer.com/2012/10/uah-global-temperature-update-for-september-2012-deg-c/
Annual Signal Comparisons:
UAH v RSS
GISS v HadCRUT4
UAH v HadCRUT4
UAH v GISS
Summary
Context
Why do we never get the figures after deducting the long-term rise? Surely, if man is responsible via the large amounts of fossil fuel used in the 20th century, then the near-constant rise before that should be deducted from any later rise to get a true representation of the believed fossil-fuel impact. The very-long-term figure from the graphs I have seen is nearly independent of the start and finish dates, so it should be very uncontentious.