All the curves that are fit to print

Regarding the latest UAH and RSS global temperature data plots Dave B writes: “…could you post a best-fit, to be fair? I don’t have the technology.”

Sometimes I’m tempted to tell people to do the work themselves; after all, I’m overloaded as it is. But it is the 4th of July weekend, and I’m stuck here in the smoky, toasty Sacramento Valley babysitting a bunch of servers until my chief tech support guy comes back from vacation, so what the heck.

I’m not sure what he’s implying by “fair”, but it has been my experience that no matter what you put in a graph, or how you graph it, somebody will find fault with it. Below are raw data overlaid with 1st order and 5th order curve fits to show long- and short-term trends.

Click for large plots

And “to be fair”, and to make everyone happy/angry here is the last 11 years, when the warming trend flattened.

Click for a larger image
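For anyone who wants to roll their own, here is a minimal numpy sketch of the same two fits. The series below is made up (a small trend plus noise standing in for the actual UAH/RSS monthly anomalies), so only the method, not the numbers, carries over:

```python
import numpy as np

# Made-up stand-in for a monthly satellite anomaly series (deg C);
# the real UAH/RSS numbers would be loaded from their published files.
rng = np.random.default_rng(0)
t = np.arange(0, 29.5, 1 / 12.0)   # years since January 1979
anoms = 0.013 * t + 0.2 * rng.standard_normal(t.size)

# 1st order (linear) and 5th order polynomial fits, as in the plots.
lin_coeffs = np.polyfit(t, anoms, 1)
quintic_coeffs = np.polyfit(t, anoms, 5)
lin_fit = np.polyval(lin_coeffs, t)
quintic_fit = np.polyval(quintic_coeffs, t)

# The linear slope gives the long-term trend; x10 for degrees per decade.
trend_per_decade = lin_coeffs[0] * 10
```

The linear fit shows the long-term trend; the quintic wiggles through the shorter-term ups and downs.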

Have at it lads. 🙂

77 Comments
James S
July 3, 2008 10:28 pm

Out of interest (and I’ve seen it before but just can’t find it anywhere) what is the current “background” warming level?
I seem to remember it is somewhere in the region of 0.6 degrees centigrade per century since the “end” of the last ice age (“end” is in inverted commas because I was always taught in my geology lectures that we are still in an ice age, as we have icecaps – it is just a warm period in between cold ones).

Pieter Folkens
July 3, 2008 10:46 pm

Short term, long term, and really long term . . .
I’m a fan of eustatic sea level data, in part for its inherent smoothing and detailed long view. The recent temperature graphs are very short term. If one looks at the entire present interglacial, the sea level data shows that things peaked around 6,000 years ago. If the starting point of trend analysis is a few thousand years ago rather than a hundred years ago, it is pretty clear the long-term trend is cooler. This is consistent with the behavior of the previous interglacial (Eemian), which warms to the era’s highs early in the period, then gradually cools, only to drop suddenly at the end of the interglacial. Following this notion, it is unlikely that the world will return to the highs of the Medieval Climate Optimum, but increasingly possible that a protracted cooling will follow the late 20th Century highs.

DaveK
July 4, 2008 12:04 am

Higher order polynomial regression curve fits are great for smoothing data, and for seeing trends within the scope of that data. But if you have ever tried to use them to extrapolate outside that scope, you will know the perils of that exercise. Once you exceed the boundary conditions of that fit, all bets are off, and the higher order the polynomial, the less useful it is for predictions beyond the scope of the data, especially when the so-called “dependent variable” (the y-axis value) has little dependence on the “independent variable” (the x-axis).
Heck, even first-order regressions are subject to this limitation. Just because you can get a line or curve to fit doesn’t prove a causal relationship.
My $.02
DaveK
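DaveK’s warning is easy to demonstrate. The sketch below fits an invented noisy series (a trend plus a cycle, nothing to do with any real temperature record) with 1st and 5th order polynomials, then evaluates both well past the end of the data, where nothing constrains the quintic:

```python
import numpy as np

# Invented series: weak trend + cycle + noise over a 10-unit span.
rng = np.random.default_rng(1)
x = np.linspace(0, 10, 120)                 # e.g. 10 years of monthly data
y = 0.05 * x + 0.3 * np.sin(2 * np.pi * x / 3) + 0.1 * rng.standard_normal(x.size)

p1 = np.polyfit(x, y, 1)
p5 = np.polyfit(x, y, 5)

x_future = 15.0                             # 5 "years" beyond the data
pred1 = np.polyval(p1, x_future)
pred5 = np.polyval(p5, x_future)

# Inside the data range both fits track the series; beyond it the
# quintic is at the mercy of its x^5 term, the line of its slope alone.
```

The quintic always matches the data at least as well as the line inside the fitted range; that says nothing about either one’s value as a forecast.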

July 4, 2008 12:48 am

One wonders what sort of temperature record we have when this sort of thing goes on:
http://www.topix.net/world/australia/2008/07/fiddling-the-figures-on-climate-change
Seems like, yet again, the scare brigade get caught out.

The engineer
July 4, 2008 12:53 am

Brendan.
We have wind here in Denmark (around 20% of electricity needs are supplied that way) and there is nothing wrong with it at all, as long as it doesn’t “stand alone”. In Denmark we have Swedish hydro-electric, natural gas and of course oil as back-up, so no problem. Wind is also looking more and more economical compared to the rising oil prices, so don’t knock it.
Question to all:
Isn’t all this talk about “surface temperature” a waste of time? Surely the only temperature that affects climate is the temperature (not at the surface) of the oceans, as witnessed by the PDO, AMO, El Nino, etc.?
Surely surface temperatures are too sporadic to gain any real meaning about climate from them?

Philip_B
July 4, 2008 2:05 am

Just because you can get a line or curve to fit doesn’t prove a causal relationship.
But it is indicative of one.
While the failure of a line or curve to fit, as in the IPCC’s climate predictions, is conclusive proof of no causal relationship.

Patrick Hadley
July 4, 2008 2:50 am

Much as I love to see the temperatures giving grief to the alarmists, we might find the temperatures start to turn upwards soon. According to http://www.cdc.noaa.gov/people/klaus.wolter/MEI/mei.html we are probably coming to the end of the La Nina. If we go into an El Nino, this can mean a rapid rise in temperatures. Of course, if we are in a period like that between 1950 and 1975, when La Ninas seemed to follow one after another, then we may be in for a prolonged period of lower temperatures.

July 4, 2008 2:57 am

@the engineer:
Ocean temperature is dropping.

July 4, 2008 3:23 am

The Engineer:
I don’t agree that surface temperature is a waste of time, since it’s what most people key off of and understand about global warming. The time spent illustrating and discussing that there has been little to no warming for 11+ years will eventually pay off.
I do agree that surface temperature fluctuations are periodic and driven for the most part by oceanic oscillations: the AMO and ENSO, but not the PDO, which is not the simple residual of global SST subtracted from the North Pacific SST, north of 20N. The magnitude of the “North Pacific Residual” is more on the order of the AMO, but the timing of its oscillation is different from that of the AMO, and the PDO for that matter. The PDO is an aftereffect of ENSO, or at least one paper describes it as such. In “ENSO-Forced Variability of the Pacific Decadal Oscillation”, Newman et al state in the conclusions, “The PDO is dependent upon ENSO on all timescales.” Refer to:
http://www.cdc.noaa.gov/people/gilbert.p.compo/Newmanetal2003.pdf
I’ve illustrated and discussed the difference between the PDO and the North Pacific Residual here:
http://bobtisdale.blogspot.com/2008/06/common-misunderstanding-about-pdo.html
I have 17 posts on Smith and Reynolds SST data, including instructions on how to download it from NOMADS, over on my blogspot. For the Smith and Reynolds posts, I’ve tried to keep my AGW skepticism to myself and to report only what I found.
http://bobtisdale.blogspot.com/2008/06/smith-and-reynolds-sst-posts.html
Regards

July 4, 2008 4:43 am

‘WWS’ asked for a 200 day (roughly 6 month) moving average… As we say round here, ‘Yer ’tis:
http://www.woodfortrees.org/plot/hadcrut3vgl/from:1979/offset:-0.15/mean:6/plot/gistemp/from:1979/offset:-0.24/mean:6/plot/uah/mean:6/plot/rss/mean:6
REPLY: Just a note to readers, Paul has created an exceptional web resource at http://www.woodfortrees.org which is in the blogroll on the main page. If you wonder what a certain graph of temps or other variables looks like, be sure to try this page and its easy interactive menu system for generating graphs.
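For those without the site handy, the smoothing itself is one line of numpy; the sample numbers below are invented:

```python
import numpy as np

# A simple centered moving average, like the "mean:6" option on
# woodfortrees.org (6 samples = 6 months for monthly data).
def moving_average(series, window):
    kernel = np.ones(window) / window
    # 'valid' keeps only fully covered points, avoiding edge artefacts.
    return np.convolve(series, kernel, mode="valid")

data = np.array([0.1, 0.3, 0.2, 0.4, 0.6, 0.5, 0.7, 0.6])
smooth = moving_average(data, 6)   # first value: mean of data[0:6]
```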

The engineer
July 4, 2008 5:08 am

Bob Tisdale.
Thanks for the information. I’ll get onto it during the weekend.
My main point was that it’s the sea that drives the climate, not land. The sea stores energy for long periods of time; the land warms or cools very quickly.
Trying to compare, or average, temperatures in the desert with temperatures in the Antarctic makes no sense to me at all. Climatologists are attempting to compare thousands of different situations that exist today with thousands of different situations that existed yesterday in chaotic systems, and then trying to extrapolate to some kind of average figure in a stable system. The noise must be almost suffocating.

Allen
July 4, 2008 5:41 am

Another simplified approach to explanation of “first and fifth order” curve fits to data: A first order fit (linear) results in a straight line through the data points. A second order fit (quadratic) allows a smooth arc through the data points. A third order fit (cubic) allows the arc to change direction (up or down) one time — creating a simple “wave shaped” line through the data points. A fourth order fit (quartic) allows the wave to change direction one extra time as it passes through the data. A fifth order fit allows the wave to change direction one more time than does a fourth order. And so on — as the “order” increases, the number of individual “waves” (ups and downs) in the fitted curve increases.
I like a cubic fit myself (third order fit) for data that is obviously nonlinear, subject to change in direction at any time, and when I don’t know the underlying physical reality. The cubic allows the “curvature” to reverse direction, so it beats a quadratic (second order fit), in my mind. Nonetheless, the cubic remains conservative compared to higher orders, being less likely to exaggerate curvature at the endpoints of the data.
By comparison to a cubic fit (third order fit), the outcome of a linear fit (first order fit) depends too greatly on the time interval chosen (though all “curve fitting” can suffer from that). Nonetheless, linear is useful to clearly illustrate a simple point to an audience of varied academic background (I suppose that’s why so many linear fits are seen in the “global warming” popular literature).
Of course, as others have said, a curve fit (of any order) is no more than a math exercise (for example, the results cannot be reliably extrapolated to times beyond the existing data) — until one has discovered some underlying reality that explains the direction(s) the curve fit takes. Yet, the curve fit result might suggest those real mechanisms to a person with a broad enough scientific background — so has some scientific value.
For what it’s worth.
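Allen’s counting argument can be checked directly: a degree-n polynomial has at most n-1 turning points, whatever data it is fitted to. A small sketch on an invented wavy series:

```python
import numpy as np

# Invented wavy series: three full sine cycles plus noise.
rng = np.random.default_rng(2)
x = np.linspace(0, 1, 200)
y = np.sin(6 * np.pi * x) + 0.2 * rng.standard_normal(x.size)

def turning_points(values):
    # Count sign changes of the first difference, i.e. ups-and-downs.
    d = np.diff(values)
    return int(np.sum(np.sign(d[:-1]) != np.sign(d[1:])))

# A degree-n polynomial fit can change direction at most n-1 times.
counts = {deg: turning_points(np.polyval(np.polyfit(x, y, deg), x))
          for deg in (1, 2, 3, 5)}
```

The linear fit never changes direction; each added order permits (but does not require) one more reversal.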

Mike Monce
July 4, 2008 5:53 am

As a physicist, I deal with fitting data on a daily basis. If I were presented with that data in raw form (not knowing what two variables were plotted, so I wouldn’t superimpose some physics on the numbers), my first thought would definitely not be a linear fit. That sort of “noisy” data doesn’t even qualify.
My next step would be to run an FFT. The numerous peaks and valleys suggest an underlying complex time series. After the FFT one can then decide whether to investigate the data in the high-frequency realm, in this case days or months, or look at the longer-term trends of years, multiyears, or decades. Again I would let the FFT guide me a bit here: if the longer frequencies dominate the harmonic spectrum, I would average the data over the shorter time frames and then replot and see what happens.
The above sequence of analysis assumes I don’t have a clue about the physics, but would be trying to find out some relation. Knowing some of the physics behind the data helps tremendously in sorting through the analysis.
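As a sketch of that first FFT pass, here is an invented monthly series with a 45-month cycle buried in noise; the spectrum picks the cycle out immediately (both the series and its period are made up for illustration):

```python
import numpy as np

# Invented series: a 45-month oscillation plus heavy noise.
rng = np.random.default_rng(3)
n_months = 360                              # 30 years of monthly samples
t = np.arange(n_months)
series = np.sin(2 * np.pi * t / 45) + 0.5 * rng.standard_normal(n_months)

# Remove the mean so the DC bin doesn't swamp everything, then transform.
spectrum = np.abs(np.fft.rfft(series - series.mean()))
freqs = np.fft.rfftfreq(n_months, d=1.0)    # cycles per month

# Dominant period in months (skipping the zero-frequency bin).
dominant_period = 1.0 / freqs[np.argmax(spectrum[1:]) + 1]
```

With the dominant period identified, one can decide whether to average away the shorter timescales before replotting, as Mike suggests.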

July 4, 2008 6:08 am

Surely surface temperatures are too sporadic to gain any real meaning about climate from them?

Maybe. However, the IPCC AR4 projections focus largely on these. Their main projection over time is for surface temperature. They’ve included such projections in every one of their four reports. They compare data to their own projections in their reports.
Presumably, we can learn just as much from comparing data to projections of surface temperature as the IPCC thinks we can learn by reading their comparisons.

July 4, 2008 6:44 am

My favorite long term global temperature graph is here: http://www.longrangeweather.com/global_temperatures.htm
The current cooling period looks very much like the 1300-1350 period. We can only hope it does not proceed quite like the period from 1350-1600. Brrrr!
I have great difficulty fitting this graph to a hockey stick, though I might be able to beat it into that shape with a hockey stick.

Allen
July 4, 2008 6:52 am

Mike,
I agree about the FFT.
When I tried an FFT (just for fun) on global temperatures (various incarnations) and historical monthly sunspot numbers (one incarnation), I found that the global temperature data and sunspots share a lot of frequencies.
Then I tried (just for fun) a simple model (i.e. intuitive but unsubstantiated physics) of global temperature as a function of sunspots, etcetera. The simple model “explained” global temperature variations (on the 6 to 12 year and longer scales) over the last 160 years (with only a couple tenths left to explain). So, FFTs are fun and practical for getting a possibly meaningful handle on noisy data. About sunspots and global temperature — I have not made up my mind.

Paul Linsay
July 4, 2008 7:05 am

Anthony,
The mindset of climate science is linear or polynomial functions, but those represent a limited class of functions, and ones that are hardly seen in nonlinear systems. The climate is highly nonlinear, and we see things like the PDO, which causes a climate shift, not a simple rise or fall. You could achieve just as good a fit to the data with a step function: flat from 1979 to about 1998, and then a second level from about 2000 on, connected by a smooth transition at the 1998 El Nino. The rise in the step is about 0.2C. It’s not what the modelers would predict, but so what.
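Paul’s step model is cheap to test against a linear fit. The series below is invented with a 0.2C step built in at 1998, so it only shows that when the underlying change really is a step, the two-level model beats the straight line on residuals:

```python
import numpy as np

# Invented anomaly series: flat before 1998, 0.2 C higher after, plus noise.
rng = np.random.default_rng(4)
years = np.arange(1979, 2008)
anoms = np.where(years < 1998, 0.0, 0.2) + 0.03 * rng.standard_normal(years.size)

# Step model: just the mean of each segment.
before, after = anoms[years < 1998], anoms[years >= 1998]
step_fit = np.where(years < 1998, before.mean(), after.mean())
step_sse = float(np.sum((anoms - step_fit) ** 2))

# Ordinary linear fit for comparison.
line = np.polyval(np.polyfit(years, anoms, 1), years)
linear_sse = float(np.sum((anoms - line) ** 2))

step_size = float(after.mean() - before.mean())   # ~0.2 C by construction
```

Both models spend two parameters; which wins depends entirely on what the underlying change actually was.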

The engineer
July 4, 2008 7:06 am

Lucia, knowing that you are a statistician and I am an engineer will make this conversation easier.
While I am aware that someone at some point chose to try to work with a global average surface temperature, that very process seems to me to be an unnecessary complication of an already chaotic process.
Surely if one chose the temperature of a specific area of land or sea, one could eventually eliminate all the noise and compute a true value for temperature change for that area. If others did the same for different areas, one could start averaging those numbers. That would basically be many small programmes computing local relationships, instead of one massive computer attempting to average chaotic global relationships.
But of course it couldn’t be that simple!

Basil
Editor
July 4, 2008 7:07 am

Patrick Hadley,
The latest ENSO ensemble forecasts are projecting ENSO neutral conditions through the Winter of 2009. The dynamic models are projecting slightly positive SST anomalies, while the statistical models are predicting slightly negative SST anomalies, with the average hovering around zero. The probabilistic forecast is 75% ENSO neutral, and 25% El Nino OR La Nina.
All the curves fit to print, and more, on this subject, will be found here:
http://iri.columbia.edu/climate/ENSO/currentinfo/QuickLook.html
and here:
http://www.cpc.noaa.gov/products/analysis_monitoring/lanina/enso_evolution-status-fcsts-web.pdf

July 4, 2008 7:14 am

Fourier: You ask, we provide:
http://www.woodfortrees.org/plot/uah/fourier/low-pass:10/inverse-fourier/plot/uah/fourier/low-pass:2/inverse-fourier
(low-pass filter at harmonics 2 and 10)
But beware: DFT expects the end-points to match up, and creates artefacts to make this happen if they don’t. It’s best for looking at cycle comparisons in the middle of the data.
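Paul’s caveat can be seen numerically. Below, a clean 8-cycle sine plus a linear trend (so the endpoints don’t match) is transformed raw and again with a Hann taper; the taper is my addition for illustration, not a woodfortrees feature:

```python
import numpy as np

# 8 exact cycles plus a trend whose endpoints don't match up.
n = 256
t = np.arange(n)
signal = np.sin(2 * np.pi * 8 * t / n) + 0.01 * t

raw = np.abs(np.fft.rfft(signal))
windowed = np.abs(np.fft.rfft(signal * np.hanning(n)))

def leakage(spec):
    # Fraction of spectral energy landing away from DC, the trend's
    # low-frequency bins, and the true harmonic at bin 8.
    mask = np.ones(spec.size, dtype=bool)
    mask[:3] = False            # DC and the lowest trend bins
    mask[6:11] = False          # the true peak and its neighbours
    return float(np.sum(spec[mask] ** 2) / np.sum(spec ** 2))

leak_raw, leak_win = leakage(raw), leakage(windowed)
```

The raw transform smears the endpoint mismatch across many harmonics; the tapered one confines it, at the cost of widening the true peak slightly.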

Brendan
July 4, 2008 7:47 am

Engineer –
I love knocking wind. Wind is a backup, and a moderately expensive one at that. My point in showing the article (did you even read it?) is that even if there were a supernetwork of electrical tie-ins across Europe, the correlation of wind from the sea to the Urals is too strong, and there are too many days during peak demand where you see calms of up to a week or more, making it impossible to rely on wind as your main source of power. I won’t repeat what was said very well in that article, but they even mention the British offshore wind going through extended levels of calm that make it useless.
If you look at any of the specific positions (i.e., “areas”) as Anthony has placed here, you would see that it’s not a simple matter to even eliminate noise there.
As for tying all the data together, there are multiple good ways to do so that will not only statistically tie together the suffocating data, but will also give good estimations of standard deviation. They do however take some skill, and if the underlying data is corrupt or has been “changed”, your mileage may vary… Kriging is perhaps the most common, with a sound underlying basis to it. There is some “art” to it in creating some of the input search parameters, and kriging the whole world using one set of these may not in fact be appropriate… There are others, but that would be the first tool I would reach for. (See the R CRAN pages; as an engineer, I would expect you to be able to pick up these methods fairly rapidly.)
I should mention that, if properly taken, we probably have enough points to apply simple averaging to the data without resorting to kriging. Kriging itself would not take into account mountains, or seas, or deserts, just the data. You could create a numerical surface and tie together correlations on how you expect the surface to react, but now we are into numerical modeling, and, well, that’s another crazy cat lady house of cats that we could get into…
FFT has been mentioned, but again, it is just one of several tools. I think we all can agree that curve fits are quick and dirty assessments that have little meaning in reality for this application, although a first order is by far the most appropriate, but not a predictor. FFTs and some other stats methods are much better, and “honor the data”, as Andre Journel used to say…
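For the curious, here is a toy ordinary-kriging sketch in numpy. The station locations, anomalies, and exponential variogram parameters are all invented; a real analysis would fit the variogram from the data, as the geostatistics texts Brendan alludes to describe:

```python
import numpy as np

# Toy ordinary kriging with an assumed exponential variogram.
def ordinary_krige(xy_obs, z_obs, xy_new, sill=1.0, corr_range=30.0):
    def gamma(h):                       # exponential variogram model
        return sill * (1.0 - np.exp(-h / corr_range))

    n = len(z_obs)
    d_obs = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=2)
    # Ordinary kriging system: variogram matrix plus a Lagrange row/column
    # forcing the weights to sum to one.
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = gamma(d_obs)
    A[:n, n] = A[n, :n] = 1.0
    d_new = np.linalg.norm(xy_obs - xy_new, axis=1)
    b = np.append(gamma(d_new), 1.0)
    w = np.linalg.solve(A, b)
    return float(w[:n] @ z_obs)

# Five hypothetical "stations" (x, y in, say, degrees) and anomalies.
stations = np.array([[0, 0], [10, 0], [0, 10], [10, 10], [5, 20]], float)
anoms = np.array([0.1, 0.3, 0.2, 0.4, 0.6])
estimate = ordinary_krige(stations, anoms, np.array([5.0, 5.0]))
```

With no nugget term, the estimator honors the data exactly at the station locations, which is part of why Brendan recommends it.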

Bill Illis
July 4, 2008 7:48 am

It looks to me like El Nino is coming back.
http://www.osdpd.noaa.gov/PSB/EPS/SST/data/anomnight.7.3.2008.gif
If you watch this 5-month animation, you can see there has been a switch from La Nina to El Nino conditions. More importantly, if you keep speeding the animation up until you can actually see the waves of cold/warm water moving in unison (very interesting, actually), you will see that the Trade Winds pattern has now reversed (blowing west to east versus the normal east to west), and this reversal usually leads to El Nino conditions.
http://www.osdpd.noaa.gov/PSB/EPS/SST/anom_anim.html

July 4, 2008 7:54 am

Just a brief reminder…
One of the “shifts” that occurs every 10 years in temperature averages (“normals”) computed by the NOAA NCDC is that they compute a “new normal” series of temperatures for each observational climate station, using a 30-year period set on each 10th year (1970, 1980, etc.), then curve fit it with some polynomial function to create the “normal” daily highs/lows/averages output for each station. You will have to consult the National Climatic Data Center for the particular function used…
We were using the 1960 to 1990 “normals” through 2002, then switched to the 1970 to 2000 “normals”, which will be in effect until about 2011 or 2012, when another set of “normals” will be issued…
Looking at the history of using these 30-year “normals”, it becomes obvious that as you get away from the 1960-1970 “global cooling” and toward the 1998 warming peak, the normals are going to show substantial “warming”…
Just a thought for your discussion and analysis…
-JH-
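JH’s procedure can be sketched for one hypothetical station. All the numbers below are invented, and since the NCDC’s actual smoothing function is unspecified above, a 4th order polynomial is assumed purely for illustration:

```python
import numpy as np

# Invented station: a sinusoidal seasonal cycle plus daily weather noise.
rng = np.random.default_rng(5)
days = np.arange(365)
seasonal = 15.0 + 10.0 * np.sin(2 * np.pi * (days - 100) / 365)

# 30 years of hypothetical daily temperatures for the 30-year base period.
years = np.array([seasonal + 2.0 * rng.standard_normal(365) for _ in range(30)])

daily_normal = years.mean(axis=0)            # raw 30-year daily means
x = days / 365.0                             # rescale for fit conditioning
coeffs = np.polyfit(x, daily_normal, 4)      # assumed smoothing polynomial
smooth_normal = np.polyval(coeffs, x)
```

Shift the 30-year window forward a decade and the whole smoothed curve shifts with whatever trend the window captured, which is JH’s point.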

Basil
Editor
July 4, 2008 7:54 am

Paul Linsay,
We’ve been all over the question of linear vs. non-linear, step functions, slope shifts, etc. Over a limited time frame like the “satellite era” they will all have pros and cons, especially as indicators of where the trend will be next year, or ten years from now. The reason we get so focused on this, though, is because of a failure of climate “science.” Sometime during the late 1990’s the study of “natural climate variability” fell out of vogue in climate studies. What studies did get published were published with a clear bias toward AGW, or they didn’t get published. I.e., studies of “natural climate variability” were motivated by the goal to quantify it so as to remove it as a source of noise in the data, thus revealing the “real” trend caused by AGW. I don’t think you could get a paper published in the past 20 years without this kind of focus, or at the very minimum a “but this doesn’t disprove AGW” qualifier.
But anyone with a proper understanding of decadal, bidecadal, and multidecadal variation in climate would hardly be surprised at the recent retreat from the relentless upward “trend” of the late 20th century. You’ll search IPCC AR4 in vain for any kind of serious review of the literature on natural climate variability over these time scales, especially in global temperature. But it is there. And that “real” climate “science” seems to have been taken by surprise ought to be an embarrassment.
As an economist, I cannot help but draw an analogy to business cycles. They are hard to predict, as to timing. But if anyone tries to argue (and at one time, some did) that business cycles are a thing of the past, I stop taking them seriously. I may not be able to predict when the next recession, or expansion, will occur, but I know they will.
Similarly with climate variability. There are all kinds of cycles in climate: interannual, intraannual, decadal, multidecadal, centennial, millennial, and so on. The least credible forecast of all is that we are on a relentlessly upward trend. Every upward trend will turn down at some point, and on scales that are roughly, if not exactly, predictable. In the early 21st Century we were due for a downturn on several scales: bidecadal, multidecadal (e.g. PDO, and coming soon, probably more from the NAO), and centennial (here maybe a combination of Gleissberg and terrestrial ocean dynamics).
We will never get the trends right unless we understand the periodicities involved.

Brendan
July 4, 2008 7:55 am

I should mention that although wind is looking more economical as compared to oil, the connection between them currently is weak. That is, wind cannot be a replacement for oil; oil is not used to create electricity (for the most part – only crazy cat ladies use oil to create electricity)… The much-awaited electric car will bring its own issues, as rare earth prices will begin to rise when battery demand goes through the roof… I have mentioned before that I think nuclear-based methanol is an appropriate substitute fuel, in that our current infrastructure doesn’t need to change at all. I am open to solar if the promised breakthrough from Nanosolar reaches $1/W – right now, solar is a toy, and an expensive one at that.
Boy, I really thought my sticks would stir up more bees.