Mathematician Luboš Motl takes on the new UAH data (source here) and some current thinking about slopes in global climate by adding his own perspective and analysis. Be sure to visit his blog and leave some comments for him – Anthony
UAH: June 2009: anomaly near zero
UAH MSU has officially released their June 2009 data. This time, they’re faster than RSS MSU. The anomaly was +0.001 °C, meaning that the global temperature was essentially equal to the average June temperature since 1979. June 2009 actually belonged to the cooler half of the Junes since 1979.
Global warming is supposed to exist and to be bad. Sometimes, we hear that global warming causes cooling. In this case, global warming causes global averageness. In all three cases, it is bad news. The three main enemies of environmentalism are warm weather, cool weather, and average weather.
It is not a coincidence that these enemies are very similar to the four main enemies of communism. The four main enemies that were spoiling the success of communism were Spring, Summer, Fall, and Winter. 🙂 See Anthony Watts’ blog for additional discussion.
Bonus: trends over different intervals
You may have been intrigued by my comment that the cooling trend during the last 8.5 years is about -1.45 °C per century. What is the result if you choose the last “N” months and perform the linear regression?
You may see that cooling trends dominate for most intervals shorter than 110 months; the trend over the last 50 months is around -6 °C per century. Only when the period gets longer than 150 months, i.e. 12.5 years (but stays shorter than 31 years), does the trend become uniformly positive, around 1.2 °C per century for intervals whose length is close to 30 years.
Note that those 12.5 years – where you still get a vanishing trend – run from January 1997 to June 2009. If you consider the UAH mid-troposphere data instead (relevant for the part of the atmosphere where the greenhouse warming should be most pronounced, according to both proper atmospheric science and the IPCC report, page 675), all the trends are shifted downwards:
You need to consider time periods longer than 180 months, i.e. 15 years (starting no later than the summer of 1994) – but shorter than 31 years – to see a uniformly positive warming trend. And the trend you can calculate from those 30+ years is just 0.4 °C per century, and chances are that this 30+-year trend will actually drop below zero in a few years. At any rate, the blue graph makes it clear that, in the right context, the longer-term warming trend converges to zero to a very good accuracy.
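For readers who want to reproduce this kind of trend-versus-window-length curve, here is a minimal sketch, assuming `anomaly` holds a monthly anomaly series (oldest first, ending with the latest month); the file name and variable names are placeholders, not anything from Luboš's actual calculation.

```python
import numpy as np

# Monthly global anomalies (°C), oldest first, ending with the latest month.
# Placeholder file name - substitute the real UAH series.
anomaly = np.loadtxt("uah_lt_monthly_anomaly.txt")

def trailing_trend(series, n_months):
    """Least-squares trend over the last n_months, in °C per century."""
    window = series[-n_months:]
    t_years = np.arange(n_months) / 12.0
    slope_per_year = np.polyfit(t_years, window, 1)[0]
    return slope_per_year * 100.0

# Trend as a function of window length N, from 2 years up to the full record.
lengths = range(24, len(anomaly) + 1)
trends = [trailing_trend(anomaly, n) for n in lengths]
```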
According to the IPCC, the surface warming trend should be around 3 °C per century which should translate to a 4-5 °C warming per century in the mid troposphere where the greenhouse effect has the strongest muscles. You see that according to the last 30 years of the data, the IPCC overestimates the warming trend by one order of magnitude!
Because the mid troposphere is the dominant locus of the greenhouse “fingerprint”, this is the most appropriate method to check the validity of the IPCC predictions. Their order-of-magnitude error is equivalent to the mistake of a biologist who confuses squirrels and elephants.
To be more specific about a detail, half of the Earth’s surface lies between 30°S and 30°N – because, as Sheldon Cooper said in TBBT, the sine of 30 degrees is exactly 1/2. But the mid-troposphere (8 km above the surface) warms faster than the surface at least between 40°S and 40°N, i.e. over the majority of the surface, so it is likely that even when you take the global averages of both quantities, the mid-troposphere should see faster warming than the surface.
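For anyone who wants the arithmetic behind the sine remark: the fraction of a sphere’s surface lying between latitudes ±φ works out to sin φ, so ±30° does indeed cover exactly half:

```latex
\frac{A(-\varphi,\,+\varphi)}{A_{\text{sphere}}}
  = \frac{\displaystyle\int_{-\varphi}^{+\varphi} 2\pi R^{2}\cos\phi \,\mathrm{d}\phi}{4\pi R^{2}}
  = \frac{2\sin\varphi}{2}
  = \sin\varphi ,
\qquad \sin 30^{\circ} = \tfrac{1}{2}.
```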
Someone may argue that those 30 years represent too short an interval and the trend will be higher in 100 years. But such reasoning is wishful thinking. Moreover, periods longer than 30 years don’t really belong to the present generation. In 30 years, most of the population of the Earth won’t remember the year 2009 – and they shouldn’t be affected by the stupid fads of the mostly dumb people of 2009.
bluegrue (15:54:10)
Here’s my point: they adjust the temps up by .6 degrees and then claim there’s a .6 degree increase in GST. Doesn’t that strike you as a bit obtuse?
Further, they’ve basically denied any UHI effect, allowing -.1 degree, and then offset that with a +.7 degree adjustment. Tell me, what do these ‘necessary’ adjustments represent?
It is fairly intuitive that there is a UHI effect. I was watching the weather just the other day. All of Iowa is in the low 70’s and upper 60’s. Des Moines is reporting 78 degrees. Not unusual, and makes a bit of sense, if you live in Iowa.
So, where is it that the thermometer reads 79, and you adjust that up to 80, to be more ‘accurate’?
Let me know.
Bluegrue;
Further, TOBS and SHAPS should tally to near zero in the aggregate. Period, end of story.
Unless one assumes they are adding an unaccounted UHI back into SHAPS, thereby maintaining a UHI effect they were not adjusting out in the first place. (That would be a spurious argument at best.)
Here’s a thought: if their sites are so great (despite documented evidence to the contrary), why are there ‘necessary’ adjustments? To think that there is anything other than deliberate fraud at this point is to ignore massive amounts of evidence to the contrary, and to assume incompetence at a level unprecedented since Easter Island.
Since their funding doesn’t come from ‘big oil’, I guess their integrity is beyond reproach?
And the ‘we really don’t know everything we need to yet’ crowd is the side that’s in denial?
Beam me up Scotty, there’s no intelligent life down here.
Someone in another thread called WoodForTrees a great “cherry orchard”. I took that as a compliment because I like cherry trees 🙂 But too many cherries give you indigestion…
http://www.woodfortrees.org/plot/uah/last:120/plot/uah/last:120/trend/plot/uah/last:102/trend/plot/uah/last:60/trend
Which is more representative – maybe none of them?
See also: http://www.woodfortrees.org/notes#trends
But the graph of the change of trend over time is an interesting way to look at it. To make it really interesting you’d have the end year plotted on the Z axis – the curve to end year 1998, for example, is going to look quite different…
Hank Hancock (01:14:47) :
You have it the other way around, you need to cherrypick short periods to get negative trends. Here’s monthly GISTEMP data running mean over 1, 10, 20 and 30 years. Could you please point out, where exactly there is a cherrypick in the length of the smoothing period? Take any 15 to 50-year average and you’ll get more or less the same result. Of course the longer averages will smooth out detail. No, it’s not the satellite record, but ground-based and satellite data agree reasonably well.
The 30 year average smooths out most of the weather but still allows one to observe changes in climate. It was adopted about 75 years ago!
“Climate” by H. H. Lamb, Methuen (August 1977), ISBN 0064738817, Footnote 1 on page 684. A simple look at the Met Office site or the IPCC glossary would have revealed to anyone interested that the 30-year period goes back to a definition by the WMO. I find it curious that most readers seem to prefer to speculate rather than investigate. So the “30 year weather average = climate” was around quite some time before Hansen.
For completeness’ sake: if you continue to read the footnote you will note that the author is not happy with this definition, as the normal period is updated every 10 years, and he further contends that 10 to 15-year averages are more useful for predicting the next year or two.
Hank Hancock – A while back I looked at different methods for long-term temp analysis, using methods other than the usual “statistical” ones. One of them, Fourier convolution, is a method Leif uses in solar analysis. In addition, some standard signal-conditioning methods were applied to see what would shake out. The East English data from 1659 to 2008 was first “detrended” with a linear fit.
http://www.imagenerd.com/uploads/t_est_21-Gnm7m.gif
Next, three methods were used to remove the short-term variation, using 40-year “filters” (fc = 0.025 cycles/year): Fourier convolution (FFT), Chebyshev 4-pole, and moving average. The result was:
http://www.imagenerd.com/uploads/t_est_24-vco3s.gif
Since the 4-pole Chebyshev introduces a phase delay of 180 deg., or ~20 years (for a wave with a 40-year period), I “backed” it off 20 years for the resultant graph:
http://www.imagenerd.com/uploads/t_est_25-avCpP.gif
So the FFT and “phase shifted” long term data look quite similar, especially at the end, using two different methods. Putting the “trend” back in resulted in:
http://www.imagenerd.com/uploads/t_est_26-kT1s8.gif
(the label “temp er” should read “temp”) and comparing it to the composite put out by climate4you resulted in:
http://www.imagenerd.com/uploads/t_est_27-qvBaC.gif
So based on this ONE data series, it looks very likely that a downward trend may be in the making. In addition, looking at the error between the actual data and the trend, the resultant error, over the range, seems well bounded.
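For readers who want to try this kind of comparison themselves, here is a minimal sketch of the three-filter exercise in Python, assuming an annual series with one value per year; the file name, the filter ripple, and the exact bookkeeping are illustrative assumptions of mine, not J. Bob’s actual code.

```python
import numpy as np
from scipy import signal

# Annual temperature series, one value per year (e.g. 1659-2008); placeholder file.
years = np.arange(1659, 2009)
temp = np.loadtxt("annual_temps.txt")

# 1) Remove a linear trend first (re-added at the end); this also tames
#    FFT edge effects on a record that is not periodic.
coeffs = np.polyfit(years, temp, 1)
detrended = temp - np.polyval(coeffs, years)

fc = 0.025  # cutoff in cycles/year, i.e. a ~40-year period

# 2) FFT "convolution" filter: zero every frequency above the cutoff.
freqs = np.fft.rfftfreq(len(detrended), d=1.0)   # sample spacing = 1 year
spec = np.fft.rfft(detrended)
spec[freqs > fc] = 0.0
fft_smooth = np.fft.irfft(spec, n=len(detrended))

# 3) 4-pole Chebyshev type-I low-pass; being causal it lags the data,
#    so shift it back ~20 years as described above (crude phase correction).
b, a = signal.cheby1(4, 0.5, fc / 0.5, btype="low")  # Nyquist = 0.5 cycles/yr
cheb_smooth = np.roll(signal.lfilter(b, a, detrended), -20)

# 4) Plain ~40-year moving average for comparison.
ma_smooth = np.convolve(detrended, np.ones(40) / 40.0, mode="same")

# Put the linear trend back to compare against the raw record.
fft_with_trend = fft_smooth + np.polyval(coeffs, years)
```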
year 2008 was tenth warmest on record, exceeded by 1998, 2005, 2003, 2002, 2004, 2006, 2001, 2007 and 1997.
I also point out that “tenth warmest” out of ten can also be called “coldest.”
That the warm temperatures are clustered around the peak is no surprise. That’s why it’s called a “peak.”
It’s usually preceded by a run up of temperatures before the peak, as occurred from roughly 1978 to 1998.
It’s usually succeeded by a steady decrease of temperatures after the peak, as has occurred from 1998 until now.
These, too, would be defining characteristics of a “peak.”
The problem with its “peakiness” is that CO2 has not peaked; it continues to rise. Thus absolving CO2 as a driver of temperatures – at least for me.
Meanwhile, solar activity seems to be related to the “peakiness”; cycles 19, 21, 22, and 23 were among the most active ever recorded. That these occurred during the run-up to the peak makes the peak no surprise – at least for me.
Nor will cooling from a less active sun be a surprise to me. Although some AGW true believers will find their faith shattered…
J.Bob: Similar Fourier filtering (to harmonic 4 = roughly 40 years) on HadCrut data:
http://www.woodfortrees.org/plot/hadcrut3vgl/plot/hadcrut3vgl/detrend:0.7/fourier/low-pass:4/inverse-fourier/detrend:-0.7
Note I’ve done the same trick of detrending and then, er, dedetrending to avoid Fourier edge effects.
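Roughly the same recipe in code form, under the assumption that woodfortrees’ `detrend:0.7` removes a linear ramp totalling 0.7 °C over the record and `low-pass:4` keeps only Fourier harmonics 0 through 4 (the file name and series are placeholders):

```python
import numpy as np

temp = np.loadtxt("hadcrut3_monthly.txt")   # placeholder: monthly anomalies
n = len(temp)

# detrend:0.7 - remove a linear ramp totalling 0.7 °C across the record
ramp = np.linspace(0.0, 0.7, n)
work = temp - ramp

# fourier / low-pass:4 / inverse-fourier - keep only harmonics 0..4
spec = np.fft.rfft(work)
spec[5:] = 0.0
smooth = np.fft.irfft(spec, n=n)

# detrend:-0.7 - add the ramp back ("dedetrending")
smooth = smooth + ramp
```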
Bob Kutz (05:58:46)
Are you even aware that you are comparing numbers in degrees Fahrenheit to others in degrees Celsius? Ever heard of conversion factors? Are you further aware that TOBS and SHAP are applied to the US48 only? As the US48 is just 2% of the Earth’s surface, the global impact of these corrections is minimal. Furthermore, your assertion that “TOBS and SHAPS should tally to near zero in the aggregate” is simply that, an assertion, and a baseless one at that.
Woodfortrees (Paul Clark) – What I did was just “chop off” frequencies above 0.025 cycles/yr, close to the “cutoff” frequency shown in the graph referenced below:
http://www.imagenerd.com/uploads/temp_est_12-GOpNo.gif
Note the spectral chart should read cycles/year instead of Hz., so I wasn’t looking at any particular harmonic.
What was interesting is how the recursive Cheb. Filter correlated to the FFT filter, after the phase adjustment. This was one of the cross checks we did when running analysis studies. The phase delay of the Cheb. filter does not help in real time process control, but it does help verify the FFT filter, if it is used in real time data processing. Just one of the reasons why the FFT is incorporated into signal processing IC’s.
I checked your site previously; now I see how you put in other types of filters – thanks for the example.
The people at RC had a real hard time with Fourier analysis and filtering. They even called the procedures outlined by Blackman and Tukey (co-developer of the FFT) “bungled”.
David, speaking of unbelievable denialism…
In 2005 two Russian solar physicists bet James Annan $10,000 that the earth’s temperature would be cooler, not warmer, ten years from then. The Russians were probably thinking of their grandchildren. 🙂
But the ten year length of the bet, short by climate standards per above comments, indicates that both sides of the bet were confident. Nearly halfway through the bet, the Russians are looking good.
bluegrue (09:24:29) :
Yes, I am aware that I have mixed C and F, though not in any calculation – simply in separate comments regarding temps, and I don’t usually use C when talking about real-time, daily temps. That’s for science logs.
As to the TOBS and SHAPS adjustments, I don’t believe it’s true that they only apply to US data, as there is little information available regarding the adjustments made outside the US. The common belief is that they closely reflect the USHCN adjustments. Never mind the issues with what happened to the representative sampling after the demise of the USSR.
Further, if you simply re-site the surface stations, or adjust for the time of day, then that ought to be essentially a ‘random noise’ adjustment. If it’s not, you are manipulating the data. Or are you thinking that whatever the time, earlier or later, the temp adjustment ought to be positive, and whatever the relocation, the temp adjustment likewise ought to be positive?
Statistically, if it’s not near zero, it’s manipulation. The graph is fairly transparent in that regard. Considering that the UHI adjustment is in the realm of 0.1 °C, it’s laughable that anyone would be willing to support a positive adjustment of over 0.5 °C for the combined TOBS and SHAPS. There’s just no damn way taking a temp an hour later or earlier is going to have the same effect as having the station in the middle of a parking lot, or even in a park in the middle of what has become a metro area – at least not over a longer period of time. Further, SHAPS should net to zero, unless you are, what, always moving the station to a cooler location, but not because of UHI? What kind of bs is that? There’s no way to make that claim short of admitting you are allowing for a UHI effect to be adjusted out that you weren’t adjusting for to begin with. If you don’t understand what that means, then that’s your problem. But I am having serious doubts about your understanding of any of this.
Further, in your other post, you note that over periods of 30 to 50 years there’s only a warming trend. Take a look, and note too that the longer the period used, the smaller that trend is. It’s approaching zero, the same as it would for a random sample. Think about that. Meanwhile, the shorter-term trend is negative, indicating that while it may have been warming, it is now cooling, and in the long term these minor changes are just ‘noise’. If there were really a trend, it wouldn’t be very arguable at this point.
Steven Kopits (11:33:23) : “Not to belabor the point, but a squirrel to an elephant is about three orders of magnitude. A horse to an elephant would be about right.”
Well, with all this robustness in the climate models, I’d say there must be an elephant.
Reminds me of the room filled with horse manure. A boy was discovered digging through the manure sure that there must be a pony in it somewhere. So too the AGW’ers. With all this cold weather, there must be an AGWing theory in the manure somewhere.
TJA,
“I don’t know what your point is”
Simple factual correction. I didn’t mean to suggest “he is not a climate scientist or statistician”, but simply to correct the information Anthony offered. I think Lubos is one of the best and mathematically most capable commentators on climate change I am aware of. If you know something about string theory, you should also know those guys have to be damn good mathematicians in order to do that science. 🙂
Don Shaw (23:50:44) :
David (14:30:01) : “The time series shows the combined global land and marine surface temperature record from 1850 to 2008. ”
David, I find it interesting that the time period selected is 1850 to 2008. Do you think that going back to 1850, when it was mighty cold, might distort the story? This choice of start date tells me that they are intentionally distorting the picture.
As seen here:
http://chiefio.wordpress.com/2009/03/02/picking-cherries-in-sweden/
1850 is almost exactly the bottom of the LIA lows (look at the black average line). It is a most perfectly harvested cherry…
Notice that the 1720 era temperatures are just about exactly the same as now. We’ve had a 300 year cycle down into a LIA low and back out again, nothing more. Everything else is stuff and nonsense dancing in the error bands of fictional precision.
bluegrue (06:37:55) : Here’s monthly GISTEMP data running mean
GIStemp? You still use GIStemp? That is Soooo last millennium…
A partial list of problems with GIStemp can be found here:
http://chiefio.wordpress.com/gistemp/
over 1, 10, 20 and 30 years. Could you please point out, where exactly there is a cherrypick in the length of the smoothing period?
OK, how about the fact that there is a roughly 30(ish)-year PDO warm phase in that period? All your intervals are from inside one half of a longer cycle that has now flipped from what it was in the ’70s. That is inside a longer rise from 1850 (the starting point of GIStemp’s cherry pick) at the very bottom of the Little Ice Age, inside a roughly 300-year cycle. Oh, and just for grins, there is a 1,500-year cycle of Bond Events, and we are also nearing the end of a warm phase in that cycle. (Though it is worryingly possible that we are now in the first stages of Bond Event Zero…
http://chiefio.wordpress.com/2009/04/06/bond-event-zero/
though I dearly hope not.)
Oh, and GIStemp is more of a “fiction creation program” than a temperature series. One example? They need to create lots of data over space and over time where none exist, and they rewrite the history in strange and wondrous ways. (OK, that’s two…)
http://chiefio.wordpress.com/2009/02/24/so_many_thermometers_so_little_time/
So as soon as you say “GIStemp” you are saying “Fictional pasteurized processed data food product”. No thank you. I like my data real, wholesome, and minimally processed.
bluegrue (09:24:29) : As US48 is just 2% of the Earth surface the global impact of these corrections is minimal.
But the US is a much larger percentage of the THERMOMETERS… so your point about the surface area is a rather silly one…
http://chiefio.wordpress.com/2009/02/24/so_many_thermometers_so_little_time/
Take a look at the graphic in the top of that link. Notice that the USA dominates the thermometers x time product that is the “temperature record”. What happens (or happened) to the USA stations is very important.
Since GIStemp happily takes the stations it has and fictionalizes their readings over ever larger areas of time and space, well, let’s just say that a little data fudge here and there can flavor a very large space… about 1500 km in the later “anomaly” stages of the code.
After reading much of the above, I guess there are two kinds of people: those who can believe in a worldwide decades-long conspiracy of postdocs, journalists, data collectors, and policymakers, and those who can’t. Oh, and I guess the ocean and the sun would have to be in on it, too, since they don’t seem to be cooperating.
As a published peer-reviewed scientist, I suppose I’ve been hoodwinked by my colleagues.
As for the fellow that asserted that the years Hadley names as the warmest are the product of regression, here are the raw 20 warmest years since 1850 in that series, with no adjustments:
1998
2005
2003
2002
2004
2006
2001
2007
1997
2008
1999
1995
2000
1990
1991
1988
1987
1983
1994
1996
Must be all of those crummy thermometers.
REPLY: Interesting thing about Hadley, they have steadfastly refused to share any techniques, formulae, or code that is used to come up with their temperature dataset. Science without transparency or replication opportunity is not science. Further, it only takes a couple of people and computer code to produce the dataset they offer, so your “decades-long conspiracy of postdocs, journalists, data collectors, and policymakers” really doesn’t apply when it is in the hands of a couple of people.
I challenge you. Ask Hadley to share the techniques, formulae, and code as many of us have. FOI requests have been made and ignored. If you are truly a man of science, it should be no problem whatsoever for you to obtain what nobody else has been able to.
Let us know when you have it. Or let us know when you are rebuked. I’ll be happy to provide the contact information for Dr. Philip Jones there if you wish, and I’ll even grant you a full-on guest post to share what you’ve learned. – Anthony Watts
David:
Confirmation bias and groupthink are all that is required. No conspiracy necessary.
BTW, let us know when Hadley releases the raw data (not the gridded data) and methods that created your list. Until then that list is nothing more than opinion.
jorgekafkazar (16:21:20) :
Steven Kopits (11:33:23) : “Not to belabor the point, but a squirrel to an elephant is about three orders of magnitude. A horse to an elephant would be about right.”
Well, with all this robustness in the climate models, I’d say there must be an elephant.
Harvesting a silly nit: I think it depends on which you are comparing:
volume (a proxy for mass), surface area, or length…
Pamela Gray (16:29:42): I heard the story re: the boy shoveling the stall as a light-hearted example of optimism. Seems hard to imagine such vitriolic folk as being optimists…
Bob Kutz (05:58:46)
Bob Kutz (14:52:22)
Yeah.
Hansen et al. 2001 document the changes with regard to their 1999 algorithm and say otherwise.
Wrong, you are seriously uninformed: TR Karl et al., J. Appl. Met. 25(2), pp. 145–160, Feb. 1986. The time of observation has changed in the US in a systematic, not a random way. Read the paper, note its date.
Take a look yourself here. Take 10, 30 and 50-year averages, cut them down to the period covered by all the smooths, and apply linear regression. Care to repeat the bit about approaching zero?
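A minimal sketch of that procedure, assuming `monthly` is a monthly anomaly series; pandas is just a convenient way to take centred running means, and none of this is bluegrue’s actual code:

```python
import numpy as np
import pandas as pd

monthly = pd.Series(np.loadtxt("monthly_anomalies.txt"))  # placeholder file

def smoothed_trend(series, years):
    """Centred running mean of <years> years, restricted to the span where
    even the widest (50-year) smooth exists, then an OLS trend in °C/century."""
    smooth = series.rolling(window=years * 12, center=True).mean()
    common = series.rolling(window=50 * 12, center=True).mean().notna()
    y = smooth[common].to_numpy()
    t = np.arange(len(y)) / 12.0
    return np.polyfit(t, y, 1)[0] * 100.0

for w in (10, 30, 50):
    print(f"{w}-yr smooth: {smoothed_trend(monthly, w):.2f} °C/century")
```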
E.M. Smith
Using Hadcrut does not change the point I made in any way. Have a look at this graph of Hadcrut data, giving linear trends for the period 1905 to 1985 after 10, 30 and 50-year smoothing. Care to explain why the trends are almost identical?
Words fail me. The thermometer readings are used to determine a temperature for each point of the Earth’s surface, then you integrate over the surface. If you have two stretches of land of equal size, one with 10000 thermometers and one with 10 thermometers, the former having an average temperature of 20°C, the other of 22°C, then the average temperature over both is 21°C and not 20.002°C, as you imply.
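The arithmetic in that example, spelled out (the numbers are from the comment above; the code itself is only an illustration):

```python
# Two equal-area regions: one with 10,000 stations averaging 20 °C,
# one with 10 stations averaging 22 °C.
n1, t1 = 10_000, 20.0
n2, t2 = 10, 22.0

station_weighted = (n1 * t1 + n2 * t2) / (n1 + n2)  # weight by station count
area_weighted = (t1 + t2) / 2.0                      # weight by (equal) area

print(round(station_weighted, 3))  # 20.002 °C - the mistaken method
print(area_weighted)               # 21.0 °C - what an area average gives
```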
David
Glad you are still around. Perhaps you can answer the question I posed a few days ago then we can have a sensible discussion on the weather stations and the data that emanates from them.
“Would you like to confirm your understanding of the number of accurate weather stations that were used at the start of the series in 1850, and also how the marine surface record back to then was compiled? Also you might like to find out how often the reporting stations have changed in number and location. Once you know that background we can all have a sensible discussion on the data you have referenced.”
Thanks. Look forward to your reply
Tonyb
David (18:51:46) :
You don’t believe in a worldwide decades-long conspiracy. Neither do I. It’s far more subtle than that. Group think is great if you’re fighting a war, building the first atomic bomb or putting a man on the moon. But it’s a terrible way of trying to get to the truth. It becomes all too easy to ignore or rationalise away facts that contradict what the group thinks.
And of course there are many vested interests. Many jobs depend on AGW. Because of the scare, US spending on climate science has mushroomed from hundreds of millions to over a billion dollars, and that was under Bush. If I were a climate scientist, I would have to be stupid to question the orthodoxy – assuming, of course, that I didn’t care about the integrity of science.
No, I don’t think it’s a conscious conspiracy. But it may be an unconscious one.
I’m not quite sure what listing the temperatures proves. Nobody is seriously saying there has been no global warming, although the amount is almost certainly exaggerated. You can see the trend simply by looking at the graph of HADCRUT3. But it also shows a very consistent cooling trend over almost the last ten years. The satellite record, which is almost certainly far more reliable than the ground record, is now almost precisely on its 30 year average. According to AGW, shouldn’t the satellite record actually be higher?
With this in mind, it’s not surprising people are becoming more sceptical. But the real problem is the lack of credible proof of AGW. Without that proof, and bearing in mind that the recent warming is well within natural variability, then AGW looks increasingly like a failed theory. But if you do know what that ‘overwhelming’ proof of AGW is, then please let us know!
Chris
bluegrue (06:37:55) – “You have it the other way around, you need to cherrypick short periods to get negative trends. Here’s monthly GISTEMP data running mean over 1, 10, 20 and 30 years. Could you please point out, where exactly there is a cherrypick in the length of the smoothing period?”
I’m sorry, but I don’t have it the other way around. I’m not talking about the mean samples (smoothing period) but rather the relationship of the measurement period to the lower-frequency cycle. Looking at your running mean, I see a quarter cycle in its positive phase, with 1880 appearing to be a transition from a negative to a positive phase, which the graph provided by E.M. Smith (17:09:30) corroborates:
http://chiefio.wordpress.com/2009/03/02/picking-cherries-in-sweden/
There’s a zero crossing at roughly 1979, assuming 2002 to be the peak of the positive phase. What is most noteworthy is the trend from 1920 to 1940 is almost identical to that of 1979 to 2000. It is hard to correlate the CO2 trend to that little glitch that occurs from 1940 to 1979 and the one that occurs at 2002 to the present.
The fifth image generated by J. Bob (07:59:45):
http://www.imagenerd.com/uploads/t_est_27-qvBaC.gif
provides an excellent “drill down” to a transition ca. 2002 from the past positive phase to the present negative phase in the cycle.
I don’t see a “cherry pick” per se. I see a measurement period that, by way of its arbitrary length, optimizes on the positive phase of a longer climate cycle. The false premise of AGW is that this arbitrary measuring period is somehow of sufficient length to account for dominant climate cycles far longer than its statistical scope.
“By agreement adopted by the former International Meteorological Organization at its meeting in Warsaw in 1935 recent 30-year averages of weather observations are defined as climatic ‘normals’. The original standard period so adopted was 1901-30; […]”
If the AGW measurement period ties to the length of the satellite record, it is arbitrary. If it ties to the ground instrument record, it is arbitrary. If it ties to the number 30 pulled out of the air by well meaning scientists 75 years ago, it is arbitrary. Being arbitrary, it can only speak to the portion of the cycle it covers with no meaningful application beyond retrospection.