Physicist Luboš Motl takes on the new UAH data (source here) and some current thinking about slopes in global climate by adding his own perspective and analysis. Be sure to visit his blog and leave some comments for him – Anthony
UAH: June 2009: anomaly near zero
UAH MSU has officially released their June 2009 data. This time, they’re faster than RSS MSU. The anomaly was +0.001 °C, meaning that the global temperature was essentially equal to the average June temperature since 1979. June 2009 actually belonged to the cooler half of the Junes since 1979.
Global warming is supposed to exist and to be bad. Sometimes, we hear that global warming causes cooling. In this case, global warming causes global averageness. In all three cases, it is bad news. The three main enemies of environmentalism are warm weather, cool weather, and average weather.
It is not a coincidence that these enemies are very similar to the four main enemies of communism. The four main enemies that were spoiling the success of communism were Spring, Summer, Fall, and Winter. 🙂 See Anthony Watts’ blog for additional discussion.
Bonus: trends over different intervals
You may have been intrigued by my comment that the cooling trend during the last 8.5 years is -1.45 °C per century. What is the result if you choose the last “N” months and perform the linear regression?
You may see that cooling trends dominate for most intervals shorter than 110 months; the trend over the last 50 months is around -6 °C per century. Only when the period gets longer than 150 months, i.e. 12.5 years (but stays shorter than 31 years), does the trend become uniformly positive, around 1.2 °C per century for intervals whose length is close to 30 years.
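If you want to reproduce this curve, here is a minimal sketch of the trailing-window regression, assuming the UAH monthly anomalies are available as a single plain-text column – the file name and layout below are placeholders:

```python
import numpy as np

# UAH lower-troposphere monthly anomalies in degrees C, oldest first,
# ending with the latest month (June 2009). File name/layout are placeholders.
anomalies = np.loadtxt("uah_lt_monthly.txt")

def trailing_trend(series, n_months):
    """OLS slope over the last n_months, converted to degrees C per century."""
    y = series[-n_months:]
    t = np.arange(n_months) / 12.0          # time axis in years
    slope_per_year = np.polyfit(t, y, 1)[0]
    return slope_per_year * 100.0

# Trend as a function of the trailing window length, as in the plots above
for n in range(24, len(anomalies) + 1, 6):
    print(n, round(trailing_trend(anomalies, n), 2))
```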
Note that those 12.5 years – where you still get a vanishing trend – run from January 1997 to June 2009. If you consider the UAH mid-troposphere data instead (relevant for the part of the atmosphere where greenhouse warming should be most pronounced, according to both proper atmospheric science and the IPCC report, page 675), all the trends are shifted downwards:
You need to consider time periods longer than 180 months, i.e. 15 years (at least from summer 1994) – but shorter than 31 years – to see a uniformly positive warming trend. And the trend that you can calculate from those 30+ years is just 0.4 °C per century, and chances are that this 30+-year trend will actually drop below zero again in a few years. At any rate, the blue graph makes it clear that, in the right context, the longer-term warming trend converges to zero to very good accuracy.
According to the IPCC, the surface warming trend should be around 3 °C per century, which should translate to 4-5 °C of warming per century in the mid troposphere, where the greenhouse effect has the strongest muscles. You can see that, according to the last 30 years of data, the IPCC overestimates the warming trend by one order of magnitude!
Because the mid troposphere is the dominant locus of the greenhouse “fingerprint”, this is the most appropriate method to check the validity of the IPCC predictions. Their order-of-magnitude error is equivalent to the mistake of a biologist who confuses squirrels and elephants.
To be more specific about a detail: half of the Earth’s surface lies between 30°S and 30°N because, as Sheldon Cooper said in TBBT, the sine of 30 degrees is exactly 1/2. But the mid-troposphere warming (8 km above the surface) is faster than the surface warming at least between 40°S and 40°N, i.e. on the majority of the surface, so it is likely that even when you take the global averages of both quantities, the mid troposphere should see faster warming than the surface.
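For the record, the sine claim is just the standard spherical-zone result: the fraction of a sphere’s surface between latitudes −φ and +φ is

```latex
\frac{A(\pm\varphi)}{4\pi R^2}
  = \frac{1}{4\pi R^2}\int_{-\varphi}^{+\varphi} 2\pi R^2 \cos\theta \,\mathrm{d}\theta
  = \sin\varphi,
\qquad \sin 30^\circ = \frac{1}{2}.
```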
Someone may argue that those 30 years represent too short an interval and the trend will be higher in 100 years. But such reasoning is wishful thinking. Moreover, periods longer than 30 years don’t really belong to the present generation. In 30 years, most of the population of the Earth won’t remember the year 2009 – and they shouldn’t be affected by stupid fads of those mostly dumb people from 2009.
@bluegrue… But then again, you don’t really know the temperature if you have just 10 thermometers! Of the 1700 thermometers used in GISS, how many are in Africa? How many are in the Sahara desert? (None.) Even the 1200 km smoothing doesn’t help there. Plus, most of them are poorly sited, and all of them are constantly “corrected”.
btw, according to sat data (UAH), 2008 was the 17th warmest in 30 years…
Year   Anomaly (°C)   Rank
1998   0.51            1
2005   0.34            2
2002   0.31            3
2007   0.28            4
2003   0.27            5
2006   0.26            6
2001   0.20            7
2004   0.19            8
1991   0.12            9
1987   0.11           10
1988   0.11           11
1995   0.11           12
1980   0.09           13
1997   0.08           14
1990   0.07           15
1981   0.05           16
2008   0.05           17
1983   0.04           18
1999   0.04           19
2000   0.03           20
1996   0.02           21
1994  -0.01           22
1979  -0.07           23
1989  -0.11           24
1982  -0.15           25
1986  -0.15           26
1993  -0.15           27
1992  -0.19           28
1985  -0.21           29
1984  -0.26           30
bluegrue (06:37:55): On the one hand we’re told temperature is very important; on the other, the monitoring system peaked in 1985 and is presently decimated: http://i44.tinypic.com/23vjjug.jpg
Hansen would have us believe that he can just average this and that with less real data and arrive at a good approximation of reality: http://i27.tinypic.com/14b6tqo.jpg
With a coincidental 2° temperature shift, up of course, it would appear not.
bluegrue (06:37:55) – Why not look at some of the longer-term temps to get a better perspective, and look at other sources? Look at the Stockholm data (STOCKH-GML, 1755-2005) at
http://www.rimfrost.no/
where they have a lot of longer-term temp series from other parts of the world. Looking at the two longest, the East England and Stockholm data, you will see that these cycles have happened before.
Steve Keohane (06:49:53) :
bluegrue (06:37:55): On the one hand we’re told temperature is very important; on the other, the monitoring system peaked in 1985 and is presently decimated: http://i44.tinypic.com/23vjjug.jpg
That is a fascinating “animation”! Any explanation for the huge drop-out of stations since 1985, even in places like the USA? The Soviet-collapse drop-out I understand, but China, Africa, and the US? If it is just arbitrary deletion of stations from use in computation, then it is a severe “cherry pick”. If it is actual retirement of stations, then I’m left to wonder why, with AGW such a “world-ending problem”, nobody is willing to look at the thermometers…
Hansen would have us believe that he can just average this and that with less real data and arrive at a good approximation of reality: http://i27.tinypic.com/14b6tqo.jpg
With a coincidental 2° temperature shift, up of course, it would appear not.
Another wonderful graph. You can see the collapse of the Soviet Union in the huge spike of temps / drop of stations. Nothing like deleting most of Siberia to warm your averages…
And per Hansen’s magic hand-waving data spread: I’ve read the code, and I’ve written up my comments on much (though not all) of it. It makes no sense at all. The most obvious problem: if I have a change of equipment (or TOBS) in, say, 2005, then that is used to rewrite the whole history of the station back to the beginning of time. Now just what does a change in 2005 have to do with 1890? If we had a change that lowered the record by 1 °F in, oh, 1900, and then another change that raised it by 1 °F in 2000, then that change in 2000 would be used TO LOWER all the data from 1900 to 2000 (!), since only changes in the last few years are used / “corrected”. So you would get a “double dip” increase in the slope of the temperature curve, not a correction.
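To make the “double dip” concrete, here is a toy illustration of the effect just described – not GIStemp’s actual code; the station values, dates, and offsets are invented:

```python
import numpy as np

# Toy record (values invented): the true climate is a constant 10.0 degrees C.
# An equipment change in 1900 lowers readings by ~0.56 C (1 F); a second
# change in 2000 raises them by the same amount, restoring the true level.
years = np.arange(1890, 2010)
raw = np.full(years.size, 10.0)
raw[(years >= 1900) & (years < 2000)] -= 0.56

# Now suppose only the most recent change (the +0.56 C step at 2000) is
# detected and "corrected" by shifting ALL earlier data downward: the 1900
# step gets counted twice, and the apparent trend steepens.
adjusted = raw.copy()
adjusted[years < 2000] -= 0.56

print("raw trend:      %+.2f C/century" % (np.polyfit(years, raw, 1)[0] * 100))
print("adjusted trend: %+.2f C/century" % (np.polyfit(years, adjusted, 1)[0] * 100))
```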
Another? He happily fills in data for places that have none from up to 1500 km away (in the “anomaly” stage, STEP4_5). It’s “only” 1000 km in the actual temperature stages… and yes, these are applied sequentially, so a thermometer can get “filled in” from 1000 km away and then the anomaly can get spread another 1500 km. So if it’s simply additive, that’s 2500 km for those two steps (though he does this trick in more than two places…)
So take just a moment and ask yourself if Phoenix is a good proxy thermometer for San Diego… I’ll wait… Then ask if San Francisco is a good proxy for Reno. Or Lodi? Or Marysville? (Now that Marysville is gone, its missing data going forward will be “filled in” from somewhere “nearby”, up to 1000 km away…) The whole process stinks of error creation.
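For illustration, here is a minimal sketch of radius-limited infill – a stand-in for the kind of spreading described above, not GIStemp’s actual routine; the coordinates and temperatures are invented:

```python
import math

def gc_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km (spherical Earth, R = 6371 km)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    c = (math.sin(p1) * math.sin(p2)
         + math.cos(p1) * math.cos(p2) * math.cos(dlon))
    return 6371.0 * math.acos(max(-1.0, min(1.0, c)))

def fill_in(target, stations, radius_km=1000.0):
    """Fill a missing reading by linear distance weighting within radius_km.
    target: (lat, lon); stations: list of ((lat, lon), temp)."""
    num = den = 0.0
    for (lat, lon), temp in stations:
        d = gc_km(target[0], target[1], lat, lon)
        if d < radius_km:
            w = 1.0 - d / radius_km      # weight falls to zero at the rim
            num += w * temp
            den += w
    return num / den if den else None    # None: nothing within the radius

# Invented example: Phoenix (30 C) "filling in" San Diego, ~480 km away
stations = [((33.45, -112.07), 30.0)]
print(fill_in((32.72, -117.16), stations))   # prints 30.0
```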
Then we are supposed to get excited about changes in the 1/10th and 1/100th degree C positions? When they don’t even have any accuracy in them and are totally a product of False Precision? Even if the data WERE real, which it isn’t, everything to the right of the decimal point is a FICTION. That it is a fiction based on a computed fabrication of synthetic station records makes it a Farce of a Fiction of a Fabrication…
http://chiefio.wordpress.com/2009/03/05/mr-mcguire-would-not-approve/
So I’m left to ask: “Will the real temperature record please stand up?”
Take the raw data, or take individual long life stations and look at that data. Do not, under any circumstances, use GIStemp. (And since Hadley works closely with Hansen, matches his series fairly well, and will not publish their methods: I’m left to assume they use some of the Hansen Magic Sauce and ought not to be trusted either.)
And for the apologists that say “but it matches the satellites” I would point out that rewriting the data prior to 1980 is a great way to fudge the past and increase the slope while staying in sync with the newer less fudge prone series… The method itself sets off my data forensics red flags…
So we have fairly reliable recent data that says the world is not changing much, and that is to the cooler side. We have recent news flow from all over the planet saying that the cooler phase is in many cases colder than has been seen in decades (or in the recorded weather history). We have a few very long time series reliable thermometers (such as the Swedish record) that show a drop into 1850 and a rise back to the prior normal temperature in a simple long term cycle.
The conclusion I’m left with is that the world is absolutely normal. We are fairly ignorant of its history. It is most probable that we are headed back into a cold phase for several decades. AGW is bunk, and Hansen with GIStemp is either horridly poorly done, or sinister. Malice or stupidity are the only two choices… pick one. Just don’t expect to use the GIStemp product for anything other than scaring the children…
Mr. Smith, you should go to Washington.
bluegrue (01:45:18) :
You clearly don’t understand the concepts in the graphs in this article, and the graph at Wood for Trees shows that. It’s not the number of samples per year; it’s the length of data you use: the longer the trailing window, the lower the trend line, until at some point it will reach zero.
http://woodfortrees.org/plot/gistemp/from:1880/mean:12/plot/gistemp/from:1880/mean:120/from:1880/to:2008/trend/plot/gistemp/from:1880/mean:600/from:1930/to:2008/trend/plot/gistemp/from:1880/mean:360/from:1980/to:2008/trend
This shows (as do the graphs in the article to which this thread refers) that over time, the trend is toward zero. (Notice: that’s toward, not ‘to zero’.)
Now, if you take the 0.6 out of the data (that’s -0.1 + 0.7), there is no trend at all, and, as Anthony has observed many times, there is no way you can justify that adjustment.
Here’s a fine reference for this ‘uncontroversial’ adjustment; http://atmoz.org/blog/2008/03/24/i-guess-i-dont-understand-the-time-of-observation-correction/
Let’s just find one thing agreeable, you and me: the science is far from fully understood, and even farther from ‘settled’. Only a simpleton like Gore would make such an unscientific assumption.
Can you at least concede that?
And yes, I realize I misspoke and didn’t correct myself at the end of paragraph 1.
Hank Hancock (05:12:31)
You are simply eyeballing the data; that is not a reliable way to do analysis.
It is hard to correlate the CO2 trend to that little glitch that occurs from 1940 to 1979 and the one that occurs at 2002 to the present.

1940-1979: aerosols and solar variability; you’ll find it in the IPCC reports. The last 8 years are not long enough to determine a statistically significant trend.

The fifth image generated by J. Bob (07:59:45):
http://www.imagenerd.com/uploads/t_est_27-qvBaC.gif
provides an excellent “drill down” to a transition ca. 2002 from the past positive phase to the present negative phase in the cycle.

How about a bit of testing of whether this downturn, on which you base your argument, is an artifact or not? Filtering can introduce artifacts, especially at the beginning and the end of a series. I’ve done the FFT low-pass filtering for the Hadcrut3 data using both the full data set and the data set truncated at the end of 1996 – consider it time travel into the past.
http://i28.tinypic.com/s3efj5.png
The smoothed truncated data set indicates a “downturn” in 1992. None of that actually happened. J. Bob, if you read this, could you please repeat your smoothing with truncated data sets (ending anytime in the 90s, so we still have the recent upturn but avoid 1998) and post the results, too? I’m pretty sure you’ll see the same artifact.
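For anyone who wants to run the test themselves, here is a sketch of it, assuming monthly Hadcrut3 anomalies in a plain-text column; the file name and the 0.025 cycles/year cutoff are placeholders chosen to match the filtering discussed in this thread:

```python
import numpy as np

def fft_lowpass(x, fc, dt=1.0 / 12):
    """Crude low-pass: zero all FFT bins above fc (cycles/year), invert."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), d=dt)
    X[f > fc] = 0.0
    return np.fft.irfft(X, n=len(x))

# Monthly Hadcrut3 anomalies, oldest first; file name is a placeholder.
hadcrut = np.loadtxt("hadcrut3_monthly.txt")

full = fft_lowpass(hadcrut, 0.025)
truncated = fft_lowpass(hadcrut[: -12 * 13], 0.025)  # cut near end of 1996, say

# If the last years of `truncated` diverge from the same years of `full`,
# the apparent end-of-series "downturn" is a filtering artifact.
n = len(truncated)
print(np.max(np.abs(full[n - 60 : n] - truncated[-60:])))
```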
Nonsense, but if you prefer to read only part of my post I can’t help you.
E.M.Smith (17:09:30) :
You are fond of Uppsala; have a look at this account of how the temperature was reconstructed for the first half of the 18th century. The sharp temperature change at the beginning is most likely a smoothing artifact.
J. Bob (07:39:14) :
Please read my post to Hank Hancock, I’d like you to check something.
As for cycles, all you are showing are variations at a single location, not at the global level. Secondly, would you please quantify your periods, rather than saying “Hey, just look”?
Steve Keohane (06:49:53) :
Please read here:
http://scienceblogs.com/deltoid/2004/04/mckitrick.php
Necessary correction of known systematic errors, US48 only, Celsius / Fahrenheit. As I told you before. This gets boring.
I’d like to hear that from the source.
@Anthony: Do you think that TOBS and SHAP are necessary corrections to the US48 temperature data or not?
The previous post was also addressed to Bob Kutz (11:21:58).
The discussion had drifted. Here’s Motl’s second plot extended to the full Hadcrut3 data set.
http://i25.tinypic.com/wji05z.png
The pink curve is the same analysis for a simplistic model: the temperature anomaly is zero until 1970, then rises linearly to +0.65 °C today, plus noise. Not too bad for such a simplistic model.
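If you want to replay the pink curve, here is a sketch of the model – the 0.1 °C noise level is a guess; the rest follows the description above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simplistic model: anomaly 0 until 1970, then a linear rise to +0.65 C
# today, plus noise (the 0.1 C noise level is a guess).
years = np.arange(1850, 2009, 1.0 / 12)             # monthly time axis
anom = np.where(years < 1970,
                0.0,
                0.65 * (years - 1970) / (2009 - 1970))
anom = anom + rng.normal(0.0, 0.1, size=anom.size)

def trailing_trend(y, t, n):
    """OLS slope over the last n samples, in degrees C per century."""
    return np.polyfit(t[-n:], y[-n:], 1)[0] * 100.0

for n in (50, 110, 150, 360):
    print(n, round(trailing_trend(anom, years, n), 2))
```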
And per popular demand (woodfortrees (Paul Clark)), here is Motl’s first plot with 1998 as the end date; I consider this one to be nonsense:
http://i29.tinypic.com/wuglz7.png
Absolutely ridiculous warming rates.
P.S.: I’ll call this a day, CU.
Bluegrue
I reread your reference to Uppsala – incidentally, none other than Arrhenius lived there – somewhat ironic.
The grand master of interpolation is James Hansen – who was a brilliant scientist in his day but made a lot of assumptions based on a small number of poorly distributed weather stations, many with chunks of data missing.
http://pubs.giss.nasa.gov/docs/1987/1987_Hansen_Lebedeff.pdf
This paper managed to make it to the infamous Congress hearing in 1988 and was a foundation stone for the first IPCC assessment. Count the number of ‘uncertainties’.
CET and Zurich Fluntern are both very old records, the latter two closely mirroring each other.
Tonyb
J. Bob (07:59:45):
Excellent work, I might add. I was intrigued to see how the FFT compared to the Chev 4. The Chev 4 appears more reactive to the higher-frequency components. Curiously, it pulls a slight negative phase shift around 1800, perhaps in response to the large excursion from 1770 through 1790. I’m curious if you used a four point transform in the FFT?
http://www.imagenerd.com/uploads/t_est_25-avCpP.gif
The moving-mean plot was a real eye-opener! At 1825, and more notably in the mid-1840s, it’s at a totally opposite phase from reality. It shows a marked warming trend leading into the 1840s – a time when unseasonably cold and rainy weather was bringing on the Irish potato blight and the Great Hunger. What is most interesting is that it’s running a 40-year spread. That’s kind of close to the 30-year spread bluegrue alludes to as being just right to see the global warming. Now I get it!
bluegrue (14:00:14)
As long as you’re going to check with Anthony, let’s back up a step and ask him to direct his attention not just to SHAP and TOBS, but rather to the validity of the sum of these adjustments, as related by the graph which started this discussion;
http://www.ncdc.noaa.gov/img/climate/research/ushcn/ts.ushcn_anom25_diffs_urb-raw_pg.gif
which YOU claimed were attributable mostly to SHAP and TOBS, and were incontrovertible, entirely justified, and necessary.
I indicated that the notion that anyone would adjust the most recent readings of objective instrumentation up by 0.6 °C and then claim there’s been 0.6 °C of warming in the last 100 years and the science is settled is rather a strange way to conduct science. That’s like starting with a hypothesis and then adding fudge factors to the data until you get what you want.
The alternative, of course, would be to take the objective measurements and, after realizing there’s been a century of no change in the data, start to look at those measurements and see if there’s some change that’s been camouflaged by siting issues or time-of-observation issues. We’ve clearly gotten the cart before the horse on this.
Sincerely,
Bob Kutz
Bluegrue
Sorry for the spelling mistakes – it ‘escaped’ before its time!
tonyb
Bluegrue – OK, how about ending the sequence in 1997? Actually, looking at the phase-corrected Chev. filter will give you a pretty good idea of what happens if you truncate the run. It might take a day or two to look at the truncated data. I am also looking at the Stockholm-Uppsala data from 1772-2005 to see what shakes out.
Couple of points:
1 – I realize that this is only one data set, and I doubt “global temps” existed back then, but it’s the best we have. However, one could easily compare it to the “global temps” computed from the mid-1800s.
2 – Hank, your comment on the famine in Ireland correlates with research I have been doing on our various families. The question that emerged was: how many population migrations were caused by weather/climate? There seems to be a pattern in the various family backgrounds. These are just pieces of info, without any rigorous research behind them, but a pattern might emerge.
Unfortunately we have very few chronicles to go on, but histories of Pomerania, Germany, etc. give glimpses.
~500 B.C. – Goths and other Scandinavian tribes head south into Pomerania (now northern Poland); these included the Goths, Vandals, etc. The cause of the migration seems to be famine (poor growing weather).
~100 A.D. – Romans occupy England, making wine from local grapes (warmer weather?).
~400 A.D. – In what the Goths called “the great migration”, they move out of Pomerania, south, into the Roman empire. About that time the Rhine freezes over, allowing the tribes to cross into Roman lands, and the Huns burst out of Eurasia.
~900-1000 A.D. – Vikings settle in Iceland and Greenland; settlement in North America.
~1350 A.D. – Last Viking settlement in Greenland; the beginning of a long cold period.
Anyway, there seems to be some correlation between weather and the mass migrations of the past, and they seem to follow an 800-1000-year period. That whole area would be worth a few papers.
“”” bluegrue (01:45:49) :
E.M. Smith
Words fail me. The thermometer readings are used to determine a temperature for each point of the Earth’s surface, then you integrate over the surface. If you have two stretches of land of equal size, one with 10000 thermometers and one with 10 thermometers, the former having an average temperature of 20°C, the other of 22°C, then the average temperature over both is 21°C and not 20.002°C, as you imply. “””
Well, you are forgetting that there are approximately zero thermometers over 70 percent of the earth’s surface. It is my understanding that around 1850 there were precisely 12 thermometers in the Arctic (north of +60 lat), and that number increased over the years to around 86 or so, and then in recent years decreased to about 72 or so; my guess being that this coincided with the collapse of the Soviet Union.
It is laughable to suggest that this hodgepodge of sampling methodology in any way conforms to the basic laws of sampled-data theory. Having 10,000 thermometers in one area and ten in another doesn’t improve the situation over having ten in each.
Nyquist does not require that the sampling intervals be equal; but the maximum sample spacing (anywhere) sets the band limit of the recoverable signal, not the minimum spacing. Ergo, GISStemp and HADcrut are both just sets of numbers bearing no relationship to the temperature of planet earth.
All of the wonderful manipulations of statistical mathematics can be applied to any arbitrary set of numbers; those numbers don’t actually have to have any linkage to each other; they are simply a set. Yet they have an average or mean, a median, a standard deviation – any highfalutin extraction you want to make; you just can’t extract any information from that set; there isn’t any to be had.
It can be argued that the maximum information density is carried by white Gaussian noise, because no matter how long a sample sequence you study, at no point can you predict the value of the next sample; you can’t even predict whether it will be higher or lower than the latest value. So in that sense the signal is 100% information, having zero redundancy.
Conversely, GISStemp or HADcrut have close to zero information content; they violate Nyquist both temporally and, in spades, in spatial sampling.
You only need a violation by a factor of two to make the average unrecoverable (without aliasing noise).
And the Central Limit theorem cannot buy you a reprieve from a Nyquist violation.
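Here is a one-dimensional illustration of that folding – temporal rather than spatial, purely to show the mechanism:

```python
import numpy as np

# A 0.9 cycle-per-unit sine sampled once per unit. The Nyquist limit at
# this spacing is 0.5 cycles per unit, so the sampling violates it.
n = np.arange(40)
x = np.sin(2 * np.pi * 0.9 * n)

# The samples are indistinguishable from a 0.1 cycle-per-unit sine
# (folded about the Nyquist frequency, with a sign flip):
alias = -np.sin(2 * np.pi * 0.1 * n)
print(np.allclose(x, alias))   # True: the slow "signal" is pure alias
```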
Hank – I got sidetracked on the above post. The Chev. filter was two cascaded 2-pole recursive filters, with a cut-off frequency of 0.025 cycles/year and 5% ripple. I used cascaded filters to avoid stability problems. The FFT and Chev were completely different computational methods; I feel this gives a better cross-check. I used the coefficients from “The Scientist and Engineer’s Guide to Digital Signal Processing”
at
http://www.dspguide.com/ch20/2.htm
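A modern equivalent in SciPy, rather than the handbook’s tabulated coefficients: the 0.5 dB passband ripple below is an approximate stand-in for the “5% ripple”, and the forward-backward pass mirrors the phase-corrected filtering mentioned above. The input file name is a placeholder.

```python
import numpy as np
from scipy import signal

fs = 1.0      # one sample per year
fc = 0.025    # cut-off frequency, cycles/year

# Two cascaded 2-pole Chebyshev type-I sections; cascading low-order
# sections is the standard way to sidestep the stability problems
# mentioned above. 0.5 dB passband ripple approximates "5% ripple".
sos = np.vstack([signal.cheby1(2, 0.5, fc, fs=fs, output="sos"),
                 signal.cheby1(2, 0.5, fc, fs=fs, output="sos")])

series = np.loadtxt("annual_temps.txt")      # placeholder file name
smoothed = signal.sosfiltfilt(sos, series)   # forward-backward: zero phase
```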
J Bob
As you say, history tells us a great deal – there are many papers on all the events you mention, many posted by myself and others in this forum over the months.
The trouble is that these events are called ‘anecdotal’. A complete Roman army destroyed when their enemies crossed the frozen Rhine is anecdotal! Yet cherry-picking a highly dubious bristlecone-pine proxy is the epitome of science! The Romans were great ones for recording weather, so we can reconstruct the climate of much of the known world from around 600 AD to the fall of the Byzantine empire in 1453 AD.
The Vikings of course are dismissed as being a very localised anomaly.
George E Smith
Did you read my link to the Hansen paper? This describes the number of thermometers he used from 1880 to construct his global temperature. The numbers worldwide in 1850 were about 20 (that could be considered reliable).
There is no doubt the climate irrationalists are scared of history. Their story hinges on limited climate variation within natural variability while CO2 levels were constant, with only recent years going beyond natural limits due to increased CO2 concentrations.
Tonyb
Hank Hancock (14:28:57) :
“I’m curious if you used a four point transform in the FFT?”
Hank, I’m not sure what you meant, but what I did use were the first 13 frequencies in the frequency domain. The frequency increments are given by 1/(number of samples [512] times the time increment [1 year]). So frequencies from 0.002 to 0.025 cycles per year were used to reconstruct the filtered input in time. Does that help?
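In code, that recipe amounts to zeroing everything above the 13th bin of a 512-point FFT; the input file name is a placeholder:

```python
import numpy as np

# 512 annual samples -> bin spacing 1/512 ~ 0.002 cycles/year, so bins
# 1..13 span ~0.002 to ~0.025 cycles/year. File name is a placeholder.
x = np.loadtxt("stockholm_annual.txt")[:512]
X = np.fft.rfft(x)
X[14:] = 0.0                        # keep DC plus the first 13 bins
smoothed = np.fft.irfft(X, n=len(x))
```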
Tonyb – The historical weather/climate observation was only that, and would be worth many threads, which I would love to get into. However, “given the time allotted to us” [Gandalf, The Lord of the Rings], I guess I’d better stick to what I started out doing, namely applying signal-conditioning methods to get better insight into long-term weather/climate patterns.
J Bob
‘Lord of the Rings’, ‘An Inconvenient Truth’, ‘IPCC Assessments 1-4’ – six of my favourite science fantasy novels.
Incidentally, I meant the Romans from 600 BC (not AD), so that gives us 2000 years of ‘anecdotal’ records.
Tonyb
Just curious if anyone has heard from the David (“I am a peer-reviewed scientist”) who said he was flabbergasted by the denialism here, then couldn’t respond to several queries about the source of his data. Any word on how he is making out with his mission to access the techniques, formulas, and code from Hadley? Perhaps he found out that his colleagues were actually lying to him.
REPLY: I sent him a personal email, he’s in a state of denial and refuses to look. Turns out he’s the curator of a prominent museum of science, and you’d think he’d have a better handle on dealing with the public aka “us” – Anthony