Do We Care if 2010 is the Warmist Year in History?

Guest Post by Ira Glickstein

According to the latest from NASA GISS (Goddard Institute for Space Studies), 2010 is shaping up to be “the warmest of 131 years”, based on global data from January through November. They compare it to 2005 “2nd warmest of 131 years” and 1998 “5th warmest of 131 years”.

We won’t know until the December data is in. Even then, given the level of noise in the base data and the wiggle room in the analysis, each of which is about the same magnitude as the Global Warming they are trying to quantify, we may not know for several years. If ever. GISS seems to analyze the data for decades, if necessary, to get the right answer.

A case in point is the still ongoing race between 1934 and 1998 to be the hottest for US annual mean temperature, the subject of one of the emails released in January of this year by NASA GISS in response to a FOIA (Freedom of Information Act) request. The 2007 message from Dr. Makiko Sato to Dr. James Hansen traces the fascinating story of that hot competition. See the January WUWT and my contemporary graphic that was picked up by several websites at that time.

The great 1934 vs 1998 race for US warmest annual mean temperature. Ira Glickstein, Dec 2010.

[My new graphic, shown here, reproduces Sato’s email text, including all seven data sets, some or all of which were posted to her website. Click image for a larger version.]

The Great Hot 1934 vs 1998 Race

1) Sato’s first report, dated July 1999, shows 1934 with an impressive lead of over half a degree (0.541ºC to be exact) above 1998.

Keep in mind that this is US-only data, gathered and analyzed by Americans. Therefore, there is no possibility of fudging by the CRU (Climategate Research Unit) at East Anglia, England, or bogus data from Russia, China, or some third-world country. (If there is any error, it was due to home-grown error-ists :^)

Also note that total Global Warming, over the past 131 years, has been, according to the IPCC, GISS and CRU, in the range of 0.7ºC to 0.8ºC. So, if 1934 was more than 0.5ºC warmer than 1998, that is quite a significant percentage of the total.

At the time of this analysis, July 1999, the 1998 data had been in hand for more than half a year. Nearly all of it was from the same reporting stations as previous years, so any adjustments for relocated stations or those impacted by nearby development would be minor. The 1934 data had been in hand for, well, 65 years (eligible to collect Social Security :^) so it had, presumably, been fully analyzed.

Based on this July 1999 analysis, if I were a betting man, I would have put my money on 1934 as a sure thing. However, that was not to be, as Sato’s email recounts.

Why? Well, given steadily rising CO2 levels, and the high warming sensitivity of virtually all climate models to CO2, it would have been, let us say inconvenient, for 1998 to have been bested by a hot golden oldie from over 60 years previous! Kind of like your great grandpa beating you in a foot race.

2) The year 2000 was a bad one for 1934. November 2000 analysis seems to have put it on a downhill ski slope that cooled it by nearly a fifth of a degree (-0.186ºC to be precise). On the other hand, it was a very good year for 1998, which, seemingly put on a ski lift, managed to warm up by nearly a quarter of a degree (+0.233ºC). That confirms the Theory of Conservation of Mass and Energy. In other words, if someone in your neighborhood goes on a diet and loses weight, someone else is bound to gain it.

OK, now the hot race is getting interesting, with 1998 only about an eighth of a degree (0.122ºC) behind 1934. I’m still rooting for 1934. How about you?

3) Further analysis in January 2001 confirmed the downward trend for 1934 (it lost an additional 26th of a degree) and the upward movement of 1998 (it gained an additional 21st of a degree), tightening the hot race to a 28th of a degree (0.036ºC).

Good news! 1934 is still in the lead, but not by much!

4) Sato’s analysis and reporting on the great 1934 vs 1998 race seems to have taken a hiatus between 2001 and 2006. When the cat’s away, the mice will play, and 1998 did exactly that. The January 2006 analysis has 1998 unexpectedly tumbling, losing over a quarter of a degree (-0.269ºC), and restoring 1934’s lead to nearly a third of a degree (0.305ºC). Sato notes in her email “This is questionable, I may have kept some data which I was checking.” Absolutely, let us question the data! Question, question, question … until we get the right answer.

5) Time for another ski lift! January 2007 analysis boosts 1998 by nearly a third of a degree (+0.312ºC) and drops 1934 a tiny bit (-0.008ºC), putting 1998 in the lead by a bit (0.015ºC). Sato comments “This is only time we had 1998 warmer than 1934, but one [on?] web for 7 months.”

6) and 7) March and August 2007 analysis shows tiny adjustments. However, in what seems to be a photo finish, 1934 sneaks ahead of 1998, being warmer by a tiny amount (0.023ºC). So, hooray! 1934 wins and 1998 is second.

OOPS, the hot race continued after the FOIA email! I checked the tabular data at GISS Contiguous 48 U.S. Surface Air Temperature Anomaly (C) today and, guess what? Since the Sato FOIA email discussed above, GISS has continued their taxpayer-funded work on both 1998 and 1934. The Annual Mean for 1998 has increased to 1.32ºC, a gain of a bit over an 11th of a degree (+0.094ºC), while poor old 1934 has been beaten down to 1.2ºC, a loss of about a 20th of a degree (-0.049ºC). So, sad to say, 1934 has lost the hot race by about an eighth of a degree (0.12ºC). Tough loss for the old-timer.
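For the arithmetic-minded, the deltas quoted in the race narrative above can be chained into a running ledger. This little Python sketch simply replays those published numbers; the January 2001 fractions ("a 26th", "a 21st" of a degree) and the net 2007 "tiny adjustments" are only approximations, so each check allows a small slack:

```python
# Running 1934-minus-1998 gap, in deg C (positive = 1934 in the lead),
# replaying the adjustment deltas quoted in the post.
adjustments = [
    # (analysis date, delta applied to 1934, delta applied to 1998, published gap)
    ("Nov 2000", -0.186, +0.233,  0.122),
    ("Jan 2001", -1/26,  +1/21,   0.036),   # "a 26th" and "a 21st" of a degree
    ("Jan 2006",  0.000, -0.269,  0.305),
    ("Jan 2007", -0.008, +0.312, -0.015),   # 1998 briefly ahead
    ("Aug 2007", +0.038,  0.000,  0.023),   # net of the "tiny adjustments"
    ("Dec 2010", -0.049, +0.094, -0.120),   # current GISS table
]

gap = 0.541                      # 1934's lead as of the July 1999 analysis
for date, d34, d98, published in adjustments:
    gap += d34 - d98
    # Each recomputed gap should land on the figure quoted in the post.
    assert abs(gap - published) < 0.005, (date, gap, published)
    print(f"{date}: 1934 leads 1998 by {gap:+.3f} C")
```

Every intermediate figure in the post checks out against the previous one, which at least tells us the email's arithmetic is internally consistent, whatever one thinks of the adjustments themselves.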

Analysis of the Analysis

What does this all mean? Is this evidence of wrongdoing? Incompetence? Not necessarily. During my long career as a system engineer I dealt with several brilliant analysts, all absolutely honest and far more competent than me in statistical processes. Yet, they sometimes produced troubling estimates, often due to poor assumptions.

In one case, prior to the availability of GPS, I needed a performance estimate for a Doppler-Inertial navigation system. They computed a number about 20% to 30% worse than I expected. In those days, I was a bit of a hot head, so I stormed over and shouted at them. A day later I had a revised estimate, 20% to 30% better than I had expected. My conclusion? It was my fault entirely. I had shouted too loudly! So, I went back and sweetly asked them to try again. This time they came in near my expectations and that was the value we promised to our customer.

Why had they been off? Well, as you may know, an inertial system is very stable, but it drifts back and forth on an 84-minute cycle (the period of a pendulum the length of the radius of the Earth). A Doppler radar does not drift, but it is noisy and may give erroneous results over smooth surfaces such as water and grass. The analysts had designed a Kalman filter that modeled the error characteristics to achieve a net result that was considerably better than either the inertial or the Doppler alone. To estimate performance they needed to assume the operating conditions, including how well the inertial system had been initialized prior to take off, and the terrain conditions for the Doppler. Change assumptions, change the results.
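To make "change assumptions, change the results" concrete, here is a toy one-dimensional Kalman filter in the same spirit: an "inertial" velocity with a slow 84-minute sinusoidal drift is fused with a noisy "Doppler" velocity. Every number here (noise levels, drift amplitude, process noise) is invented for illustration; this is a sketch, not the actual navigation system:

```python
import math
import random

random.seed(1)

SCHULER_S = 84.4 * 60        # ~84-minute Schuler period, in seconds
DT = 1.0                     # one filter update per second
N = int(2 * SCHULER_S)       # simulate two full Schuler cycles

def simulate(r_assumed):
    """Scalar Kalman filter: the inertial velocity increment drives the
    prediction step, the noisy Doppler velocity drives the update step.
    Returns RMS errors of (fused, inertial-only, doppler-only)."""
    truth = 200.0            # true ground speed, m/s (constant here)
    doppler_sigma = 2.0      # actual Doppler noise, m/s
    drift_amp = 3.0          # amplitude of the inertial Schuler drift, m/s
    q = 1e-4                 # assumed process noise per step

    x, p = truth, 10.0       # filter state and variance
    se_fused = se_inertial = se_doppler = 0.0
    prev_inertial = truth
    for k in range(N):
        t = k * DT
        inertial = truth + drift_amp * math.sin(2 * math.pi * t / SCHULER_S)
        doppler = truth + random.gauss(0.0, doppler_sigma)
        # Predict: propagate the state with the inertial velocity increment.
        x += inertial - prev_inertial
        p += q
        prev_inertial = inertial
        # Update: blend in the Doppler measurement.
        kgain = p / (p + r_assumed)
        x += kgain * (doppler - x)
        p *= (1 - kgain)
        se_fused += (x - truth) ** 2
        se_inertial += (inertial - truth) ** 2
        se_doppler += (doppler - truth) ** 2
    return tuple(math.sqrt(s / N) for s in (se_fused, se_inertial, se_doppler))

fused, inertial, doppler = simulate(r_assumed=2.0 ** 2)
print(f"RMS error: fused {fused:.2f}, inertial-only {inertial:.2f}, "
      f"doppler-only {doppler:.2f}")

# Same simulated world, different assumed Doppler variance: different answer.
optimistic, _, _ = simulate(r_assumed=0.5 ** 2)
print(f"RMS error with a different assumed Doppler variance: {optimistic:.2f}")
```

The fused estimate beats either sensor alone, as the analysts' filter did; and altering only the assumed measurement variance `r_assumed`, with the simulated world held fixed, moves the performance number. That is the whole story of the shouting match above.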

Conclusions

Is 2010 going to be declared warmest global annual by GISS after the December data comes in? I would not bet against that. As we have seen, they keep questioning and analyzing the data until they get the right answers. But, whatever they declare, should we believe it? What do you think?

Figuring out the warmest US annual is a lot simpler. Although I (and probably you) think 1934 was warmer than 1998, it seems someone at GISS, who knows how to shout loudly, does not think so. These things happen and, as I revealed above, I myself have been guilty of shouting at analysts. But, I corrected my error, and I was not asking all the governments of the world to wreck their economies on the basis of the results.



204 Comments
JPeden
December 25, 2010 11:03 pm

Hugh Pepper says:
December 25, 2010 at 7:08 pm
Of course we should care! Peer reviewed research from several data sets confirm that the planet is getting warmer. This is not a controversial statement anymore. It is a change which clearly needs to be mediated, to avoid consequences which will be catastrophic for everyone. If you have real data to disprove this, please present it!
Hugh, if you do really care, perhaps you should stop and reflect upon the question of why it is that all you can do is repeat ipcc “Climate Science” C[A]GW Propaganda memes!
What is really at stake concerning your level of “caring”, compared against the wellbeing of Humanity or of any Country or person, is the fact that the ipcc’s alleged Climate Science-based “cure” to its alleged CAGW “disease” is obviously worse on people than is its alleged CAGW disease.
That’s why, where the rubber meets the road concerning doing what’s best, India and China are proceeding with massive programs to construct coal-fired electricity plants. And not only do they not think that fossil fuel CO2 is the virulent cause of a terrible disease, they think that a massive increase in fossil fuel use, thus CO2 production, is absolutely necessary to cure their already existing catastrophic disease, underdevelopment! Or don’t they care?
That’s also why the ipcc excluded countries containing ~5 billion of the Earth’s 6.5 billion people from having to do anything to achieve its Kyoto Protocol CO2 emissions goals – the ipcc knows its “Climate Science” is not real science, and that its allegations about fossil fuel CO2 causing a net GW disease is only another useful Propaganda ploy, involving demonizing fossil fuel CO2 and disasterizing Global Warming, directed toward the usual corrupt ends.
Hugh, it’s the ipcc via its cynical and manipulative “Climate Science” CAGW Propaganda Operation which shows who doesn’t care. It doesn’t care.

Doug in Seattle
December 25, 2010 11:16 pm

E.M.Smith says:
December 25, 2010 at 10:13 pm
. . . Mr. Hansen is on record in a court of law testifying that breaking the law “for the greater good” is a worthy thing to do. This set case law precedent in the UK. He is first and foremost a political activist who believes, by his own words, that breaking the law is a fine thing. That means he has no moral compass about defending his code or data from political bias “for the greater good” either.
The only valid approach is to assume he is doing exactly what he has testified is the right and moral thing to do: whatever it takes to achieve your goals, if you think it is for the greater good.
That means “assume the ‘fellow’ is flat out lying until proven otherwise”.

That is the best way of viewing any and all words and numbers coming from Mr. Hansen, or from anyone else who associates with him, or quotes him in defense of his or their views regarding the earth’s climate (or for that matter anything at all).
Thanks Mr. Smith. You’ve made my day complete.

Dave F
December 25, 2010 11:22 pm

Ok Evan, I gotta know, they had global data in 1934?
They were still doing clinical trials of penicillin in 1934, so the quality of the data compared to the quality of the data today is a pretty big issue also. Now since we are talking about a situation where the US of A was still tinkering with penicillin, what kind of quality data do you think you will get from Chile or Russia or any place in Mongolia?
I know you are just the messenger. I just think the global data of that time has no credibility. The technology level between now and then is staggering.

December 26, 2010 12:00 am

An analysis should include a list of stations fairly well distributed over the area for which an average temperature is to be calculated, and all stations must have an unbroken record requiring an absolute minimum of homogenisation, if any at all, and they must be little affected by urbanisation. This will disqualify nearly all stations, but if the remaining ones show a pattern, it will be a genuine one, and then I think it’s fair to make statements on “warmest” or “coldest” based on what we know. And that will not be the entire world or even entire countries. But it’s better to state what we do know and don’t than simply to fill in the holes and thereby contaminate what we actually do know.
Currently, too many long series exist that were constructed by splicing and adjustments, which bring too many assumptions into the final figures when these series are combined to create a national or global temperature.
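The selection criteria proposed above amount to a simple filter. As a hedged sketch (all field names and thresholds here are invented, not any real metadata schema), it might look like:

```python
# Toy station filter along the lines proposed above: keep only stations with
# an unbroken record and no urbanisation flag. Fields are hypothetical.
stations = [
    {"name": "Rural A",   "years_missing": 0, "urban": False},
    {"name": "Rural B",   "years_missing": 3, "urban": False},  # broken record
    {"name": "Airport C", "years_missing": 0, "urban": True},   # urbanised
]

usable = [s["name"] for s in stations
          if s["years_missing"] == 0 and not s["urban"]]
print(usable)  # only "Rural A" survives
```

As the commenter says, such a filter disqualifies nearly everything, which is exactly the trade-off: a small genuine pattern versus a large contaminated one.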

bubbagyro
December 26, 2010 12:29 am

boballab says:
December 25, 2010 at 10:49 pm
It does not matter how many bad station thermometers there were in 1887. Statistically, the more measurements the better! With more measurements, the outliers statistically rectify themselves, a too-high here matched with a too-low there. That is the way statistics deals with crude measurements: with lots of readings that sort each other out.
The problem lately is that they were dropping measurements that they did not like for one reason or another. If I weigh myself every five minutes, I will get slightly differing results. I can either try to get a perfect weight, stepping on the balance carefully, then live with that number, n=1, and say, “isn’t my way grand?”
Well, it’s not grand. A hundred measurements give me the right number with less care, unless, like GISS and their ilk, I add heavy clothes to my body and report the hundred weighings according to my “adjusted” weight, now with clothes on [obviously I am cheating in this experiment, because I want a fatter me and that is driving my method].
So GISS and NOAA begin chopping out stations, because they are too rural and we don’t get the temperature at the right time. We are then left with airports and UHIs that have winter boots and coats on (figuratively).
Now we have readings from “solid” stations covering expansive areas, having thrown out the problematic ones. Now we have 75% of historic stations gone, and as I have pointed out, fewer readings, especially with systematic biases, give bogus results.
Can someone go back to the old sites, and take a good digital reading several times a day or a month, thus adding the 75% back? Someone without a Supervisor telling him how the numbers should be? Discarding any that have been terminally “adjusted” out of existence because they were too cool and brought the means down?
Statistics is science. More numbers, taken blindly, are the way of statistics.
Here is an example: I take ten readings at 3:00 PM with a calibrated thermometer. They come out as 11, 12, 14, 11, 12, 16, 09, 08, 12, 13. The mean is 11.8, the median is 12. Pretty good data, even though my thermometer is a beast with a range of 8. With a hundred measurements, the estimate tightens around the true value as the variance of the mean decreases.
Now, in our case, we have thermometers sitting in the sun on asphalt much of the time, or in an airplane’s 500° exhaust. And we wonder what is happening to our climate! Armageddon!
You do not need a Cray to figure this out. Spend the money on more stations scattered generously about, and the climate will thank you. Gaia will breathe easier—we’ve been cavalier with the old Sheila, and the good girl is not a bit stuck up! That is statistical science. A poor scientist faults his tools. A good scientist has a bunch of tools.

Baa Humbug
December 26, 2010 12:29 am

To Hugh Pepper and Onion et al (hmmm there’s a funny line there somewhere but I can’t think of one)
See if we can follow this logic and arrive at a reasonable conclusion.
GISS pronounces a temperature anomaly (call it P1) We believe it, they are the experts working on this stuff day after day and this is NASA so why not. I believe ’em.
Sometime down the track, they make some adjustments and make another pronouncement, P2. Why would we disbelieve their new pronouncement, these are the experts. HOWEVER WE NOW KNOW P1 WAS WRONG.
Sometime later they make pronouncement P3. Why would we disbelieve………..you get the picture, no need to belabour the point.
Each time GISS makes a new pronouncement (7 according to Ira) they themselves are saying the previous pronouncement WAS WRONG. Yes sure the adjustments may have been warranted and correct, but this does NOT change the fact that this makes the previous pronouncement wrong.
So now I ask you, what are the chances for a pronouncement 8? Most would think there is a good chance. And if there is going to be an 8th pronouncement, then the current one, number 7, is WRONG.
The only reasonable conclusion to draw, even without accusations of fudging or fraud, is that we DON’T KNOW what the T anomalies are. If we don’t know that, then we don’t know what the A in AGW is. If we don’t know A, there is no point getting our knickers in a knot over it is there?
All that can be said is “yeah it’s warmed somewhat since the beginning of the instrumental era, but we don’t know by how much.”

aeroguy48
December 26, 2010 12:35 am

So, if they can pre-predict and pre-conclude that 2010 will be the 3rd hottest on record, I need to know when the Arctic will be ice-free so I can pre-prepare my sailboat to navigate the Arctic Ocean. I want to see Sarah Palin’s house from Russia.

johanna
December 26, 2010 12:39 am

Ira Glickstein says:
December 25, 2010 at 9:24 pm
johanna says:
December 25, 2010 at 8:22 pm
Ira Glickstein said:
While I cannot speak officially for WUWT, it seems to me nearly everyone here accepts that the planet has warmed over the past century or so, and that some of it is due to human activities.
——————————————————————————
I don’t think that generalisation is true, especially the second part.
Not that it matters. Science is not a matter of consensus.
Do I correctly read your opinion, Johanna, that our planet has not warmed since the Little Ice Age and that, even if it has, not a bit of that warming is human-caused? You are certainly entitled to your opinion, but it is my opinion that you are wrong. As I wrote, we need to address: 1) Q. How much? A. Not as much as claimed by warmists, and 2) Q. How much of that is human-caused? A. Not much, perhaps 0.1ºC.
——————————————————————————–
Ira, given all the strong reservations about data integrity, and the methodology which is applied to it, that have been discussed on this site, I am not convinced that ‘the planet’ has warmed ‘over the past century or so’ – nor am I convinced that it has not. It is interesting that the focus of your own post relates to possibly higher temperatures in the US 75 years ago – so there goes most of your century, in the US at least. While we are pretty sure that some parts of the world were temporarily colder a century ago, that doesn’t tell us much about global climate either then or now.
I am even less convinced that we can reliably quantify the human contribution to whatever changes may have taken place over that period, except at a micro, micro level such as how land use in small areas affects local weather.
With the current state of measurement, and that of the past century, let alone what is done with those numbers, a figure like 0.1 degree C (as you cite) is meaningless IMO.
Climatology, as an integrated science comprising many sub-disciplines, is in its infancy. But, your post here is a useful contribution to getting the data right, which is a necessary first step.

December 26, 2010 12:42 am

TYPO
Warmist Year in History?
Warmest Year in History?

Latimer Alder
December 26, 2010 12:53 am

E M Smith
We need you and your skills over at Climate Etc. Hansen’s most recent paper has just received notice over there. Please spend a few minutes there. A repost of your contribution here would be a great starting point. One or two of Hansen’s acolytes (Lacis, Colose) hang out there.
Don’t initially mention WUWT as some of the Alarmists go into a faint/seizure/catatonic trance at the mere mention of its name, and a Catastrophist foaming at the mouth is not a good image for the festive season…….

Chaveratti
December 26, 2010 1:03 am

I can see the MSM headlines now.
“NASA’s top climate scientist says… “

Richard
December 26, 2010 1:10 am

“they [GISS] keep questioning and analyzing the data until they get the right answers. But, whatever they declare, should we believe it?”
No
“Snow disruption” Winners and losers (According to the BBC)
Winners – Energy firms, Supermarkets
Losers – Supermarkets, Online retailers (surprisingly), High Street retailers, Most employers, Delivery and haulage companies, Transport companies, Travellers and commuters
I can think of a few other losers, old age pensioners, bums on the street, insurance companies…
Aren’t we lucky we didn’t have the mild winter promised?

kwik
December 26, 2010 1:21 am

Ronald S says:
December 25, 2010 at 4:49 pm
“The thing that really troubles me is what the Hansen is doing to the reputation of one of America’s all time heroes – Robert Goddard – after whom the institute which Hansen directs is named.”
Wasn’t it the New York Times that did a splendid job of dragging Robert Goddard’s name down into the dirt for many, many years?
Wasn’t it the New York Times that ran a big “We are sorry” article after the Moon landing?
Now the NYT has been dragging Goddard’s name into the dirt for years yet again by supporting Hansen’s regime.
Maybe they must say “sorry” once again?

sHx
December 26, 2010 1:34 am

Ira Glickstein says:
December 25, 2010 at 7:47 pm
sHx says:
December 25, 2010 at 7:02 pm (Edit)
The warmest year was 1998. That is so according to the only reliable instrumental data worth our attention, the satellite measurements.
Yes, I forgot about those 1934 satellites. Sorry :^)

JohnH says:
December 25, 2010 at 10:01 pm
sHx says:
December 25, 2010 at 7:02 pm
The warmest year was 1998. That is so according to the only reliable instrumental data worth our attention, the satellite measurements.
Warmest in last 30 years only, in 1934 there were no satellites.
PS Where’s the american Harry when you need him.

E.M.Smith says:
December 25, 2010 at 10:46 pm
sHx says: The warmest year was 1998. That is so according to the only reliable instrumental data worth our attention, the satellite measurements.
I see others have pointed out the 1934 satellite issue. Yes, it’s that pesky “what baseline?” problem.

Geez, Louise!
I am quite flattered by all that attention but I’m not really that scientifically illiterate to think that there were satellites in 1934.
What I meant was that the satellite record is the only reliable record, and it only extends back about 30 years. The first dedicated instrument to check temp anomalies was put into Earth orbit in 2002, right? Most conservatively speaking, our best records only stretch back to 2002. I understand that it was somewhat possible to detect the heat signal from earlier, undedicated satellite records that stretch back to 1979. In any case, according to the satellite measurements, 1998 was the warmest year ever.
Land based, ‘value added’ instrumental records are not in the same class in terms of data quality as sat records. Therefore 1934 and 1998 are not suitable candidates for comparison. To the best of our knowledge 1998 was the warmist year.

Grumpy old Man
December 26, 2010 1:40 am

Hey Guys, it’s Christmas. Come out of the trenches and have a soccer kick-around with the enemy!

Peter Whale
December 26, 2010 1:49 am

Onion make me cry, with laughter.

December 26, 2010 2:35 am

bubbagyro says:
December 26, 2010 at 12:29 am
It does not matter how many bad station thermometers there were in 1887. Statistically, more measurements the better! With more measurements, the outliers statistically rectify themselves, with a too high here matched with a too low there. That is the way statistics deals with crude measurements, with lots of readings! that sort each other out.

Nope, incorrect; you made a basic mistake there. There is no large number of measurements at any given point in time in any given place, there is only one. The Law of Large Numbers does not apply. Think of it this way: how many Max readings were taken at State College, PA on Jan 21, 1890? Answer: one. Now, that number was recorded, but due to instrument error it is not correct, and unless you know the accuracy of that particular instrument you can never statistically determine the correct mean temp for that day, or for any day before or after, as long as you don’t know the accuracy of that thermometer. From there it rolls downhill. You have no known correct mean temps for that month, so your monthly mean is off, and from there your yearly. The same process is happening at every station for which you don’t have the proper metadata, so they all have the same problem, which leads to the next point.
Now answer this question: Was every Max/Min thermometer used in the US the same model and brand? If you cannot answer yes, then you cannot state that all have the same error range. Again, this throws off the Law of Large Numbers. Suppose 10% have an error range of +/- .5°C, another 80% have an error range of 1°C, and the last 10% have an error of 2°C. Your proposition that the higher and lower readings average out falls apart, especially when you take the final fact into account: there were just as few thermometers recording in the world then as there are today, and they didn’t all change equipment at the same time to the same brand and model, so your Law of Large Numbers falls apart again. (Dr. Spencer has a nice graph on the number of surface stations over time here: http://www.drroyspencer.com/2010/02/new-work-on-the-recent-warming-of-northern-hemispheric-land-areas/ )
You worry about the loss of thermometers whose calibration we know, yet treat the starting-point data, taken from fewer thermometers of unknown calibration, unknown station changes, and even unknown equipment changes, as no problem that will not affect the trend. That is illogical, since your starting value has a huge impact on your trend.
Also, in your example you started a priori from a false premise: you don’t get multiple readings at the same time from a known calibrated thermometer in the past, you get one.

Here is an example: I take ten readings at 3:00 PM with a calibrated thermometer.

Max/Min LIG thermometers trip a mechanical device that sticks at the highest and lowest readings. So for that day in 1890 you have 2 variables from an unknown source. You don’t know if the instrument was calibrated properly, what its bias was if it was, or whether the model/brand was changed halfway through the year. Nothing but those 2 variables. Sorry, the Law of Large Numbers doesn’t save you. I even ran the numbers for State College, PA, with just 3 changes in equipment over time, each one more accurate than the last: I started with an instrument with +/- .5°F accuracy, switched to one with +/- .3°F in 1930, and switched in 1990 to one with +/- .1°F. When you run the numbers, if all of them err one way at maximum you get a trend of .6°F; if they err the other way at maximum you get a trend of 1.7°F. And you have to allow for that, because you didn’t have a known good thermometer taking side-by-side readings with each one of those.
Now will they all go in one direction at max? No, but with the changes in equipment over time, becoming more accurate the odds of them all canceling each other out drops significantly because your room for error decreases over time to counter act the larger error in the past.
All those things point to why the satellites are better, even with their own problems. There is only one instrument taking readings, and it is checked against multiple calibrated PRTs daily and against the background cold temperature of space. Any variances are then easy to account for in the readings. You can read how it is done here:
http://www.drroyspencer.com/2010/01/how-the-uah-global-temperatures-are-produced/
Keep this in mind: almost every one of the problems you listed for today was present when the Historical Temp Records started (1880 for GISS, 1850 for HadCRUT); plus, back then they were just looking for a temperature reading, since all they were concerned about was the weather. Who cared if it was 94° or 94.5°F on Aug 3rd someplace in New Mexico? It had no impact. It wasn’t until the 60s that the Climate Whatever-we-are-calling-it-this-week came into being. That was when they realized they should have all the stations sited a specific way, but they didn’t have that; they had weather records that are not suitable for what they are using them for.
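The disagreement in this thread, whether many crude readings sort each other out, comes down to one statistical distinction: zero-mean noise averages away, a systematic bias does not. A quick simulation with made-up numbers (not any actual station record) shows both halves:

```python
import random
import statistics

random.seed(0)
TRUE_TEMP = 12.0     # hypothetical true temperature, deg C
N = 10_000

# Zero-mean instrument noise: the average converges on the truth,
# which is the "more readings the better" argument.
noisy = [TRUE_TEMP + random.gauss(0.0, 1.0) for _ in range(N)]

# A systematic bias (say a miscalibrated instrument reading +0.5 C):
# no number of readings averages it away, which is the rebuttal.
biased = [TRUE_TEMP + 0.5 + random.gauss(0.0, 1.0) for _ in range(N)]

print(f"mean of noisy readings:  {statistics.mean(noisy):.2f}")   # near 12.0
print(f"mean of biased readings: {statistics.mean(biased):.2f}")  # near 12.5
```

Both commenters are right about their own half: the Law of Large Numbers handles the noise, and is helpless against the bias.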

Vince Causey
December 26, 2010 2:40 am

Of course 2010 will show the warmest year because GISS make the data up. There are no temperature stations in the arctic ocean, and very few in the arctic circle, yet they have shown a map with huge expanses of raging oranges and reds by smearing the few readings over vast distances. Since the few temperature stations that do exist are at airports, it’s not surprising that vast areas of the arctic appear to show warming.
However, despite them pumping up global temperatures, the best they can achieve is a (possibly) slight increase in 12 years. Once you adjust their adjustments, there is still no increase, or even a decrease.

Ian Cooper
December 26, 2010 2:51 am

Just so that I am getting this right, what we are talking about is a temperature anomaly in relation to a long term mean/trend derived from data going back 150 years or so?
As we don’t have a long term New Zealand mean annual temperature set worth talking about I have to go with what I do have, which is my local means for the city of Palmerston North.
Based upon our means, 1998 is the hottest year since records began here in mid 1928, just ahead (0.2 C) of 1971,1974 and 1999. I always have a problem with annual means though when thinking about our weather/climate. Our southern winters are well covered just as the northern summers are, but a third of our hot season (Nov/Dec) is dropped to accommodate the calendar year. As I have stated here in previous posts I like to simplify the seasonal situation into two obvious seasons, hot (Nov-Apr) and cold (May-Oct).
When I look at our data and use the T-Max means we see that the hottest ‘summer’ here was 1934-35, 0.3 C hotter than 1961-62, and 0.4 ahead of 1974 (Both were La Nina summers, which usually equals hot in my part of the world). 1998-99 was 0.58 degrees C colder than 1934-35!
In the end, all of these exercises show that it is critical to take the measurement periods into account. The local weather data that I receive on a daily basis is recorded at 9 a.m. the following morning. This means that the T-Max was generally for the previous afternoon, whilst the T-Min is usually around dawn. Rainfall is taken as being up to 9 a.m., but the last reading of the month is adjusted back so that the amount recorded after midnight is allotted to the new month.
There are exceptions to the T-Max/T-Min being at the expected time. We had a situation in September 2009 where the coldest temperature for the day was at 3.30 p.m., while the highest was at the previous midnight, thanks to an unexpected snowstorm that left drifts of 2 metres deep on the lowest part of the mountains behind the city, that hung around for over a week. No one had ever seen anything like it here, especially for that late in the snow season.
Looking at the latest data, 2010 is nowhere near the hottest mean year. The winter was milder than the previous one, but last summer was the 20th coldest of the past 80 years. This summer, starting in Nov, is on track to be in the top dozen or so in the recorded period.
As far as trends go, if you take it from the first full year of 1929, there is a slight rise of around 0.4 C to the present. On the other hand, if you start from somewhere like the plateau of the 1950s, we see a downward trend of 0.3 C!
Statistics are a lot of fun!
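[Coops’s hot-season (Nov-Apr) bookkeeping can be sketched in a few lines. The station readings below are invented for illustration; only the season-labelling logic, which straddles the calendar-year boundary, is the point.]

```python
# Sketch of the two-season split: "hot" = Nov-Apr, spanning the new year.
# The (year, month, t_max) records here are hypothetical values.
from statistics import mean

records = [
    (1934, 11, 29.1), (1934, 12, 31.5), (1935, 1, 33.0),
    (1935, 2, 32.2), (1935, 3, 30.4), (1935, 4, 27.8),
    (1998, 11, 27.9), (1998, 12, 30.0), (1999, 1, 31.1),
    (1999, 2, 31.0), (1999, 3, 29.5), (1999, 4, 26.7),
]

def hot_season(year, month):
    """Label a record with its hot-season key, e.g. Nov 1934 -> '1934-35'."""
    start = year if month >= 11 else year - 1
    return f"{start}-{str(start + 1)[-2:]}"

seasons = {}
for year, month, t_max in records:
    if month >= 11 or month <= 4:          # keep Nov-Apr only
        seasons.setdefault(hot_season(year, month), []).append(t_max)

means = {k: round(mean(v), 2) for k, v in seasons.items()}
print(means)
```

Grouping by hot-season key rather than calendar year is what keeps a southern-hemisphere summer (Nov-Apr) from being split across two annual means.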
Coops

Magnus A
December 26, 2010 2:52 am

Haven’t read this post or comments as I should (forgive me). 2010 was warm, but I want to recall a paper by Patrick J. Michaels and Ross McKitrick, Quantifying the influence of anthropogenic surface processes and inhomogeneities on gridded global climate data, in Journal of Geophysical Research 2007, which shows that as much as half of the observed land-based warming trend may be explained by socioeconomic contamination of the surface record:
http://www.uoguelph.ca/~rmckitri/research/jgr07/M&M.JGRDec07.pdf
From an article by Pat Michaels:
“Almost all the socioeconomic variables were important. We found the data were of highest quality in North America and that they were very contaminated in Africa and South America. Overall, we found that the socioeconomic biases ‘likely add up to a net warming bias at the global level that may explain as much as half the observed land-based warming trend.'”
http://spectator.org/archives/2007/12/27/not-so-hot
Happy New Year!

Roger Knights
December 26, 2010 2:59 am

DirkH says:
December 25, 2010 at 5:07 pm
They just want to save the planet, they think they know CO2 is the problem, so they adjust the data until it fits their preconceptions.

I.e., it’s “advocacy research.”

Espen
December 26, 2010 3:09 am

If 2010 is declared the hottest year on record, it would be a good opportunity to discuss whether it’s time to get rid of global mean temperature as a measure of warming: the temperature of air is NOT a good measure of its heat content! Instead of temperature anomalies, we should be using enthalpy (total energy) anomalies. 2010 is a good illustration of this, since a lot of the positive anomaly comes from Arctic deserts, where it takes far less energy to heat a given mass of air by, e.g., 5 C than it would in the tropics – or even in the Sahara.
(to measure enthalpy changes, we need to measure air pressure and moisture in addition to temperature, so of course even more could go wrong with the record…)
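[A back-of-envelope sketch of Espen’s point. The constants and the Tetens saturation-pressure approximation are standard; the relative-humidity values and temperatures are assumptions chosen only to contrast cold dry air with warm moist air, holding RH fixed while warming.]

```python
# Rough moist enthalpy change: dh ~ cp*dT + Lv*dq, where dq is the extra
# water vapour needed to keep relative humidity constant as air warms.
import math

CP = 1005.0      # J/(kg K), specific heat of dry air
LV = 2.5e6       # J/kg, latent heat of vaporization
P = 101325.0     # Pa, surface pressure

def sat_vapor_pressure(t_c):
    """Tetens approximation, Pa (rough, especially below freezing)."""
    return 610.78 * math.exp(17.27 * t_c / (t_c + 237.3))

def specific_humidity(t_c, rh):
    e = rh * sat_vapor_pressure(t_c)
    return 0.622 * e / (P - 0.378 * e)   # kg vapour / kg moist air

def enthalpy_change(t_c, rh, dt=5.0):
    """J/kg needed to warm air by dt at constant relative humidity."""
    dq = specific_humidity(t_c + dt, rh) - specific_humidity(t_c, rh)
    return CP * dt + LV * dq

arctic = enthalpy_change(-30.0, 0.7)   # cold air: almost no vapour to add
tropics = enthalpy_change(27.0, 0.8)   # warm moist air: large latent term
print(f"Arctic:  {arctic / 1000:.1f} kJ/kg")
print(f"Tropics: {tropics / 1000:.1f} kJ/kg")
```

Under these assumed conditions the tropical 5 C warming costs several times the energy of the Arctic one, which is why equal temperature anomalies are not equal energy anomalies.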

JohnH
December 26, 2010 3:16 am

sHx says:
December 26, 2010 at 1:34 am
What I meant was that satellite records are the only reliable record, and that only extends back 30 years. The first dedicated instrument to check temperature anomalies was put into Earth orbit in 2002, right? Most conservatively speaking, our best records stretch back only to 2002. I understand that it was somewhat possible to detect the heat signal from earlier, undedicated satellite records that stretch back to 1979. In any case, according to the satellite measurements, 1998 was the warmest year ever.
Land-based, ‘value added’ instrumental records are not in the same class, in terms of data quality, as satellite records. Therefore 1934 and 1998 are not suitable candidates for comparison. To the best of our knowledge, 1998 was the warmist year.
No, all you can say is that 1998 is the warmist year since 1979 (or 2002 if you want to rely only on dedicated sat records – but as 1998 is before 2002, that rules 1998 out anyway).
Hardly enough to justify funding more work on proving a theory, let alone destroying whole industries.

December 26, 2010 3:53 am

A little off topic, but a small lesson in DIY:
The Huffington Post:
Dorrit Moussaieff
“How Iceland Air Kept Flying When British Airways Was Grounded”
http://www.huffingtonpost.com/dorrit-moussaieff/how-iceland-air-kept-flyi_b_801077.html
When planes were at a standstill in England and across Europe, the pilots of an Icelandair plane that was about to take off were informed they could not do so, as there was not enough snow-clearing equipment to clear a path for the refueling vehicle. Rather than face an overnight stay, the captain and co-pilot took matters, and shovels, into their own hands. Some 15 minutes of shoveling later, the path was cleared, the plane refueled and went airborne.
That is how Icelandic pilots deal with winter challenges. Perhaps Britain and other countries could learn from that example.

Mike Haseler
December 26, 2010 4:01 am

The point at which I realised the data is pretty meaningless was the point at which I realised I could get different results every time I analysed it, depending on very small changes in assumptions.
That’s with normal noise data. With 1/f noise you are wasting your time.
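[The sensitivity Mike describes is easy to demonstrate. The sketch below builds an approximate 1/f (“pink”) noise series by summing sinusoids whose amplitudes fall as 1/sqrt(f), then fits ordinary least-squares trends from several start points; the series is purely synthetic, and the fitted “trends” typically differ in size and even sign depending on where you start – echoing the 1929-start vs 1950s-start contrast noted above.]

```python
# Fit OLS trends to synthetic 1/f-like noise from different start points.
import math
import random

random.seed(42)

def pink_noise(n, components=200):
    """Approximate 1/f noise: sum sinusoids with amplitude ~ 1/sqrt(f)."""
    series = [0.0] * n
    for k in range(1, components + 1):
        f = k / n
        amp = 1.0 / math.sqrt(f)
        phase = random.uniform(0, 2 * math.pi)
        for t in range(n):
            series[t] += amp * math.sin(2 * math.pi * f * t + phase)
    return series

def ols_slope(y):
    """Least-squares slope of y against its index."""
    n = len(y)
    xm = (n - 1) / 2
    ym = sum(y) / n
    num = sum((x - xm) * (v - ym) for x, v in enumerate(y))
    den = sum((x - xm) ** 2 for x in range(n))
    return num / den

y = pink_noise(130)   # 130 "years" of synthetic data
for start in (0, 20, 50, 80):
    print(f"trend from year {start:>2}: {ols_slope(y[start:]):+.3f} per year")
```

Because 1/f noise has power concentrated at the lowest frequencies, any window of it looks like it contains a “trend”, and a short record cannot distinguish that from a real one.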