Guest Post by Ira Glickstein
According to the latest from NASA GISS (Goddard Institute for Space Studies), 2010 is shaping up to be “the warmest of 131 years”, based on global data from January through November. They compare it to 2005 “2nd warmest of 131 years” and 1998 “5th warmest of 131 years”.
We won’t know until the December data is in. Even then, given the level of noise in the base data and the wiggle room in the analysis, each of which is about the same magnitude as the Global Warming they are trying to quantify, we may not know for several years. If ever. GISS seems to analyze the data for decades, if necessary, to get the right answer.
A case in point is the still ongoing race between 1934 and 1998 to be the hottest for US annual mean temperature, the subject of one of the emails released in January of this year by NASA GISS in response to a FOIA (Freedom of Information Act) request. The 2007 message from Dr. Makiko Sato to Dr. James Hansen traces the fascinating story of that hot competition. See the January WUWT and my contemporary graphic that was picked up by several websites at that time.
[My new graphic, shown here, reproduces Sato’s email text, including all seven data sets, some or all of which were posted to her website. Click image for a larger version.]
The Great Hot 1934 vs 1998 Race
1) Sato’s first report, dated July 1999, shows 1934 with an impressive lead of over half a degree (0.541ºC to be exact) above 1998.
Keep in mind that this is US-only data, gathered and analyzed by Americans. Therefore, there is no possibility of fudging by the CRU (Climategate Research Unit) at East Anglia, England, or bogus data from Russia, China, or some third-world country. (If there is any error, it was due to home-grown error-ists :^)
Also note that total Global Warming, over the past 131 years, has been, according to the IPCC, GISS and CRU, in the range of 0.7ºC to 0.8ºC. So, if 1934 was more than 0.5ºC warmer than 1998, that gap amounts to roughly two-thirds or more of the total.
At the time of this analysis, July 1999, the 1998 data had been in hand for more than half a year. Nearly all of it was from the same reporting stations as previous years, so any adjustments for relocated stations or those impacted by nearby development would be minor. The 1934 data had been in hand for, well, 65 years (eligible to collect Social Security :^) so it had, presumably, been fully analyzed.
Based on this July 1999 analysis, if I were a betting man, I would have put my money on 1934 as a sure thing. However, that was not to be, as Sato’s email recounts.
Why? Well, given steadily rising CO2 levels, and the high warming sensitivity of virtually all climate models to CO2, it would have been, let us say inconvenient, for 1998 to have been bested by a hot golden oldie from over 60 years previous! Kind of like your great grandpa beating you in a foot race.
2) The year 2000 was a bad one for 1934. November 2000 analysis seems to have put it on a downhill ski slope that cooled it by nearly a fifth of a degree (-0.186ºC to be precise). On the other hand, it was a very good year for 1998, which, seemingly put on a ski lift, managed to warm up by nearly a quarter of a degree (+0.233ºC). That confirms the Theory of Conservation of Mass and Energy. In other words, if someone in your neighborhood goes on a diet and loses weight, someone else is bound to gain it.
OK, now the hot race is getting interesting, with 1998 only about an eighth of a degree (0.122ºC) behind 1934. I’m still rooting for 1934. How about you?
3) Further analysis in January 2001 confirmed the downward trend for 1934 (which lost an additional 26th of a degree, -0.038ºC) and the upward movement of 1998 (which gained an additional 21st of a degree, +0.048ºC), tightening the hot race to a 28th of a degree (0.036ºC).
Good news! 1934 is still in the lead, but not by much!
4) Sato’s analysis and reporting on the great 1934 vs 1998 race seems to have taken a hiatus between 2001 and 2006. When the cat’s away, the mice will play, and 1998 did exactly that. The January 2006 analysis has 1998 unexpectedly tumbling, losing over a quarter of a degree (-0.269ºC), and restoring 1934’s lead to nearly a third of a degree (0.305ºC). Sato notes in her email “This is questionable, I may have kept some data which I was checking.” Absolutely, let us question the data! Question, question, question … until we get the right answer.
5) Time for another ski lift! January 2007 analysis boosts 1998 by nearly a third of a degree (+0.312ºC) and drops 1934 a tiny bit (-0.008ºC), putting 1998 in the lead by a bit (0.015ºC). Sato comments “This is only time we had 1998 warmer than 1934, but one [on?] web for 7 months.”
6) and 7) The March and August 2007 analyses show tiny adjustments. However, in what seems to be a photo finish, 1934 sneaks ahead of 1998, warmer by a tiny amount (0.023ºC). So, hooray! 1934 wins and 1998 is second.
OOPS, the hot race continued after the FOIA email! I checked the tabular data at GISS Contiguous 48 U.S. Surface Air Temperature Anomaly (C) today and, guess what? Since the Sato FOIA email discussed above, GISS has continued their taxpayer-funded work on both 1998 and 1934. The Annual Mean for 1998 has increased to 1.32ºC, a gain of a bit over an 11th of a degree (+0.094ºC), while poor old 1934 has been beaten down to 1.2ºC, a loss of about a 20th of a degree (-0.049ºC). So, sad to say, 1934 has lost the hot race by about an eighth of a degree (0.12ºC). Tough loss for the old-timer.
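For anyone who wants to replay the race, here is a small Python sketch that steps through the adjustments quoted above. All the deltas are copied from Sato’s email as reproduced in this post; the one assumption is the March/August 2007 entry, which the email describes only as “tiny adjustments”, so it is shown as a single net shift sized to land on the quoted 0.023ºC lead.

```python
# Replaying the race from the deltas quoted above. All values are
# copied from Sato's email as reproduced in this post, EXCEPT the
# Mar/Aug 2007 entry: the email says only "tiny adjustments", so it
# is shown as one net shift sized to land on the quoted 0.023 C lead.
adjustments = [
    ("Nov 2000",        -0.186, +0.233),
    ("Jan 2001",        -0.038, +0.048),   # ~1/26 and ~1/21 of a degree
    ("Jan 2006",         0.000, -0.269),
    ("Jan 2007",        -0.008, +0.312),
    ("Mar/Aug 2007",    +0.038,  0.000),   # assumed net shift (see note)
    ("Dec 2010 tables", -0.049, +0.094),
]

lead = 0.541   # 1934's lead over 1998 in the Jul 1999 analysis, deg C
print("Jul 1999          1934 lead +0.541 C (1934 ahead)")
for label, d1934, d1998 in adjustments:
    lead += d1934 - d1998
    winner = "1934" if lead > 0 else "1998"
    print(f"{label:17s} 1934 lead {lead:+.3f} C ({winner} ahead)")
```

Run it and you get exactly the sequence narrated above: 0.122, 0.036, 0.305, -0.015, +0.023, and finally -0.120, with the lead changing hands twice along the way.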
Analysis of the Analysis
What does this all mean? Is this evidence of wrongdoing? Incompetence? Not necessarily. During my long career as a system engineer I dealt with several brilliant analysts, all absolutely honest and far more competent than me in statistical processes. Yet, they sometimes produced troubling estimates, often due to poor assumptions.
In one case, prior to the availability of GPS, I needed a performance estimate for a Doppler-Inertial navigation system. They computed a number about 20% to 30% worse than I expected. In those days, I was a bit of a hot head, so I stormed over and shouted at them. A day later I had a revised estimate, 20% to 30% better than I had expected. My conclusion? It was my fault entirely. I had shouted too loudly! So, I went back and sweetly asked them to try again. This time they came in near my expectations and that was the value we promised to our customer.
Why had they been off? Well, as you may know, an inertial system is very stable, but it drifts back and forth on an 84-minute cycle (the period of a pendulum the length of the radius of the Earth). A Doppler radar does not drift, but it is noisy and may give erroneous results over smooth surfaces such as water and grass. The analysts had designed a Kalman filter that modeled the error characteristics to achieve a net result that was considerably better than either the inertial or the Doppler alone. To estimate performance they needed to assume the operating conditions, including how well the inertial system had been initialized prior to takeoff, and the terrain conditions for the Doppler. Change the assumptions, change the results.
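For readers who like to see the moving parts, here is a toy numerical sketch of that blending idea. It is NOT the original Kalman design, just a fixed-gain blend of a drifting-but-quiet sensor with a noisy-but-unbiased one, and every number in it (speeds, drift amplitude, noise levels) is invented purely to show how the assumed Doppler noise changes the performance estimate.

```python
import numpy as np

# Toy fixed-gain blend (NOT the original Kalman design). All numbers
# are invented: a smooth 84-minute Schuler-style drift on the inertial
# side, white noise on the Doppler side.
rng = np.random.default_rng(0)
t = np.arange(0, 6 * 3600, 10.0)                     # six hours, 10 s steps
truth = 200.0 + 5.0 * np.sin(2 * np.pi * t / 7200)   # true speed, m/s

inertial = truth + 3.0 * np.sin(2 * np.pi * t / (84 * 60))  # drifts, quiet

for doppler_sigma in (1.0, 5.0):                     # assumed terrain noise
    doppler = truth + rng.normal(0.0, doppler_sigma, t.size)  # noisy, unbiased
    w = 1.0 / (1.0 + doppler_sigma)                  # crude noise-based weight
    blended = w * doppler + (1.0 - w) * inertial
    rms = np.sqrt(np.mean((blended - truth) ** 2))
    print(f"assumed Doppler sigma {doppler_sigma}: blended RMS {rms:.2f} m/s")
# Change the assumed operating conditions (sigma, drift amplitude,
# initialization quality) and the promised performance number changes
# with them -- which is the whole point of the story above.
```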
Conclusions
Is 2010 going to be declared warmest global annual by GISS after the December data comes in? I would not bet against that. As we have seen, they keep questioning and analyzing the data until they get the right answers. But, whatever they declare, should we believe it? What do you think?
Figuring out the warmest US annual is a lot simpler. Although I (and probably you) think 1934 was warmer than 1998, it seems someone at GISS, who knows how to shout loudly, does not think so. These things happen and, as I revealed above, I myself have been guilty of shouting at analysts. But, I corrected my error, and I was not asking all the governments of the world to wreck their economies on the basis of the results.

No and no. You do not misread things, and they won’t be able to wait till 2086 to discredit 1998 or 2010, which they seem to have artificially inflated. They will have to start discrediting them within the coming decade if we enter a Dalton-like minimum based on David Archibald’s ideas. THANKS for your great comment!
Hugh Pepper says:
December 25, 2010 at 7:08 pm
“Of course we should care! Peer reviewed research from several data sets confirm that the planet is getting warmer. “
And as Martha Stewart would say, “That’s a good thing.” If we keep it up, we can farm Greenland again, eh?
Tactics, people, tactics. Isn’t the point that global warming isn’t a problem? Who claims it isn’t happening?
It’s been getting warmer for a century or two, and if that continues for another century or two it’ll be as warm as it was when the Vikings colonized Greenland. That’s not a catastrophe! The recovery from the Little Ice Age may stop, but that’s not the basis of our argument.
The alarmists bear the onus probandi of showing that global warming will become a problem. This is why they talk about “tipping points” and the like. Don’t let them off the hook by giving a rip about whether the Earth is merely getting warmer.
The alarmists are right only if the warming ACCELERATES. If the warming merely continues but doesn’t become a crisis, then the alarmists are wrong. They want everyone to make heroic efforts to prevent something. What if that something is good? What if Global Warming just means improved agriculture?
Who cares about a few tenths of a degree in the past?
Ira Glickstein said:
While I cannot speak officially for WUWT, it seems to me nearly everyone here accepts that the planet has warmed over the past century or so, and that some of it is due to human activities.
——————————————————————————
I don’t think that generalisation is true, especially the second part.
Not that it matters. Science is not a matter of consensus.
How do the *questionable* New Zealand temperatures affect the overall global temp?
Here in Australia, where the weather (oops, no – climate) is supposed to be getting hotter and hotter, and this is supposed to be very, very bad if not fatal;
Well ——–
There is a continuing, decade-or-more-long population drift towards the tropical north, the top end of our fine continent.
Are Australians all mad, this could lead you to ask?
Well no, is the very truthful scientific answer; it’s just that we are not as yet truly acclimatised.
We alien Europeans, in our ignorance, believe in our very non-scientific way that moving north takes us much closer to the cold, cold Arctic circle, and that moving to the cold is our salvation.
In our great ignorance, we may be quite right.
Moving to the warm north may yet be our salvation.
Yes, but is it still getting warmer? There seems to be some evidence that the warming trend has ended – is this the beginning of a cooling trend or just a glitch that Ma Nature is throwing our way?
Headline says “Warmist” — commenters seem to think it says “warmest”.
I am confused. No, I mean more confused than usual.
But if “Warmest” is meant, does that not imply that there is no record of it being warmer?
That can’t be right.
One could hope, but I don’t think so.
But I do hope it does get warmer–prosperity and plenty seem to correlate positively with warmer.
So much in the following to bring smiles to your faces, including:
21 Dec: Columbia Uni: James Hansen: Q&A with Bill McKibben, cofounder and global organizer, 350.org
Hansen: We scientists create a communications problem by speaking about average global warming in degrees Celsius. Global warming in degrees Fahrenheit is almost twice as large (exact factor is 1.8) and warming is about twice as much over land (where people live!) than over ocean…
http://www.columbia.edu/~jeh1/mailings/2010/20101221_McKibbenQA.pdf
[check the main page – there’s a story, but thanks -moderator]
If 2010 is the warmest on record, what are the odds it will be warm enough to break the 15 year non-trend trend?
Do I correctly read your opinion, Johanna, that our planet has not warmed since the Little Ice Age and that, even if it has, not a bit of that warming is human-caused? You are certainly entitled to your opinion, but it is my opinion that you are wrong. As I wrote, we need to address: 1) Q. How much? A. Not as much as claimed by warmists, and 2) Q. How much of that is human-caused? A. Not much, perhaps 0.1ºC.
Let me add a third question: 3) Q. What, if anything, should we do about it? A. Not much immediately, it is not a crisis, some of it may be good, etc. But, as a conservative, I am concerned with the continuing rise of CO2 to levels unprecedented in human times, and (not within the purview of WUWT) the costs of protecting our access to imported petroleum. Therefore it would be prudent, I believe, to use market-based forces to develop nuclear, clean coal, and renewables, and also reduce energy waste.
Oh, and yes, science is based on consensus to some extent, over the long haul.
If a scientist is correct but nearly all the others say she is wrong, she is, of course, still right. We believe the scientific method will, in time, eventually prove her right and, at that point, the consensus will go her way. The problem with climate science is that it has been, to some extent, hijacked by political forces. It is those forces, not science or the scientific method, that constitute the problem.
“Hugh Pepper says:
December 25, 2010 at 7:08 pm
Peer reviewed research from several data sets confirm that the planet is getting warmer. If you have real data to disprove this, please present it!”
See the Hadcrut3 data set: http://www.cru.uea.ac.uk/cru/data/temperature/hadcrut3gl.txt
Despite the warm 2010, according to Hadcrut3, the average anomaly for the last five years (2006 to 2010) was 0.42. However the average anomaly for the previous five years (2001 to 2005) was 0.46. This basically means it cooled off during the decade.
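If you want to check those five-year means yourself, a minimal sketch follows. It assumes the hadcrut3gl.txt layout of two lines per year, the first carrying the year, twelve monthly anomalies, and the annual mean in the last column; verify that against the file before trusting the parse.

```python
# Sketch of the five-year comparison above. ASSUMES the hadcrut3gl.txt
# layout of two lines per year: the first line holds the year, twelve
# monthly anomalies, and the annual mean in the last column; the second
# line holds coverage. Check the file before trusting this parse.
annual = {}
with open("hadcrut3gl.txt") as f:
    for line in f:
        cols = line.split()
        year = int(cols[0])
        if year not in annual:              # first of the two lines per year
            annual[year] = float(cols[-1])  # annual-mean column

for lo, hi in [(2001, 2005), (2006, 2010)]:
    vals = [annual[y] for y in range(lo, hi + 1)]
    print(f"{lo}-{hi}: mean anomaly {sum(vals) / len(vals):+.2f} C")
```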
I realize climate is defined by what happens over 30 years. But something else is going on with the climate. There are huge ocean cycles that form a sine wave every 60 years. And right now, we are at the point we were at in the 1950s, when things got cooler for a few decades and some thought there would be an ice age in the 1970s.
P.S. Thank you for your comment Ira!
You can’t have global warming if you aren’t continually setting new record highs. It was obvious to GISS and CRU that 1998 was going to be a problem in that regard, setting a temp so high that you couldn’t beat it in subsequent years. This chart appeared on WUWT a couple years ago, and you see their approach to solving the problem.
http://i52.tinypic.com/2airgg1.jpg
GISS had been running a good 0.2° higher than UAH/RSS, but for the perfect storm year of 1998, both GISS and HadCRUT tamped down their numbers to give themselves a chance.
This consideration was also recognized in the USHCN v2 adjusted raw data, where 1998 was lowered slightly, enough to make it reachable, but not as much as 1934 was lowered.
Onion says: “So this race between 1934 and 1998 was in the US temperature record, that comprises 2% of the Earth’s surface.”
I guess so, Onion. A lot like you did in the other thread where you generalized on climate models’ validity based on UK winter temperatures.
D. J. Hawkins says: I took a quick peek at the GISS website to try and understand how they crank out their numbers, and even a cursory glance was daunting.
Yup. Put me off for about 2 years. I kept saying “SOMEBODY needs to look at this!”… then one day I realized “I am somebody.”
I’ve downloaded it to a Linux box and got it to run. It ‘has issues’ but I figured out how to get past them. Details on request and posted on my web site. It required a long slow pass through the code…
Has there been a clear presentation of the methodology somewhere?
No.
There has been a lot of presentation of the methodology. Much of it is in technical papers that present a method, that is used in the code, but the code does more than (or sometimes less than, or sometimes just somewhat different from) what the papers describe.
It’s a convoluted complicated beast. Starter guide here:
http://chiefio.wordpress.com/gistemp/
I would think that once you nail down the method, no matter how many times you run the analysis the results should be the same.
You would think that. I thought that. It’s not that…
The code is designed in such a way that EVERY time you run it (with any change of starting data AT ALL) you will get different results. And both the GHCN and USHCN input data sets change constantly. Not even just monthly, but even mid-month things just ‘show up’ changed in the data sets.
So to talk about “what GIStemp does” at any point in time requires specification of the “Vintage” of data used TO THE DAY and potentially to the hour and minute.
Why?
Infill of missing data, homogenizing via the “Reference station method”, UHI via the “Reference Station Method”. Grid / Box anomaly calculation that uses different sets of thermometers in the grid/box at the start and end times (so ANY change of data in the recent “box” can change the anomalies…) and some more too…
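A toy example of the grid/box problem (this is not GIStemp’s code, just the arithmetic of the complaint): the box mean is taken over whichever stations report in a given year, so a change in the recent station mix shifts the apparent warming even when no station’s readings change at all.

```python
import statistics

# Two invented stations share a grid box; B reads 1 C cooler than A
# and stops reporting after 2000.
station_a = {1980: 10.0, 1990: 10.0, 2000: 10.0, 2010: 10.0}
station_b = {1980: 9.0, 1990: 9.0, 2000: 9.0}

for year in (1980, 1990, 2000, 2010):
    readings = [s[year] for s in (station_a, station_b) if year in s]
    print(year, statistics.mean(readings))
# The box mean 'warms' from 9.5 to 10.0 purely because the cooler
# station dropped out, not because anything actually got hotter.
```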
If the assumptions regarding initial conditions are so fungible as to allow a reversal of the relative values of the anomalies at will, you don’t have a scientific analytical tool, you have a propaganda tool.
Can I quote you on that? It’s rather well said…
IMHO, the use of “Grid / Box anomalies” (calculated AFTER a lot of data manipulation, adjustment, averaging, homogenizing, etc. done on monthly temperature averages…) mixed with changing what thermometers are in a “grid / box” in the present vs. the past lets you “tune” the data such that GIStemp will find whatever warming you like. It’s cleverly done (or subtle enough they missed the “bug”… being generous), and if a good programmer devotes about 2 years to it they can get to this point of understanding. Everyone else is just baffled by it. Draw your own conclusions…
I’ve tried explaining it to bright folks. A few ‘get it’. Most just get glazed. Some become hostile. I’ll explain it to anyone who wants to put in the time, but it will take a couple of weeks (months if not dedicated) and few folks are willing to ‘go there’.
Judging from the look of the code, it was written about 30 years ago and has never been revisited (just more glued on, often ‘at the end’ of the chain). From that I deduce that either Hansen is unwilling to change “his baby” or very few folks are willing to shove their brains through that particular sieve…
The bottom line is that “the GIStemp code” is DESIGNED to never be repeatable and to constantly mutate the results as ANY input data changes and that makes ALL the output shift. It’s part of the methodology in the design. Don’t know if that’s “malice” or “stupidity”…
It is my assertion that this data sensitivity is what GIStemp finds when it finds warming. The simple fact that as new data is added the past shifts to a warmer TREND indicates to me that the METHOD induces the warming, not the actual trend in the data. I’ve gone through the data a LOT and find 1934 warmer than 1998 and with a method that IS repeatable. See:
http://chiefio.wordpress.com/category/dtdt/
http://chiefio.wordpress.com/category/ncdc-ghcn-issues/
Basically, 1934 and 1998 ought to stay constant relative to EACH OTHER even as new data is added, even IF you were ‘adjusting the past cooler’ to make up for something or other (nominally UHI or TOBS – yes, I know, it sounds crazy to make the past cooler for UHI correction, but it’s “what they do” in many cases).
As it is, they jockey for relative position with a consistent, though stochastic, downward drift of the older relative to the newer. That tells me it’s the method, not the data, that’s warming.
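To see why that invariance claim holds, here is a tiny check with invented numbers: with anomalies computed against a fixed 1951-1980 baseline, adding new data cannot move the 1934-vs-1998 gap at all.

```python
# A tiny check of the invariance claim above: with anomalies taken
# against a FIXED 1951-1980 baseline, adding new data cannot move the
# 1934-vs-1998 gap. All temperatures here are invented.
BASE = range(1951, 1981)

def anomaly(record, year):
    base_mean = sum(record[y] for y in BASE) / len(BASE)
    return record[year] - base_mean

record = {y: 11.0 for y in range(1930, 2011)}
record[1934], record[1998] = 12.3, 12.1

gap_before = anomaly(record, 1934) - anomaly(record, 1998)
record[2011] = 13.0                          # new data arrives
gap_after = anomaly(record, 1934) - anomaly(record, 1998)
print(f"{gap_before:.3f} {gap_after:.3f}")   # 0.200 and 0.200: unchanged
# If the published gap jumps when new months arrive, the movement is
# coming from the method (re-homogenizing, re-gridding), not the data.
```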
Onion says: So this race between 1934 and 1998 was in the US temperature record, that comprises 2% of the Earth’s surface.
Um, no.
The article is a little unclear on that point, but the details are rather devilish, so I’d give them some slack on the details. The reality is rather complex…
GISS uses a code called GIStemp. It is that US CODE that is finding 1998 warmer than 1934 (sometimes).
GIStemp takes as input BOTH the USHCN (USA-only data) and the GHCN (Global Historical Climate Network – whole-world data). Except that between about 2007(?) and November of 2010 it took in the USHCN but only used it up to 2007. Then in November it suddenly started using all of it (having finally added the code to use the newer version)… EXCEPT that the new version of USHCN was all different from the old version (warmer), so direct comparisons of old and new GIStemp are not, er, “valid”? “reasonable”?
OK, in the first step of GIStemp, it does a garbled “half averaging” of USHCN and GHCN, but only for the USA stations. Each, you see, has a different ‘adjustment history’, so it tries to undo some of the adjustments in one and put in the adjustments from the other; where it only has one, it just uses whatever it has, mismatched adjustments and all.
Oh, and it fills in missing data by making it up.
No, honest. It is called “The Reference Station Method” and it is used both to press-fit the data to look like what they think it ought to be (called ‘homogenizing’) and to fill in missing bits with what they think would look nice and fit in well.
THEN that mush goes on to the following steps (detailed in the links I gave above, for anyone courageous enough to ‘go there’).
So, “onion”, they use the whole global data set. It’s just what they DO with it that’s, er, odd.
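For the curious, the one-neighbor core of that infill idea looks something like the sketch below. The real Reference Station Method distance-weights several correlated neighbors; this strips it to a single neighbor with made-up numbers, just to show how an infilled value inherits the neighbor’s record.

```python
# One-neighbor toy of the infill idea; all numbers invented.
target   = {1990: 10.2, 1991: 10.4, 1992: None, 1993: 10.1}   # 1992 missing
neighbor = {1990: 12.2, 1991: 12.5, 1992: 12.9, 1993: 12.0}

overlap = [y for y in target if target[y] is not None]
offset = sum(target[y] - neighbor[y] for y in overlap) / len(overlap)  # -2.0

filled = {y: (v if v is not None else round(neighbor[y] + offset, 2))
          for y, v in target.items()}
print(filled[1992])   # 10.9 -- manufactured from the neighbor's record
# Note: adjust the neighbor's 1992 value later and this 'missing' year
# silently changes too, which is one way past data keeps moving.
```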
sHx says:
December 25, 2010 at 7:02 pm
The warmest year was 1998. That is so according to the only reliable instrumental data worth our attention, the satellite measurements.
Warmest in the last 30 years only; in 1934 there were no satellites.
PS Where’s the American Harry when you need him?
In the USSR and its subjugated satellite regions, which I visited in the early 80s, ordinary unconnected people would look at the state broadcasts of record grain harvests and then look at the massive queues for what tiny quantities of bread were available; the people could easily see the ‘truth deficit’.
The state/establishment/ruling parasite class is desperate to sell the idea of CAGW, so desperate that, like the USSR, they have turned to telling ever bigger lies and deceptions. The gap between the observations of ordinary people and what they are told has reached breaking point; henceforth we will see ordinary people acting like those of the USSR, refusing to believe anything the lying regime says, whether it’s true or not. Trust has broken down, the bonds of trust between the political class and the people are broken, and we know the political class and their stooge Lysenkos are lying through their teeth.
The political class need CAGW whether it’s real or not: it allows them to control carbon and control the masses, it allows the rich to become richer and the powerful to become more powerful. The CAGW fraud is plan A; I can only imagine that a plan B would be a nightmare.
Onion says:
Hansen 2001 titled “2001: A closer look at United States and global surface temperature change” is extremely relevant here. This is the documentation that covers the change made around 2000 that resulted in the US 1934 value going down and 1998 value going up. The abstract reads:
Part of the problem is that the adjustments are applied to everything at once, while the things being adjusted happened a few at a time, spread out over the years.
For example:
(2) reclassification of rural, small-town, and urban stations in the United States, southern Canada, and northern Mexico based on satellite measurements of night light intensity [Imhoff et al., 1997], and (3) a more flexible urban adjustment than that employed by Hansen et al. [1999], including reliance on only unlit stations in the United States and rural stations in the rest of the world for determining long-term trends.”
While this sounds nice, what it leaves out is the simple fact that a station is classed as RURAL or URBAN for a single point in time, NOW. That flag is then applied to ALL data for ALL time for that site. That then means, for example, that an Urban site today would have been urban in 1850 even if it was a cow field then. Then you “UHI correct down” that “urban” station and, oh, wait, it’s a cow field we’re cooling off…
This issue is endemic to the data sets. “Condition” flags have no date axis. An airport today was an airport in 1790…
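Here is a toy illustration of a date-less flag in action. The station history and the size of the adjustment are invented, and the real GISS urban adjustment is computed differently, but the missing date axis is the point:

```python
# One present-day urban/rural flag drives the adjustment for the whole
# record. Station history and adjustment size are invented; the real
# GISS urban adjustment is computed differently, but the missing date
# axis is the point being illustrated.
station = {"flag_today": "URBAN",
           "record": {1890: 9.0, 1950: 9.3, 2010: 10.5}}
UHI_RATE = -0.005   # hypothetical correction, degrees C per year since 1880

adjusted = {
    yr: round(t + UHI_RATE * (yr - 1880), 2)
        if station["flag_today"] == "URBAN" else t
    for yr, t in station["record"].items()
}
print(adjusted)   # {1890: 8.95, 1950: 8.95, 2010: 9.85}
# 1890 gets 'urban-corrected' even though the site may have been a cow
# field then; a date-aware flag would leave the early years alone.
```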
By all means the magnitude could be checked but surely we can only conclude something is wrong after checking?
Absolutely NOT. Mr. Hansen is on record in a court of law testifying that breaking the law “for the greater good” is a worthy thing to do. This set a case-law precedent in the UK. He is first and foremost a political activist who believes, by his own words, that breaking the law is a fine thing. That means he has no moral compass about defending his code or data from political bias “for the greater good” either.
The only valid approach is to assume he is doing exactly what he has testified is the right and moral thing to do: whatever it takes to achieve your goals, if you think it is for the greater good.
That means “assume the ‘fellow’ is flat out lying until proven otherwise”.
Hey, HE testified that those were his methods, not me. So “throw rocks” at him if you don’t like it…
Lionsden says:
December 25, 2010 at 7:35 pm
I don’t understand why anyone now bothers with surface temperature stations, when there has been continuous global satellite coverage for the last thirty years. The UAH near surface global temperature average, updated daily, looks to me like it places 2010 somewhere between 2nd and 4th hottest, behind 1998 certainly, and very close to 2005 and 2009.
According to Spencer, the UAH MSU shows a tie for hottest between 1998 and 2010.
But, but, Steve Mosher assures us there isn’t anything wrong with GIStemp.
In the wake of Climategate I contended that although “the science” hadn’t taken a direct hit, the warmists had suffered the loss of something even greater–the intangible of trust. They were no longer trustworthy. “The shine is off their halos,” I said, predicting that would prove fatal in the long run.
Good work, A C.
I might regret getting into this, but since 1934 is so close to 1998, is there an official explanation as to why one warming is natural and the other is not?
[REPLY – It is for the US. Not so much for global. But, then, CRU hasn’t released its raw data, so who can tell what they did to the data? The average USHCN station raw data shows a +0.14C/century increase. Adjusted data is +0.59C (both figures ungridded). ~ Evan]
sHx says: The warmest year was 1998. That is so according to the only reliable instrumental data worth our attention, the satellite measurements.
I see others have pointed out the 1934 satellite issue. Yes, it’s that pesky “what baseline?” problem.
Richard Sharpe says:
So tell us what those consequences are? Longer growing seasons so we can better feed everyone on the planet? The greening of the Sahara, as occurred during the Holocene Climate Optimum?
Um, no, doesn’t even get close. I can barely make out much change there. It’s really just some better crop land in Canada and Siberia. Even if you assume the IPCC is correct in their overblown numbers. Climate maps here:
http://chiefio.wordpress.com/2010/12/12/what-me-worry/
DirkH says:
You are right, but you missed the most important fact, namely that the Nyquist theorem is grossly violated in all dimensions, and thus, computing an average has no meaning. But that’s just signal processing nitpicking…
Um, it’s actually even worse than that… Temperature is an INTENSIVE variable, so averaging it means nothing anyway. Take two pots of water, one 10 degrees the other 40 degrees. Dump one into the other. What’s the resultant temperature?
Oh, you can’t know… Need relative masses… Now, what if it’s in F instead of C… oh, yeah, heat of fusion.
Yeah, just physics nitpicking 😉
But please folks, take the time to learn what an intensive variable is. This is critical.
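To make that concrete, here is the two-pots question worked out. Temperature is intensive, so the physical answer is a mass-weighted mean, not the plain average of the two readings:

```python
def mix(m1, t1, m2, t2):
    """Final temperature of two mixed water masses (same phase, no losses)."""
    return (m1 * t1 + m2 * t2) / (m1 + m2)

print(mix(1.0, 10.0, 1.0, 40.0))   # equal masses: 25.0
print(mix(3.0, 10.0, 1.0, 40.0))   # 3 kg cold, 1 kg hot: 17.5
# A plain average of 10 and 40 says "25" no matter what the masses are;
# the physical answer moves with them. And if one pot held ice, the
# heat of fusion would break even this weighted mean.
```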
Alvin says: How do the *questionable* New Zealand temperatures affect the overall global temp?
Take the locations of the stations. Put a 1200 km radius around them. That enclosed area as a percent of total land area is roughly the impact of N.Z. (GIStemp extends temps out to 1200 km in all directions…)
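A back-of-the-envelope version of that calculation, using roughly 1.49e8 km² for Earth’s land area:

```python
import math

# One station's 1200 km reach as a share of Earth's land area.
reach = math.pi * 1200 ** 2        # ~4.5 million km^2 around one station
land_area = 1.49e8                 # Earth's land area, km^2 (approx.)
print(f"one station's circle ~ {reach / land_area:.1%} of all land")  # ~3.0%
# Real circles overlap and much of the reach falls on ocean, so this
# is only an upper bound on any single station's influence.
```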
Ira Glickstein says:
Do I correctly read your opinion, Johanna, that our planet has not warmed since the Little Ice Age
It was warmer than now BEFORE the LIA. That’s the problem. Temperatures are semi-cyclical with periods of up to 1500 years (Bond Events / D.O. events) and to some degree fractal as they are based on fractal surfaces. Fractal things vary with the size of ruler you use (so changing the number of thermometers changes the result…) and measuring a long period cycle with a short length ruler gives you any slope you like (just be careful in picking start points and end points…)
Over a 10,000 year period, we’re definitely headed down. Over 2 years, we’re headed down. Over 150 years we’re headed up. Over 50 years we’re headed up. Over 60 years we’ve gone nowhere. Over 1500 years we’re way up (Dark Ages were cold and dark, Bond Event One…) but over 2000 years we’re down…
See the problem?
http://chiefio.wordpress.com/2010/09/13/an-interesting-view-of-temperatures/
So you need to add to any analysis of ‘warming’ “over what period”?
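Here is a quick demonstration of that ruler problem: fit a straight line to a pure 1500-year sine wave (no noise, no real data, just geometry) over windows of different lengths, and the “trend” comes out positive, negative, or flat depending entirely on the window.

```python
import numpy as np

# Fit a straight line to one pure 1500-year cycle over different
# windows; no noise, no real data, just the geometry of the problem.
years = np.arange(3000)
temp = np.sin(2 * np.pi * years / 1500)      # a Bond-Event-like cycle

for start, length in [(0, 150), (600, 150), (0, 3000)]:
    w = slice(start, start + length)
    slope = np.polyfit(years[w], temp[w], 1)[0]
    print(f"years {start}-{start + length}: trend {slope:+.5f} per year")
# Short window on the upswing: warming. Short window after the peak:
# cooling. Full cycles: essentially flat. Same 'climate', three trends.
```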
clean coal
Precluded by the AGW mantra… And that’s a major stupidity on their part, as it’s the fastest, cheapest, and best way off OPEC oil and can be done in 5 to 10 years.
I see some people haven’t grasped the fact that one of the major problems with the “historical temperature record” is the lack of metadata, something that NCDC admits. Now the question is, why is that important? Simple: GISS and CRU both use the data in the GHCN dataset compiled by NCDC as their starting point. If you don’t have the complete metadata, you miss when equipment was changed, when a station was moved, and other factors. Another thing you don’t get is the built-in instrument error of the thermometers back then. Go online shopping for a liquid-in-glass (LIG) thermometer. You will find some have an error of +/- .5°C while others have an error of +/- .3°C.
Why that is important is that they can’t compute the true error bands for the instrumental period back in its early years. I stumbled across a paper that NOAA produced dealing with just one station in New Mexico, and all these flaws are right there for anyone to see.
On 1 September 1871, the wet bulb thermometer was broken and observations were not taken after that date. The wet bulb thermometer was replaced between July 1873 and November 1873 (exact date unknown due to lack of data in the NCDC database), and measurements resumed.
http://mcc.sws.uiuc.edu/FORTS/histories/NM_Fort_Bayard_Grice.pdf
There is more in the report, but that should give you an idea of what was going on. They have no clue where the instruments were, exactly what instruments were used, or what the instrument errors were from those unknown instruments. Hell, they don’t even know for sure how many station moves occurred back then or how many stations there were in the area.
Now is every station going to be that bad? Unlikely, but it is also unlikely that it is the worst record in the NCDC database. Does anyone think that you won’t find the same crap in the late 1800s and early 1900s in places like Africa and South America?
So there is the root problem: no matter how good or bad GISS or CRU or NCDC’s methods are, the stuff they’ve got to work with just isn’t that great. Basically, at that point it’s GIGO. To imagine you can detect a .7°C to .8°C per-century trend when even some of the modern LIG thermometers have built-in errors of +/- .5°C is silly.