By Steve Goddard
h/t to reader “Phil.” who led me to this discovery.
In a previous article, I discussed how UAH, RSS and HadCRUT show 1998 to be the hottest year, while GISS shows 2010 and 2005 to be hotter.
But it wasn’t always like that. GISS used to show 1998 with a 0.64 anomaly, which is higher than their current 2005 record of 0.61.
You can see this in Hansen’s graph below, which is dated August 25, 1999.
But something “interesting” has happened to 1998 since then. It was given a demotion by GISS from 0.64 to 0.57.
http://data.giss.nasa.gov/gistemp/graphs/Fig.A2.lrg.gif
The video below shows the changes.
Note that not only was 1998 demoted, but also many other years since 1975 – the start of Tamino’s “modern warming period.” By demoting 1998, they are now able to show a continuous warming trend from 1975 to the present – which RSS, UAH and HadCRUT do not show.
Now, here is the real kicker. The graph below appends the post-2000 portion of the current GISS graph to the August 25, 1999 GISS graph. Warming ended in 1998, just as UAH, RSS and HadCRUT show.
The image below superimposes HadCRUT on the image above. Note that without the post-1999 gymnastics, GISS and HadCRUT match quite closely, with warming ending in 1998.
Conclusion: GISS recently modified its pre-2000 historical data and is now inconsistent with the other temperature data sets. GISS data now shows a steady warming from 1975 to 2010, which the other data sets do not show. Had GISS not modified its historical data, it would still be consistent with the other data sets and would not show warming post-1998. I’ll leave it to the readers to interpret further.
————————————————————————————————————-
BTW – I know that you can download some of the GISS code and data, and somebody checked it out and said that they couldn’t find any problems with it. No need to post that again.

It appears then we are comparing apples to oranges.
What’s to say that in 10 years MORE station data won’t get put into 2010 and suddenly 2010 is no longer as warm as we thought it was? And isn’t it amazing that when you put in more data, the older data ALWAYS gets cooler?
That is a CRAPPY method, if you ask me…
Steven Mosher says:
August 30, 2010 at 9:25 am
“THEN in 2005, you do have 20 years of overlap, so the station gets “included” in the reference station time series and the past (2000-2004) will change. what changes is the ESTIMATE of temps. It changes BECAUSE the data changed. What changed in the data?”
=======================================================
ROTFL thank God my bank doesn’t work that way!
What a novel concept, the present changing the past……..
Which means that current temp “estimates” are not accurate either, because tomorrow they will be in the past, and tomorrow’s temp data will change today’s “estimate” too.
Matt N
“Can you explain to me how a post-2000 data change to US TEMPS affected the GLOBAL anomaly number from 1998?”
It’s more than that change. It’s how the algorithm works to include more data as time marches forward. If Steven actually read the code or tried to understand how RSM works, he would see why the estimates change.
I’ll give you a simple example:
Station 1: 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
Station 2: NA,NA,NA,1,1,NA,NA,NA,1,1,1,1,1,1,1,1,1,1,1,1
A long series and a short series: if those two stations are close together, the RSM will stitch them into a LONGER series. But in the case above the overlap is short, 12 years.
SO, GISTEMP will not include the short series. Station 1 will just be ZERO.
TIME MARCHES ON. More years get added to station 2 and station 1. When you have 20 years of overlap, THEN you go back in time and stitch them together. Station 2 gets included. This changes the “estimate” for that REFERENCE STATION (stations 1 & 2). A reference station is merely the stitching together of stations with gaps. It’s done according to a set rule. Don’t like that rule? Download GISTEMP and change it. See if it matters.
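For the curious, the rule being described here can be sketched in a few lines of Python. This is an illustration only, not the GISTEMP code: the naive averaging and the min_overlap cutoff are assumptions made for clarity, and the real reference station method also offsets the short series so its mean matches the long one over the overlap before combining.

```python
# Illustrative sketch of the overlap rule described above (NOT the actual
# GISTEMP reference station code). The real RSM bias-adjusts the short series
# over the overlap before combining; here we only show the inclusion test.

def years_of_overlap(long_st, short_st):
    """Count the years in which both stations report a value (None = missing)."""
    return sum(1 for a, b in zip(long_st, short_st)
               if a is not None and b is not None)

def build_reference(long_st, short_st, min_overlap=20):
    """Use the long station alone unless the short one has enough overlap."""
    if years_of_overlap(long_st, short_st) < min_overlap:
        return list(long_st)                    # short record is thrown out
    combined = []
    for a, b in zip(long_st, short_st):
        vals = [v for v in (a, b) if v is not None]
        combined.append(sum(vals) / len(vals))  # naive average where both exist
    return combined

station1 = [0.0] * 20                                          # 20 years of data
station2 = [None] * 3 + [1.0] * 2 + [None] * 3 + [1.0] * 12    # gappy short record
print(build_reference(station1, station2))   # overlap is under 20 years: all zeros
```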
IF you don’t like this approach (Jones does not), then you can use CAM. You get the same general answer.
“IF you don’t like this approach (Jones does not), then you can use CAM. You get the same general answer.”
I don’t, and I don’t see Hadley data changing like this over time….
Mr. Mosher;
What you are stating here is that you do not have an accurate means of reliably estimating global mean surface temperature.
If that is the case then all arguments about AGW really ought to end until such time as we can tell whether the temperature is increasing or if our means of estimation is statistically unstable.
There is no CO2 mitigation strategy which will stabilize our temperature estimating methodology.
Your cult has dug its grave, now you need to sit down and be quiet while scientists try to figure out what is really happening to our atmosphere and climate.
” latitude says:
August 30, 2010 at 7:09 am
I stopped reading when I came to this:”
The theme song of the ignorati. You know, a lot of the things that bother Steven and others about GISTEMP bothered me. So, in 2007 we lobbied GISS to release the code. You can go back and see me pester Gavin to “free the code.” You can go to CA and see me help people who were trying to compile it. In the meantime I took the effort to actually read the code. SteveMc took the effort to actually emulate things. John G took the time to go through data by hand. WORK to back up or dash some ARMCHAIR opinion. The result of that real work is a better understanding of the math.

Now, some 3 years later, people are “discovering” what we discussed long ago. But they don’t have enough conviction to doubt their doubt and check for themselves. They are lazy thinkers. They can’t read a blog comment, much less the code. They cannot understand the mathematical differences between RSM, CAM, FDM and a least squares approach, BUT they draw conclusions BASED ON THEIR ignorance. Doubt, if followed rigorously, just leads to doubt. If you want to say you “doubt” the methods, that is consistent. But some people want to say: I don’t understand the method, I doubt the method, I refuse to study the method, THEREFORE it is ‘wrong’. These are the lazy skeptics. Their skepticism is not an intellectual discipline, it is a political expedient. They generally say things like “I got to this sentence and stopped reading.”
And typically they stop because of some combo of the following
1. They actually continued reading and can’t find anything wrong with the argument
2. They didn’t understand or would not understand
3. Nobody in the past has ever called them on their declaration of ‘stopped reading’
4. They learned as a child to stick their fingers in their ears.
5. They were afraid of being educated
6. They never learned to doubt their doubt, the final stage of skepticism.
One could generate some fake data, where all the various trends would be known absolutely…THEN start removing data from the data set and apply these various algorithms. From there one could get a feel for how well the infilling procedures worked, what sort of artifacts they produced, when they broke down and under what conditions. Now that would give considerable insight into what methods one should be using to account for missing data. Seems like a good master’s thesis to me.
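Something like that can be sketched quickly. The snippet below is an illustration of the idea only, not anyone’s published test: the trend value, noise level, dropout rate and helper functions are all assumptions chosen for the example.

```python
# Toy "known truth" test: generate stations with a known trend, randomly delete
# data, rebuild a regional series from whatever remains, and compare the
# recovered trend to the truth. Illustration only.
import random

random.seed(0)
TRUE_TREND = 0.02            # degrees per year, known by construction
YEARS, N_STATIONS = 100, 50

def make_station():
    """One station = known trend + station offset + yearly noise."""
    offset = random.uniform(-1, 1)
    return [TRUE_TREND * yr + offset + random.gauss(0, 0.3) for yr in range(YEARS)]

def drop_data(series, frac=0.3):
    """Randomly remove a fraction of the observations (None = missing)."""
    return [None if random.random() < frac else v for v in series]

def mean_series(stations):
    """Average the stations that report in each year (crude handling of gaps)."""
    out = []
    for yr in range(YEARS):
        vals = [s[yr] for s in stations if s[yr] is not None]
        out.append(sum(vals) / len(vals))
    return out

def trend(series):
    """Ordinary least-squares slope per year."""
    n = len(series)
    xbar, ybar = (n - 1) / 2, sum(series) / n
    num = sum((i - xbar) * (y - ybar) for i, y in enumerate(series))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

stations = [drop_data(make_station()) for _ in range(N_STATIONS)]
print(f"true trend {TRUE_TREND:.3f}, recovered {trend(mean_series(stations)):.3f}")
```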
Steven Mosher says:
August 30, 2010 at 10:15 am
” latitude says:
August 30, 2010 at 7:09 am
I stopped reading when I came to this:”
The theme song of the ignorati
==============================================
What an uppity mess you are.
MattN says:
August 30, 2010 at 9:42 am
“IF you don’t like this approach (Jones does not), then you can use CAM. You get the same general answer.”
I don’t, and I don’t see Hadley data changing like this over time….
#################
CAM DOES NOT CHANGE OVER TIME! Do you read? Do you understand why it doesn’t, and WHY this is a limitation of the approach?
CAM works like this: a station gets INCLUDED if it has 15 years of data in the period 1961-1990. If it meets that criterion, it gets added to the list. Then you calculate the average for that station during that period. Then all the years get baselined as deviations from that average. If a new station comes on line in 1985 and has reports from 1985 to 2500, it will not get added. So CAM only accepts new data IF it has old data in the 1961-1990 period. The past can only change if new data comes in during the base period. SO if GHCN were to update the database and find more 1961-1990 data, then CAM would change. That is a limitation of CAM: the reliance on a base period. In fact Jones discusses this in the mails. Did you read them all? There are only a couple thousand.
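As a rough illustration of that inclusion rule (not CRU’s actual code; the data layout and function name below are assumptions, the thresholds follow the description above):

```python
# Illustrative CAM-style rule: a station is used only if it has at least
# 15 years of data inside the 1961-1990 base period; its values are then
# expressed as anomalies from its own 1961-1990 mean. Not CRU's code.

BASE_START, BASE_END, MIN_BASE_YEARS = 1961, 1990, 15

def cam_anomalies(station):
    """station: dict mapping year -> annual mean temperature (missing years absent)."""
    base_vals = [t for yr, t in station.items() if BASE_START <= yr <= BASE_END]
    if len(base_vals) < MIN_BASE_YEARS:
        return None                        # too little base-period data: station rejected
    baseline = sum(base_vals) / len(base_vals)
    return {yr: t - baseline for yr, t in station.items()}

# A station that only starts reporting in 1985 has just 6 base-period years,
# so it is rejected no matter how long its record eventually becomes.
new_station = {yr: 12.0 for yr in range(1985, 2011)}
print(cam_anomalies(new_station))          # -> None
```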
RSM uses MORE DATA than CAM. As new data comes in it can be used as long as there is a long station that it correlates with. The test for merging stations into long series involves an overlap test and some statistical tests that determine how well the two stations match. So two stations a few miles apart can be merged into one estimate of the local temperature trend. But to do this they have to be well correlated and they have to have at least 20 years of overlap. Don’t like that? Explain why from a statistical perspective. Write some code. SHOW how the method biases the result, and under what conditions. Or you can read up on the method.
You may not “like” RSM, but generally you would be well advised, when dealing with math, to explain why you don’t “like” a method. I’ll give you an example: FDM. The first difference method is one used by NCDC. Also, Willis likes it and EM Smith likes it. Unfortunately, it has deep statistical flaws when applied to smallish numbers of stations. It requires thousands. The reason for this is described by JeffId in an actual statistical test of the method on synthetic data. That’s how we test methods. Up until Jeff’s work I rather “liked” FDM. After his work I understand its limitations. CAM has limitations (new data can’t come in), RSM has limitations (new data can change estimates of the past). The least squares method doesn’t have any of these problems, but it’s hard for the non-statistician to grasp. If we compare all the methods on the data in question, they all give comparable results. As an analyst, that means the method choice is immaterial. Could it be different? Yes. The data could change (more new stations) and you might see effects. But as it stands the methods give the same answer. OF COURSE there are some small differences. There SHOULD BE. The question is NOT who has the hottest year (that’s a statistical estimate and not a FACT).
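For readers who have not seen the first difference method, here is a minimal sketch of the idea. It is an illustration only, not NCDC’s implementation, and it ignores missing data for simplicity: each station contributes its year-to-year changes, those changes are averaged across stations, and the averaged differences are summed back up into one series. With only a few noisy stations the accumulated differences can wander, which is the small-sample weakness mentioned above.

```python
# Illustrative First Difference Method (FDM), not NCDC's implementation.
# Each station contributes year-to-year differences; these are averaged across
# stations and then cumulatively summed to rebuild a single anomaly series.

def first_difference_series(stations):
    """stations: list of equal-length annual series (no missing values, for simplicity)."""
    n_years = len(stations[0])
    rebuilt = [0.0]                         # series starts at zero by convention
    for yr in range(1, n_years):
        diffs = [s[yr] - s[yr - 1] for s in stations]
        rebuilt.append(rebuilt[-1] + sum(diffs) / len(diffs))
    return rebuilt

# Two toy stations warming by 0.1/year around different base temperatures:
stations = [[10 + 0.1 * yr for yr in range(20)],
            [ 5 + 0.1 * yr for yr in range(20)]]
print(first_difference_series(stations))    # recovers the 0.1/year ramp
```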
For the technically inclined, these minor differences are a fun thing to look at. BUT we understand the math and understand what differences matter and what differences are in the noise. Trying to make NEWS about the noise detracts from the credibility of your skepticism. You should make strong arguments, not weak ones. You should focus on the real uncertainties where the error bars are wide (hockey stick and GCMs) rather than the meaningless uncertainties. Leave the meaningless uncertainties to those who understand them for what they are: meaningless uncertainties that are fun to characterize and understand. Pure research. As applied analysis, these small differences make no difference.

THAT SAID, I criticize GISS for making any definitive statement about “warmest” year or “decade” that does NOT have an attendant statement of confidence. Like so: “we can be 95% confident that 2005 is the warmest year in the record.” But that criticism is a criticism of their PR and not of global warming science. The science can be correct AND they can say incorrect/misleading things about it. The existence of bad PR, manipulative PR, does NOT change the scientific facts. It can’t. It’s only PR. So if somebody wants to attack the PR, then they need to be clear: this isn’t about the science, it’s about the twisted presentation. That’s why we wanted the code. To get at the real math and real numbers and not the words in a press release. Should they clean up their PR act? Yes. Does their misguided use of PR justify the same behavior on “our” side? Nope, it just makes us like them.
Jaye Bass says:
August 30, 2010 at 10:41 am
One could generate some fake data, where all the various trends would be known absolutely…THEN start removing data from the data set and apply these various algorithms. From there one could get a feel for how well the infilling procedures worked, what sort of artifacts they produced, when they broke down and under what conditions. Now that would give considerable insight into what methods one should be using to account for missing data. Seems like a good master’s thesis to me.
###################
Or you could read the papers already published, or look at Chad’s site where he did this, or SteveMc’s exploration of the problem, or JeffId’s recent work.
So the fact is we have no way to reliably estimate the global mean temperature anomaly, yet we are still expected to believe that GISS knows how it changes to one hundredth of a degree C. We also cannot rely on previous estimates, as the history changes when new data is included.
I have been involved in science all my life, and this must be the best example of cargo cult science I have seen in forty or so years! GISS temperature anomalies = FAIL.
Steven Mosher says:
August 30, 2010 at 9:25 am
Steven Mosher says:
August 30, 2010 at 9:38 am
Steven Mosher says:
August 30, 2010 at 10:15 am
Steven, I understand the algorithm; I also understand why it is done in this manner. My problem with this approach is that there is obviously a quality control difficulty. When the results of running the code show a global anomaly difference of over 10% after only a little more than a decade (0.64 to 0.57), then there is a problem. Maybe they need to let more time elapse before they merge. Maybe they need tighter constraints on the data they allow into the db. Obviously, the statements and assertions coming from GISS should come with the caveat that it is all subject to change and shouldn’t be viewed as a reflection of reality. But when the result is predictable, one should pause. Especially seeing we are passing laws all around the world affecting the global population based on the certainty of the various assertions, GISS being a great contributor.
Further, your response to latitude is a bit over the top. There seem to be innumerable fields of study related to the climate change discussion, from algorithms to coding to astrophysics to chemistry to ecology, etc. Before you cast aspersions towards others, can you state you’ve educated yourself on all questions of our climate? Get a grip. Personally, I’ve learned an incredible amount of information that is only useful to me while conversing with an alarmist. At some point, when one determines the CAGW/CC issue is a hoax/power grab for totalitarian socialists, there is a point of diminishing returns in forcing oneself to learn irrelevant topics. For instance, without the CAGW/CC discussion I wouldn’t give a rat’s ass about the bands of absorption or the multi-directional emission of CO2. Is it your assertion we all should learn chemistry/physics before we can possibly understand CAGW is nothing to worry about? Describing people as stupid and lazy because they can come to an intelligent conclusion without having to have every bit of minutia explained to them in excruciating detail doesn’t say much for the person offering the description. Believe it or not, people understood about triangles, geometry and algebra before Pythagoras explained his theorem. Now, who is smarter, the ones that understood before Pythagoras or the ones that needed Pythagoras to explain before they could understand?

“CAM DOES NOT CHANGE OVER TIME! Do you read?”
Yes, I do read, and WHY IN THE HELL ARE YOU YELLING AT ME?
“RSM uses MORE DATA than CAM”
Shouldn’t that read ‘more ESTIMATED data’ than CAM? It is incomprehensible to me how, as time goes on, MORE data just suddenly appears. That sounds like magic, or making stuff up, to me. That “more data” really is just a guess, right?
Why is that more accurate? Why is that better? More data is not necessarily ‘better’. Better data is better. How, after 12 years, is the actual data we measured in 1998 now better?
Sean says:
August 29, 2010 at 8:31 am
I wonder if Hansen took a data plotting seminar from the financial folks at GE?
___________________________________________________
Yeah, maybe Hansen has a “greenbelt” from GE. (I absolutely hate that program, BTW.)
So it is man-made warming after all.
There is no warming in nature, and then men come along, and make the past cooler and the present warmer, and there you are. Global warming.
Sure, one could also do those things. Regardless, something of the sort, whether it’s been done or not, would be the logical first step in understanding the behavior of these algorithms. So, if I were really interested in this topic, that is where I would go first. I think this could also be discussed without being so… cynical, snarky, impatient, etc. A little quick on the lecture trigger, aren’t you?
Bob Kutz says:
August 30, 2010 at 10:05 am
“Mr. Mosher;
What you are stating here is that you do not have an accurate means of reliably estimating global mean surface temperature.”
Wrong. You have several methods that give the same answer. For example, with the CAM method, it doesn’t matter whether I use 5000 stations or 1000. The answer is the same. The answer is the same because, for the most part, the changes in trend at any GIVEN location mirror the changes in trend at any other location. All the methods give different estimates (within a small range). It’s like taking a poll at election time. One pollster says Obama will win 52% to 48%, another says 53% to 47%, another says 51% to 49%, another says 54% to 46%. They are ALL WRONG. We know with almost near certainty that the point estimate will be wrong. It will end up at 52.09876543%, which no one will predict. YET they are all right, in that each predicted a win. They will differ marginally in ways that don’t matter. They will poll different people and use different methods. THEY HAVE NO WAY of ‘accurately’ estimating the results, yet they are all correct.
“If that is the case then all arguments about AGW really ought to end until such time as we can tell whether the temperature is increasing or if our means of estimation is statistically unstable.”
1. That is an illogical conclusion; the conclusion does not follow from the antecedent.
The means of estimation is stable. Multiple methods, same data, same results. The changes to the METRIC THAT MATTERS (trend) are in the 1/100ths of a degree C.
2. The principal arguments for AGW are made from first-principles physics. Nothing in that physics predicts a monotonic increase in temps over short periods. Over long periods, it predicts a wide variety of increasing trends, from small increases to large. As such, it is hard to confirm the theory with short term trends and hard to disconfirm the theory. Disconfirming the theory would REQUIRE a whole rewriting of some fundamental physics. Physics that is known to work.
3. There is no evidence whatsoever that the globe has cooled since 1850. All evidence points to warming. That evidence, when compiled, WILL have uncertainty associated with it. It MUST. However, that uncertainty is not so broad as to include a zero trend. It’s getting warmer. AGW predicts this. The evidence is consistent with the prediction. Do the predictions MATCH the observations perfectly? Of course not. THEY CANNOT; they can only match more or less. That is where the real issue lies: NOT in the issue of how much EXACTLY it has warmed, but rather in how well the predictions (with their SPREAD) match the observations (with their spread).
“There is no CO2 mitigation strategy which will stabilize our temperature estimating methodology.”
CO2 mitigation has nothing to do with this argument. I tell you that if the FED prints more money the inflation rate will increase. One economist says 2.4%, another says 2.8%, a third says 2.3%. Month to month their numbers change, BUT NONE of them makes the argument that printing money will cause disinflation. Your argument is this: because we don’t know the EXACT inflation rate, we should therefore print money. You use ignorance of the exact answer to justify your flouting of known economics. Your wife tells you that she overdrew the account by 200 dollars. You check her math and find out that according to your math, it’s 195 dollars. Your CPA says that it is 196.34. And you conclude that you can continue to write checks because the various accountings all give different answers.
Your cult has dug its grave, now you need to sit down and be quiet while scientists try to figure out what is really happening to our atmosphere and climate.
Oops, I missed the cult part.
Sorry, but I don’t belong to the ‘cult’ of global warming. I’m a lukewarmer, which means this:
1. I hold to the KNOWN physics. GHGs cause warming. Just like Spencer, Lindzen, Christy, Watts, Willis, JeffId. We don’t deny the physics that is known to work.
2. The key arguments are about sensitivity. We hold to lower-end estimates. It won’t warm as much as most people think. BUT we are open to being convinced.
3. Harms are hard to quantify, if they exist at all.
4. Solutions are hard to quantify, if they exist at all.
5. There are some things we should do REGARDLESS, like reduce our dependence on fossile fuels.
Mosh, you’re a smart guy, too smart to be resorting to name calling and trying to paint people as ignorant because they don’t agree with you.
Why reduce dependence on fossil fuel? Currently, it is the best energy source when trying to optimize for price and efficiency. Fossil fuels have improved the lives of probably billions of people because they have allowed us to break our dependency on land-intensive “renewable energy” that has limited societies in the past to Malthusian limits. When you can come up with an energy source that is not land-intensive and meets the energy requirements of the globe, then dependency on fossil fuel will take care of itself. It’ll likely be something we haven’t thought of yet.
Jaye Bass says: August 30, 2010 at 11:39 am
I think this could also be discussed without being so…cynical, snarky, impatient, etc. A little quick on the lecture trigger aren’t you?
Mosh gets it from both sides here.
Steven Mosher says:
August 30, 2010 at 11:45 am
“…….one pollster says Obama will win……..One economist says 2.4%, another says……”
I guess it was a little much to expect climate science to be a little more definitive than political polling (wrong often) and economics (wrong more often than political pollsters). Wasn’t it some economists who told us to spend $trillions in the U.S. and we wouldn’t see unemployment over 8%? And didn’t the pollsters have Mrs. Clinton hands down the Democratic nominee? Didn’t they say the Reagan/Carter election was too close to call on election night, but when the dust settled, it was 51%-41%? I really don’t think we can afford to act on that sort of margin of error.
Cult? Steven, what the heck? Are you really going there?
Crap, ok, sorry Steven, I was a bit confused about who said what. My bad.
MattN
I will make it simple. Overly simple.
station 1: 0, 0, 0, 0, 0, 0 (six years)
station 2: NA, NA, NA, 1, 1, 1 (three years)
In RSM you look to estimate the trend for a station. When stations are close to each other you “consider” combining them to get a better estimate of the local trend. 2 data sources as opposed to one. Now in the case above, station 2 has a short record. So we ONLY USE station 1 data and the answer is:
0,0,0,0,0,0
Now time marches forward. We add data as years go by. After 3 more years it looks like this:
station 1: 0, 0, 0, 0, 0, 0, 0, 0, 0 (nine years)
station 2: NA, NA, NA, 1, 1, 1, 1, 1, 1 (six years)
Now we run our algorithm. Suppose the algorithm said ‘6 years of overlap required’. Well, now station #2 GETS INCLUDED. We can use the data that we always had but could never use, because of our algorithm, because of our desire to make good estimates.
And the answer is:
0,0,0,.5,.5,.5,.5,.5,.5.
Wow! The past changed. Well, actually NOT. Our estimate of the past changed because we can NOW consider data that was not included before. So we have a better estimate. RSM (in some cases) can change the past and should change the past, because the “data” changed; that is, you can now use more of the data that you used to throw out. And you threw it out because there was not enough of it. You threw it out because you didn’t want to include high frequency error. Now, with a longer series for that station, you get to include the past data. And it changes the record, usually in a very slight manner.
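A few lines of code make the before/after concrete. This follows the simplified illustration above (plain averaging where both stations report, and a six-year overlap rule); the real RSM also applies a bias adjustment over the overlap, so treat it as a sketch of the idea only.

```python
# Re-running the toy example: combine the stations only if they share at least
# six years of overlap, averaging the values where both report. Illustration only.

def estimate(st1, st2, min_overlap=6):
    overlap = sum(1 for a, b in zip(st1, st2) if a is not None and b is not None)
    if overlap < min_overlap:
        return list(st1)                    # short record rejected: station 1 alone
    return [a if b is None else (a + b) / 2 for a, b in zip(st1, st2)]

# Year 6: station 2 has only 3 usable years, so it is rejected.
print(estimate([0]*6, [None, None, None, 1, 1, 1]))
# -> [0, 0, 0, 0, 0, 0]

# Year 9: station 2 now overlaps for 6 years, so it is included and the
# estimates for the earlier years change.
print(estimate([0]*9, [None, None, None, 1, 1, 1, 1, 1, 1]))
# -> [0, 0, 0, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5]
```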
Mosh.
Fossile/fissile let’s call the whole thing off.