The Temperature of Science
James Hansen
My experience with global temperature data over 30 years provides insight about how the science and its public perception have changed. In the late 1970s I became curious about well-known analyses of global temperature change published by climatologist J. Murray Mitchell: why were his estimates for large-scale temperature change restricted to northern latitudes? As a planetary scientist, I thought there were enough data points in the Southern Hemisphere to allow useful estimates both for that hemisphere and for the global average. So I requested a tape of meteorological station data from Roy Jenne of the National Center for Atmospheric Research, who obtained the data from records of the World Meteorological Organization, and I made my own analysis.
Fast forward to December 2009, when I gave a talk at the Progressive Forum in Houston, Texas. The organizers there felt it necessary that I have a police escort between my hotel and the forum where I spoke. Days earlier, bloggers had reported that I was probably the hacker who broke into East Anglia computers and stole e-mails. Their rationale: I was not implicated in any of the pirated e-mails, so I must have eliminated incriminating messages before releasing the hacked e-mails.
The next day another popular blog concluded that I deserved capital punishment. Web chatter on this topic, including indignation that I was coming to Texas, led to a police escort.
How did we devolve to this state? Any useful lessons? Is there still interesting science in analyses of surface temperature change? Why spend time on it, if other groups are also doing it? First, I describe the current monthly updates of global surface temperature at the Goddard Institute for Space Studies. Then I show graphs illustrating scientific inferences and issues. Finally, I respond to the questions above.
Current Updates
Each month we receive, electronically, data from three sources: weather data for several thousand meteorological stations, satellite observations of sea surface temperature, and Antarctic research station measurements. These three data sets are the input for a program that produces a global map of temperature anomalies relative to the mean for that month during the period of climatology, 1951-1980.
The analysis method has been described fully in a series of refereed papers (Hansen et al., 1981, 1987, 1999, 2001, 2006). Successive papers updated the data and in some cases made minor improvements to the analysis, for example, in adjustments to minimize urban effects. The analysis method works in terms of temperature anomalies, rather than absolute temperature, because anomalies present a smoother geographical field than temperature itself. For example, when New York City has an unusually cold winter, it is likely that Philadelphia is also colder than normal. The distance over which temperature anomalies are highly correlated is of the order of 1000 kilometers at middle and high latitudes, as we illustrated in our 1987 paper.
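To make the anomaly idea concrete, here is a minimal sketch in Python – illustrative only, not the GISS code; the function name and data layout are my assumptions:

```python
import numpy as np

def monthly_anomalies(years, months, temps, base=(1951, 1980)):
    """Convert a station's absolute monthly temperatures (deg C) into
    anomalies relative to that station's own 1951-1980 mean for the
    same calendar month."""
    years = np.asarray(years)
    months = np.asarray(months)
    temps = np.asarray(temps, dtype=float)
    anoms = np.full(temps.shape, np.nan)
    for m in range(1, 13):
        in_month = months == m
        in_base = in_month & (years >= base[0]) & (years <= base[1])
        if in_base.any():
            # e.g. every January minus the station's 1951-1980 January mean
            anoms[in_month] = temps[in_month] - temps[in_base].mean()
    return anoms
```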
Although the three input data streams that we use are publicly available from the organizations that produce them, we began preserving the complete input data sets each month in April 2008. These data sets, which cover the full period of our analysis, 1880-present, are available to parties interested in performing their own analysis or checking our analysis. The computer program that performs our analysis is published on the GISS web site.

Responsibilities for our updates are as follows. Ken Lo runs programs to add in the new data and reruns the analysis with the expanded data. Reto Ruedy maintains the computer program that does the analysis and handles most technical inquiries about the analysis. Makiko Sato updates graphs and posts them on the web. I examine the temperature data monthly and write occasional discussions about global temperature change.
Scientific Inferences and Issues
Temperature data – example of early inferences. Figure 1 shows the current GISS analysis of global annual-mean and 5-year running-mean temperature change (left) and the hemispheric temperature changes (right). These graphs are based on the data now available, including ship and satellite data for ocean regions.
Figure 1 illustrates, with a longer record, a principal conclusion of our first analysis of temperature change (Hansen et al., 1981). That analysis, based on data records through December 1978, concluded that data coverage was sufficient to estimate global temperature change. We also concluded that temperature change was qualitatively different in the two hemispheres. The Southern Hemisphere had more steady warming through the century while the Northern Hemisphere had distinct cooling between 1940 and 1975.
It required more than a year to publish the 1981 paper, which was submitted several times to Science and Nature. At issue were both the global significance of the data and the length of the paper. Later, in our 1987 paper, we proved quantitatively that the station coverage was sufficient for our conclusions – the proof being obtained by sampling (at the station locations) a 100-year data set of a global climate model that had realistic spatial-temporal variability. The different hemispheric records in the mid-twentieth century have never been convincingly explained. The most likely explanation is atmospheric aerosols, fine particles in the air, produced by fossil fuel burning. Aerosol atmospheric lifetime is only several days, so fossil fuel aerosols were confined mainly to the Northern Hemisphere, where most fossil fuels were burned. Aerosols have a cooling effect that still today is estimated to counteract about half of the warming effect of human-made greenhouse gases. For the few decades after World War II, until the oil embargo in the 1970s, fossil fuel use expanded exponentially at more than 4% per year, likely causing the growth of aerosol climate forcing to exceed that of greenhouse gases.
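The sampling test is easy to illustrate in miniature. In the toy calculation below, a random field stands in for the climate model (the real test used actual model output and the real station locations): compute the true area-weighted global mean, then re-estimate it from a sparse, Northern-Hemisphere-heavy subset of cells.

```python
import numpy as np

rng = np.random.default_rng(1)
nlat, nlon, nyears = 36, 72, 100

# Toy anomaly field with year-to-year persistence (a stand-in for model output)
field = 0.05 * rng.standard_normal((nyears, nlat, nlon)).cumsum(axis=0)

# Area weights proportional to cos(latitude)
lats = np.linspace(-87.5, 87.5, nlat)
w = np.cos(np.radians(lats))[:, None] * np.ones((1, nlon))

truth = (field * w).sum(axis=(1, 2)) / w.sum()

# Sparse "stations": denser in the Northern Hemisphere, as in the real network
density = np.where(lats[:, None] > 0, 0.15, 0.05) * np.ones((1, nlon))
stations = rng.random((nlat, nlon)) < density
est = (field * (w * stations)).sum(axis=(1, 2)) / (w * stations).sum()

print(f"correlation(true global mean, station estimate) = "
      f"{np.corrcoef(truth, est)[0, 1]:.2f}")
```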

Flaws in temperature analysis. Figure 2 illustrates an error that developed in the GISS analysis when we introduced, in our 2001 paper, an improvement in the United States temperature record. The change consisted of using the newest USHCN (United States Historical Climatology Network) analysis for those U.S. stations that are part of the USHCN network. This improvement, developed by NOAA researchers, adjusted station records that included station moves or other discontinuities. Unfortunately, I made an error by failing to recognize that the station records we obtained electronically from NOAA each month, for these same stations, did not contain the adjustments. Thus there was a discontinuity in 2000 in the records of those stations, as the prior years contained the adjustment while later years did not. The error was readily corrected, once it was recognized. Figure 2 shows the global and U.S. temperatures with and without the error. The error averaged 0.15°C over the contiguous 48 states, but these states cover only 1½ percent of the globe, making the global error negligible.
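A back-of-envelope check of that last claim, using only the numbers quoted in the text:

```python
us_offset = 0.15       # degrees C: average error over the contiguous 48 states
area_fraction = 0.015  # the 48 states cover about 1.5 percent of the globe
print(f"implied global error ~ {us_offset * area_fraction:.4f} C")  # about 0.002 C
```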
However, the story was embellished and distributed to news outlets throughout the country. The resulting headline: NASA had cooked the temperature books – and once the error was corrected, 1998 was no longer the warmest year in the record, instead being supplanted by 1934.
This was nonsense, of course. The small error in global temperature had no effect on the ranking of different years. The warmest year in our global temperature analysis was still 2005.
Conceivably the confusion between global and U.S. temperatures in these stories was inadvertent. But the estimate for the warmest year in the U.S. had not changed either: 1934 and 1998 were tied as the warmest year (Figure 2b), with any difference (~0.01°C) at least an order of magnitude smaller than the uncertainty in comparing temperatures in the 1930s with those in the 1990s.
The obvious misinformation in these stories, and the absence of any effort to correct the stories after we pointed out the misinformation, suggest that the aim may have been to create distrust or confusion in the minds of the public, rather than to transmit accurate information. That, of course, is a matter of opinion. I expressed my opinion in two e-mails that are on my Columbia University web site:
http://www.columbia.edu/~jeh1/mailings/2007/20070810_LightUpstairs.pdf
http://www.columbia.edu/~jeh1/mailings/2007/20070816_realdeal.pdf.
We thought we had learned the necessary lessons from this experience. We put our analysis program on the web, so everybody was free to check the program if they were concerned that any data “cooking” might be occurring.
Unfortunately, another data problem occurred in 2008. In one of the three incoming data streams, the one for meteorological stations, the November 2008 data for many Russian stations was a repeat of the October 2008 data. It was not our data record, but we properly had to accept the blame for the error, because the data was included in our analysis. Occasional flaws in input data are normal in any analysis, and the flaws are eventually noticed and corrected if they are substantial. Indeed, we have an effective working relationship with NOAA – when we spot data that appears questionable, we inform the appropriate people at the National Climatic Data Center – a relationship that has been scientifically productive.
This specific data flaw was a case in point. The quality control program that NOAA runs on the data from global meteorological stations includes a check for repetition: if two consecutive months have identical data, the data is compared with that at the nearest stations, and if the repetition appears likely to be an error, the data is withheld until the original source has verified it. The problem in 2008 escaped this check because a change in NOAA's program had temporarily, and inadvertently, omitted it.
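A toy version of that repetition check – not NOAA's code, with invented station data – looks like this:

```python
# Flag any station whose report for month m exactly duplicates month m-1,
# so it can be held back pending verification by the original data source.
def flag_repeated_months(monthly_reports):
    """monthly_reports: dict of station_id -> list of monthly value tuples."""
    flagged = []
    for station, months in monthly_reports.items():
        for i in range(1, len(months)):
            if months[i] == months[i - 1]:
                flagged.append((station, i))
    return flagged

reports = {"RU001": [(5.1, -2.0), (3.3, -4.5), (3.3, -4.5)]}  # Nov repeats Oct
print(flag_repeated_months(reports))  # [('RU001', 2)]
```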
The lesson learned here was that even a transient data error, however quickly corrected, provides fodder for people who are interested in a public relations campaign rather than science.
That means we cannot put the new data each month on our web site and check it at our leisure, because, however briefly a flaw is displayed, it will be used to disinform the public. Indeed, in this specific case there was another round of “fraud” accusations on talk shows and other media all around the nation.
Another lesson learned. Subsequently, to minimize the chance of a bad data point slipping through in one of the data streams and temporarily affecting a publicly available data product, we now put the analyzed data up first on a site that is not visible to the public. This allows Reto, Makiko, Ken and me to examine maps and graphs of the data before the analysis is put on our web site – if anything seems questionable, we report it back to the data providers for them to resolve. Such checking is always done before publishing a paper, but now it seems to be necessary even for routine transitory data updates. This process can delay availability of our data analysis to users for up to several days, but that is a price that must be paid to minimize disinformation.
Is it possible to totally eliminate data flaws and disinformation? Of course not. The fact that the absence of incriminating statements in pirated e-mails is taken as evidence of wrongdoing provides a measure of what would be required to quell all criticism. I believe that the steps that we now take to assure data integrity are as much as is reasonable from the standpoint of the use of our time and resources.

Strong correlation of global SST with the Nino index is obvious. Global land-ocean temperature is noisier than the SST, but correlation with the Nino index is also apparent for global temperature. On average, global temperature lags the Nino index by about 3 months.
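The lag can be estimated by maximizing the lagged correlation. The sketch below uses synthetic series with a built-in 3-month lag purely to illustrate the computation; a real calculation would use the monthly Nino index and the global anomalies:

```python
import numpy as np

def lag_correlation(index, temp, lag):
    """Correlation of temp lagged `lag` months behind the index."""
    n = len(index) - lag
    return np.corrcoef(index[:n], temp[lag:lag + n])[0, 1]

rng = np.random.default_rng(2)
raw = rng.standard_normal(240)
nino = np.convolve(raw, np.ones(9) / 9, mode="same")   # smoothed, ENSO-like index
temp = 0.5 * np.roll(nino, 3) + 0.05 * rng.standard_normal(240)  # 3-month lag built in

best = max(range(13), key=lambda k: lag_correlation(nino, temp, k))
print(f"best-fit lag = {best} months")  # expect ~3 for this synthetic example
```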
During 2008 and 2009 I received many messages, sometimes several per day, informing me that the Earth is headed into its next ice age. Some messages include graphs extrapolating cooling trends into the future. Some messages use foul language and demand my resignation. Of the messages that include any science, almost invariably the claim is made that the sun controls Earth’s climate, the sun is entering a long period of diminishing energy output, and the sun is the cause of the cooling trend.
Indeed, it is likely that the sun is an important factor in climate variability. Figure 4 shows data on solar irradiance for the period of satellite measurements. We are presently in the deepest, most prolonged solar minimum in the period of satellite data. It is uncertain whether the solar irradiance will rebound soon into a more-or-less normal solar cycle – or whether it might remain at a low level for decades, analogous to the Maunder Minimum, a period of few sunspots that may have been a principal cause of the Little Ice Age.
The direct climate forcing due to measured solar variability, about 0.2 W/m2, is comparable to the increase in carbon dioxide forcing that occurs in about seven years, using recent CO2 growth rates. Although there is a possibility that the solar forcing could be amplified by indirect effects, such as changes of atmospheric ozone, present understanding suggests only a small amplification, as discussed elsewhere (Hansen 2009). The global temperature record (Figure 1) has positive correlation with solar irradiance, with the amplitude of temperature variation being approximately consistent with the direct solar forcing. This topic will become clearer as the records become longer, but for that purpose it is important that the temperature record be as precise as possible.
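The comparison can be checked with the standard simplified CO2 forcing expression, ΔF = 5.35 ln(C/C0) W/m2 (Myhre et al., 1998); the starting concentration and growth rate below are rough assumed values for recent years:

```python
import math

c0 = 385.0     # assumed CO2 concentration (ppm), roughly late-2000s
growth = 2.0   # assumed recent growth rate (ppm per year)
years = 7
dF = 5.35 * math.log((c0 + growth * years) / c0)
print(f"CO2 forcing added over {years} years ~ {dF:.2f} W/m2")  # ~0.19 W/m2
```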

Frequently heard fallacies are that “global warming stopped in 1998” or “the world has been getting cooler over the past decade”. These statements appear to be wishful thinking – it would be nice if true, but that is not what the data show. True, the 1998 global temperature jumped far above the previous warmest year in the instrumental record, largely because 1998 was affected by the strongest El Nino of the century. Thus for the following several years the global temperature was lower than in 1998, as expected.
However, the 5-year and 11-year running mean global temperatures (Figure 3b) have continued to increase at nearly the same rate as in the past three decades. There is a slight downward tick at the end of the record, but even that may disappear if 2010 is a warm year.
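The smoothed curves are ordinary centered running means; a generic sketch (not the GISS plotting code):

```python
import numpy as np

def running_mean(annual_values, window):
    """Centered running mean over `window` years (e.g. 5 or 11). The output
    is shorter than the input by window - 1 points, which is why a smoothed
    curve cannot be carried all the way to the ends of the record."""
    kernel = np.ones(window) / window
    return np.convolve(np.asarray(annual_values, float), kernel, mode="valid")
```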
Indeed, given the continued growth of greenhouse gases and the underlying global warming trend (Figure 3b) there is a high likelihood, I would say greater than 50 percent, that 2010 will be the warmest year in the period of instrumental data. This prediction depends in part upon the continuation of the present moderate El Nino for at least several months, but that is likely.
Furthermore, the assertion that 1998 was the warmest year is based on the East Anglia – British Met Office temperature analysis. As shown in Figure 1, the GISS analysis has 2005 as the warmest year. As discussed by Hansen et al. (2006), the main difference between these analyses is probably due to the fact that the British analysis excludes large areas in the Arctic and Antarctic where observations are sparse. The GISS analysis, which extrapolates temperature anomalies as far as 1200 km, has more complete coverage of the polar areas. The extrapolation introduces uncertainty, but there is independent information, including satellite infrared measurements and reduced Arctic sea ice cover, that supports the existence of substantial positive temperature anomalies in those regions.
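A simplified sketch of the weighting idea behind such extrapolation – station influence tapering linearly to zero at 1200 km – is shown below; the real method (Hansen and Lebedeff, 1987) works box by box with anomaly time series, so this conveys only the flavor of it:

```python
import numpy as np

def gridpoint_anomaly(dists_km, station_anoms, radius_km=1200.0):
    """Estimate a grid point's anomaly from nearby station anomalies."""
    d = np.asarray(dists_km, float)
    a = np.asarray(station_anoms, float)
    w = np.clip(1.0 - d / radius_km, 0.0, None)  # zero weight beyond the radius
    if w.sum() == 0.0:
        return np.nan  # no station within range: the cell stays unestimated
    return float((w * a).sum() / w.sum())

# Example: the station at 1500 km contributes nothing to the estimate.
print(gridpoint_anomaly([100.0, 800.0, 1500.0], [1.2, 0.8, -2.0]))  # ~1.09
```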
In any case, issues such as these differences between our analyses provide a reason for having more than one global analysis. When the complete data sets are compared for the different analyses it should be possible to isolate the exact locations of differences and likely gain further insights.
Summary
The nature of messages that I receive from the public, and the fact that NASA Headquarters received more than 2500 inquiries in the past week about our possible “manipulation” of global temperature data, suggest that the concerns are more political than scientific. Perhaps the messages are intended as intimidation, expected to have a chilling effect on researchers in climate change.
The recent “success” of climate contrarians in using the pirated East Anglia e-mails to cast doubt on the reality of global warming* seems to have energized other deniers. I am now inundated with broad FOIA (Freedom of Information Act) requests for my correspondence, with substantial impact on my time and on others in my office. I believe these to be fishing expeditions, aimed at finding some statement(s), likely to be taken out of context, which they would attempt to use to discredit climate science.
There are lessons from our experience about care that must be taken with data before it is made publicly available. But there is too much interesting science to be done to allow intimidation tactics to reduce our scientific drive and output. We can take a lesson from my 5-year-old grandson who boldly says “I don’t quit, because I have never-give-up fighting spirit!”
http://www.columbia.edu/~jeh1/mailings/2009/20091130_FightingSpirit.pdf
There are other researchers who work more extensively on global temperature analyses than we do – our main work concerns global satellite observations and global modeling – but there are differences in perspectives, which, I suggest, make it useful to have more than one analysis. Besides, it is useful to combine experience with observed temperatures with our work on satellite data and climate models. This combination of interests is likely to provide some insight into what is happening with global climate, and into the data that are needed to understand what is happening. So we will be keeping at it.
*By “success” I refer to their successful character assassination and swift-boating. My interpretation of the e-mails is that some scientists probably became exasperated and frustrated by contrarians – which may have contributed to some questionable judgment. The way science works, we must make readily available the input data that we use, so that others can verify our analyses. Also, in my opinion, it is a mistake to be too concerned about contrarian publications – some bad papers will slip through the peer-review process, but overall assessments by the National Academies, the IPCC, and scientific organizations sort the wheat from the chaff.
The important point is that nothing was found in the East Anglia e-mails altering the reality and magnitude of global warming in the instrumental record. The input data for global temperature analyses are widely available, on our web site and elsewhere. If those input data could be made to yield a significantly different global temperature change, contrarians would certainly have done that – but they have not.
References
Fröhlich, C., 2006: Solar irradiance variability since 1978. Space Science Rev., 248, 672-673.
Hansen, J., D. Johnson, A. Lacis, S. Lebedeff, P. Lee, D. Rind, and G. Russell, 1981: Climate impact of increasing atmospheric carbon dioxide. Science, 213, 957-966.
Hansen, J.E., and S. Lebedeff, 1987: Global trends of measured surface air temperature. J. Geophys. Res., 92, 13345-13372.
Hansen, J., R. Ruedy, J. Glascoe, and Mki. Sato, 1999: GISS analysis of surface temperature change. J. Geophys. Res., 104, 30997-31022.
Hansen, J.E., R. Ruedy, Mki. Sato, M. Imhoff, W. Lawrence, D. Easterling, T. Peterson, and T. Karl, 2001: A closer look at United States and global surface temperature change. J. Geophys. Res., 106, 23947-23963.
Hansen, J., Mki. Sato, R. Ruedy, K. Lo, D.W. Lea, and M. Medina-Elizade, 2006: Global temperature change. Proc. Natl. Acad. Sci., 103, 14288-14293.
Hansen, J., 2009: Storms of My Grandchildren. Bloomsbury USA, New York, 304 pp.
nanuuq (23:38:36) :
I have been reading many of the web sites, including this one, discussing the current climate warming issue. In many ways I am disgusted with both of the extreme viewpoints. On this site we see an almost lynch-mob, drown-the-witch mentality: not looking for the scientific reality, but *GLOATING* in finding some minor fault in somebody else’s analysis.
Please provide a list of the other sites.
In many cases, ad hominem attacks are made, and comments carried on without any justification in science or analysis. I want to see both sides settle down and show SCIENTIFICALLY that their side has merit.
How many cases do you count? I see plenty of analysis on Anthony’s blog. Maybe you should spend more time here.
To my mind (I am a trained chemist and computer scientist) I see the mentality of a bunch of schoolyard children running around and trying to prove they are the best.
Alinsky rule 5: Ridicule is man’s most potent weapon. OK, now we have the insult.
Here we find a good analysis from the side of those worried about global warming, and the effects it has on the future of our world. I expect the usual crap from the usual commentators here who in no way try to really analyze the data or results.
You are losing me, but I enjoy a good laugh. Continue…
It saddens me to see the politicization of science, but this is not new; just ask Galileo.
Alinsky rule 4: Make the enemy live up to his/her own book of rules. Punishment of Galileo for his battle against the ESTABLISHMENT. Love the irony.
EVEN if humans are not responsible for global warming, how are we to minimize the effects of the REAL TRUE WARMING we are currently seeing? Even if CO2 is not the main culprit, could reducing emissions reduce the overall warming going on?
ZING! Alinsky rule 12: The price of a successful attack is a constructive alternative. Keep shooting for an intermediate goal if your primary fails. Keep pushing that Progressive (Socialist) agenda.
I need a spellchecker!
Question:
Would James Hansen and his brethren be like those who believed that Earth was the center of the universe? Nothing from outside affects the Earth? And that even the Sun only adds light to the Earth’s energy balance?
And that the “deniers” are the ones pushing the belief that the Sun is the center of our system while only comprising a small part of a larger galaxy of systems that are all interconnected and affecting each other?
Who’s locking whom in the tower for trying to express their scientific findings?
Side question for nanuuq: If the warming were indeed found to be natural, you still believe that we need to do something to stop it?!?!? HOW!?!? How do humans stop natural warming when, based on this statement, humans can’t affect global temperature change?
There’s real ‘true’ warming happening?
Huh, I see the graphs; they show cooling.
There’s no politics involved in James Hansen’s climate view?
……scratches head…..
have a look at this video
If Dr. Hansen is reading this blog, I recommend that he spend some time meditating on the plots provided in
http://wattsupwiththat.files.wordpress.com/2009/12/noaa_gisp2_icecore_anim_hi-def3.gif
I do not believe that any scientifically trained person, seeing this sweep of chronology and temperature data in two specific locations (not global, not massaged after the data are gathered), can still deduce that the tiny temperature increase, represented by red in the plots, has much to do with the inexorable march of climate on this earth. Let alone blow the horn of the apocalypse.
Such a fine fantasy world he describes… I don’t think I have enough month left to even begin poking holes in it. For now, just this:
“the proof being obtained by sampling (at the station locations) a 100-year data set of a global climate model that had realistic spatial-temporal variability.”
I’m sorry, but a “model” can never be proof about reality. Causality runs the other way…
Oh and he also conveniently left out the omission of USHCN from May 2007 to a month or so ago AND the fact that the USHCN.v2 they put in a couple of months ago not only filled in the missing 2007-to-date, but rewrote all the past with a more extreme “adjustment” history… But little details like the actual data don’t really matter to the results produced…
@E.M.Smith (21:56:35) :
Yep, he runs a real nice sausage factory. Just yesterday I looked at GIStemp’s version of Christchurch, NZ, and somehow 25 years of data disappeared between the time it went into the black box and when the data came out. You know something is screwy when, for all combined stations, the record runs from 1905 to the present, but after the final “Homogenization” you only get data from 1931 on.
scienceofdoom (01:22:46) : We’ve seen recently from the posts from Willis about North Australian temperatures that the adjustments to the raw data have a significant effect on the temperature trend of the last 100 years – at least in North Australia.
Not just the adjustments to the data, but location bias by leaving cold stations in the baseline time period and deleting them from the recent past.
GISS now publish their data and their source code – hats off to them. Does this show how the *raw* data is processed and turned into the gridded temperatures or does it only show how “already adjusted” temperatures from each station are turned into the final gridded temperatures?
There is no “raw” data in the historical data sets used by GIStemp. It all comes with various levels of pre-processing. They generally use a ‘slightly processed’ form called “GHCN unadjusted”, but even that has been processed by the Australian BOM prior to release, and with some processing at NCDC prior to inclusion into GHCN. But yes, the code states what is done to the data. And it shows everything from “temperature readings” through to anomaly grids.
Thanks in advance for anyone who can explain this.
I can describe it. I think it is not possible to explain it…
Some Australian oriented postings:
http://chiefio.wordpress.com/2009/11/13/ghcn-pacific-islands-sinking-from-the-top-down/
Where we see the Australian altitudes wash away.
DAltPct (for COUNTRY CODE: 501):

Year  -MSL    20    50   100   200   300   400   500  1000  2000  Space
1889  19.2  13.2  28.2  11.4  11.9   5.7   0.0  10.4   0.0   0.0   0.0
1899  20.0  13.1  24.7  12.9  10.2   8.8   1.4   8.8   0.0   0.0   0.0
1909  21.2   9.7  16.5  17.7   9.7  12.0   4.3   8.2   0.7   0.0   0.0
1919  18.3   8.0  11.0  20.4  13.1  11.8   4.4  10.8   2.2   0.0   0.0
1929  17.3   8.1  10.7  22.0  13.2  11.8   4.2  10.3   2.3   0.0   0.0
1939  16.3   7.8  10.9  23.2  13.8  12.0   4.1   9.8   2.2   0.0   0.0
1949  18.2   8.6  11.0  22.2  13.6  11.1   3.8   9.7   1.7   0.0   0.0
1959  21.2   9.6  10.7  20.0  12.0  11.3   4.5   9.6   1.1   0.0   0.0
1969  24.1  11.7  11.3  17.3  12.0   9.2   4.0   9.1   1.3   0.0   0.0
1979  22.9  11.2  11.0  17.6  12.6  10.1   4.3   8.9   1.3   0.0   0.0
1989  24.4  10.9  11.3  16.4  12.6  10.2   4.3   8.9   1.1   0.0   0.0
1999  26.3  11.5  11.5  15.4  11.8   9.7   4.1   8.9   0.7   0.0   0.0
2009  35.4  14.5  12.8  14.7   5.1   7.6   2.3   7.6   0.0   0.0   0.0

from:
http://chiefio.wordpress.com/2009/10/29/ghcn-pacific-basin-lies-statistics-and-australia/
which shows that the ‘global warming’ in the Pacific region is non-existent if you leave out the location bias in the Australian and Kiwi thermometers.
Another UPDATE: I’ve added a table of “Pacific without Australia and without New Zealand”. It’s dead flat. The Pacific Ocean and attendant islands are NOT participating in “Global” warming. Changes of thermometers in Australia and New Zealand are the source of any “change”.
and in:
http://chiefio.wordpress.com/2009/10/23/gistemp-aussy-fair-go-and-far-gone/
We watch the Australian thermometers march north toward the equator while the northern hemisphere thermometers march south:
Yes, this is GHCN, not GIStemp, but Hansen and GIStemp accept this built-in bias as being just fine. Then they layer on some broken UHI adjustment that uses airports as rural (but those things are not Australia-specific).
Hasn’t Hansen mentioned ‘death trains’?
Isn’t it true that Hansen doesn’t use satellite temperatures?
Isn’t it true that Hansen doesn’t use the Argo SST buoys, relying instead on many different measurements with different modi operandi?
Doesn’t Hansen correct for UHI by reducing past temperatures instead of current temperatures?
If urban sites are so poor that they need human intervention to correct them, why wouldn’t Hansen use rural sites and ignore urban sites completely?
Why did Hansen eliminate so many sites from northern Canada and Russia, when they still exist, are certainly rural, and report monthly? I thought the collapse of the USSR caused the drop-out, but it didn’t; it’s Hansen cherry-picking the northern Canada/Russia sites he wants to use – I guess adjusting several thousand sites was too hard, so he reduced it to a select few.
Steve Goddard (17:44:13) :
“The numbers Hansen generates from these extrapolations are not credible.”
Rather than eyeballing the difference between GISTEMP and RSS, why not actually compare the global averages derived from these time series? The numbers are readily available and should quantify any discrepancy between the satellite and derived surface temperatures.
In fact the correlation coefficient between monthly average temperatures from 1979 to date for the two datasets is 0.82, a strong agreement by anyone’s standards. As for the trend, GISTEMP gives +0.161 C/decade while RSS gives +0.153 C/decade.
These two quite independently derived datasets, one from satellites and the other from surface stations, are therefore strongly correlated and give very good agreement for the global warming trend.
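For anyone who wants to reproduce this kind of comparison, here is a minimal Python sketch; it uses synthetic stand-ins for the two series to illustrate the computation of r and the decadal trends, so the printed values will differ from the real GISTEMP/RSS numbers:

```python
import numpy as np

def pearson_r(a, b):
    """Pearson correlation coefficient between two aligned monthly series."""
    return np.corrcoef(np.asarray(a, float), np.asarray(b, float))[0, 1]

def trend_per_decade(t_years, anoms):
    """Least-squares linear trend, converted to degrees C per decade."""
    return 10.0 * np.polyfit(t_years, anoms, 1)[0]

# Synthetic stand-ins: a shared ~0.16 C/decade signal plus independent noise.
rng = np.random.default_rng(0)
t = 1979 + np.arange(360) / 12.0            # 30 years of monthly time steps
signal = 0.016 * (t - 1979)
series_a = signal + 0.07 * rng.standard_normal(t.size)   # "surface" stand-in
series_b = signal + 0.07 * rng.standard_normal(t.size)   # "satellite" stand-in

print(f"r = {pearson_r(series_a, series_b):.2f}")
print(f"trend A = {trend_per_decade(t, series_a):+.3f} C/decade")
print(f"trend B = {trend_per_decade(t, series_b):+.3f} C/decade")
```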
An interesting observation was made some months ago concerning responses in Climate Audit and WUWT threads discussing “Team” authored or derived material:
The handle “TomP” and the handle “Tom P” (note the space in the latter) were almost never active authors on the same thread at the same time.
Both went silent when the CRU letters went into circulation:
http://www.eastangliaemails.com/index.php
They both wrote in a similar style, used the same talking points (they especially want YOU to prove an element of a Team report “wrong”), and material by the “Team” is almost always defended.
Credible reports by authors other than “Team” members running contrary to Team material were almost always picked apart in a similar fashion by either TomP or Tom P. In these responses, they always relied heavily on “peer-reviewed” Team papers. Again, both almost never wrote at the same time.
Are we again seeing tag-Team identities? Is there a common association that fosters coordination between or among particular identities?
We know it’s not a cabal. Madonna is a member of a cabal. Cabals are into numerology of a different sort.
Best holiday wishes…
Count me in on what Willis Eschenbach had to say to nanuuq. Hansen has sown the wind, now he reaps the whirlwind. His rambling article once again proves what a charlatan he is. He is the one who has distorted and politicized the science and prevented any reasonable, scientific discussions of the issue. I have said it before, and I say it again – as a Federal employee, Hansen has so many times violated policies regarding political advocacy or activity in his job that he should be discharged from his position. He has been an irresponsible government employee, regardless of the various merits of his views.
R.S.Brown (01:18:22)
Obviously fake initials – your first name is Dan!
Looks like you’re already well into the research for the next novel.
Re: savethesharks (19:56:47)
Thanks for the backup, Chris. I hope you didn’t scare the assailant away – I was looking forward to the fight.
Man, I’ve got some dope new research results. I had shelved the work when the money ran out, but somethin’ hit me yesterday.
You might remember the contrasts of interannual & annual aa index I was working on. UPDATE: stunning 5-way near-synchrony involving QBO at the AMO bottom-out, breaks right at the ’76 change, right where the SSD/EOP/LNC aberration commences. The contrast tracks regional precipitation variables and then switches to following minimum temperatures around the ’30 change – looks like a piece of the Chandler-reversal grail.
The switch-timing fits pattern-breaks here:
http://www.sfu.ca/~plv/PhasePr..-r..MorletPi3a12a.PNG
http://www.sfu.ca/~plv/PolarMotionPeriodMorlet2piPower.PNG
http://www.sfu.ca/~plv/ChandlerPeriodAgassizBC,CanadaPrecipitationTimePlot.PNG
http://www.sfu.ca/~plv/sqrtaayoy.sq22.png
This thing is starting to make sense:
http://www.sfu.ca/~plv/-LOD_aa_Pr._r.._LNC.png
(…including the 90s hook)
phenomenally complex – but undeniably nonrandom – with absolute certainty — it’s like a bunch of switches flipping & tripping interdependencies off & on….
This is the nature I know – not the soundbite-logic ‘climate’ being pushed on the msm market.
Tom P:
“In fact the correlation coefficient between monthly average temperatures from 1979 to date for the two datasets is 0.82, a strong agreement by anyone’s standards. As for the trend, GISTEMP gives +0.161 C/decade while RSS gives +0.153 C/decade.”
Do you have a reference for the correlation coefficient or did you calculate this yourself?
save the sharks and paul vaughan,
It’s the holier-than-thou BS to which I was reacting.
Say what you want about the science, the policy, the scientists, the weather, or the Chicago Cubs. But spare me the “I’m sooo squeaky clean and righteous (and real smart)” nonsense.
BTW, how do I discriminate riff-raff from the more ordinary cesspool creatures?
I’d say Paul launched a pre-emptive ad hominem on any who don’t subscribe to his agenda. This is acceptable why?
Perhaps I misunderstood. Feel free to clarify.
Frank K. (05:30:41)
I downloaded the data into a spreadsheet and calculated it.
Here’s the data file: http://www.woodfortrees.org/data/gistemp/from:1979/plot/rss
Excel will do the rest.
Here are the plots with the mean anomaly for RSS shifted to match in 1995:
http://img51.imageshack.us/img51/4870/gissvsrss.png
Looks like Hansen has produced a valid dataset in GISTEMP.
By the way the GISTEMP code has been externally examined and rewritten under the Clear Climate Code Project. You can find their validation here: http://clearclimatecode.org/
Hansen “Indeed, given the continued growth of greenhouse gases and the underlying global warming trend (Figure 3b) there is a high likelihood, I would say greater than 50 percent, that 2010 will be the warmest year in the period of instrumental data.”
That’s a funny statement for someone so sure of what is going on: 50.0000000000001% is greater than 50%, but for all practical purposes it is no different from the odds of flipping a coin.
Hansen, the man who is basically telling us that he can “walk on water”, lacks the ability to predict the temperature trend next year, and yet he expects the world’s population to spend trillions of dollars and change their basic living ways based on his longer-term predictions. LMAO, never heard anything so funny.
I sure hope he reads all the comments here and responds to some of them!
What a dork!
Tom P,
“0.53 is firmly within the accepted range of a correlation coefficient for moderate agreement, between 0.3 and 0.7. Looks like you have your own personal “Yikes” metric here.”
How reasonable is 0.53 then? I decided to run a test in a spreadsheet. These values represent degrees C multiplied by 10:
Set A Set B
170 170
175 172
180 174
185 176
182 178
180 174
172 173
175 178
178 176
182 175
176 177
175 178
172 179
170 177
165 172
163 170
168 169
170 171
Correlation = 0.542780607, which you describe as “reasonable”, but a quick eyeballing shows that set B has a much smaller range than set A and even diverges at one point.
It doesn’t take a genius to see the effect this would have on temperatures that are looking for precision to within tenths of a degree.
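Vincent’s spreadsheet figure is easy to reproduce in a few lines of Python, using the values listed above:

```python
import numpy as np

# The two toy series (degrees C x 10), copied from the comment above.
set_a = [170, 175, 180, 185, 182, 180, 172, 175, 178,
         182, 176, 175, 172, 170, 165, 163, 168, 170]
set_b = [170, 172, 174, 176, 178, 174, 173, 178, 176,
         175, 177, 178, 179, 177, 172, 170, 169, 171]

r = np.corrcoef(set_a, set_b)[0, 1]
print(f"Correlation = {r:.9f}")  # should match the 0.542780607 from the spreadsheet
```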
Tom P;
“Looks like Hansen has produced a valid dataset in GISTEMP.”
Depends on your definition of “valid”…
“By the way the GISTEMP code has been externally examined and rewritten under the Clear Climate Code Project. You can find their validation here: http://clearclimatecode.org/”
So why doesn’t GISS use it?? Oh that’s right…
Vincent (10:01:36) :
“It doesn’t take a genius to see the effect this would have on temperatures that are looking for precision to within tenths of a degree.”
Indeed. The effect would be the agreement between GISTEMP and RSS seen above.
Frank K. (10:28:19) :
“Depends on your definition of “valid”…”
I might suggest a high level of agreement with an independently derived dataset indicates a valid derivation. Do you disagree?
“So why doesn’t GISS use it?? Oh that’s right…”
The goal of the Clear Climate Code project is not to produce a replacement for GISTEMP, but to perform an open-source validation that gives identical results and can be used by the widest community. What did you have in mind?
“”” Tom P (16:30:21) :
George E. Smith (15:36:04) :
“…when I look at a daily SF Bay area weather map, with maxes and mins for dozens of places, some just 2-3 km apart, they clearly don’t show much strong correlation.”
Correlation doesn’t mean the temperatures are the same, just that they vary in synchronicity. GISS temperatures between San Jose and San Francisco have a correlation of 0.76.
Have you actually done any numerical analysis on this? “””
Well, since I don’t have easy access to the raw data sources, the answer to your question is no.
So SF and SJ temperatures vary in “synchronicity”; does that mean that any place in between SF and SJ can be obtained by simple interpolation from those two? The daily published numbers would have a hard time convincing me of that. I would expect daily max temps to “interpolate”, and daily mins to interpolate, since this “synchronicity” would seem to imply a causal relationship between the two.
There’s a whole branch of mathematics that deals with the construction of unrelated data sets that nevertheless show greater than 50% correlation, as if there were some cause-and-effect connection.
If you told me that the temperature in San Francisco was always identical to the temperature in San Jose (it may be, for all I know), that still doesn’t prove that everywhere in between would track as well.
Any mountain has points on both sides – in fact, for 360 degrees around the mountain – that are at exactly the same altitude. That tells us nothing about points in between; only proper sampling can reveal that.
The rubber skin on a sphere experiment demonstrates that correlation between two points doesn’t ensure correlation for any other points.
If the question of whether climate is headed for a catastrophic precipice, and whether man is responsible for causing that, rests on some statistical mathematics wizardry rather than some sound physics and chemistry, or even some biological science, then I would say that Mother Nature’s experiment in survival through intelligence has been a gigantic flop, and it is time for her to try something else.
Wow, you can get the same result when using the same data and the same code. That’s like using the same numbers in an algorithm as the next guy and getting the same results. Gosh, why didn’t I think of using that as proof before?
Of course you’ll end up with the same type of result if you use the same data with the same code and algorithms.
What happens when you use the actual raw data, not the pre-chosen and pre-chewed data, with some other, equally well-adjusted code and/or algorithms for choosing and chewing? But of course nothing else seems to be allowed to exist.
And how about the calibration? You end up with different results when adjusting the time period and cycle length. And the cycle periods, and even the time periods or series – five years, 11 years, 30 years, or a hundred years – don’t even seem to be adjusted to natural cycles (that are known to exist historically), but still everything is just so, so very well calibrated, sir… just don’t mind the shifting going on. Nothing can be said to be calibrated if the slope goes up or down depending on which year you start with, which cycle length you use, or which whole time series you use.
But please explain to the layperson why something is deemed to be well calibrated, and hoopla hop and all that, when stuff goes up or down depending on whether you start with ’99 or ’98, or use 17 chosen stations instead of, oh I don’t know, how ’bout all that were functioning the same or better for the time period?
pleease (05:32:48): “Perhaps I misunderstood.”
Indeed – & naively.
See the handbook on stereotype demolition. Gordon Brown is shamelessly blowing fake market bubbles, inching us towards $100 light-bulbs, required by law – and yet political partisans like you maintain that it is always counterstrategic to draw crossfire in no man’s land.
My advice to your handlers: Don’t hire strategists who flunked Paradox 101.
On another topic, here’s a draft phase-contrast that points to serious data-homogenization issues, conditional decadal Arctic/North-Pacific coupling, conditional interannual geomagnetic/climate-index coupling, & conditional multi-decadal orographic diurnal precipitation dynamics:
http://www.sfu.ca/~plv/TMin_X_TMax_draft.PNG
The flashing neon light at 1930 marks the flipping of a switch (change in coupling status) that KILLS linear correlations, throwing naive investigators – who can’t see paradox – off the track…
…costing us billions.