Hansen on the surface temperature record, Climategate, solar, and El Nino

The Temperature of Science

James Hansen

My experience with global temperature data over 30 years provides insight into how the science and its public perception have changed. In the late 1970s I became curious about the well-known analyses of global temperature change published by climatologist J. Murray Mitchell: why were his estimates for large-scale temperature change restricted to northern latitudes? As a planetary scientist, I thought there were enough data points in the Southern Hemisphere to allow useful estimates both for that hemisphere and for the global average. So I requested a tape of meteorological station data from Roy Jenne of the National Center for Atmospheric Research, who obtained the data from records of the World Meteorological Organization, and I made my own analysis.

Fast forward to December 2009, when I gave a talk at the Progressive Forum in Houston, Texas. The organizers there felt it necessary that I have a police escort between my hotel and the forum where I spoke. Days earlier, bloggers had reported that I was probably the hacker who broke into East Anglia computers and stole e-mails. Their rationale: I was not implicated in any of the pirated e-mails, so I must have eliminated incriminating messages before releasing the hacked e-mails.

The next day another popular blog concluded that I deserved capital punishment. Web chatter on this topic, including indignation that I was coming to Texas, led to a police escort.

How did we devolve to this state? Any useful lessons? Is there still interesting science in analyses of surface temperature change? Why spend time on it, if other groups are also doing it? First I describe the current monthly updates of global surface temperature at the Goddard Institute for Space Studies. Then I show graphs illustrating scientific inferences and issues. Finally I respond to questions in the above paragraph.

Current Updates

Each month we receive, electronically, data from three sources: weather data for several thousand meteorological stations, satellite observations of sea surface temperature, and Antarctic research station measurements. These three data sets are the input for a program that produces a global map of temperature anomalies relative to the mean for that month during the period of climatology, 1951-1980.

The analysis method has been described fully in a series of refereed papers (Hansen et al., 1981, 1987, 1999, 2001, 2006). Successive papers updated the data and in some cases made minor improvements to the analysis, for example, in adjustments to minimize urban effects. The analysis method works in terms of temperature anomalies, rather than absolute temperature, because anomalies present a smoother geographical field than temperature itself. For example, when New York City has an unusually cold winter, it is likely that Philadelphia is also colder than normal. The distance over which temperature anomalies are highly correlated is of the order of 1000 kilometers at middle and high latitudes, as we illustrated in our 1987 paper.
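In code, the anomaly step is simple. Below is a minimal Python sketch of the idea, with a made-up data layout rather than the actual GISTEMP input format or source code:

    import numpy as np

    def monthly_anomalies(temps, years, base=(1951, 1980)):
        # temps: array of shape (n_years, 12), monthly means for one station.
        # years: array of length n_years. Returns anomalies, same shape.
        in_base = (years >= base[0]) & (years <= base[1])
        # One climatological mean per calendar month, from the base period only.
        climatology = np.nanmean(temps[in_base], axis=0)  # shape (12,)
        return temps - climatology  # broadcasts over the year axis

A station's unusually cold January then shows up as a negative anomaly that can be compared directly with nearby stations' January anomalies, even where the absolute temperatures differ by several degrees.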

Although the three input data streams that we use are publicly available from the organizations that produce them, we began preserving the complete input data sets each month in April 2008. These data sets, which cover the full period of our analysis, 1880-present, are available to parties interested in performing their own analysis or checking our analysis. The computer program that performs our analysis is published on the GISS web site.

Fig. 1. (a) GISS analysis of global surface temperature change. Open square for 2009 is 11- month temperature anomaly. Green vertical bar is 95 percent confidence range (two standard deviations) for annual temperature. (b) Hemispheric temperature change in GISS analysis.

Responsibilities for our updates are as follows. Ken Lo runs programs to add in the new data and reruns the analysis with the expanded data. Reto Ruedy maintains the computer program that does the analysis and handles most technical inquiries about the analysis. Makiko Sato updates graphs and posts them on the web. I examine the temperature data monthly and write occasional discussions about global temperature change.

Scientific Inferences and Issues

Temperature data – example of early inferences. Figure 1 shows the current GISS analysis of global annual-mean and 5-year running-mean temperature change (left) and the hemispheric temperature changes (right). These graphs are based on the data now available, including ship and satellite data for ocean regions.

Figure 1 illustrates, with a longer record, a principal conclusion of our first analysis of temperature change (Hansen et al., 1981). That analysis, based on data records through December 1978, concluded that data coverage was sufficient to estimate global temperature change. We also concluded that temperature change was qualitatively different in the two hemispheres. The Southern Hemisphere had more steady warming through the century while the Northern Hemisphere had distinct cooling between 1940 and 1975.

It required more than a year to publish the 1981 paper, which was submitted several times to Science and Nature. At issue were both the global significance of the data and the length of the paper. Later, in our 1987 paper, we proved quantitatively that the station coverage was sufficient for our conclusions – the proof being obtained by sampling (at the station locations) a 100-year data set of a global climate model that had realistic spatial-temporal variability. The different hemispheric records in the mid-twentieth century have never been convincingly explained. The most likely explanation is atmospheric aerosols, fine particles in the air produced by fossil fuel burning. Aerosol atmospheric lifetime is only several days, so fossil fuel aerosols were confined mainly to the Northern Hemisphere, where most fossil fuels were burned. Aerosols have a cooling effect that still today is estimated to counteract about half of the warming effect of human-made greenhouse gases. For the few decades after World War II, until the oil embargo in the 1970s, fossil fuel use expanded exponentially at more than 4%/year, likely causing the growth of aerosol climate forcing to exceed that of greenhouse gases in the Northern Hemisphere. However, there are no aerosol measurements to confirm that interpretation. If there were adequate understanding of the relation between fossil fuel burning and aerosol properties, it would be possible to infer the aerosol properties in the past century. But such understanding requires global measurements of aerosols with sufficient detail to define their properties and their effect on clouds, a task that remains elusive, as described in chapter 4 of Hansen (2009).

Fig. 2. Global (a) and U.S. (b) analyzed temperature change before and after correction of computer program flaw. Results are indistinguishable except for the U.S. beginning in year 2000.

Flaws in temperature analysis. Figure 2 illustrates an error that developed in the GISS analysis when we introduced, in our 2001 paper, an improvement in the United States temperature record. The change consisted of using the newest USHCN (United States Historical Climatology Network) analysis for those U.S. stations that are part of the USHCN network. This improvement, developed by NOAA researchers, adjusted station records that included station moves or other discontinuities. Unfortunately, I made an error by failing to recognize that the station records we obtained electronically from NOAA each month, for these same stations, did not contain the adjustments. Thus there was a discontinuity in 2000 in the records of those stations, as the prior years contained the adjustment while later years did not. The error was readily corrected, once it was recognized. Figure 2 shows the global and U.S. temperatures with and without the error. The error averaged 0.15°C over the contiguous 48 states, but these states cover only 1½ percent of the globe, making the global error negligible.
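A toy numerical illustration of that splice error may help (synthetic numbers; the trend, the 0.15°C offset, and its sign are assumptions for illustration, not the actual USHCN adjustments):

    import numpy as np

    years = np.arange(1990, 2008)
    raw = 12.0 + 0.01 * (years - 1990)   # hypothetical raw annual means (deg C)
    adjusted = raw - 0.15                # hypothetical USHCN adjustment applied

    # The erroneous merge: adjusted values through 1999, raw values from 2000 on.
    spliced = np.where(years < 2000, adjusted, raw)

    # The post-2000 part of the merged record is offset relative to a
    # consistently adjusted series -- a spurious step at the splice point.
    step = spliced[years >= 2000].mean() - adjusted[years >= 2000].mean()
    print(f"spurious offset after 2000: {step:+.2f} C")   # +0.15 C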

However, the story was embellished and distributed to news outlets throughout the country. Resulting headline: NASA had cooked the temperature books – and once the error was corrected 1998 was no longer the warmest year in the record, instead being supplanted by 1934.

This was nonsense, of course. The small error in global temperature had no effect on the ranking of different years. The warmest year in our global temperature analysis was still 2005.

Conceivably, the confusion between global and U.S. temperatures in these stories was inadvertent. But the estimate for the warmest year in the U.S. had not changed either: 1934 and 1998 were tied as the warmest year (Figure 2b), with any difference (~0.01°C) at least an order of magnitude smaller than the uncertainty in comparing temperatures in the 1930s with those in the 1990s.

The obvious misinformation in these stories, and the absence of any effort to correct the stories after we pointed out the misinformation, suggest that the aim may have been to create distrust or confusion in the minds of the public, rather than to transmit accurate information. That, of course, is a matter of opinion. I expressed my opinion in two e-mails that are on my Columbia University web site:

20070810_LightUpstairs.pdf
http://www.columbia.edu/~jeh1/mailings/2007/20070816_realdeal.pdf

We thought we had learned the necessary lessons from this experience. We put our analysis program on the web. Everybody was free to check the program if they were concerned that any data “cooking” might be occurring.

Unfortunately, another data problem occurred in 2008. In one of the three incoming data streams, the one for meteorological stations, the November 2008 data for many Russian stations was a repeat of the October 2008 data. It was not our data record, but we properly had to accept the blame for the error, because the data was included in our analysis. Occasional flaws in input data are normal in any analysis, and the flaws are eventually noticed and corrected if they are substantial. Indeed, we have an effective working relationship with NOAA – when we spot data that appears questionable we inform the appropriate people at the National Climatic Data Center – a relationship that has been scientifically productive.

This specific data flaw was a case in point. The quality control program that NOAA runs on the data from global meteorological stations includes a check for repetition of data: if two consecutive months have identical data the data is compared with that at the nearest stations. If it appears that the repetition is likely to be an error, the data is eliminated until the original data source has verified the data. The problem in 2008 escaped this quality check because a change in their program had temporarily, inadvertently, omitted that quality check.
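In outline, such a repetition check can be very simple; the sketch below is a reconstruction of the idea described above, not NOAA's actual quality-control code:

    def flag_repeats(monthly_values):
        # Return indices of months whose value exactly repeats the previous
        # month's -- candidates for comparison against neighbouring stations.
        return [i for i in range(1, len(monthly_values))
                if monthly_values[i] == monthly_values[i - 1]]

    # Example: a station whose November value duplicates October's.
    station = [2.1, 1.8, -0.5, -0.5, -12.3]
    print(flag_repeats(station))  # [3]

A flagged month would then be compared with nearby stations and, if it still looked wrong, withheld until the original source verified it.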

The lesson learned here was that even a transient data error, however quickly corrected, provides fodder for people who are interested in a public relations campaign rather than science.

That means we cannot put the new data each month on our web site and check it at our leisure, because, however briefly a flaw is displayed, it will be used to disinform the public. Indeed, in this specific case there was another round of “fraud” accusations on talk shows and other media all around the nation.

Another lesson learned. Subsequently, to minimize the chance of a bad data point slipping through in one of the data streams and temporarily affecting a publicly available data product, we now put the analyzed data up first on a site that is not visible to the public. This allows Reto, Makiko, Ken and me to examine maps and graphs of the data before the analysis is put on our web site – if anything seems questionable, we report it back to the data providers for them to resolve. Such checking is always done before publishing a paper, but now it seems to be necessary even for routine transitory data updates. This process can delay availability of our data analysis to users for up to several days, but that is a price that must be paid to minimize disinformation.

Is it possible to totally eliminate data flaws and disinformation? Of course not. The fact that the absence of incriminating statements in pirated e-mails is taken as evidence of wrongdoing provides a measure of what would be required to quell all criticism. I believe that the steps that we now take to assure data integrity are as much as is reasonable from the standpoint of the use of our time and resources.

Fig. 3. (a) Monthly global land-ocean temperature anomaly, global sea surface temperature, and El Nino index. (b) 5-year and 11-year running means of the global temperature index.

Temperature data – examples of continuing interest. Figure 3(a) is a graph that we use to help provide insight into recent climate fluctuations. It shows monthly global temperature anomalies and monthly sea surface temperature (SST) anomalies. The red-blue Nino3.4 index at the bottom is a measure of the Southern Oscillation, with red and blue showing the warm (El Nino) and cool (La Nina) phases of sea surface temperature oscillations for a small region in the eastern equatorial Pacific Ocean.

Strong correlation of global SST with the Nino index is obvious. Global land-ocean temperature is noisier than the SST, but correlation with the Nino index is also apparent for global temperature. On average, global temperature lags the Nino index by about 3 months.
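One generic way to estimate such a lag (a standard cross-correlation exercise, not necessarily the GISS procedure) is to correlate the two monthly series at a range of offsets and take the offset where the correlation peaks:

    import numpy as np

    def best_lag(nino, global_t, max_lag=12):
        # nino, global_t: equal-length monthly anomaly series. Returns the
        # lag in months (global temperature trailing the index) with the
        # highest correlation.
        corrs = []
        for lag in range(max_lag + 1):
            a = nino[:len(nino) - lag] if lag else nino
            b = global_t[lag:]
            corrs.append(np.corrcoef(a, b)[0, 1])
        return int(np.argmax(corrs))

Fed with the real Nino3.4 and global anomaly series, the peak would be expected near lag = 3, consistent with the statement above.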

During 2008 and 2009 I received many messages, sometimes several per day, informing me that the Earth is headed into its next ice age. Some messages include graphs extrapolating cooling trends into the future. Some messages use foul language and demand my resignation. Of the messages that include any science, almost invariably the claim is made that the sun controls Earth’s climate, the sun is entering a long period of diminishing energy output, and the sun is the cause of the cooling trend.

Indeed, it is likely that the sun is an important factor in climate variability. Figure 4 shows data on solar irradiance for the period of satellite measurements. We are presently in the deepest, most prolonged solar minimum in the period of satellite data. It is uncertain whether the solar irradiance will rebound soon into a more-or-less normal solar cycle – or whether it might remain at a low level for decades, analogous to the Maunder Minimum, a period of few sunspots that may have been a principal cause of the Little Ice Age.

The direct climate forcing due to measured solar variability, about 0.2 W/m2, is comparable to the increase in carbon dioxide forcing that occurs in about seven years, using recent CO2 growth rates. Although there is a possibility that the solar forcing could be amplified by indirect effects, such as changes of atmospheric ozone, present understanding suggests only a small amplification, as discussed elsewhere (Hansen 2009). The global temperature record (Figure 1) has positive correlation with solar irradiance, with the amplitude of temperature variation being approximately consistent with the direct solar forcing. This topic will become clearer as the records become longer, but for that purpose it is important that the temperature record be as precise as possible.

Fig. 4. Solar irradiance through October 2009, based on concatenation of multiple satellite records by Claus Fröhlich and Judith Lean (see Fröhlich, 2006). Averaged over day and night, Earth absorbs about 240 W/m2 of energy from the sun, so the irradiance variation of about 0.1 percent causes a direct climate forcing of just over 0.2 W/m2.
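Both numbers can be checked with standard simplified formulas; in the sketch below, the CO2 baseline and growth rate are assumed values for the late 2000s, not figures from the article:

    import math

    # Solar: ~0.1 percent variation of the ~240 W/m2 the Earth absorbs.
    S0, albedo = 1361.0, 0.3              # solar constant (W/m2), Bond albedo
    absorbed = S0 * (1.0 - albedo) / 4.0  # ~238 W/m2, day-night global average
    dF_solar = 0.001 * absorbed           # ~0.24 W/m2 -- "just over 0.2"

    # CO2: simplified forcing formula dF = 5.35 * ln(C/C0) (Myhre et al., 1998).
    C0, growth, years = 385.0, 2.0, 7     # ppm, ppm/yr, years -- assumed values
    dF_co2 = 5.35 * math.log((C0 + growth * years) / C0)  # ~0.19 W/m2

    print(f"solar: {dF_solar:.2f} W/m2; {years} yr of CO2 growth: {dF_co2:.2f} W/m2")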

Frequently heard fallacies are that “global warming stopped in 1998” or “the world has been getting cooler over the past decade”. These statements appear to be wishful thinking – it would be nice if true, but that is not what the data show. True, the 1998 global temperature jumped far above the previous warmest year in the instrumental record, largely because 1998 was affected by the strongest El Nino of the century. Thus for the following several years the global temperature was lower than in 1998, as expected.

However, the 5-year and 11-year running mean global temperatures (Figure 3b) have continued to increase at nearly the same rate as in the past three decades. There is a slight downward tick at the end of the record, but even that may disappear if 2010 is a warm year.

Indeed, given the continued growth of greenhouse gases and the underlying global warming trend (Figure 3b) there is a high likelihood, I would say greater than 50 percent, that 2010 will be the warmest year in the period of instrumental data. This prediction depends in part upon the continuation of the present moderate El Nino for at least several months, but that is likely.

Furthermore, the assertion that 1998 was the warmest year is based on the East Anglia – British Met Office temperature analysis. As shown in Figure 1, the GISS analysis has 2005 as the warmest year. As discussed by Hansen et al. (2006), the main difference between these analyses is probably due to the fact that the British analysis excludes large areas in the Arctic and Antarctic where observations are sparse. The GISS analysis, which extrapolates temperature anomalies as far as 1200 km, has more complete coverage of the polar areas. The extrapolation introduces uncertainty, but there is independent information, including satellite infrared measurements and reduced Arctic sea ice cover, that supports the existence of substantial positive temperature anomalies in those regions.

In any case, issues such as these differences between our analyses provide a reason for having more than one global analysis. When the complete data sets are compared for the different analyses it should be possible to isolate the exact locations of differences and likely gain further insights.

Summary

The nature of messages that I receive from the public, and the fact that NASA Headquarters received more than 2500 inquiries in the past week about our possible “manipulation” of global temperature data, suggest that the concerns are more political than scientific. Perhaps the messages are intended as intimidation, expected to have a chilling effect on researchers in climate change.

The recent “success” of climate contrarians in using the pirated East Anglia e-mails to cast doubt on the reality of global warming* seems to have energized other deniers. I am now inundated with broad FOIA (Freedom of Information Act) requests for my correspondence, with substantial impact on my time and on that of others in my office. I believe these to be fishing expeditions, aimed at finding some statement(s), likely to be taken out of context, which they would attempt to use to discredit climate science.

There are lessons from our experience about the care that must be taken with data before it is made publicly available. But there is too much interesting science to be done to allow intimidation tactics to reduce our scientific drive and output. We can take a lesson from my 5-year-old grandson, who boldly says “I don’t quit, because I have never-give-up fighting spirit!”

(20091130_FightingSpirit.pdf)

There are other researchers who work more extensively on global temperature analyses than we do – our main work concerns global satellite observations and global modeling – but there are differences in perspectives, which, I suggest, make it useful to have more than one analysis. Besides, it is useful to combine experience with observed temperatures with our work on satellite data and climate models. This combination of interests is likely to provide insights into what is happening with global climate, and into what data are needed to understand it. So we will be keeping at it.

*By “success” I refer to their successful character assassination and swift-boating. My interpretation of the e-mails is that some scientists probably became exasperated and frustrated by contrarians – which may have contributed to some questionable judgment. The way science works, we must make readily available the input data that we use, so that others can verify our analyses. Also, in my opinion, it is a mistake to be too concerned about contrarian publications – some bad papers will slip through the peer-review process, but overall assessments by the National Academies, the IPCC, and scientific organizations sort the wheat from the chaff.

The important point is that nothing was found in the East Anglia e-mails altering the reality and magnitude of global warming in the instrumental record. The input data for global temperature analyses are widely available, on our web site and elsewhere. If those input data could be made to yield a significantly different global temperature change, contrarians would certainly have done that – but they have not.

References

Fröhlich, C., 2006: Solar irradiance variability since 1978. Space Science Rev., 248, 672-673.

Hansen, J., D. Johnson, A. Lacis, S. Lebedeff, P. Lee, D. Rind, and G. Russell, 1981: Climate impact of increasing atmospheric carbon dioxide. Science, 213, 957-966.

Hansen, J., and S. Lebedeff, 1987: Global trends of measured surface air temperature. J. Geophys. Res., 92, 13345-13372.

Hansen, J., R. Ruedy, J. Glascoe, and Mki. Sato, 1999: GISS analysis of surface temperature change. J. Geophys. Res., 104, 30997-31022.

Hansen, J., R. Ruedy, Mki. Sato, M. Imhoff, W. Lawrence, D. Easterling, T. Peterson, and T. Karl, 2001: A closer look at United States and global surface temperature change. J. Geophys. Res., 106, 23947-23963.

Hansen, J., Mki. Sato, R. Ruedy, K. Lo, D.W. Lea, and M. Medina-Elizade, 2006: Global temperature change. Proc. Natl. Acad. Sci., 103, 14288-14293.

Hansen, J., 2009: Storms of My Grandchildren. Bloomsbury USA, New York, 304 pp.

Comments
1DandyTroll
December 22, 2009 2:09 pm

@Anders L. (06:33:58),
‘If you had read what Dr. Hansen actually wrote, you would have noticed that he was talking about temperature ANOMALIES, not temperatures. So you missed the whole point.’
If you had read what Mr Hansen wrote, and what I wrote, you’d’ve understood it had to do with the _correlation_ between stations 1000 km apart.
If you want to nitpick, then a temperature anomaly is still a temperature reading; it just goes above or below the calculated median for a chosen time period.

Kitefreak
December 22, 2009 2:13 pm

Oh yeah, now that I think of it: ‘nanuuq’. Is that setting us up for the next mortal scare from the ‘scientists’?
Namely, that computer scientists have confirmed that the internet is the biggest threat the US nation faces at this time. You know, cyberterrorism. Makes them think they might have to shut parts of the internet down, because, you know, all the intelligence sources suggest that the threat level is really high just now.
I can see it now:
“Computer scientists have confirmed that increasing and uncontrolled emissions of free speech on the internet are causing a discernible and potentially catastrophic effect on the psychological viewpoint of large swathes of the population. Due to the unprecedented threat level at this time, the Ministry of Truth has reluctantly taken the decision to effectively shut down certain sections of the internet.
This is probably only a temporary measure, but we will have to monitor the situation on a continual basis – as new data comes in – and make adjustments as necessary.”
This is the new battleground – to put it in their own terms: the cybersphere….
What I’m saying is: watch out for measures to stifle free speech on the internet. Please all do some research on it because it is very pertinent and timely, IMHO.

Pleeease!
December 22, 2009 2:24 pm

Paul Vaughan says: “I feel like I’m chronically swimming in a cesspool, but due to my fierce commitment to parks, natural forests, the scientific truth, & toxic-pollution-elimination (preferably achieved via honest means), I hold my nose & endure.”
You’re one sick puppy.

Jeef
December 22, 2009 2:27 pm

Dear old Jim Hansen (and why do I think of the muppets whenever I see his name?) has a severe case of me-me-itis.

mpaul
December 22, 2009 3:22 pm

What’s odd is that Hansen and RC continue to insist that all the data is publicly available while simultaneously rejecting FOIA requests on grounds that the data is confidential/private/lost etc. Is Jim lying now or was he lying when he rejected the FOIA request?
NASA’s got to have some sort of rules against making false statements in public. Surely they must.

Tilo Reber
December 22, 2009 3:29 pm

“A simple question: Are you Thomas C. Peterson from NOAA, mentioned in CRU mails?”
This is interesting. Now we know of another team member who defended the practice of cherry-picking proxy series that match the surface temperature records. Tom P went to bat for that practice when he was trying to defend Briffa. We tried to explain why this was a bad practice, but somehow he just didn’t get it.
In any case, getting back to Tom’s defense of Hansen, if I remember right, someone posted the information for a few of the things that he found in the Hansen code. I don’t know if this still applies, but it applied about a year ago. I believe the number being used in the code then was 1000 km. But Hansen had a two-pass system. First he would try to find stations within 500 km of those that he wanted to adjust, and he would weight them from 1 to 0 depending on distance. If he didn’t find the stations that he needed, he would make a second pass out to 1000 km and again weight them from 1 to 0, depending on distance. The result of this procedure is that a site at 450 km would get the same weighting as a site at 900 km. If that makes sense to anyone here, please explain it to me.
The other device that Hansen used was the hinged adjustment. An artificial hinge was created at a certain date, and data was adjusted up or down depending on which side of the hinge it was on and how far from the hinge point it was. This is where we saw many samples of past data being adjusted downward – some of it drastically in the far past – in order to create a stronger slope. I’d like to know what element of natural variation corresponds to a hinge point in the data? If Hansen wants to do more than propagandize, maybe he could answer questions about his concentric-circle weighting and his hinge point. Is the hinge point there so that he can squeeze more slope out of the data without showing greater divergence from the satellite record? Hey, Tom P, you’re a cabal member; maybe you can get a rationalization from Hansen and we can talk about it.
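For concreteness, here is the weighting scheme as this comment describes it, reduced to a per-station rule (the commenter’s account, not verified against the published GISTEMP source):

    def station_weight(distance_km):
        # Linear taper from 1 at the centre to 0 at the pass radius:
        # 500 km on the first pass, 1000 km only if the first pass fails.
        for radius in (500.0, 1000.0):
            if distance_km <= radius:
                return 1.0 - distance_km / radius
        return 0.0

    print(station_weight(450.0))  # 0.1 -- found in the 500 km pass
    print(station_weight(900.0))  # 0.1 -- found only in the 1000 km pass

This reproduces the oddity the comment points to: a station at 450 km and one at 900 km can end up with the same weight.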

George E. Smith
December 22, 2009 3:36 pm

“”” Steve Goddard (13:00:25) :
Tom P,
I have a thermometer on my bicycle and sometimes see 10-15 degrees difference between a wooded area and a paved area one fourth mile away. The surface temperature record can not be interpolated at any distance more than about 10 feet. “””
“”” Frank K. (13:48:00) :
Tom P.
The weighted correlations in figure 3 are between 0.4 and above 0.9 – not “terrible”.
Yes they are. Anything below 0.8 is not strongly correlated. It’s particularly bad at lower latitudes. But to each their own…I’ll let others read the paper for themselves and decide. “””
Well I have to say that I pretty much agree with Steve’s position on this. Steve might be a little tongue-in-cheek with his ten feet; but when I look at a daily SF Bay area weather map, with maxes and mins for dozens of places, some just 2-3 km apart, they clearly don’t show much strong correlation.
1200 km gets me from San Jose to down below Loreto, Mexico on the Sea of Cortez, and no way in hell is their temperature correlated with ours.
I look at those wild scatter plots in Hansen’s paper that somebody up there referenced, and if all those places are correlated, shouldn’t that mean that any one of them could be used as a proxy for all of them? Well, just look at the spread of those numbers, and try to convince yourself that any one of those dots is a good value to use for all of those places.
Remember, we are looking for hundredths of a degree variations.
You do enough smoothing on random noise, and pretty soon you can convince anyone it is all correlated.
I happen to believe that inadequate sampling is at the root of this whole problem.
Yes, you can plot the thermometer outside your back door for 150 years and get a pretty good basis for believing what it will read tomorrow. Just don’t go claiming that you are recording the temperature at the White House; or even your next door neighbor’s back door.
GISStemp is a fairly good record of GISStemp. Lousy record of the mean global surface temperature. Besides, there are too many places on earth that don’t even have a Weber grill alongside their official thermometer; so how could their data be any good?

Tom P
December 22, 2009 4:03 pm

lichanos (12:36:45) :
The major question with a sparsely sampled dataset is whether it is representative. Hansen first demonstrates a match of the climate model variability to the areas of greater station density, and then shows that such model variability can be picked up with a sparser station density. The conclusion is that the temperature is being sampled at sufficient spatial resolution to derive hemispherical and global temperatures.
It should be noted that the ability of the climate models to incorporate global warming is peripheral to this validation exercise.
Roger Sowell (12:53:18) :
Rather than just assume two areas so far apart bear no relationship to each other you should have done the maths.
All the GISS data is published at http://data.giss.nasa.gov/gistemp/
The nearest long-term station data to your two locations are at UofA, Tucson and Berkeley. The correlation for all the available annual temperature data from 1895 to date is 0.53, which indicates a moderate level of agreement for two stations right at the 1200 km GISS limit. This is pretty close to a typical pair of stations at this latitude (see fig. 3 in the Hansen 1987 paper).
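For anyone wanting to reproduce that kind of number, the calculation is an ordinary Pearson correlation over the overlapping years; the station values below are placeholders, not the actual GISS records:

    import numpy as np

    def annual_correlation(a, b):
        # Pearson r between two annual-mean series over their overlapping
        # (non-missing) years; NaN marks a missing year.
        a, b = np.asarray(a, float), np.asarray(b, float)
        ok = ~np.isnan(a) & ~np.isnan(b)
        return np.corrcoef(a[ok], b[ok])[0, 1]

    tucson   = [20.1, 19.7, np.nan, 20.5, 19.9]   # placeholder values, deg C
    berkeley = [13.8, 13.5, 13.9, np.nan, 13.7]   # placeholder values, deg C
    print(round(annual_correlation(tucson, berkeley), 2))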

December 22, 2009 4:25 pm

DirkH (08:10:03) : …My personal feeling is the guy who said the correlation between sunspot number and SST anomaly has broken down after 1980 is wrong.
Dirk, I don’t think you want to go there.

Tom P
December 22, 2009 4:30 pm

George E. Smith (15:36:04) :
“…when I look at a daily SF Bay area weather map, with max and mins for dozens of places some just 2-3 km apart; they clearly don’t show much strong correlation.”
Correlation doesn’t mean the temperatures are the same, just that they vary in synchronicity. GISS temperatures between San Jose and San Francisco have a correlation of 0.76.
Have you actually done any numerical analysis on this?

Tom P
December 22, 2009 5:15 pm

Tilo Reber (15:29:24) :
“Hey, Tom P, you’re a cabal member”
You’ve confused yourself – see my previous answer at 13:30:39 to TKI.

December 22, 2009 5:23 pm

EVEN if humans are not responsible for global warming, how are we to minimize the effects of the REAL TRUE WARMING we are currently seeing? Even if CO2 is not the main culprit, could reducing the emission reduce the overall warming going on?
And if there is no global warming natural or otherwise then what?

December 22, 2009 5:23 pm

To Dr. James Hansen, PhD, of NASA GISS
Dr. Hansen,
I read your article above, and note that you complain of numerous FOIA requests to reveal the base data and methodologies. You also complain that others have reviewed your published work and made things inconvenient for you when errors were found.
Sir, you need to understand a few things.
When anyone, in this case yourself, makes such extraordinary claims of imminent catastrophe that not only is severe in calamity, but world-wide in scope, that person must expect the highest level of scrutiny. Nothing less is permissible.
You have not merely predicted a large tropical cyclone, as devastating as those can be. Nor have you merely predicted a famine, or a drought, as equally devastating as those can be. You, sir, have predicted nothing less than immense and unfavorable changes to the entire world’s population, including hotter temperatures, melting polar ice caps, massive sea level rise with vast low-lying areas inundated, population transfers, acidified oceans, immense and intense storms, and many others. These predictions of yours are supposed to affect the entire globe, not just one area.
In addition, you have postulated an imminent threat in the form of CO2 and other greenhouse gases placed into the atmosphere by man’s activities, and given a very short time-frame for massive action to reduce those. Else, the doom you have predicted. The reduction of CO2 will radically change the economic course of many, perhaps all, nations.
Seldom, perhaps never, has anyone made such wide-ranging, all-encompassing, draconian predictions of the future of life on earth as have you. Not even the Black Death, in which one-third of the population of an entire continent died, can compare to your predictions.
Now, let me explain how the legal system in the United States views various levels of scrutiny for matters that come before the courts. There are three levels of scrutiny, from lowest to highest, with names of rational basis scrutiny, intermediate scrutiny, and strict scrutiny. For matters of ordinary concern, rational basis is used, as this is the lowest level. For matters that involve more important concerns, intermediate scrutiny is used, and this places a higher burden on the defendant (in this case the government) to defend its actions. Finally, where very grave and important matters are at issue, strict scrutiny is used. Strict scrutiny places a very high burden on the defendant to justify his actions. These may be verified in any number of websites on constitutional law and scrutiny.
Given the extremely high importance of your predictions, and the remedy you prescribe in curtailing fossil fuel consumption or other means to reduce CO2 emissions, it is only proper that the entire group of those who review your work should apply the highest level of scrutiny. We are not talking about whether or not the orange crop will fail in Florida next year, as important as that is to those affected by growing oranges. As I wrote above, you have predicted massive and world-wide calamity. Therefore, you should expect that every piece of the raw data you used must be made public, every FOIA request must be complied with in infinite detail, and quickly, and every step of the data manipulation must be made clear, transparent, and readily available to all.
This is the only reasonable, rational approach to examine extraordinary claims, such as you have made.
Roger E. Sowell, Esq.

Frank K.
December 22, 2009 5:32 pm

Tom P.
“It should be noted that the ability of the climate models to incorporate global warming is peripheral to this validation exercise.”
But the ability of climate models to model the ** real earth ** is not peripheral…
“The correlation for all the available annual temperature data from 1895 to date is 0.53, which indicates a moderate level of agreement for two stations right at the 1200 km GISS limit.”
Moderate level of agreement? Yikes…

Steve Goddard
December 22, 2009 5:44 pm

Tom P,
Look at the RSS anomaly image : http://www.remss.com/data/msu/graphics/tlt/medium/global/ch_tlt_2009_11_anom_v03_2.png
It is very common for the anomaly to vary by 4-7°C across 1200 km, and the patterns shift significantly every month. Some months GISS has only one or two functional stations across the entire Canadian Arctic, yet they take the liberty of extrapolating across the entire country. The numbers Hansen generates from these extrapolations are not credible.

Tom P
December 22, 2009 5:54 pm

Roger Sowell (17:23:39) :
You could have saved the ink: the link I gave above gets you the data and code for GISTEMP.
Frank K. (17:32:12) :
“Moderate level of agreement? Yikes…”
0.53 is firmly within the accepted range of a correlation coefficient for moderate agreement, between 0.3 and 0.7. Looks like you have your own personal “Yikes” metric here.

December 22, 2009 6:14 pm

Tom P (16:03:45) :
“Roger Sowell (12:53:18) :
Rather than just assume two areas so far apart bear no relationship to each other you should have done the maths.
All the GISS data is published at http://data.giss.nasa.gov/gistemp/
The nearest long-term station data to your two locations are at UofA, Tucson and Berkeley. The correlation for all the available annual temperature data from 1895 to date is 0.53, which indicates a moderate level of agreement for two stations right at the 1200 km GISS limit. This is pretty close to a typical pair of stations at this latitude (see fig. 3 in the Hansen 1987 paper).” [bold added – RES]
Tom P, thank you, as you have just made my point. Accepting it at face value, your correlation of 0.53 is rather poor. Anyone who would suggest that temperature fluctuations in Berkeley and Tucson follow each other is, to say the least, amusing. I expect better from those who are predicting the end of the world through massive global warming.
(for non-US readers, Tucson is in southern Arizona in the middle of a hot desert, and Berkeley is a town on the east shore of San Francisco Bay – noted for mild weather, cool fogs, and much rain).
See my comment above at 17:23:39
As for accepting anything published by GISS, I advise you to read the postings at Chiefio’s blog – http://chiefio.wordpress.com. Nothing from GISS has any credibility whatsoever. None.

Charly
December 22, 2009 6:37 pm

Dear Roger E. Sowell, Esq., now that you have settled the case of Jim Hansen how do you deal with FOIA requests to Al Gore? In terms of doom and gloom claims he is in the same league as Hansen but if the authorities seize his computer they may find it is a virtual (hot air) machine.

Frank K.
December 22, 2009 7:04 pm

Tom P (17:54:38) :
“0.53 is firmly within the accepted range of a correlation coefficient for moderate agreement, between 0.3 and 0.7. Looks like you have you own personal “Yikes” metric here.”
No – you have your own personal definition of “moderate” correlation. And, again, if you look at Figure 3 in Hansen’s 1987 paper, the agreement is in general quite bad…especially at lower latitudes. To characterize the data as having r = 0.53 is pointless as there is quite a bit of variation on a global scale.
But, this is all just “peripheral” in any case. Hansen is free to define a “temperature anomaly index” any way he wants. This is simply a data interpolation exercise (and a poor one at that). Whether the index means anything is also subject to interpretation. Thermodynamically, it is quite meaningless…

Paul Vaughan
December 22, 2009 7:25 pm

Pleeease! (14:24:29) “sick”
There’s preaching to the choir – and there’s outreach.
Opposing pollution is not equivalent to supporting AGW. Warming isn’t the issue; the issue is that we don’t know natural climate variations the way we know tides.
Let me know if you disagree.

December 22, 2009 7:28 pm

Tom P (17:54:38) :
“Roger Sowell (17:23:39) :
You could have saved the ink: the link I gave above gets you the data and code for GISTEMP.”
Are you being deliberately obtuse? GISS is a laughingstock. Chiefio has demonstrated the entire botched up process, including the datasets and the computer code. If you can refute what Chiefio has written, then please, do so. Show us all where he is wrong.
If you want to be taken seriously, find a long-term dataset made entirely of non-urban sensors, and not the garbage used in GISS.
No longer do knowledgeable skeptics believe anything published by climate scientists. As I wrote earlier in a comment to Dr. Hansen, extraordinary, end-of-the-world claims will result in the utmost skepticism. The doom-predicting alarmist must show his data and his calculations – all of it. There are those of us in the world with internet access who can and will pass judgement on such claims.
An honest scientist will have nothing to hide, and much to gain from showing his data and calculations.

savethesharks
December 22, 2009 7:41 pm

Icarus: “If he’s right about the science then under the circumstances the activism is justified, isn’t it? In that case he’s part of the solution, not a problem.”
Well, the easy answer to your red-herring question, Icarus, is that your “IF” obviously begs the negative:
He is most certainly NOT right about the science. But that is a side point.
Right or not….a TAXPAYER-funded scientist [in other words, a “public servant”] never….NEVER has any business participating in ANY activism of any kind whatsoever, period.
It is illegal…and for good reason.
If he wants to study this **** on his own accord and his own or private money, then he can be a scientist-activist all the **** he wants!!
But not on the public dole.
What he is doing (and Gavin and others like them) presents a criminal conflict of interest.
Hansen is only getting away with it because he knows he can.
Chris
Norfolk, VA, USA

ErnieK
December 22, 2009 7:52 pm

Hunter – are you referring to “The Science is settled” quote that Gore made during a 2007 congressional hearing?
NPR (hardly a right-wing source) reported it at the time.
http://www.npr.org/templates/story/story.php?storyId=9047642

savethesharks
December 22, 2009 7:56 pm

Pleeease! (14:24:29) :
Where did that ad hominem come from, dude??
Paul Vaughan is one of the smartest guys on here….and let me let you in on a little secret:
he’s on your side.
If you can’t keep the ad homs out of the argument, then just refrain from posting at all.
[My “ad homs” against Hansen excluded har har].
But seriously….Paul has a point. There are plenty of people on here who might not fit your political bent.
We have a common enemy: it is the odious, error-ridden, politically-driven science of CAGW. Stop muddying up the water.
Chris
Norfolk, VA, USA