Many readers here marvel at the scope of adjustments that NASA GISS performs on weather station data.
Along those lines, Michelle at Read N Say points out something interesting in Jim Hansen’s NASA page.

Below is an excerpt from her post:
This is his background copied from the official NASA GISS web page:
Research Interests:
As a college student in Iowa, I was attracted to science and research by James Van Allen’s space science program in the physics and astronomy department. Since then, it only took me a decade or so to realize that the most exciting planetary research involves trying to understand the climate change on earth that will result from anthropogenic changes of the atmospheric composition.
One of my research interests is radiative transfer in planetary atmospheres, especially interpreting remote sounding of the earth’s atmosphere and surface from satellites. Such data, appropriately analyzed, may provide one of our most effective ways to monitor and study global change on the earth. The hardest part is trying to influence the nature of the measurements obtained, so that the key information can be obtained.
I am also interested in the development and application of global numerical models for the purpose of understanding current climate trends and projecting humans’ potential impacts on climate. The scientific excitement in comparing theory with data, and developing some understanding of global changes that are occurring, is what makes all the other stuff worth it.
He actually says, in the second paragraph, “The hardest part is trying to influence the nature of the measurements obtained, so that the key information can be obtained.”
To me this sounds like spin for “The hardest part is making the numbers show what I want them to”. Let’s see how long it takes for that sentence in the NASA GISS website to get changed.
The above in italics is from Michelle’s post.
In Hansen’s defense, perhaps what he meant was something along the lines of trying to extract useful information from a noisy signal.
On the other hand, with a plethora of issues in GISS data, including adjustments to pristine data, failure to catch obviously corrupted data, and significant splicing and reporting errors pointed out by bloggers, plus pronouncements from the man himself that such people are “jesters”, that vandals in England should be defended, and that energy company executives should be put on trial, one wonders if Hansen really wasn’t just speaking his mind.

Blink comparator of GISS USA temperature anomaly – h/t to Zapruder
UPDATE 1/26 Lucia at The Blackboard wrote to Jim Hansen to get his take on it. Surprisingly, he emailed back.
Lucia,
This sentence refers to satellite measurements. You could look at the report “Long-Term Monitoring of Global Climate Forcings and Feedbacks”, which is available from my office — but you could also find several papers that I wrote in the early 1990s if you go to www.giss.nasa.gov, then Publications, Authors, my name.
Jim Hansen
But now a new question arises: why, then, doesn’t GISS embrace satellite measurements?
Mike D. (21:12:53) :
“… the danger that we face is the Venus syndrome. There is no escape from the Venus Syndrome. Venus will never have oceans again. … If the planet gets too warm, the water vapor feedback can cause a runaway greenhouse effect. The ocean boils into the atmosphere and life is extinguished.”
Mmmm, now let’s compare Earth’s situation to that of Venus:
1. Avg distance from the Sun for Earth = 93,000,000 miles, for Venus = 67,000,000 miles; so Venus is about 26,000,000 miles closer to the Sun than we are… now that’s going to give you one hell of a sun tan.
2. Earth’s rotation about its own axis (an Earth day) is approx 24 hours, or 1/365th of its year. Keeps things nice and fresh. Venus’ rotation about its own axis (a Venusian day) is approx 240 (Earth) days, or approx one Venusian year. What does this mean? Effectively Venus presents the same side to the sun all year (almost); now that’s going to cause some interesting weather effects, with one half baking and one half not…
Cheers
Mark.
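Mark’s point 1 can be made quantitative with the inverse-square law: solar flux falls off as 1/d², so the round distance figures in the comment (which are approximations, not precise orbital values) imply Venus receives nearly twice Earth’s solar flux. A quick sketch:

```python
# Inverse-square law: solar flux scales as 1/d^2.
# Distances are the comment's round figures, not precise orbital values.
d_earth = 93_000_000  # miles, average Earth-Sun distance
d_venus = 67_000_000  # miles, average Venus-Sun distance

flux_ratio = (d_earth / d_venus) ** 2
print(f"Venus receives about {flux_ratio:.2f}x Earth's solar flux")
```

Roughly double the incoming sunlight before the greenhouse effect even enters the picture, which is the “sun tan” in numbers.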
“interesting weather effects with one half baking and one half not…”
Sounds like a Cheech and Chong movie! lol
Hehe, Sorry just a silly joke. 🙂
Off topic, here’s one to ponder,
It’s said that the moon controls the tides, gets its reflection from the earth, and always faces the same side to earth; it’s just the angle you look at it that makes it a full moon/new moon over its cycle.
That doesn’t make sense to me, and like Venus, I believe the moon’s bright side always points toward the sun. You can see it just from the impact craters from dark to bright side: tons of deep impact holes on the dark side and smoothed-out craters on the bright side. The sun controls the moon and pulls it away at 1/2 inch (or so, or was it 1 cm? lol) a year.
Ever wonder why there are two tides in a day, one small and one large?
A smaller lunar and a larger solar tide, extreme tides happen on the full moon/new moon.
That pesky sun trying to control us again. ; )
Oh and yes, I’m a fountain of useless knowledge. lol
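One quibble with the tide attribution in that comment: tidal force scales as mass over distance cubed, and plugging in standard round values (my figures, not from the comment) shows the lunar tide is roughly twice the solar one, so the larger of the two daily tides is lunar, not solar. A quick check:

```python
# Tidal force on Earth from a body scales as (its mass) / (its distance)^3.
# Standard round values in kg and meters; assumed figures for illustration.
m_moon, d_moon = 7.35e22, 3.84e8
m_sun, d_sun = 1.99e30, 1.50e11

lunar_over_solar = (m_moon / d_moon**3) / (m_sun / d_sun**3)
print(f"lunar tide / solar tide ~ {lunar_over_solar:.1f}")
```

The sun’s enormous mass loses to the cube of its distance; it contributes a real but smaller tide that adds to or partially cancels the lunar one, which is why the extremes line up with full and new moons.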
George E. Smith (11:42:01) :
And just what is meant by correlated out to distances of 1000 km? Does that mean that what happens here in downtown San Jose, a quarter mile from the international airport, is representative of what is going on in the Sea of Cortez off Santa Rosalia, or maybe even Loreto Bay?
What does correlated mean in the sense used in that statement?
As near as I can tell from the source code, it means that Hansen is quite happy to use the temperature ‘anomaly’ from 1000 km inland to ‘adjust’ the real data from San Francisco. So yes, Loreto Bay tells you everything you need to know about temperature changes in San Jose at the airport…
I’m only about 1/4 through the detailed source review, but the rough pass through the anomaly code looked like they were doing just that kind of thing.
Sidebar: The guy who wrote the only section in Python (STEP1) seems to have a clue how to write production code. IMHO, he ought to have been given a contract to re-write all of it. He even had ‘pride of authorship’ enough to put his name in the comments in the code that GISS publishes.
The name was found in the “C” extension (also well written) for Python in that section. Assuming he’s the guy who wrote all of it:
Kudos to: Jay Glascoe, SSAI, NASA/GISS
Professional job.
(That does not mean I endorse the ideas that the code embodied. It does mean that he writes good, clean, tight code and I’d hire him in a heartbeat.)
Now if only the FORTRAN were as well done… and the ideas behind it…
For those wondering where that 1000 km lives in the code, notice the line that says “rad=1000 ;”. This can be passed as a parameter, so one can ‘play with’ the value to see if the output changes in ‘interesting’ ways…
From Step2, PApars script:
if [[ $# -lt 1 ]]
then echo "Usage: $0 source (e.g. GHCN.CL) radius(km) overlap_cond(20)"
exit; fi
rad=1000 ; if [[ $# -gt 1 ]] ; then rad=$2 ; fi
lap=20 ; if [[ $# -gt 2 ]] ; then lap=$3 ; fi
i="./ANN.dTs.$1"
echo "inputfiles: $i.1-6 rural neighborhood radius:$rad km overlap_cond:$lap"
Also, from the FORTRAN program PApars.f (one of the best commented and best written parts of the FORTRAN code) we have:
C*********************************************************************
C *** unit#
C *** Input files: 31-36 ANN.dTs.GHCN.CL.1 … ANN.dTs.GHCN.CL.6
C ***
C *** Output file: 78 list of ID’s of Urban stations with
C *** homogenization info (text file)
C *** Header line is added in subsequent step
C*********************************************************************
C****
C**** This program combines for each urban station the rural stations
C**** within R=1000km and writes out parameters for broken line
C**** approximations to the difference of urban and combined rural
C**** annual anomaly time series.
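The “rural stations within R=1000km” selection in that header amounts to a great-circle distance filter. Here is a minimal sketch of that idea in Python; the haversine formula and the station coordinates are my own illustration (the stations are invented, not real GHCN entries), not the actual FORTRAN routine:

```python
import math

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance between two points, in km."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Invented stations for illustration: (name, lat, lon).
urban_lat, urban_lon = 37.34, -121.89          # roughly San Jose
rural = [("Sierra foothills", 37.9, -120.4),   # ~150 km away
         ("Mojave site", 35.0, -117.0),        # ~500 km away
         ("Great Plains site", 39.0, -98.0)]   # ~2000 km away

rad = 1000.0  # km, the default radius from the PApars script
neighbors = [name for name, lat, lon in rural
             if great_circle_km(urban_lat, urban_lon, lat, lon) <= rad]
print(neighbors)  # only stations inside the 1000 km radius survive
```

Passing a smaller radius, as the script’s rad=$2 override allows, shrinks this neighbor list; that is exactly the kind of sensitivity experiment suggested above.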
Joel Shore (12:36:24) :
Well, I would recommend reading the reference that Hansen cites if you want to get the full scoop.
No, I would read the source code to ‘get the full scoop’… Oh wait, I did that already! … see below…
However, what I assume he means is that, yes, there is a positive correlation between the temperature anomaly at one place and the temperature anomaly at another place within a distance of roughly 1000km. That doesn’t mean that stations separated by distances less than that agree perfectly but it does mean that, over such distances, the correlation in the temperature anomalies between two stations tends on average to be positive and statistically significant.
The problem I see in the code is that it doesn’t use ‘correlated’, it uses ‘linear’. That is, there is no scaling for non-linear correlation; a fixed slope is subtracted. This comes in one of two flavors: a single line, or two line segments with a ‘knee’. This, IMHO, is a lethal flaw in GISStemp and explains a lot of the ‘rewrite history by a degree or two in strange ways’ behaviour, especially as it relates to coastal urban areas moderated by the seas when compared to more volatile inland areas. “Damning” doesn’t even come close (IMHO, of course…)
Since most folks don’t really want to see the FORTRAN, I’ll include the comments from it (which do reflect what the code actually does). I’ve bolded a couple of interesting bits…
From STEP2 PApars.f file:
C**** The homogeneity adjustment parameters
C**** =====================================
C**** To minimize the impact of the natural local variability, only
C**** that part of the combined rural record is actually used that is
C**** supported by at least 3 stations, i.e. heads and tails of the
C**** record that are based on only 1 or 2 stations are dropped. The
C**** difference between that truncated combination and the non-rural
C**** record is found and the best linear fit and best fit by a broken
C**** line (with a variable “knee”) to that difference series are found.
C**** The parameters defining those 2 approximations are tabulated.
C****
C**** Note: No attempt is made to find the longterm trends for urban
C**** and rural combination separately; using the difference only
C**** minimizes the impact of short term regional events that
C**** affect both rural and urban stations, hence cancel out.
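The “best linear fit and best fit by a broken line (with a variable knee)” step can be illustrated with a brute-force sketch. This is my own hedged reconstruction of the technique using numpy’s polyfit, not the actual GISS routine, and the urban-minus-rural series below is synthetic:

```python
import numpy as np

def broken_line_fit(t, y):
    """Brute-force two-segment fit: try each interior point as the 'knee',
    fit a line on each side (in knee-centered coordinates), keep the best.
    The two intercepts are fitted separately, so the segments need not
    meet exactly at the knee."""
    best = None
    for k in range(2, len(t) - 2):  # leave at least 2 points on each side
        knee_t = t[k]
        left = np.polyfit(t[:k + 1] - knee_t, y[:k + 1], 1)
        right = np.polyfit(t[k:] - knee_t, y[k:], 1)
        pred = np.where(t <= knee_t,
                        left[0] * (t - knee_t) + left[1],
                        right[0] * (t - knee_t) + right[1])
        sse = float(np.sum((y - pred) ** 2))
        if best is None or sse < best[0]:
            best = (sse, knee_t, left[0], right[0])
    return best  # (sse, knee position, left slope, right slope)

# Synthetic urban-minus-rural difference series: flat, then steady warming.
t = np.arange(1900, 2000)
y = np.where(t < 1960, 0.0, 0.02 * (t - 1960))
sse, knee, slope_left, slope_right = broken_line_fit(t, y)
print(knee, round(slope_left, 3), round(slope_right, 3))
```

With only straight segments available, any nonlinear urban-rural relationship gets forced into one or two fixed slopes like these; that is the concern about subtracting a fixed slope raised above.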
Squidly (22:11:06) :
1) Rather recently, a company has successfully begun manufacturing a solar film that is extremely efficient by comparison to typical silicon, and much cheaper.
Yeah, very promising. Saw projections of 5 cents/kWh. IF they can hit the numbers, this is a serious ‘game changer’.
2) Researchers at MIT recently discovered a polymer that very efficiently separates H and O from H2O with very low electrical power consumption.
Interesting. Had not heard about it. I presume it is this:
http://www.sciam.com/article.cfm?id=hydrogen-power-on-the-cheap
turned up in a google search. For a ‘downer man…’ point of view, see:
http://www.dailykos.com/story/2008/8/5/143320/8009
Now, the idea. Take #1 to collect electrical power and couple that to #2 to produce hydrogen that one could store almost indefinitely. Obtain an efficient hydrogen combustion electrical generator, and voila! One could theoretically run an entire household off the sun in a practical and affordable manner. Then, purchase an electric car, or plug-in hybrid, and you have essentially provided for all of your energy needs indefinitely.
The issues I see:
1) It’s all new. Often the road from idea to product dashes many hopes.
2) You don’t want hydrogen combustion, you want a hydrogen fuel cell. Much better, though more expensive.
3) There is an engineering question of what’s more efficient: Plug in HEV or a direct H fuel cell vehicle. Implementation nit at most.
Generally, yes. IFF what they project is valid, you can do this. IF. Hydrogen is not an energy source, but it is an ‘ok’ battery. (Bit low on density and storage either takes BIG tanks, fancy hydrides, or one heck of a lot of pressure or cold. Better for stationary than for vehicles; but workable for both.)
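A quick back-of-envelope check on the solar-to-hydrogen-and-back idea; every figure below is an assumed round number for illustration, not a spec from either article:

```python
# Solar -> electrolysis -> stored H2 -> fuel cell -> electricity.
# All numbers are assumed round figures, not measured specs.
daily_solar_kwh = 30.0      # assumed daily output of a rooftop PV array
electrolyzer_eff = 0.70     # assumed electricity-to-hydrogen efficiency
fuel_cell_eff = 0.50        # assumed hydrogen-to-electricity efficiency

round_trip = electrolyzer_eff * fuel_cell_eff
recovered_kwh = daily_solar_kwh * round_trip
print(f"round trip: {round_trip:.0%}, recovered: {recovered_kwh:.1f} kWh/day")
```

At roughly a third of the stored energy coming back out, hydrogen behaves like the ‘ok battery’ described above: workable, but you have to oversize the collection side accordingly.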