Guest Post by Willis Eschenbach
OK, this post has ended up having two parts, because as usual, I got side-tracked while looking at the first part. That's the problem with science: too many interesting trails leading off the main highway …
Part The First
I wanted to point out an overlooked part of Dr. James Hansen's 1988 oral testimony to the US Senate. At the time Dr. Hansen was the Director of GISS, the Goddard Institute for Space Studies. He told the Congresspersonages, or whatever the modern politically correct term is for that class of Politicritters, the following:
The observed warming during the past 30 years, which is the period when we have accurate measurements of atmospheric composition, is shown by the heavy black line in this graph. The warming is almost 0.4 degrees Centigrade by 1987 relative to climatology, which is defined as the 30 year mean, 1950 to 1980 and, in fact, the warming is more than 0.4 degrees Centigrade in 1988. The probability of a chance warming of that magnitude is about 1 percent. So, with 99 percent confidence we can state that the warming during this time period is a real warming trend.
Here is his accompanying graphic …

Now, I am either cursed or blessed with what I call a “nose for bad numbers”. It is a curious talent that I ascribe inter alia to using a slide rule when I was growing up. A slide rule has no decimal point. So if an answer from the slide rule is say 3141, you have to estimate the answer in order to decide if it means 314.1, or 3.141, or .003141, or 31,410. After doing this for years, I developed an innate sense about whether a result seems reasonable or not.
So when I saw Hansen’s claim above, I thought “Nope. Bad numbers”. And when I looked deeper … worse numbers.
First thing I did was to see if I could replicate Hansen's results. Unfortunately, he was using the old GISS temperature record, made before they were as adjusted as they are today. His statement was that "The warming is almost 0.4 degrees Centigrade by 1987". But in the modern GISS data, I found slightly more warming, 0.5°C.
OK, fair enough. So I went and digitized the dataset above so I could use Dr. Hansen's data, and it turns out that his "almost 0.4 degrees Centigrade by 1987" is actually 0.32°C. You can see it in the graphic above. Hmmm … Dr. Hansen's alarmism is unquenchable. Also, note that Dr. Hansen has spliced into the graphic and discussed the 1988 "annual" average even though at the time he only had a few months of 1988 data … bad scientist, no cookies. Comparisons gotta be apples to apples.
Next, his claim is that there is only one chance in a hundred that the 1987 warmth is a random result. That means his 1987 temperature should be 2.6 standard deviations warmer than the 1951-1980 mean. But once again, Dr. Hansen is exaggerating, although this time only slightly—it’s only 2.5 standard deviations away from the mean, not 2.6.
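For anyone who wants to check that figure, here is a minimal sketch using the 1951-1980 values from the digitized data in the appendix (population standard deviation assumed; the sample standard deviation gives essentially the same answer):

```python
# How far is the 1987 anomaly from the 1951-1980 "climatology" mean,
# in standard deviations? Values digitized from Hansen's Figure 1
# (see the data appendix at the end of the post).
from statistics import mean, pstdev

# GISS 1988 anomalies for 1951-1980, in degrees C
climatology = [
    0.02, 0.071, 0.2, -0.028, -0.069, -0.184, 0.094, 0.113, 0.061, 0.006,
    0.077, 0.027, 0.022, -0.264, -0.174, -0.09, -0.024, -0.128, 0.028, 0.034,
    -0.117, -0.077, 0.168, -0.09, -0.039, -0.235, 0.164, 0.1, 0.131, 0.267,
]
anom_1987 = 0.325

z = (anom_1987 - mean(climatology)) / pstdev(climatology)
print(f"1987 is {z:.2f} standard deviations above the 1951-1980 mean")
# about 2.5 SD, rather than the ~2.58 SD that a two-tailed
# 1% probability would require
```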
However, that’s not the real problem. In common with most climate-related temperature datasets, the GISS temperature dataset Hansen used has a high “Hurst Exponent”. This means that the GISS temperature dataset will be what has been called “naturally trendy”. In such datasets, large swings are more common than in purely random datasets.
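For the curious, a back-of-the-envelope rescaled-range (R/S) estimate of the Hurst exponent can be sketched like this; the window sizes and the simple log-log fit are illustrative choices, not any official method:

```python
# Minimal rescaled-range (R/S) sketch of the Hurst exponent idea:
# H near 0.5 indicates purely random increments; H well above 0.5
# indicates a "naturally trendy" (persistent) series.
import math
from statistics import mean, pstdev

def rescaled_range(chunk):
    """R/S statistic for one chunk: range of the cumulative
    mean-adjusted sums, divided by the chunk's standard deviation."""
    m = mean(chunk)
    cum, total = [], 0.0
    for x in chunk:
        total += x - m
        cum.append(total)
    return (max(cum) - min(cum)) / pstdev(chunk)

def hurst(series, sizes=(8, 16, 32, 64, 128)):
    """Slope of log(mean R/S) versus log(window size)."""
    xs, ys = [], []
    for n in sizes:
        chunks = [series[i:i + n] for i in range(0, len(series) - n + 1, n)]
        xs.append(math.log(n))
        ys.append(math.log(mean(rescaled_range(c) for c in chunks)))
    mx, my = mean(xs), mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

# Sanity check: a pure linear trend is maximally persistent,
# so its estimated H comes out at essentially 1.0.
print(hurst([float(t) for t in range(1024)]))
```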
How much more common? Well, we can actually test that. He’s comparing the 30-year “climatology” period 1951-1980 to the year 1987. So what I did was the exact same thing, but starting in different years, e.g. comparing the thirty-year period 1901-1930 to the year 1937, seeing how unusual that result is, and so on.
When we do that for all possible years of the GISS 1988 dataset, we find that being 2.5 standard deviations away from the climatological mean is not uncommon at all, occurring about one year out of fourteen.
And if we do the same analysis on the full GISS dataset up until today, we find it’s even more common. It has occurred in the historical record about one year out of seven. So Hansen’s “one percent chance” that the 1988 temperature was unusual was actually a fourteen percent chance … more alarmist misrepresentation, which is no surprise considering the source.
Conclusions the First
Regarding the warmth of 1987, which was 2.5 standard deviations warmer than the 30-year climatology average, Hansen claimed that “The probability of a chance warming of that magnitude is about 1 percent.”
In actuality, this kind of warming occurred in the record that he used about once every fourteen years or so … and it occurs in the modern GISS record about once every seven years. So the probability of a chance warming of that magnitude in the GISS temperature record is not one percent, it is between seven and fourteen percent … which means that it is not unusual in any way.
Part The Second
In the process of researching the first part of this post, I realized why there is so much debate about whether Hansen’s predictions were right or wrong. The problem is that we’re living in what the most imaginative and talented cartoonist yclept “Josh” calls “The Adjustocene” …

The problem is that Dr. James Hansen is not only the guy who made the 1988 alarmist predictions. He’s also the guy who has been in charge of the GISS temperature record that he has long been hoping would make his prediction come true.
So … here are the changes between the version of the GISS temperature record that Hansen used in 1988, and the 2018 version of the GISS temperature record.

(GISS 2018 data available here. )
Gotta say, those are some significant changes. In the old GISS record (red), 1920 to 1950 were much warmer than in the new record. As a result, in the old record temperatures cooled pretty radically from about 1940 to 1970 … but in the new record that’s all gone.
And things don’t get any better when we add another modern record to the mix. Here’s the Hadley Centre’s HadCRUT global average temperature, shown in blue …

Note that HadCRUT (blue) shows the same drop in temperature 1940-1970 that we see in the 1988 version of the GISS temperature record (red). More to the current point, the post-1988 divergence between the HadCRUT and the GISS record is enough to rule out any possibility of determining whether Hansen was right or wrong. The overall trend in the GISS 2018 data is about 40% larger than the trend in the HadCRUT data, so you can get the answer you wish by simply picking the right dataset.
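The trend comparison works like this; the two series below are purely illustrative stand-ins, NOT the actual GISS 2018 or HadCRUT values:

```python
# Fit an ordinary least-squares trend to each series and compare.
# The anomaly series here are made-up placeholders chosen only to
# illustrate a ~40% trend difference between two datasets.
def trend_per_decade(years, anoms):
    """Ordinary least-squares slope, converted to degrees per decade."""
    n = len(years)
    my = sum(years) / n
    ma = sum(anoms) / n
    slope = sum((y - my) * (a - ma) for y, a in zip(years, anoms)) / \
            sum((y - my) ** 2 for y in years)
    return slope * 10

years = list(range(1988, 2019))
giss_like = [0.010 * (y - 1988) + 0.30 for y in years]   # ~0.10 C/decade
had_like  = [0.007 * (y - 1988) + 0.30 for y in years]   # ~0.07 C/decade

g = trend_per_decade(years, giss_like)
h = trend_per_decade(years, had_like)
print(f"ratio of trends: {g / h:.2f}")   # -> about 1.43, i.e. ~40% larger
```

With a spread that large between datasets, which verdict you reach on the prediction depends entirely on which series you fit.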
Conclusions the Second
Depending on the dataset chosen, someone can show that Dr. Hansen’s predictions either did or did not come true … it’s the perfect Schrödinger’s Cat of predictions.
Finally, as an aside, just what is an “Institute of Space Studies” doing studying the climate? I’ve heard of “mission creep” before, but that’s more than mission creep, that is extra-terrestrial movement. Don’t know if the Goddard folks have noticed, but there is no climate in space … how about if they go back to, you know, studying the myriad of fascinating things that happen in space, and leave studying the climate to less alarmist folk?
Best regards to all,
w.
Short Version Of My Usual Request:
QUOTE THE EXACT WORDS YOU ARE DISCUSSING.
Digitized Hansen Data from Figure 1:
Year, Anom 1880, -0.403 1881, -0.366 1882, -0.427 1883, -0.464 1884, -0.729 1885, -0.541 1886, -0.461 1887, -0.547 1888, -0.388 1889, -0.184 1890, -0.38 1891, -0.438 1892, -0.44 1893, -0.481 1894, -0.382 1895, -0.408 1896, -0.274 1897, -0.177 1898, -0.38 1899, -0.223 1900, -0.025 1901, -0.086 1902, -0.282 1903, -0.357 1904, -0.493 1905, -0.254 1906, -0.175 1907, -0.45 1908, -0.317 1909, -0.334 1910, -0.313 1911, -0.289 1912, -0.316 1913, -0.254 1914, -0.053 1915, -0.009 1916, -0.258 1917, -0.474 1918, -0.363 1919, -0.197 1920, -0.154 1921, -0.079 1922, -0.143 1923, -0.128 1924, -0.119 1925, -0.097 1926, 0.133 1927, -0.006 1928, 0.066 1929, -0.165 1930, -0.002 1931, 0.085 1932, 0.049 1933, -0.158 1934, 0.047 1935, -0.016 1936, 0.055 1937, 0.17 1938, 0.188 1939, 0.052 1940, 0.111 1941, 0.126 1942, 0.094 1943, 0.034 1944, 0.108 1945, -0.027 1946, 0.035 1947, 0.152 1948, 0.034 1949, -0.018 1950, -0.136 1951, 0.02 1952, 0.071 1953, 0.2 1954, -0.028 1955, -0.069 1956, -0.184 1957, 0.094 1958, 0.113 1959, 0.061 1960, 0.006 1961, 0.077 1962, 0.027 1963, 0.022 1964, -0.264 1965, -0.174 1966, -0.09 1967, -0.024 1968, -0.128 1969, 0.028 1970, 0.034 1971, -0.117 1972, -0.077 1973, 0.168 1974, -0.09 1975, -0.039 1976, -0.235 1977, 0.164 1978, 0.1 1979, 0.131 1980, 0.267 1981, 0.359 1982, 0.058 1983, 0.305 1984, 0.096 1985, 0.053 1986, 0.173 1987, 0.325 1988, 0.562 (five months only)
A contradiction in terms:
>After doing this for years, I developed an innate sense…
I am going to watch and see. I say AGW is over and this year will be the transitional year. The present to next few years will be telling, because we have increasing CO2 (warming) and very low solar (cooling). Prior to this time, 1850-2005, natural climatic factors favored warming. Only in 2005 did that start to change, and lag times have to be factored in, which brings us to year 2018.
In year 2018, the two solar conditions I have called for in order for solar to exert a more direct influence on the climate are in, which are 10+ years of subdued solar activity in general, followed by a period of very low average solar parameters which equal or exceed, in magnitude and duration, the typical solar minimums between so-called normal solar cycles.
The geomagnetic field moderates a given level of solar activity, and because it happens to be in sync with the sun (both magnetic fields are weakening), this will compound the effect of a given level of solar activity.
My theory is that very low, prolonged solar conditions will result in overall oceanic cooling, which has been happening for the past year, along with a slightly higher albedo; the result is cooling.
A decrease in UV/near-UV light equates to lower overall sea surface temperatures.
An increase in global cloud coverage, snow coverage, and major volcanic activity equates to a slightly higher albedo.
The above are tied to very low solar activity via an increase in GALACTIC COSMIC RAYS, and decreases in EUV, the AP INDEX, and the SOLAR WIND.
SOLAR IRRADIANCE decreases by a very slight amount and is not the main reason for the cooling climate, just a small part of it.
In ending, I say it happens from now moving forward, and if it does not happen now (the cooling, that is), I do not think it is going to happen.
We will find out now, moving forward.
“Now, I am either cursed or blessed with what I call a “nose for bad numbers”.”
Presenting a single number for “global temperature”, by anyone, is use of bad numbers.
Thanks, Willis.
Australia is similar, with older, plausible, official temperature data sets mismatched with modern ones. We have studied the Government Year Book records and CSIR summaries published before the 1950s, to the stage of using the same stations. These Year Books were like the National Bible for the state of the Nation. No matter how we try for a match, we find the years before 1950 or so being warmer than the modern reconstructions, with the effect of giving Australia global warming of 0.9°C from 1910 to 2010, versus 0.5°C at most in the older data sets. The extra half degree of alleged warming comes mainly from recent official adjustments to pre-1950 data, often for no supportable reason.
In our case as well, there is official reluctance to do a proper comparison and explanation, with various trite explanations then dismissal.
If only it were not serious, as we see our industrial base crumble via huge increases in the cost of electricity, because politicians of all shapes are wedded to Paris Agreement CO2 reductions. Geoff.
Geoff,
I think you should list any record you have found in Year Books which are not also in the modern records.
ACORN data is, of course, adjusted. But I’m pretty sure you could calculate the same average using modern unadjusted data, provided of course that you used the same station set, area weighting, etc.
Nick,
Several of us worked together a few years ago to get these results that Chris Gilham summarised on his web site.
http://www.waclimate.net/year-book-csir.html
It is a pretty thorough analysis with no adjustments or cherry picking by us. There are other documents if you find gaps in this one.
Our introduction is –
“Unadjusted temperatures published by the Weather Bureau in the mid 20th century indicate a warmer Australian climate before 1940 than calculated with RAW or ACORN adjustments and suggest warming from the 1800s to 2014 at approximately half the rate calculated by ACORN since 1910.”
Geoff.
Geoff,
I have looked at the spreadsheet. It is big, with many sheets. But I could not find anywhere that you have calculated averages with old and new data over the same period. It seems to be all calculating old data over an old period, and new data against some period finishing recently. So the differences could well be just warming over time.
I really would like to see evidence that you are finding old data that isn’t already in the current BoM dataset. They can read year-books too. In fact, the year books would have got the data from BoM.
Nick,
It is not as easy as that, because of the different ways the data were aggregated, with different start and finish dates, some daily, some monthly, missing values, etc.
As a guide to what can be extracted, have a look at the highest recorded temperatures for each site pre-1930s from the CSIR lists, then compare with the hottest day in the last decade or so of our 2014 analysis. The hotter of the pair, by a 2:1 majority, is the pre-1910 data from CSIR. Yet we are repeatedly told that that recent decade had the greatest number of hottest days in Australia, a piece of propaganda not supported by the CSIR figures.
Geoff.
Geoff,
“As a guide to what can be extracted, have a look at the highest recorded temperatures for each site pre-1930s from the CSIR lists, then compare with the hottest day in the last decade or so of our 2014 analysis.”
This again mixes up climate and measurement issues. You can’t learn much about measurement by looking at non-matching periods. You need to look at alternate measurements of the same thing. Are those hot days the same in the CSIR and unadjusted BoM? I suspect they are. As I said, the reports you are looking at really came from the BoM anyway. A test would be just one case, one day or month, where CSIR and modern records differ in the same place. Then we could see if there was a reason, or if modern unadjusted records really had been altered. I have not found any cases where they were. Sometimes it is unclear whether the stations are really the same.
I did some of this in looking at Melbourne records here. I looked at hot days from long ago in the current unadjusted record, and compared with old newspaper reports. They always matched. One calibration point I keep in mind is Melbourne, 13 Jan 1939. I have known for 60 years that the temperature was 114.1°F. So I check records to see if they say that. The unadjusted modern records always do (OK, now it is usually 45.6°C).
For Hansen to describe a modern warming rise as a significant event within the directly measured dataset may indeed be defensible within that tiny time span. But it is a gnat’s-ass view of Earth’s history of temperature variations. This kind of science is nothing more or better than humanity’s thoughts on its first experience of a mass flood event. They would say it was significant, and likely the consequence of human-related behavior that needed to be changed. Hansen’s legacy, if future generations care about him at all, will be remembered as the silly dance of a witch doctor, with an equally silly group-dance of fans.
HADCRUT4 and GISTEMP are now worthless datasets due to all the data tampering.
Rather than adjusting CAGW’s global warming projections to better reflect the empirical evidence, CAGW advocates decided to adjust the empirical evidence to fit the CAGW projections, which isn’t how science actually works…
UAH6 global satellite data is the only reliable dataset remaining, and CAGW advocates have an impossible task of explaining the huge disparity that exists between UAH6 and GISTEMP:
http://www.woodfortrees.org/plot/uah6/from:1979/plot/uah6/from:1979/trend/plot/esrl-co2/from:1979/normalise/trend/plot/esrl-co2/from:1979/normalise/plot/gistemp-dts/from:1979/plot/gistemp-dts/from:1979/trend/plot/gistemp-dts/from:1979/trend
When a scientist makes a huge public prediction,
and then is in charge of the data that will test that prediction,
and then multiple times adjusts that data so that it is closer to his prediction…
… kinda makes you wonder about that data, right?
Statistical significance only references sample error, such as sample size or sample selection techniques, which could cause the sample not to be a true representation of the universe one is attempting to describe. It does not include possible instrumental and other errors, and it does not imply causality. End of story.
Common sense should tell one that, looking at paleo data, even with the limitations of the proxies used for that data, there is no causal relationship by which CO2 raises temperature. Some periods of time do show a potential for temperature to increase CO2 in a potentially causal manner, and that has far more physical science to back it up than the supposed radiative effect of CO2 upon temperature. Warmer oceans release CO2, and we are a 70%-water-covered planet.
Willis,
In defense of Goddard (something few should attempt)
If…a warming atmosphere expands and forces the envelope further away from the planet…
Then…I could see them following temps for the reason of satellite altitude and potential atmospheric drag imparted on the lowest orbiting satellites.
Not that they could elevate any existing potentially affected satellites to a higher orbit.
At least the Geostationary (GOES) Satellites are far away from this potential.
Since adjustments to global temps (proper, of course) have obviously lessened the warming trend, why do so many people still think that adjustments did the opposite?
Wow, Alley. As Barbie said, “Math is hard.”
Adjust the temperatures from the 1920s to 1940s down and adjust the recent temperatures up. How does that “lessen” the warming trend?
Because the new temperature data that I have seen removed the warming in the ’30s, increasing the long-term warming trend.
“Because the new temperature data that I have seen removed the warming in the ’30s, increasing the long-term warming trend.”
No, as Alley said – not on the global index.
Warming the ’30s yields a decreased long-term trend in GMTs.
Great post. Thanks.
“Finally, as an aside, just what is an “Institute of Space Studies” doing studying the climate?”
I wonder if it did not start with the issue of “drag on satellites.”
This would be of interest to many groups and could have been contracted out, or done in-house. Here I include this phrase “I really don’t care. Do U?”** — only because, well, just because I do care, but not enough to research the history of mission creep within GISS.
** I think we need to use this phrase as much as possible.
I think I might put it on a personal ID card along with a few other pithy sayings.
Suggestions accepted.
Where’s the blip?
JF
I don’t think there’s anybody back there!
According to the best record of satellite measurements, the temperature today is exactly the same as 30 years ago when Hansen made his prediction.
https://goo.gl/wy79AY
Call me when the 13 month mean has spent a year or two below zero.
Call me when the RATE of warming per decade exceeds the MINIMUM 0.30°C/decade.
Which part of “temperature today is exactly the same as in 1988” don’t you understand???
Willis:
“How much more common? Well, we can actually test that. He’s comparing the 30-year “climatology” period 1951-1980 to the year 1987. So what I did was the exact same thing, but starting in different years, e.g. comparing the thirty-year period 1901-1930 to the year 1937, seeing how unusual that result is, and so on.
When we do that for all possible years of the GISS 1988 dataset, we find that being 2.5 standard deviations away from the climatological mean is not uncommon at all, occurring about one year out of fourteen.
And if we do the same analysis on the full GISS dataset up until today, we find it’s even more common. It has occurred in the historical record about one year out of seven. So Hansen’s “one percent chance” that the 1988 temperature was unusual was actually a fourteen percent chance … more alarmist misrepresentation, which is no surprise considering the source”.
While all of that is true, I don’t think it invalidates Hansen’s “1%” claim, if it is interpreted in the obvious (to me) way that he meant, i.e. that the probabilities would be 1% IF it weren’t for humans emitting carbon dioxide. He could claim that you can find it in one of every seven years in the data set because we have been emitting carbon dioxide throughout the whole data set. A whole different thing is whether Hansen can PROVE his claim of the 1% or not. He can’t. But the data set does not disprove it either.
I haven’t been able to emulate the result exactly. But what I do find is that the occasions where years were above the 1987 level, relative to the 30-year period ending 7 years earlier, were in the decade leading up to 1987. IOW it was indeed a rare event prior to about 1980.
Nylo,
Please take a look at my take on this issue:
https://wattsupwiththat.com/2018/06/30/analysis-of-james-hansens-1988-prediction-of-global-temperatures-for-the-last-30-years/
This is why Willis went back to the turn of the century: to show that this 2.5-SD effect happened 1 year in 14 no matter which time span in the century you chose. Now, the fact that it went from 1 in 14 to 1 in 7 is interesting. Let’s look at prior centuries’ data and chart the range of standard deviations. Use Hansen’s own proxies.
But Willis said 1 in 14 years, and didn’t mention WHEN those years actually happened. I don’t really know if they were at the turn of the century or much later. Do you? In any case, humans were already emitting CO2 at the beginning of the 20th century, even if in much smaller amounts. Hansen cannot be disproven because he is just talking about a what-if scenario (if we hadn’t emitted CO2) that didn’t happen and for which we don’t have data. Which also means that his claim has no merit at all.
Nylo,
I’m not sure if I’m calculating the same thing as Willis. I took each year and compared it with the 30-year period ending 7 years earlier, so I compared 1987 with (1951-1980), 1986 with (1950-1979), etc. I got 72 years, of which 7 had t > 2.5, so that is 1 in 10. The years were 1973, 1977, 1979, 1980, 1981, 1983, and 1987. But I got t = 3.12 for 1987, and only 1980, 1981 and 1983 exceeded that. So it looks like the distribution varies in time. But there are plenty of ways I could be doing something different. The 1987 t-values don’t match.
Nick and Nylo,
I have a suggestion for you. Create a synthetic time series of noise-free data. A simple progression of {1, 2, 3, …, n}, with n at least 20, which is a line with a slope of 1, is adequate. Calculate the standard deviation of the set and the number of standard deviations from the mean of the last data point, n. Now increase n, say to 30, and recalculate the mean, SD, and t-value of the last number in the series. It should be instructive. What I hope you would conclude is that both the slope and the length of the time series (n) determine the calculated SD, and hence the ‘probabilities.’ Now, in the real world we have noise. The noise, or annual variance, is what is actually interesting, because Hansen’s claim was that random variation alone had a 1% probability of accounting for the early-1988 temperature anomalies. However, to obtain a normal distribution (and be justified in using parametric statistics), one has to de-trend the anomaly data (a good start, but not necessarily sufficient). From that, one can see which annual anomalies have a large standard deviation and, hence, low probability.
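The suggested exercise is easy to run; here is a minimal sketch (population SD assumed):

```python
# For a noise-free line {1, 2, ..., n}: how many standard deviations
# above the mean is the last point, and how does that change with n?
from statistics import mean, pstdev

def t_of_last(n):
    series = list(range(1, n + 1))
    return (series[-1] - mean(series)) / pstdev(series)

for n in (20, 30, 100, 1000):
    print(n, round(t_of_last(n), 3))
# The SD grows along with the series length, so the last point of a
# pure trend never looks very "improbable": its t-value creeps up
# toward sqrt(3) = 1.732 but never exceeds it.
```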
Thanks Nick, but I think you didn’t get my point. I say that you cannot evaluate Hansen’s statement by evaluating the existing data set, because his claim is that, should there be NO human interference (no CO2 emitted), the probability of this happening would be 1%. To evaluate that, you would need a temperature data set which is NOT affected by human interference (CO2), and see if this phenomenon happens in THAT data set. But such a data set does not exist.
Regards.
When I was about to click on the link for this article the advertisement beneath was a picture of a bag full of £50 and £20 notes. Why do I feel that was so appropriate?
Hansen’s worst professional misconduct was in not showing all of the medium- and long-run ocean temperature cycles that give context for an uninformed general audience. That too is a sure sign of bias, the kind that bias-spotters notice first. When one focuses on a particular model, selection bias also acts to shut out doubt and ‘possible confusion’ from other factors. Throw in activists and political donations and you have the makings of a landmark policy distortion.
Great article, but given that astrophysicists are now talking about “Space Weather” in re solar wind, solar flares, etc, there obviously must be space climate relating to long term changes in such conditions. If only Hansen would stick to solar wind, but of course, there are no solar windmills for him to tilt against quixotically.
Thanks for posting your digitised values Willis, very helpful.