Jim Hansen’s 99% Surety on Global Warming Doesn’t Hold Up

Guest essay by Pat Frank –

When Jim Hansen testified before the Senate Committee on Energy and Natural Resources, on June 23, 1988, he said that he was 99% sure human-caused global warming was already happening.

Ever wonder how he got so sure?

I discovered the answer while researching the validity of the global surface air temperature record.

The story is worth attention because Jim Hansen’s 1988 testimony set the low evidentiary standard subsequently adopted by consensus climatology.

The background is well-known. Senator Tim Wirth arranged to have the committee hearing on what was historically the hottest day of summer, and the record 98 F that day fully met his needs. Senator Wirth also ensured that the meeting-room windows were left open overnight, so that the air conditioning was ineffective. The room was sweltering. Jim Hansen was a hit. It was a fine victory of cynicism and circumstance over scruples and science.

1. The 99% Solution: The substance of Jim Hansen’s testimony that day is provided in the little Appendix at the bottom of this essay; see [1] for the full record. But the essence of 99% is in the next Figure, the GISS 1987+ global air temperature record, complete with ludicrously small error bars (1σ = ±0.035 C, or ±0.025 C).

[Figure: GISS global surface air temperature record, 1880–1988]

Original Caption: “Global surface air temperature change for the past century, with the zero point defined as the 1951-1980 mean. Uncertainty bars (95% confidence limits) … [are] the result of incomplete spatial coverage by measurement stations, primarily in ocean areas. The 1988 point compares the January-May temperature to the mean for the same 5 months in 1951-1980.”

This is pretty much Jim Hansen’s Figure 1 as presented to the Senate committee. I’ve added the green box, showing the ±0.13 C 1σ jitter of global temperature during the 1951-1980 reference period.

The 1987 record was Figure 1 in Hansen and Lebedeff published in April 1988, about 3 months before his testimony, [2] and was Figure 6 of Hansen and Lebedeff, November 1987. [3]

In his testimony Jim Hansen implied that this 1σ = ±0.13 C jitter was the full sum total of natural climate variability. The rise in air temperature by mid-1988, nearly 0.4 C, was then 3σ beyond nature. Obviously, that made the trend 99% unnatural.

That’s the whole ball of wax. Don’t believe it? Check out the quotes in the Appendix.

Somehow the trend between 1884 and 1937 was overlooked by both Jim Hansen and the Senators. Right before their eyes was a 0.84 C global air temperature increase. Let’s see: that’s more than 6σ beyond nature. In Jim Hansen’s world, that makes the trend more than 99.99966% likely to be unnatural. Hmmm … what could possibly have caused that?

What about the probable ~1 C (an unnatural 7.7σ) increase in global air temperature between the Little Ice Age, 1650, and 1900? [4] Humans couldn’t have done it. Climate gremlins, maybe?
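Hansen’s ‘n-sigma, therefore unnatural’ arithmetic is easy to reproduce. Here is a minimal sketch (Python, standard library only; the σ = 0.13 C figure is his, and under a strict Gaussian tail the chance probabilities come out even smaller than the rounded percentages in the text):

```python
from math import erf, sqrt

SIGMA = 0.13  # Hansen's 1951-1980 standard deviation of annual means, deg C

def chance_probability(delta_t, sigma=SIGMA):
    """Two-sided Gaussian tail probability of a chance excursion of delta_t,
    i.e. P(|Z| > delta_t / sigma) for a standard normal Z."""
    z = delta_t / sigma
    return 1 - erf(z / sqrt(2))

for label, delta_t in [("0.4 C by 1988", 0.4),
                       ("0.84 C, 1884-1937", 0.84),
                       ("~1 C, LIA to 1900", 1.0)]:
    z = delta_t / SIGMA
    print(f"{label}: {z:.1f} sigma, chance probability ~ {chance_probability(delta_t):.1e}")
```

Run as-is, it shows why, on Hansen’s own logic, the 1884-1937 and LIA-to-1900 excursions would have to be declared wildly ‘unnatural’ too, which is the point of the paragraphs above.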

And those darn Dansgaard-Oeschger and Heinrich events, with their trends of multiple degrees Centigrade of global air temperature change per decade. Unnatural, too?

Or maybe they never happened. There’s an exciting new challenge the AGW stalwarts can take up for the cause: ‘We have to get rid of the Dansgaard-Oeschger and Heinrich periods.’

2. Enter Physical Causality: But the testimony didn’t end there. Jim Hansen next offered his GISS Model II global warming scenarios A, B, and C to prove that the recent 99% unnatural warming was caused by CO2 emissions. After all, physics provides causality. The next Figure shows what the senators saw and what JGR published, after peer-review and all. [5]

The committee saw, and peer-reviewed JGR published, predictions without error bars. Pace JGR, but that makes them physically meaningless. They cannot and do not signify any physical causality at all.

If one goes ahead and imports scientific credibility by computing physically valid error bars (±8.9 C in 1988), the scenarios show themselves to be, well, physically meaningless. [6] Oh, well. No rescue there.

[Figure: GISS Model II scenario A, B, and C projections]

From the testimonial legend: “Annual mean global surface air temperature computed for trace gas scenarios A, B and C described in reference 1 (reference [5] below – P). … The shaded range is an estimate of global temperature during the peak of the current and previous interglacial periods, about 6,000 and 120,000 years before present, respectively. …”

So there you have it, Jim Hansen’s 99% surety: for his purposes, the entire 1σ range of natural global variability in air temperature is ±0.13 C. The fact that there is no physical justification at all for his choice didn’t seem to bother anyone, including a trained Ph.D. astrophysicist. It is a very opportune statistic, though.

Jim Hansen’s physical causality? Established by reference to warming scenarios of unrevealed, unremarked, and almost certainly uncalculated accuracy, computed using a model that was (and remains) unvetted by any published critical physical analysis.

In my view, the analysis is horridly incompetent. But it set the standard of consensus climatology that has remained in force right up to the present.

Appendix

Jim Hansen’s oral proof testimony to the committee: “[The] global temperature … is the highest of the period of record (then about 100 years). The rate of warming over the past 25 years … is the highest on record. 1988 will be the warmest year on the record.

“Causal association requires first that the warming be larger than natural climate variability and, second, that the magnitude and nature of the warming be consistent with the greenhouse mechanism.


“The warming is almost 0.4 degrees Centigrade by 1987 relative to climatology, which is defined as the 30 year mean, 1950 to 1980 and, in fact, the warming is more than 0.4 degrees Centigrade in 1988. The probability of a chance warming of that magnitude is about 1 percent. So, with 99 percent confidence we can state that the warming during this time period is a real warming trend.


“The main point to be made here is that the expected global warming [Jim Hansen’s Model II Scenarios A, B, and C – P] is of the same magnitude as the observed warming. Since there is only a 1 percent chance of an accidental warming of this magnitude, the agreement with the expected greenhouse effect is of considerable significance.” [1]

Jim Hansen’s written proof testimony to the committee: “The present observed global warming is close to 0.4 °C, relative to … the thirty year (1951-1980) mean. A warming of 0.4 °C is three times larger than the standard deviation of annual mean temperature in the 30-year climatology. The standard deviation of 0.13 °C is a typical amount by which the global temperature fluctuates annually about its 30 year mean; the probability of a chance warming of three standard deviations is about 1%. Thus we can state with about 99% confidence that current temperatures represent a real warming trend rather than a chance fluctuation of the 30 year period.” [1]

And, just to lock it in, here’s what the authoritatively peer-reviewed GRL paper of Hansen and Lebedeff says about the trend: “What is the significance of recent global warming? The standard deviation of annual-mean global-mean temperature about the 30-year mean is 0.13 °C for the period 1951-1980. Thus the 1987 global temperature of 0.33 °C, relative to the 1951-1980 climatology, is a warming of between 2σ and 3σ. If a warming of 3σ is reached, it will represent a trend significant at the 99% confidence level. However, causal connection of the warming with the greenhouse effect requires examination of the expected climate system response to a slowly evolving climate forcing, a subject beyond the scope of this paper.” [2]

The “expected climate response” was Hansen’s Model II A, B, and C scenarios, both published, [5] and presented before the committee, [1] without any error bars.

From the testimony scenario Figure legend: “[Scenario A assumes continued growth rates of trace gas emission rates typical of the past 20 years, i.e., about 1.5 % yr-1 emission growth; scenario B has emission rates approximately fixed at current rate; scenario C drastically reduces trace gas emissions between 1990 and 2000].”


[Notation: “1s,” “2s,” “3s,” etc. denote 1σ, 2σ, 3σ.]


References:

1. Hansen, J. Statement of Dr. James Hansen, Director, NASA Goddard Institute for Space Studies. 1988 [Last accessed: 11 August 2014; Testimony before the US Senate Committee on Energy and Natural Resources: The Greenhouse Effect: Impacts on Current Global Temperature and Regional Heat Waves]. Available from: http://image.guardian.co.uk/sys-files/Environment/documents/2008/06/23/ClimateChangeHearing1988.pdf.

2. Hansen, J. and S. Lebedeff, Global Surface Air Temperatures: Update through 1987. Geophys. Res. Lett., 1988. 15(4): p. 323-326.

3. Hansen, J. and S. Lebedeff, Global Trends of Measured Surface Air Temperature. J. Geophys. Res., 1987. 92(D11): p. 13345-13372.

4. Keigwin, L. Bermuda Rise Box Core Data. IGBP PAGES/World Data Center-A for Paleoclimatology Data Contribution Series # 96-030. 1996 [Last accessed: 14 September 2007]. Available from: ftp://ftp.ncdc.noaa.gov/pub/data/paleo/paleocean/by_contributor/keigwin1996/.

5. Hansen, J., et al., Global Climate Changes as Forecast by Goddard Institute for Space Studies Three‐Dimensional Model. J. Geophys. Res., 1988. 93(D8): p. 9341-9364.

6. Frank, P., Propagation of Error and the Reliability of Global Air Temperature Projections; Invited Poster, in American Geophysical Union Fall Meeting. 2013: San Francisco, CA; Available from: http://meteo.lcd.lu/globalwarming/Frank/propagation_of_error_poster_AGU2013.pdf (2.9 MB pdf).


157 thoughts on “Jim Hansen’s 99% Surety on Global Warming Doesn’t Hold Up”

  1. Emissions are following scenario A, but observations are following scenario C. So in the real world we have achieved Hansen’s climate benefits of cutting emissions, without actually having to cut them.

      • CO2 emissions seem to fall between A (exponential) and B (slowing linear rise). There was a dip in 2008 due to the GFC, but the rebound was quick.

      • CO2 concentration went up from about 360 ppm to nearly 400 ppm. If the natural level were the pre-industrial 280 ppm, that’s fully a third of the rise.

        Emissions rose from about 6000 MT/yr to about 9000 MT/yr, and are continuing to accelerate.

        How that can translate into constant forcing by your lights, I have no idea.

      • CO2 concentration went up from about 360 ppm to nearly 400 ppm. If the natural level were the pre-industrial 280 ppm, that’s fully a third of the rise.

        Not in ‘forcing’ it isn’t. If you’d read the paper (or just about any other one on the subject) you’d know that forcing depends on ln(CO2). Hansen projected 368 ppm by 2000, so that’s an increase since then of ~30 ppm, or ~8%, a very small increase in forcing.

        How that can translate into constant forcing by your lights, I have no idea.
        Because unlike you Bart I read the paper and understood it!
        Scenario C was based on elimination of CFC emissions by 2000 and reduction of ‘Trace gas’ emissions so that the annual growth rates were zero by 2000. In fact CFCs were eliminated earlier than he expected and CH4 stopped growing by 2000 at a lower level than his projection (1916 ppb).

        By now Scenarios A & B showed an increase in forcing due to CO2 alone of about 0.2K since 2000 but an increase due to ‘Trace gases’ of ~0.4K.

        In Scenario C the forcing term due to ‘Trace gases’ is in fact less than anticipated by Hansen (0K) so will compensate in part for the small increase in CO2 forcing.
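For readers wanting to check the forcing arithmetic argued over in this exchange, here is a sketch using the widely cited simplified CO2-forcing approximation ΔF = 5.35·ln(C/C0) W/m² (the coefficient comes from the standard simplified expression, not from this thread; the concentrations are the round numbers used above):

```python
from math import log

ALPHA = 5.35  # W/m^2, coefficient of the standard simplified CO2 forcing formula

def co2_forcing(c_now, c_ref):
    """Radiative forcing change (W/m^2) for a CO2 rise from c_ref to c_now."""
    return ALPHA * log(c_now / c_ref)

since_2000 = co2_forcing(400, 368)           # ~368 ppm in 2000 -> ~400 ppm now
since_preindustrial = co2_forcing(368, 280)  # 280 ppm pre-industrial -> 2000
print(f"forcing since 2000: {since_2000:.2f} W/m^2")
print(f"forcing 280 -> 368 ppm: {since_preindustrial:.2f} W/m^2")
print(f"ratio: {since_2000 / since_preindustrial:.0%}")
```

The logarithm is why a ~30 ppm rise since 2000 adds much less forcing than the same rise would have added from a lower base, which is the nub of the disagreement above.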

      • What in the world are you talking about? The ratio of warming since 2000 to before 2000 should have been

        log(400/360)/log(360/280) × 100% = 42%

        It is, in fact, 0%.

      • You said f ~ log(CO2), where by ~ I mean to indicate proportionality. Assuming T ~ f, then

        T(now) = T(long ago) + alpha*log(CO2(now))

        for some alpha. Then,

        T(now) – T(2000) = alpha*log(CO2(now)/CO2(2000))

        T(2000)-T(long ago) = alpha*log(CO2(2000)/CO2(long ago))

        (T(now) – T(2000))/(T(2000) – T(long ago)) = log(CO2(now)/CO2(2000))/log(CO2(2000)/CO2(long ago))
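The derivation above can be checked numerically (a sketch under the commenter’s own assumption that T is proportional to log(CO2); the concentrations are the round numbers used in this thread):

```python
from math import log

def warming_ratio(c_now, c_then, c_long_ago):
    """(T_now - T_then) / (T_then - T_long_ago) when T ~ log(CO2)."""
    return log(c_now / c_then) / log(c_then / c_long_ago)

# 400 ppm now, 360 ppm in 2000, 280 ppm pre-industrial
r = warming_ratio(400, 360, 280)
print(f"expected post-2000 share of warming: {r:.0%}")
```

This reproduces the 42% figure quoted earlier in the exchange.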

        The thing I have gotten out of our conversation is that Hansen did not really address the situation we have today, so none of his projections are applicable. Arguing about it is like arguing about whether apples or lemons make better orange juice.

        That’s a great gig – make catastrophic prognostications that you can never be pinned down on because you can always argue that the actual conditions didn’t match your model. Then, feast on the grants flowing your way by people who want to use threats of catastrophe to force people to do as they like. It’s not unlike tinpot dictators who cite the ever present threat of invasion from a real or imagined foe to maintain power. Brilliant, but not very honorable. Not something I’d want to dedicate my life to aiding and abetting.

        But, in the end, there is no indication from the temperature record that anything is happening which wasn’t laid in well before humans could have been having any impact. The temperature record is dominated by a long term trend and a ~60 year cyclical phenomenon which have been in evidence for well over a century. Nothing is changing, nothing is accelerating, relative to that long established pattern.

      • The thing I have gotten out of our conversation is that Hansen did not really address the situation we have today, so none of his projections are applicable.

        That’s because you don’t understand the distinction between CO2 and Trace gases, and insist on only considering CO2.
        In Scenario C Hansen estimated CO2 at 368 ppm, about a 30 ppm gain, compatible with an increase in forcing of about 0.2 K. As pointed out above, the trace gases were expected to yield a forcing of about 0.4 K in Scenario B, so since the trace gases are in fact less than projected, that would yield a negative change in forcing, thus canceling out the gain from the CO2.

  2. Obviously the yellow stained area in Fig 1 has been proven beyond a shadow of a doubt and with geometric logic to be the optimum temperature for life.

      • Wirth it was who said, “We’ve got to ride the global warming issue. Even if the theory of global warming is wrong, we will be doing the right thing — in terms of economic socialism and environmental policy.”
        That ought to have given the game away 20 years ago!

      • Wirth did not say “economic socialism.” It does no good to put false words in someone’s mouth.

      • I’ve searched and there are numerous citations of Sen. Wirth saying precisely what has been quoted, ‘economic socialism’ and all.

      • I searched out the “economic socialism”, too, and found the quote in multiple forms, including ‘economic socialism’, ‘economic policy’, ‘economic and environmental policy’, etc. Lots of variations. Too bad. Without a voice recorder, we have no way of knowing what was actually said. However, the intent is still clear.

    • There are incompetents (see Mann), there are dishonest incompetents (see Lew), and then there are the simply dishonest (see Hansen).

      • That’s why they are so sure this is human caused, because they are the humans causing it by fudging the data! Hansen should be 100% sure as he is one of the instigators.

  3. I infer, without being certain, that you are using ‘1 s’ to refer to one standard deviation?
    Also, can I ask for a ‘take-home message’ at the end of the article? I THINK I know what you want me to take away from this, but I am not completely sure of that either.
    And, since the issue seems to be that Hansen was wrong about the Standard Deviation, what is the correct value, and how do you know?

    • Take home message: Hansen’s testimony that started this ball rolling in the USA was completely unjustified.

      It took a small period in the mid-20th Century as the only example of natural variation, when the world is older than that.

      It took unvalidated climate models as evidence for man’s responsibility without any reference at all to the real world.

      And climate science accepted that level of stupid as the right level of stupid to work to.

    • Lee,

      You ask “And, since the issue seems to be that Hansen was wrong about the Standard Deviation, what is the correct value, and how do you know?”

      Actually, no, that is not the issue. Hansen was wrong in assuming white noise (power independent of frequency) when in fact the climate displays red noise (more power at lower frequencies). As a result, he drastically underestimated the possible natural variation over any period of time much longer than one year.

      To accurately assess natural climate variability, one would need to accurately determine the frequency distribution of the variation. A crude attempt at that has been made by Lovejoy, but I think his determination of the spectrum leaves much to be desired, so I don’t trust his result. The full reference for his paper is:

      S. Lovejoy, Scaling fluctuation analysis and statistical hypothesis testing of anthropogenic warming, Clim. Dyn. (2014) 42:2339–2351 DOI 10.1007/s00382-014-2128-2.
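The white-noise/red-noise point can be illustrated with a short simulation (a sketch only: the AR(1) coefficient 0.9 is an arbitrary illustrative choice, not an estimate of the climate’s actual spectrum):

```python
import random

def simulate(n_years, phi, rng):
    """AR(1) series x[t] = phi * x[t-1] + e[t] with unit-variance innovations.
    phi = 0 gives white noise; phi near 1 gives 'red' noise."""
    x, series = 0.0, []
    for _ in range(n_years):
        x = phi * x + rng.gauss(0.0, 1.0)
        series.append(x)
    return series

def mean_max_excursion(phi, n_years=100, trials=500, seed=0):
    """Average, over many trials, of the largest |x| seen in n_years."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += max(abs(v) for v in simulate(n_years, phi, rng))
    return total / trials

white = mean_max_excursion(0.0)
red = mean_max_excursion(0.9)
print(f"mean 100-year max excursion: white = {white:.2f}, red = {red:.2f}")
```

With the same year-to-year innovation size, the red series wanders much further over a century, so judging a multi-decade trend against a white-noise σ understates natural variability, which is the commenter’s point.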

    • Hi Leo – you’re right, all the “1s” and “7s,” etc., should be 1-sigma and 7-sigma, etc. I used the Greek letter in the original, but it apparently didn’t translate into HTML.

      The issue is, first, Jim Hansen’s implication that (+/-)0.13 C defines natural variability, and that anything outside of that range is necessarily unnatural. There is zero physical justification for that.

      Second, of course, is the presentation of model projections without any reliability metric.

      His entire presentation was a charade of science.

      The (+/-)0.13 C 1951-1980 standard deviation he presented was correct. I’ve verified it. It just has no particular climatological meaning beyond that it represents the air temperature jitter between 1951 and 1980. It has nothing whatever to say about the limits of natural temperature variability.

      The reason we know is that there is no valid physical theory of climate that can tell us of the serious importance of any (+/-)0.13 C jitter. That, of course, and the known large, rapid jumps in air temperature in evidence throughout geological time.

      • Sorry Pat.

        You are right, yes, there is no physical theory of climate that can tell us of a serious importance of any 0.13 C variation. But on the other hand, there is evidence and data in paleoclimate that suggest there is only a ~0.4 C natural variation, either in a climate equilibrium at a 1,000-year scale, or in a transient climate condition at just above the one-century mark, max to min.

        Problem with Hansen is that he has put that at a decadal scale of 0.13 C, which according to the last century is the appropriate share of the ~0.4 C warming observed and considered as above 1σ (above the other 0.4 C warming remaining and considered as 1σ).

        cheers

      • whiten, there are no paleo-climate studies that actually produce physical data in Celsius. The entire field of paleo-temperature reconstruction is statistical flummery. I’ll have a paper out in E&E later this year (I hope) discussing that, titled, “Negligence, Non-science, and Consensus Climatology.”

        On the other hand, ice-core records show large natural swings in δ18O that reflect temperature at least in part. Exactly what part remains unknown. Ice-rafted debris fields in the north Atlantic also tell of much colder past climates. So, I very much doubt that “there is only a ~0.4C natural variation” on any scale except a tendentiously chosen one. Note the ~0.8 C increase between 1884 and 1937; twice your natural variation.

        Not to get too carried away by the conversation, debates about global temperature swings are in any case made moot in light of the generally ignored ~(+/-)0.5 C uncertainty in the record.

    • First, I like the paper and agree with the synopsis, thank you Pat Frank. But, as Leo Morgan asked: could somebody calculate the widest standard deviation, and the narrowest (is that a word?), using data available on that date? Or direct me to the data and I will figure it myself. Why did they pick the 1951-1980 period?

      • It looks like to me that they picked that period because it gave them a low standard deviation.

        Admittedly they could use the rationalization that they picked it because it was the most recent approximately 30 year period. They could say that recent data is preferable to older data, and 30 years is long enough. But pretending that a given 30 year period provides a way to capture the extent of natural variation is either incredibly foolish or intentionally deceptive.

        Thanks for this excellent article. I wasn’t aware of how blatantly bad Hansen’s case was, even before subsequent observations blew up his “scenarios”. My question is: has he ever responded to the obvious questions?

      • I wrote (Dudley Horscroft February 18, 2015 at 9:54 am)

        What is the current level of atmospheric CO2? To answer my question I went Googling and found this: http://www.esrl.noaa.gov/gmd/ccgg/trends/

        The current NOAA level at Mauna Loa for Jan 2015 is 399.96 ppm. It turns out that the 400 ppm level was the peak for June 2013 – widely trumpeted as the ‘tipping point’ – or some such nonsense! In 1998, as very roughly estimated by eyeball of the graph, it was about 364 ppm. From 364 to 400 is a 10% increase (within the limits of my guess at the 1998 level). So the sensitivity to a CO2 increase of 10% is near enough zero. On this basis – extrapolating as climate scientists are wont to do – the sensitivity of the atmosphere to CO2 is still near enough zero.

        QED – or something!

      • @ Dudley Horscroft February 18, 2015 at 7:59 pm

        The current NOAA level at Mauna Loa for Jan 2015 is 399.96 ppm. It turns out that the 400 ppm level was the peak for June 2013 – widely trumpeted as the ‘tipping point’ – or some such nonsense!

        Atmospheric CO2 normally peaks in mid to late May of each and every year, and has done so since 1958 as per Mauna Loa measurements ….. although there are a few instances when it peaked in April, such as in 1999 and 2000.

        NOAA’s monthly CO2 data: ftp://aftp.cmdl.noaa.gov/products/trends/co2/co2_mm_mlo.txt

      • The way I see it, 30-year records are chosen because the Central Limit Theorem says that 30 randomly chosen instances of an overall iid process are enough to produce a reasonably good normal distribution with a pretty accurate mean. I’ve checked that a few times with temperature-time records, and 30-year averaging does remove most weather noise.

        That being true, then any 30-year mean temperature will have very low random noise residuals. Systematic noise is entirely a different matter.
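The 30-year-averaging claim is easy to demonstrate with synthetic data (a sketch; the ‘annual’ values are simulated, not real temperatures):

```python
import random

rng = random.Random(42)
annual_sigma = 0.13  # deg C, the 1951-1980 annual jitter from the essay

# Spread of 30-year means of iid annual anomalies
means = []
for _ in range(2000):
    block = [rng.gauss(0.0, annual_sigma) for _ in range(30)]
    means.append(sum(block) / 30)

mu = sum(means) / len(means)
sd_of_means = (sum((m - mu) ** 2 for m in means) / len(means)) ** 0.5

print(f"sigma of annual values: {annual_sigma:.3f} C")
print(f"sigma of 30-year means: {sd_of_means:.3f} C  (theory: {annual_sigma / 30 ** 0.5:.3f} C)")
```

The 30-year mean carries about 1/√30 of the annual noise, roughly 0.024 C here, close to the ±0.025 C error bar mentioned at the top of the essay. As the comment says, that only tames random noise; systematic error is untouched.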

  4. I personally believe that both Hansen of the US, and Suzuki of Canada, know that the theory of CAGW is full of crap. Both have officially renounced science because it leads to conclusions that don’t meet their activist views. Their words. I strongly believe that they both view any human effect on the planet as negative, and Suzuki is for certain an outright Marxist. CAGW is for them the best way to halt human progress, and to reduce the population, not just the growth in population.

    • Suzuki didn’t even know what GIIS, CRUTEM or HADCRUT (did I get those right?) was, that they are some of the top global temperature datasets. He has revealed that he doesn’t know much climate science so he could very well believe in CAGW.

  5. This is why they have worked so insidiously to ‘cool the past’ so much. This makes it look warmer today so they can triumphantly say, ‘It was the warmest year EVAH‘ recently.

    And this whole thing is a fraud, openly a fraud.

  6. Funny how 8 years of data was enough for alarmists to claim drastic global changes were required to save the world.

  7. Slightly off topic here – it’s time to start checking out the Great Lakes Ice Cover page, under the Reference Page section on this site.

    Last year, the max of 92.2% coverage was achieved on March 3rd, making it the second highest point on the chart (1973-2014). Previous records were 94.7% in 1979 and 90.7% in 1994.

    As of February 17, the coverage was 82.3%. So, will last year’s record be broken? When?

    While on vacation last summer, I passed by the Keweenaw Snow Thermometer (Upper Peninsula of Michigan). The most snowfall was 390.4″ in 1979 and they had about 375″ last year. They already have 212″ for this year. (web page http://www.pasty.com/snow/index.html)

    • Liz, as someone who lived for a few years on the shores of Lake Superior, ice levels and snow fall are opposites. So much of the snow is lake effect and when the ice is miles from the shore line, the lake effect falls on the ice and does not make it to land. Check 1994, where ice percent reached 90.7 % and snow at Marquette was 175″

    • I have been following the ice cover as well. I expect it to jump even more in the next few days. Near 10 below F should pop up the percentage a bit.

      It is interesting that Lake Michigan lags Superior even though Superior is much deeper. I don’t have a good answer unless it is just proximity to outflows from the many urban areas along its shores.

      • sleepingbear dunes

        It is interesting that Lake Michigan lags Superior even though Superior is much deeper. I don’t have a good answer unless it is just proximity to outflows from the many urban areas along its shores.

        Some of that delay (between lakes Michigan and Superior) is due to the lower latitude of Michigan (Superior is further north) and the deeper water of Superior, but not to a significantly greater surface area. Lake Michigan is modestly deep, but runs north-south.

        Take a rough triangle at the south end: the top is at 43 north (Milwaukee to Grand Rapids to Chicago), the bottom at 41.5.
        The weighted average area would be a little north of 42.25. Compare it to Lake Superior, whose entire surface is up at 47 north latitude.

        Hour    S. Lk Michigan   Lake Superior
                (Lat 42.25)      (Lat 47)
         0.0          0                0
         1.0          0                0
         3.0          0                0
         5.0          0                0
         7.0          1                0
         9.0        295              228
        11.0        548              456
        12.0        582              487
        13.0        548              456
        15.0        295              228
        17.0          1                0
        19.0          0                0
        21.0          0                0
        23.0          0                0

        Sum        1688             1368   (sum over every 2-hour sample)

        Today, day of year 49, 18 February, that means a total of 320 watts/m^2 × 2-hour interval = 640 watt-hours/m^2 extra heat received in south L. Michigan. Or thereabouts.

        The deep water of Lake Superior, compared to the shallow south end of Michigan will keep the upper surface water (that part which will freeze!) much colder as well, so Superior upper water “starts off” much colder in November, December, and January.

        Lake Erie is also at this 42.25 latitude. And, as you see each year, it also takes a long time to freeze – and rarely freezes completely over.

      • And here I thought it was, as Gordon Lightfoot sang:
        Lake Huron rolls, Superior sings
        In the rooms of her ice-water mansion
        Old Michigan steams like a young man’s dreams
        The islands and bays are for sportsmen
        And farther below, Lake Ontario
        Takes in what Lake Erie can send her
        And the iron boats go as the mariners all know
        With the gales of November remembered

  8. Ever wonder how he got so sure?

    No, because thanks to his little tricks before the committee meeting, I assumed he was a fanatic who cared nothing for honest science and was more than willing to do whatever it took to ‘save the planet’ while at the same time lining his own pockets. And his actions since have just proved me right.
    Still, if he is lucky he will be dead before the whole thing blows up and his ‘heroic status’ is seen for the BS it always was. Thankfully Mann is too young for that, so he gets to have his ‘day in the sun’, for which no factor of sunscreen will protect him from the well-deserved ‘burning’.

  9. Another point of distortion used by Hansen – the first part of 1988 was the tail end of a major El Nino, the largest since the 1972 spike in global temperatures that is apparent in his own graph. But 1988 was also the start of a major La Nina in the back half of the year. Using just the first five months was a total distortion at an opportune time, of which I am sure he was well aware.

    What is unfortunate is that our elected officials did not, could not, or were not capable of seeing through this, or even just challenging the assertions based on the documented 0.6 C rise between 1880 and 1940. One question asking how this large increase took place naturally, and why it wasn’t part of the “calculation” of the 1 sigma of natural variability, would have stuck a fork in Hansen. And maybe much of the current state of affairs would not exist.

    How I yearn for more engineers in Congress.

    • Bear in mind politicians were (as they are today) looking for something to save us from.
      Humans are stupid and a filthy pox on the earth.
      CAGW promised more tax revenue and control.
      What’s not to like?

    • No science necessary, just jump on the bandwagon and parrot the meme.
      97%97%97%97%97%97%97%97%
      We’re all gonna die.
      Save the whales
      Oops, wrong meme.
      Save the earth.

  10. An amazing set of BOBBLEHEADS with Hansen orchestrating the Senate Committee on Energy and Natural Resources. Just picture it all, the bobbleheads responding in synchronicity to whatever road they travel on. Mindlessness, uninformed, doing what bobbleheads do the best.

  11. A very worthwhile post. Anything that helps us make more sense of what has happened in politics and in climate science since the 1980s is to be welcomed for bringing us nearer to reducing the chances of such madness recurring.

    It is our, that is humanity’s, bad luck that someone like James Hansen was in such a position at such a time. A stronger character, a more powerful intellect, and a personality not given to self-exaltation would have been so much better. The harms caused by overblown alarmism based on so little of substance, and ignoring so much of substance, are many and are surely not yet over.

  12. So even 30 years ago “climate science” was a fraud. The more things change, the more they stay the same.

  13. Everybody complains about the weather, but nobody does anything about it.

    The Charles Dudley Warner quote has been around for a long time, and finally Dr Hansen prodded the politicians into action. How unsurprising that they came up with a politician’s solution, a new tax. When all you have is a hammer …

    (By the way, hold the Alt key and type 229 to get a σ (sigma) character)

  14. “he said that he was 99% sure human-caused global warming was already happening.”

    Why not 100 percent? I am 100 percent certain that human-caused global warming is already happening from CO2 because it cannot be otherwise. Concurrently, human-caused global cooling is also proceeding from aerosols. Rather a lot of natural warmings and coolings are also happening. Good luck sorting it all out. Perhaps I’ll toss some bones for this year’s forecast.

  15. I doubt it would take any time whatsoever to find out how these figures have been “upjusted” since this date, and to conclude, using their own logic, that “we are more than 99% sure the figures have been fraudulently changed”.

  16. I have emailed Hansen many times over the years, telling him he was wrong. Now he is fast coming to that realization, and it will only get worse as this decade proceeds and the global temperature trend turns down.

  17. Hansen lied to get more government funding for his department. Others lied, and continue to lie, either to enrich themselves (Gore, et al.) or to gain power/control (most government officials). Many of the latter have found a way to financially benefit as well. Wonder how much Obama has secretly hidden away from Solyndra and other benefactors?

    When this hoax has finally collapsed, I hope a lot of these crooks end up in prison for a very long time, especially people like Gore, Hansen, Mann, Alley and Pachauri! The list goes on and on.

    Bill

  18. This is just one in a long line of open incompetences in climate science, but one of the first and worst.

    Nobody outside of places such as WUWT republishes this old crap to demonstrate just how “settled” the science really was at the time (and still is), and nobody seems to have the guts to retroactively call him out for incompetent statistical analysis. This is the head of NASA GISS, testifying before Congress, putting on a dog and pony show that involved turning off the air conditioning in the Capitol building during a Washington, DC summer, and presenting a figure that is an absolute abomination as far as statistics is concerned. The error bars are incorrect and meaningless. The assertions of confidence are dressed up in the language of statistics but are without the slightest statistical foundation.

    There are only two possibilities here. Either Hansen knew that the numbers being presented were nonsense and lied to congress under oath, or else he is so utterly incompetent in statistics that everything he ever published should be rescrutinized and withdrawn where the statistical analysis is basically a bunch of made up numbers (as in this figure — I defy him to justify an error bar smaller than what is accepted in e.g. HadCRUT4 even today, with input from far more stations including ARGO buoys that do indeed cover a lot of the ocean). And as was pointed out above, the variations in the graph he himself presented utterly refute his conclusion even without the sins of cherrypicking end points, hogwash error estimates, abuse of normal (gaussian) statistics in applying them to a manifestly non-stationary timeseries, and a healthy dose of post hoc ergo propter hoc reasoning of the sort that they explicitly warn against in statistics textbooks.

    But indeed, the truly damning thing is that we are following his most optimistic scenario in temperature even as CO_2 is following the most pessimistic trajectory. It is difficult to imagine a clearer signal that the science in the paper was incorrect.

    rgb

    • Obama and his GISS and EPA have endorsed the position of Hansen. It is indeed the current government position! AGW climate change has been pronounced as “real”. The politics need to be changed.

    • I couldn’t say it better, rgb. Like you, I’ve been outraged by the assault on science, and have realized it can only be incompetence or chicanery. It’s a sad day when incompetence is the best benefit-of-the-doubt judgment.

    • Wasn’t it Hansen who believed that Venus was the result of runaway warming due to CO2? Didn’t he often compare that to what could happen to Earth?

      • Can someone tell me where the energy required to raise the heat content of the atmosphere to the level of Venus would come from? Obviously the CO2 wouldn’t be able to change our orbit, or could it? (I understand that CO2 does wondrous things.)

      • Indeed. Venus was used as an example of the Greenhouse Effect back in science class in the 80’s for me. 97% CO2 concentration and all that (perhaps that’s where the “97% of all scientists” figure comes from). Of course, the fact that Venus’s atmosphere has some 90-odd times the mass of Earth’s, with correspondingly higher surface pressure, has nothing to do with its temperature (sarc). And better not compare CO2 masses with Earth’s, in case the roughly 230,000 times more CO2 on Venus makes you wonder how soon rising CO2 here will carry Earth past Venus as the hottest planetary surface in the solar system.

    • Has anyone critically examined the 1987 Hansen & Lebedeff paper that is still used by NASA GISS and others to justify long-range correlation of temperature anomalies (and thereby adjust or infill data from hundreds of miles away)?

      The analysis method was documented in Hansen and Lebedeff (1987), showing that the correlation of temperature change was reasonably strong for stations separated by up to 1200 km, especially at middle and high latitudes.

      http://pubs.giss.nasa.gov/abs/ha00700d.html

      • The H&L (1987) results were driven by a desire to “demonstrate” that there was sufficient coverage of the globe by weather stations to perform historical mean temperature studies. The 1200-km limit was chosen (apparently) because the correlation coefficient at that distance between random stations drops to 0.50 for middle-to-high latitudes and 0.33 for low latitudes. I would contend that 0.50 is too weak to robustly justify the conclusions, and that 0.33 for the low latitudes (which cover 40% of the planet) should have caused them to reject the hypothesis.

        As a crude first swing at the problem, the H&L (1987) study is unremarkable. But I wonder whether its weak correlations have been retested by practitioners or simply accepted due to the convenience of appealing to this authority?

        From H&L (1987):

        If the 1200-km limit described above, which is somewhat arbitrary, is reduced to 800 km, the global area coverage by the stations in recent decades is reduced from about 80% to about 65%.

        From NASA GISS (http://data.giss.nasa.gov/gistemp/station_data/), explaining how they generate data with approximately 80 percent coverage of the Northern hemisphere and 70 percent coverage of the Southern, retrieved 19 February 2015:

        c. the percent of hemispheric area located within 1200km of a reporting station.

  19. The fact that there has been no warming for 18 years while CO2 is rising is a paradox under the IPCC/Jim Hansen/warmist theoretical paradigm. Paradoxes occur when there are one or more fundamental errors in theory. When observations do not agree with theory, there must be a physical reason why. The warmists are skipping the process of science, where the objective is to solve a puzzle, to explain the observations, and only after the puzzle has been solved to advocate policy.

    The fact that there has been a plateau of ‘no’ warming for 18 years (satellite data) is a more serious discrepancy (a paradox) than a mere ‘lack’ of warming. As atmospheric CO2 has been increasing continually for the last 18 years, and as there is a lag from an increase in forcing to a change in temperature, the IPCC’s general circulation models predict a wiggly increase in planetary temperature, where the wiggles are caused by natural variability in the earth’s climate and the gradual increase is caused by the increased forcing from rising atmospheric CO2. What is observed is not a lack of warming (a wiggly line that increases, but more slowly than expected) but rather a plateau of no warming (a wiggly line that does not increase).

    The logical constraints on the forcing mechanisms are different for a ‘lack’ of warming (planetary temperature is still increasing, but by less than the general circulation models predicted) and a plateau of ‘no’ warming. Aerosols or heat hiding in the ocean could explain a ‘lack’ of warming, where planetary temperatures are increasing but by less than model predictions; they cannot explain a plateau of ‘no’ warming.

    As atmospheric CO2 is continually increasing, the aerosols or the heat hiding in the ocean would need to exactly balance the CO2 forcing, starting in 1998. That is, there needs to be a smart mechanism that hides the CO2 forcing and switched on suddenly in 1998, or alternatively a smart cooling mechanism that increases over time, to create the observed plateau of no warming. A third issue with heat hiding in the oceans, or with any cooling mechanism, is how to explain the latitude warming paradox: the majority of the warming has occurred at high latitudes, whereas the CO2 forcing mechanism and the GCMs predict the most warming should occur in the tropics, where the most longwave radiation was emitted to space prior to the increase in atmospheric CO2.

    Comment:
    A second problem with the aerosol cooling theory: the majority of the warming in the last 50 years has occurred in the Northern hemisphere, and primarily at high latitudes in both hemispheres. Since the majority of aerosols have been released in the Northern hemisphere, there should be less, not more, warming there.

    A fourth and fifth logical problem with the appeal to weird, smart heat-hiding mechanisms and/or smart cooling mechanisms as the cause/explanation of the 18 years of no warming:

    1) Something must have changed in 1998 to cause the very significant heat-hiding mechanism or cooling mechanism to suddenly start working then and to continue working up to the present time.

    2) If there were a significant heat-hiding mechanism or cooling mechanism, there would be evidence of weird warming and cooling in the paleo record. (There is cyclic warming and cooling in the paleo record, and it correlates with solar magnetic cycle changes.)

    The CO2 forcing mechanism cannot be turned off, if it is real. The warmists have appealed to the laws of physics and their models, and have ignored the logical implications of the observational facts that there is no warming while atmospheric CO2 is increasing, and that there is no significant observed warming of the upper troposphere. The observations logically force there to be something fundamentally different about the upper troposphere that inhibits/stops greenhouse gas warming.

    What is inhibiting/stopping the greenhouse gas mechanism does not affect the CO2 mechanism in the lower troposphere. This upper troposphere inhibiting mechanism is the reason why in the paleo record there are periods of millions of years when atmospheric CO2 is high and the planet is cold and vice versa. (i.e. Greenhouse mechanism works but saturates for a physical reason.)

    An alternative hypothesis to the smart heat hiding in the oceans, or the smart cooling mechanisms to explain the plateau of 18 years where there is no increase in planetary temperature during a period when atmospheric CO2 is steadily increasing, is that the CO2 mechanism must saturate. The logical constraints of other observational discrepancies indicate that the CO2 mechanism saturates at roughly 200 ppm, less than the pre-industrial atmospheric CO2 level, of 280 ppm.

    If the CO2 mechanism saturates, then the majority of the warming in the last 50 years was caused by the same thing that caused the majority of the warming in the last 150 years: solar magnetic cycle effects on planetary cloud cover. The explanation for the 18-year warming plateau is that the multiple mechanisms by which solar magnetic cycles modulate planetary cloud cover saturated for the last 18 years (i.e., further solar magnetic cycle changes did not cause more or less cooling, but rather caused the wiggles in the temperature line).

    Observational support for the above assertion is the fact that the tropics have not warmed, and the fact that there is no tropical tropospheric hot spot at 8 km. Due to the overlap of the absorption frequencies of water and CO2, and because there are more CO2 molecules per unit volume lower in the atmosphere (due to pressure), the CO2 mechanism is saturated in the lower troposphere. Higher in the troposphere there is less water, because the atmosphere is colder due to the lapse rate (6.3 C of cooling per 1000 meters of elevation in the tropics), and there are fewer CO2 molecules per unit volume (partial pressure determines the absolute number of CO2 molecules; absorption, and saturation of the greenhouse effect, depend on the number of molecules of the greenhouse gas in question, as well as of any greenhouse gas whose absorption overlaps CO2’s). So the CO2 forcing change is larger there.

    The tropical tropospheric warming at 8 km is not only a key observational signature that the CO2/greenhouse gas mechanism is working as expected; that warming is also what warms the tropical surface, via downward longwave radiation. I.e., the majority of the CO2 greenhouse warming occurs via downward longwave radiation from the warmed upper troposphere.

    Roy Spencer: Ocean surface temperature is not warming in the tropics.

    http://www.drroyspencer.com/2013/02/tropical-ssts-since-1998-latest-climate-models-warm-3x-too-fast/

    There is no tropical tropospheric hot spot: Douglass and Christy paper.
    http://icecap.us/images/uploads/DOUGLASPAPER.pdf
    http://joannenova.com.au/2012/05/models-get-the-core-assumptions-wrong-the-hot-spot-is-missing/

  20. Hansen is the poster boy for Noble Cause Corruption.

    Anything for the cause: fiddling with data, jacking up the heat in committee presentations, getting repeatedly arrested on government time, lying when he claimed that President Bush was ‘muzzling’ him, etc.

    It’s all for the cause, and it’s a noble cause. Right? So it’s OK.

    • The stated desire to save the world is almost always a false front for the desire to rule the world.
      The desire to rule the world is not a noble cause.

  21. Hansen’s widely parroted claim that Venus is an example of a runaway greenhouse effect is the most quantitatively provable falsehood in this whole watermelon plan to take over the world . Every undergraduate student in “climate science” should be required to work thru the calculations — perhaps to have had the experimental experiences to ingrain the understanding . Without this hell being pitched as what could happen to us if we don’t submit our rationality and prosperity to the global eKo-crapitalists , it would be hard to instill the necessary fear .

    The earth is estimated to be about 3.0% warmer than the 279K of a gray ball in our orbit . ( This deadly fiasco is all over a 0.3% or so variation . ) Mercury and Mars ( even with its 95% CO2 atmosphere ) are also within a few percent of the gray body temperature in their orbits .

    Venus’s surface is apparently about 225% of the gray body temperature in its orbit . There is NO spectrum which can create such a “solar heat gain” , especially with Venus’s reported 0.9 reflectivity wrt the Sun’s spectrum . The best humanity has yet been able to create is TiNOX with a ratio of about 221% , but that’s with a reflectivity wrt the solar spectrum of about 5% . Note that by Kirchhoff ( & Stewart ) reflectivity and emissivity are just flip sides of the same processes .

    This was the point of my Heartland presentation on the most basic computations of planetary temperature , http://climateconferences.heartland.org/robert-armstrong-iccc9-panel-18/ .

    How Hansen’s Halloween tale ever got past “peer review” , and has not been repudiated with prejudice years ago , is to me the flagrant confirmation that this existential absurdity is willful nonscience .

    • Bob Armstrong,

      Mercury and Mars ( even with its 95% CO2 atmosphere ) are also within a few percent of the gray body temperature in their orbits.

      Oh dear.

      http://en.wikipedia.org/wiki/Atmosphere_of_Mars

      The atmospheric pressure on the Martian surface averages 600 pascals (0.087 psi), about 0.6% of Earth’s mean sea level pressure of 101.3 kilopascals (14.69 psi) and only 0.0065% that of Venus’s 9.2 megapascals (1,330 psi). It ranges from a low of 30 pascals (0.0044 psi) on Olympus Mons’s peak to over 1,155 pascals (0.1675 psi) in the depths of Hellas Planitia.

      http://en.wikipedia.org/wiki/Atmosphere_of_Mercury

      Mercury has a very tenuous and highly variable atmosphere (surface-bound exosphere) containing hydrogen, helium, oxygen, sodium, calcium, potassium and water vapor, with a combined pressure level of about 10^-14 bar (1 nPa).

      Which is to say Mercury basically doesn’t have an atmosphere.

      Venus’s surface is apparently about 225% of the gray body temperature in its orbit . There is NO spectrum which can create such a “solar heat gain” , especially with Venus’s reported 0.9 reflectivity wrt the Sun’s spectrum.

      Noting Venus’ high albedo relative to Mercury, Earth and Mars pretty much torpedoes your “argument”.

      http://nssdc.gsfc.nasa.gov/planetary/planetfact.html


                                   Mercury      Venus      Earth       Mars
      Bond albedo                    0.068      0.900      0.306      0.250
      Solar irradiance (W/m2)      9,126.6    2,613.9    1,367.6      589.2

      Black-body temperature (K)     440.1      184.2      254.3      210.1
      Average temperature (K)        440.0      737.0      288.0      210.0

      Temperature ratio                1.0        4.0        1.1        1.0

      I’m gonna go with magic not being the cause of Venus’ 4x greater surface temperature than S-B calculations would predict based on albedo and solar constant alone.
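        For anyone who wants to check the table, the black-body column follows directly from the Stefan–Boltzmann law. A quick sketch (planet values taken from the NASA fact-sheet numbers quoted above; the function name is mine):

```python
# Effective (radiative-equilibrium) temperature: T = [S(1 - A) / (4*sigma)]^(1/4)
# S = solar irradiance at the planet's orbit, A = Bond albedo.
SIGMA = 5.670374e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def t_eff(solar_irradiance, bond_albedo):
    """Equilibrium temperature of a rapidly rotating sphere with no greenhouse."""
    return (solar_irradiance * (1.0 - bond_albedo) / (4.0 * SIGMA)) ** 0.25

planets = {  # (irradiance W/m2, Bond albedo, observed mean surface T in K)
    "Mercury": (9126.6, 0.068, 440.0),
    "Venus":   (2613.9, 0.900, 737.0),
    "Earth":   (1367.6, 0.306, 288.0),
    "Mars":    ( 589.2, 0.250, 210.0),
}

for name, (s, a, t_obs) in planets.items():
    t = t_eff(s, a)
    print(f"{name:8s} T_eff = {t:5.1f} K   observed/T_eff = {t_obs / t:.2f}")
```

        Running it reproduces the 440.1 / 184.2 / 254.3 / 210.1 K figures in the table, and the Venus observed-to-equilibrium ratio of ~4 that is the point in dispute.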

      • I mentioned the Mars 95% ( Wiki says 96% ) only because of the absurd claims made about CO2’s effect on temperature . My understanding , tho I’ve not bothered to calculate it , is that Mars’s CO2 “column” is about the same as earth’s .

        As you quote I said “especially with Venus’s reported 0.9 reflectivity wrt the Sun’s spectrum” That matches your quoted albedo . You do understand that albedo is reflectivity wrt the solar spectrum ?

        And , Venus’s extremely high albedo is exactly what I cite in the third paragraph as why Hansen’s claim is bogus by an order of magnitude .

        And that “Black-body” temperature for planets NASA lists is an embarrassment to NASA — an utterly useless figure which comes from the assumption of an extreme step-function spectrum , which is absolutely useless in any other computation , and whose only use is propaganda . How that ever got published by NASA , particularly with that totally misleading label , and why it has never been corrected or removed , is prima facie evidence of the mediocrity and even mendacity of the climate establishment . In fact , the blatantly irrelevant 184.2K figure for Venus illustrates how irrelevant the 254.3 figure for the Earth is . These numbers correspond to no real spectrum when real measured spectra are available .

        The gray ( flat spectrum ) body temperature in an orbit , which is identical to the actual “Black Body” and corresponds to the total energy density in the orbit is the only value of use in computations . But the understanding of the most basic experimentally testable physics of radiative transfer seems not to be in the curriculum of “climate scientists” . They jump right to Navier-Stokes and spend well remunerated careers jerking off in computational clouds without first learning how to calculate the temperature of a croquet ball under a sun lamp .

      • I find it amusing that atmospheric pressure caused by gravitational mass is routinely invoked to explain how stars reach temperatures high enough to start fusion, and is used to calculate temperatures at various levels in the atmospheres of Jupiter, Saturn, Uranus and Neptune, but is dismissed when explanations of raised “surface” temperatures on Venus or Earth are presented.
        The “surface” of Earth and Venus begins at the outer atmosphere, and all planets warm from the outside in. More energy per molecule on the outside than on the inside, but more molecules per given volume on the inside than on the outside. Thus is a temperature gradient achieved.
        Apply Occam’s Razor. The greenhouse theory is convoluted and unnecessary.

  22. If you look at current GISS data sets and compare them to the one shown here, it appears they not only didn’t know what the temperature would be in the future, they didn’t even know what the temperature was at the time. Still, they were 95% sure they had the data right and 99% sure they knew what it meant.

    • Excellent point. I’d go so far as to say that one can’t be a warmist if one can spot contradictions like this.

  23. I wonder how much atmospheric O2 is declining? Shouldn’t it decline at roughly a 2:1 oxygen-to-carbon atom ratio relative to the amount of fossil-fuel carbon OXIDIZED? Never hear about that. Prost!

      • Phil.

        The decline in atmospheric O2 is well documented, about 2 ppm/year, try Googling it.

        From [209,004 ppm to 209,002 ppm to 209,000 ppm to 208,998 ppm to 208,996 ppm] …

        the Trace gases followed Hansen’s Scenario C, if anything he underestimated the reduction in emissions.

        So, are you claiming that “trace gases” now are more important to the earth’s radiation heat balance than CO2?

        After all, trace gases were very small but CO2 increased from 1945 to 1966, but global average temperatures went down.
        Trace gases and CO2 increased from 1966 to 1976, but global average temperatures were steady.
        Trace gases and CO2 increased from 1976 through 1996, and global average temperatures increased.
        Trace gases were steady but CO2 increased from 1996 through 2015, and temperatures were steady.

      • RA , I think you are off by an order of magnitude .

        20% is 200,000 ppm .

        CO2 was originally somewhere around 300,000 ppm before photosynthesis since it contained that entire 200,000 ppm O2 .

        [fixed. .mod]

      • Very good point. You are correct.

        But, then again, we are told time and time again that this blog is not “peer-reviewed” so I must be right, right? 8<)

        Mod: request the 20,000 ref point be "0 'ed" properly.

      • This is 21st century “peer review” . The old high cost print “pal review” model is of rapidly diminishing value or influence .

        The fact that the original 0% O2 , 30% CO2 atmosphere , rather than causing the planet to broil , enabled the explosion of green life is overwhelming evidence that this whole absurdity fails grade school science ( cool sun notwithstanding ) .

      • Everybody knows that, but there is only one gas exploited by the get-rich-on-carbon-tax crowd. I imagine that they might be missing out! Help them tax every gas in the atmosphere…maybe create some new ones! Invest in futures, get credits when gas “X” declines, buy low, sell high…. Prost !
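      The O2 question a few comments up can at least be pinned down with textbook stoichiometry. A sketch (pure chemistry, not measured atmospheric data; the fuel examples are mine): complete combustion of a hydrocarbon CxHy consumes (x + y/4) molecules of O2 for every x molecules of CO2 produced.

```python
# Complete combustion: CxHy + (x + y/4) O2 -> x CO2 + (y/2) H2O
def o2_per_co2(x_carbon, y_hydrogen):
    """Molecules of O2 consumed per molecule of CO2 emitted."""
    return (x_carbon + y_hydrogen / 4.0) / x_carbon

print(o2_per_co2(1, 0))    # pure carbon (coal-like): 1.0
print(o2_per_co2(1, 4))    # methane, CH4: 2.0
print(o2_per_co2(8, 18))   # octane, C8H18: 1.5625
```

      So fossil-fuel burning removes very roughly one to two O2 molecules per CO2 molecule added; against a background of ~209,000 ppm O2, the ~2 ppm/yr decline mentioned above is a tiny fractional change, which is presumably why it gets so little attention.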

  24. The other thing to note is that Hansen actually destroys the case for CAGW with his grey shaded area in the second graph showing the “Estimated Temperature during the Altithermal and Eemian”, the current and previous interglacial periods, about 6,000 and 120,000 years before present.
    Well we still haven’t got above there yet, even with all their Quality Control Adjustments to the Raw data.

  25. So atmospheric CO2 has been increasing. ISTR that it was about 400 ppm about 4 years ago. What is it now? Or has it stopped increasing (so as to create the plateau?).

    • Dudley Horscroft

      So atmospheric CO2 has been increasing. ISTR that it was about 400 ppm about 4 years ago. What is it now? Or has it stopped increasing (so as to create the plateau?).

      The “shelf” (now 18 years – 3 months) began before atmospheric CO2 reached 400 ppm, continued just as flat as it passed 400 ppm, and has continued as it exceeded 400 ppm.

    • The resident liberal-left newspaper here, ‘The Irish Times’, prints daily on its weather page the previous day’s temperature maxima from major cities around the world, as do most other newspapers. The curious thing is that its temperatures are pretty consistently between one and two degrees Celsius higher than the temperature maxima in many other papers. Its source is the MeteoGroup. Does anyone know anything about this group and whether they have a “track record”?

  26. I have written about this particular testimony many times in the past. There is a very large difference between his spoken testimony that day in front of congress and the conclusion made in the peer-reviewed paper. Making fundamentally different statements to different audiences seems a hallmark of pathological science. But the more important point is how he arrived at this.

    He assumed that in the absence of human interference earth temperature would be a stationary random process with zero mean referenced to the 1951-1980 period and standard deviation of 0.13 per yearly average. And his probability statement rests on the distribution being gaussian. I am unconvinced that mean earth temperature would be stationary under the stated conditions. There seems little evidence that it is gaussian distributed. It might even be that the distribution of such statistic is such that the central limit theorem does not apply.

    Uncertainty in measured quantities ought to consider four things: 1) bias, 2) uncertainty contributed by the instrument, 3) the observer, and 4) the underlying process. To my mind no one has ever conducted a proper Gage R&R study to separate these influences; the Surface Stations project was the first serious step in this direction, in my view. And rarely will researchers in this field consider bias except in one direction — that historical temperatures have always been biased too high and must be corrected downward.

    • He assumed that in the absence of human interference earth temperature would be a stationary random process with zero mean referenced to the 1951-1980 period and standard deviation of 0.13 per yearly average. And his probability statement rests on the distribution being gaussian. I am unconvinced that mean earth temperature would be stationary under the stated conditions. There seems little evidence that it is gaussian distributed. It might even be that the distribution of such statistic is such that the central limit theorem does not apply.

      Oh, please. One glance at the temperature record over any significant time span is sufficient proof that it isn’t stationary. Including the same graph above where he makes this “assumption”. It isn’t true in the immediate past of 1950 to 1980. It isn’t true in any century long span of the thermal record. Indeed:

      show me the “stationary” period during the last 12,000 years! Note well that the black line is a) an average of all of the colored spaghetti without regard to its presumed accuracy or location; b) smoothed over roughly 300 years. Which means (as one looks at the spaghetti, the black line, and Hansen’s 30 year baseline data) that temperature is never stationary, it is always gradually increasing or decreasing, sometimes rapidly, sometimes slowly. It also strongly suggests that at least regionally, temperatures can vary by whole degrees per century, if not more.

      So one doesn’t need to remain “unconvinced” that the earth’s mean temperature would have been stationary if it weren’t for CO_2 over the last decade, century, or millennium, as there isn’t a shred of evidence for probable stability on any of those timescales in historical or proxy-derived data. We simply cannot predict what it would have done with, or without, CO_2. There is also no reason to think that the climate mean temperature is more than (possibly) gaussian on an annual or monthly basis around a comparatively slowly varying (but constantly varying) mean.

      If you want a butt-kicking good discussion of the evil of applying normal stats to non-stationary time series, read some of William Briggs’ extensive writings on the subject:

      http://wmbriggs.com/post/5172/

      This is just one of several of his enormously biting comments on the subject, but since it kicks off a series that really walks through it by the numbers, it is worth the read.

      None of this is “modern statistics” by the way. It’s just that Hansen is, as noted, either an idiot or a liar or both. Quite possibly both.

      rgb
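      rgb’s complaint about applying normal statistics to a non-stationary series is easy to demonstrate numerically. The sketch below uses purely synthetic, illustrative numbers (the 0.13 C sigma from the essay’s baseline, plus a made-up 0.02 C/yr drift): a series with a slow deterministic drift and Gaussian noise blows through a Hansen-style 3-sigma “unnatural” threshold with no forcing involved at all.

```python
import random

random.seed(42)
SIGMA = 0.13  # baseline 1-sigma jitter, deg C (from the 1951-1980 reference)

# Baseline period: stationary noise around zero (30 "years").
baseline = [random.gauss(0.0, SIGMA) for _ in range(30)]
mean_b = sum(baseline) / len(baseline)

# Later period: the SAME noise plus a slow deterministic drift of
# 0.02 C/yr -- no "forcing", just a non-stationary mean.
later = [0.02 * yr + random.gauss(0.0, SIGMA) for yr in range(1, 41)]

flagged = sum(1 for t in later if abs(t - mean_b) > 3 * SIGMA)
print(f"{flagged} of {len(later)} drifting-but-unforced years exceed 3 sigma")
```

      By the end of the run the drift alone is 0.8 C, about 6 sigma of the baseline jitter, so a 3-sigma test “detects” an unnatural signal in a series that contains nothing but drift and noise.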

  27. Not that I agree with Mr. Hansen’s (Henson) Muppeteering of the records in many cases, but I think the point was missed:

    “This is pretty much Jim Hansen’s Figure 1 presented to the senate committee. I’ve added the green box, showing the ±0.13 C 1s jitter of global temperature during the 1951-1980 reference period.

    The 1987 record was Figure 1 in Hansen and Lebedeff published in April 1988, about 3 months before his testimony, [2] and was Figure 6 of Hansen and Lebedeff, November 1987. [3]

    In his testimony Jim Hansen implied that this 1s = ±0.13 C jitter was the full sum total of natural climate variability. The rise in air temperature by mid-1988, nearly 0.4 C, was then 3s beyond nature. Obviously, that made the trend 99% unnatural.

    That’s the whole ball of wax. Don’t believe it? Check out the quotes in the Appendix.

    Somehow the 1884-to-1937 trend was overlooked by both Jim Hansen and the Senators. Right before their eyes was a 0.84 C global air temperature increase. Let’s see, that’s more than 6s beyond nature. In Jim Hansen world, that makes the trend more than 99.99966% likely to be unnatural. Hmmm … what could possibly have caused that?

    What about the probable ~1 C, unnaturally 7.7 s, increase in global air temperature between the Little Ice Age, 1650, and 1900? [4] Humans couldn’t have done it. Climate gremlins, maybe?”
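    The percentages in that passage can be checked against the Gaussian assumption itself. A sketch using only the standard normal CDF (note that the 99.99966% figure is the industrial “Six Sigma” convention, which builds in a 1.5-sigma shift; the raw Gaussian numbers come out as below):

```python
from math import erf, sqrt

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def two_sided(z):
    """Probability of falling within +/- z sigma of the mean."""
    return 2.0 * phi(z) - 1.0

# Hansen's 0.4 C rise against a 0.13 C baseline sigma:
z = 0.4 / 0.13
print(f"z = {z:.2f}, within-range probability = {two_sided(z):.5f}")
print(f"3 sigma: {two_sided(3):.5f}")   # ~0.9973
print(f"6 sigma: {two_sided(6):.9f}")
```

    Under the stated assumptions a 3-sigma excursion corresponds to ~99.7%, not 99%; a two-sided 99% is only about 2.6 sigma.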

    Mr. Hansen was referring to the temperature/time ratio.
    The quoted 1884-to-1937 trend of 0.84 C over a 53-year period works out to 0.1 C per 6.3 years.
    The quoted probable trend of ~1 C from 1650 to 1900, a span of 250 years, works out to 0.1 C per 25 years.
    While the increase he was referring to, 0.4 C over a 6.5-year period, works out to 0.1 C per 1.6 years.

    I guess the Muppetteer is concerned His hand will sweat in his sock puppets
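    The rate comparison above is simple division; a sketch that reproduces those numbers (the episode labels and helper function are mine):

```python
def years_per_tenth_degree(delta_c, years):
    """Years needed to accumulate 0.1 C at the episode's average rate."""
    return 0.1 * years / delta_c

episodes = [
    ("1884 to 1937",          0.84,  53.0),
    ("1650 to 1900",          1.00, 250.0),
    ("rise Hansen cited",     0.40,   6.5),
]
for name, dt, yrs in episodes:
    print(f"{name}: 0.1 C per {years_per_tenth_degree(dt, yrs):.1f} years")
```

    This recovers the 6.3-, 25- and 1.6-year figures quoted in the comment.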

    • And you are dead right on every count. It was absurd even as he presented it. Sadly — and I say this as a professional who has founded two companies based on advanced statistical modeling and who has had to try to explain how it works to corporate executives of Fortune 500 companies:

      NOBODY UNDERSTANDS STATISTICS

      This is a statement that is a self-referential statistical truth. If you go down a phone book (to cite a nearly obsolete reference, but you understand what I mean) and use random numbers to select individuals, call them on the phone, and ask them what the Central Limit Theorem is, not one in 100 — maybe not even one in 1000 — will be able to answer at all, and it is more likely one in 10,000 that MIGHT actually be able to tell you that it says that the mean of any collection of N > ~30 independent, identically distributed samples drawn without bias from a distribution with a suitably bounded variance forms, in the limit, a gaussian/normal distribution around the true mean of the distribution. The percentage that would know what an error function is, is even smaller, even though a surprising number would be able to tell you that the standard deviation has something to do with probable accuracy.

      This is almost independent of educational level achieved, although in mathematics, statistics, computer science and many of the sciences people with bachelor’s degrees in principle should know it. This includes many, but not all, physicians and health care providers (it should be all, but sadly the world isn’t perfect).

      Of the people who know it, you can subtract away those that don’t know about joint and conditional probabilities, marginal probabilities, Bayes’ theorem in its various generalizations, hypothesis testing, stationary vs non-stationary distributions, various distributions, and modelling in general, and you are left with a thin, thin fraction of all of humanity, one that excludes most scientists, a fair number of mathematicians, and most computer scientists. You are left with the small fraction of humanity that is marginally competent to judge statistical results for things like bias, uncertainty, predictivity, and probable truth. Maybe one in 100,000? With luck? Maybe even one in a million.

      Sure, they are concentrated in the well-educated first world countries so there are more than 3000 people in the US or Europe, but how many are there in Africa (per capita) or Asia (per capita)?

      rgb
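      The Central Limit Theorem statement above is easy to demonstrate numerically. Below is a minimal, purely illustrative Python sketch (an editorial addition, not from the comment): means of 30 samples drawn from a decidedly non-Gaussian uniform distribution still cluster around the true mean, with a spread close to the CLT prediction of sqrt(variance / N).

```python
import random
import statistics

# Means of N = 30 i.i.d. Uniform(0, 1) samples. The true mean is 0.5 and
# the variance is 1/12, so the CLT predicts the sample means are roughly
# Gaussian around 0.5 with standard deviation sqrt((1/12) / 30) ~ 0.053.
random.seed(42)

def sample_means(n_samples=30, n_trials=2000):
    return [statistics.mean(random.random() for _ in range(n_samples))
            for _ in range(n_trials)]

means = sample_means()
print(statistics.mean(means))   # close to the true mean, 0.5
print(statistics.stdev(means))  # close to sqrt(1/360) ~ 0.053
```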

      • Well, as I have said on several occasions: statistics is always performed on a (any) given set of exactly known numbers, whose origin is entirely irrelevant to the process (as far as the validity of the results of whatever statistical algorithms you choose to apply is concerned).

        So statistics is literally numerical origami, and whether you end up with a crane or a swan after you do the work, is entirely a function of the algorithm and not the numbers in the set.

        The mischief starts when you try to attach meaning to what your folded paper turned out to be.

        However you fold, you can never get any information about any number that is not in the set. You can’t predict whether it will be higher, lower, or equal to the last number in the set, or for that matter, to any other number in the set.

        That is true if your number set is 1,2,3,4,5,6,7,8,9 ; or if it is the set of the very first actual numbers to be found on each page of that telephone book you mentioned.

        It is simply a matter of faith to believe that statistics can tell you anything about any numbers not in the statisticated data set. The numbers themselves contain ALL of the information that is present in the data set.

        Some persons would write a nice story or a piece of music on an empty clean sheet of paper.

        Still other persons can’t think of anything more creative to do than to fold it up and see what it looks like.

        Just my opinion of course

      • rgb

        As always I enjoyed your comments immensely. For some reason, I think I would have loved your classes 50 years ago.

        A few of your testimonies before Congress would sink the boat quicker than anything I can imagine.

      • I knew some of these things, once upon a time. I slogged through Student’s t, the error function, and even F-tests as a student in engineering. I had a very small window of need for some of the basic functions in my very first job nearly 40 years ago, but since then, nada. If you don’t use it, you lose it. I wouldn’t dream of trying to explain the central limit theorem to my sons without a handy reference and a couple of nights’ review beforehand, and “go ask your teacher” is the more likely reply. The real challenge for us “civilians” is trying to figure out when “the experts” are dealing from the bottom of the deck and when they’re dealing straight.

      • I couldn’t answer your questions either, but I can still spot bogus stats a mile away. Just because I don’t know the vocabulary doesn’t mean I can’t understand the data.

    • Bryan, Hansen’s 99% certainty statistic doesn’t rely on a temperature/time ratio. It’s all about the 0.4 C magnitude of the trend. Not about its rate.

  28. If you have ever had problems with mildew, you probably learned that it is pointless to attack the mildew directly; you must change the conditions that cause mildew to thrive. The mildew spores are omnipresent, just waiting for favorable climes.

    People like Hansen are like mildew. They thrive under immoral conditions. What conditions are those?

    Not long ago, in the U.S., paper money was a promise — a contract — by the government to give the bearer on demand a certain amount and fineness of silver or gold. The price of gold was not fixed in terms of the dollar, as is popularly assumed; the dollar was by contract redeemable in a fixed amount of silver or gold.

    In 1933, with an unconstitutional stroke of the pen, FDR signed an executive order demanding that all US citizens turn in to the government the largest circulating pool of gold coins ever, at risk of 10 years in prison and a $10,000 fine. He gave them IOU nothings in return. He unilaterally broke the government’s contract with the people. (Yet foreigners could continue to redeem dollars for gold.)

    In 1971, Nixon likewise gave the middle finger to foreign dollar holders. Like the first, this was a fraudulent default, as the US had plenty of gold with which to honor its contract.

    Since 1971 the US dollar has not been ‘money’, but rather, evidence of broken promises.

    In other words, the moral dimension of money has been annihilated. Money used to be a token of good faith, of honorable dealings. Money and morality were inseparable concepts.

    Now money, pseudo-money really, is a constant reminder that we are, via ‘legal tender laws’, forced at gunpoint to use broken promises redeemable in more of the same. The Treasury issues bonds redeemable in irredeemable Federal Reserve Notes that are themselves backed by bonds. The pretty name for this is ‘check-kiting.’ It is one gigantic lie.

    So there you have it. In a world where money is a lie, and where morality has been eliminated ‘for the greater good’, anything goes. The end justifies the means.

    So, back to Hansen… what do we expect? Money intercedes in every aspect of our lives. Money is now corrupt. Funding is now corrupt. Users of funding are now corrupt.

    Hansen is like mildew. He and his slimy type won’t go away regardless of how hard you scrub. As long as our monetary system is corrupt, you will see an endless cavalcade of liars and con men (read: confidence men) masquerading as helpers, bent on destruction.

    Anything goes.

    • Wonderful Max. Yes the Global Warming Con is just part of a larger Banking Con, which in turn is part of a larger agenda for concentrated power.
      I wish more people were aware of the deliberately created environment that keeps the majority focused on the various consequences of the environment but blind to its cause.

  29. BBC News just showed an item on the northeast US and Canada’s 2015 big freeze; it is incomprehensible that such events could be a constituent of the so-called global warming theory.

    • Correct – it’s nothing to do with AGW.
      Just a normal polar jet stream (PJS) meridional extension south over the E half of N America.

    • It’s the ‘magic’ of AGW that everything can prove it (hotter/colder, wetter/drier) and nothing can disprove it. Think religion and you will see how this works.

  30. “The standard deviation of 0.13 °C is a typical amount by which the global temperature fluctuates annually about its 30 year mean; the probability of a chance warming of three standard deviations is about 1%. Thus we can state with about 99% confidence that current temperatures represent a real warming trend rather than a chance fluctuation of the 30 year period.”

    Makes perfect sense provided the natural global temperature behaves like a random variable with a Gaussian distribution of fixed mean and variance, and each year is independent of the next.

    Almost none of which is likely true.
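    Under exactly that Gaussian assumption, the three-standard-deviation tail probability follows directly from the complementary error function. A minimal Python sketch (an illustrative editorial check, not part of Hansen’s calculation; for reference, the exact one-sided figure is about 0.13%):

```python
import math

# Probability that a Gaussian variable exceeds its mean by 3 sigma:
# one-sided tail P(Z > 3) = erfc(3 / sqrt(2)) / 2, doubled for two-sided.
p_one_sided = 0.5 * math.erfc(3 / math.sqrt(2))
p_two_sided = 2 * p_one_sided

print(p_one_sided)  # ~0.00135, i.e. about 0.13%
print(p_two_sided)  # ~0.0027, i.e. about 0.27%
```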

  31. I hate these #uc*ing morons who say that they are XXX% sure something is happening. You can never, ever, ever say that in the scientific method. All that you can say is that you can reject the (null) hypothesis that something is not happening at the XXX% significance level. Most people say “Well, isn’t that the same thing??” and the answer to that question is “Not on your life.” Francis Bacon would vomit on Hansen as not adhering to the scientific method. Get some bread to feed Hansen. He is a quack.

    CAGW is an eschatological cargo cult and will go down in history as even more absurd than phrenology. At least phrenologists had something vaguely correct: the shape, etc., of the brain does predict behaviour to a certain extent. It is just that the shape of the skull says nothing about the brain inside it. The phrenologists even had a journal once upon a time. https://archive.org/details/americanphrenol04unkngoog

  32. In a fair world: tar, feathers, and Hansen riding a rail out of town.

    In the real world, Hansen has a lucrative career; incompetence and dishonesty pay.

  33. I doubt J. Hansen’s scientific reputation is what he is concerned about. His real motivations are entirely political. As for his science, just read a few of his published papers over the years. Pathetic garbage.

    For RACook, Atmospheric O2 at 20.9 percent is 209000 ppm.

    [fixed. .mod]

  34. Request you check the following sentence in the first paragraph above: The 0.035 and 0.025 are not clear there, nor how they are used with respect to the 0.13 “jitter”

    “But the essence of 99% is in the next Figure, the GISS 1987+ global air temperature record, complete with ludicrously small error bars (1sigma = ±0.035 C, or ±0.025 C).”

    • In the original, the “±0.025 C” was red, to indicate it referred to the red error bar on the face of the Figure. But the color didn’t translate into HTML. Live and learn. :-) Thanks for pointing it out so I could clarify.

      I just noticed, too, that I owe the moderator a vote of thanks, for writing in “sigma” where “s” once reigned. Thanks, mod. :-)

      [No, still not clear. But thank you.
      So the +/- 0.025 is for the red error bar. But then what is the +/- 0.035 linked to? .mod]

      • Mod, the (+/-)0.035 C is linked to the black error bar. It’s beneath the red error bar on the Figure. The black one is a bit bigger and sticks out, top and bottom.

        The unsmoothed black temp line -> black error bar = [black text, (+/-)0.035 C]. The smoothed red temp line -> red error bar = [red text, (+/-)0.025 C].

        The original idea was to indicate meaning using color-code, so as to avoid having to use text. But, that all back-fired, as we all now see. :-)

  35. Pat Frank leads off a paragraph with,

    “Jim Hansen’s physical causality? . . . .”

    I think Pat Frank’s guest post is an important retrospective. My take on it is that it shows Jim Hansen’s pre-science masquerading as science under the respectability of NASA’s once-excellent scientific reputation. Thanks, Pat Frank.

    The physical causality argument used by Jim Hansen (Head of NASA GISS From 1981 to 2013) was based on an illogical argument of ‘begging the question’ (petitio principii).

    His ‘begging the question’ physical causality argument is that since manmade CO2 ‘a priori’ must destroy the Earth Atmospheric System (EAS)**, then science must show the EAS is being destroyed and must show the EAS destruction will accelerate in the future.

    ‘Begging the question’ illogic forms the supporting basis of many mythologies.

    ** regardless of whether or not all the rest of the dynamics remain equal in the EAS

    John

  36. “The standard deviation of 0.13 °C is a typical amount by which the global temperature fluctuates annually about its 30 year mean”

    I guess by “typical” Jim means the lowest deviation ever seen.

  37. Hansen based his stuff on a climate model. While many models have aged better than I have, not many make the cover of Sports Illustrated swimsuit issue year after year. SI just keeps finding new ones…that won’t age any better. 8-)

  38. 99% eh,

    World Meteorological Organization – “The main problem arises from the fact that the shape of climate change signal is unknown”

  39. WMO-

    “The main problem of the application of absolute methods is that the separation between the climate change signal and the inhomogeneity is essentially impossible”

  40. WMO-

    “Data homogeneity is strongly related to the climate change problem, which is at the centre of scientific and policy debates. It has been recognized and widely accepted that long and reliable observation series are required to address climate change issues and impact studies. Unfortunately, these high quality meteorological data series seldom exist”

  41. oh blimey what a mess.

    WMO

    “It was already mentioned that the long-term climatological time series are often plagued with discontinuities caused by station relocation, installation of new instruments, etc. Several types of disturbances can distort or even hide the climatic signal. Therefore, it is quite natural that the data are tested in order to locate possible discontinuities. However, usually the detection of the homogeneity breaks is not enough. The breaks appear to be so common that rejection of inhomogeneous series simply leave too few and too short series for further analysis”

  42. There is an ole saying that goes ….. “If you won’t listen, … then you will have to feel”.

    So my question is, … just how many of those fossil fuel protesting college students and other individuals under the age of 30 ….. are for the 1st time in their lives, during this February 2015, actually feeling what 0 F to -20 F surface temperatures actually feel like or the “wind chill” temperatures which are much lower?

    Whatta you wanna bet that 90+% of them are truly appreciating the “warmth” that fossil fuels are currently providing them.

    My thermometer was reading -12 F here in central West Virginia, USA, at 6:00 AM this morning of 02-20-15, a temperature I last experienced here some 20 years ago.

  43. Maybe it’s me, but the math by some posters here is kinda funny. CO2 went from ~360 ppm to 400 ppm, which gets quoted as an ~11% increase. Well, ya, kinda. That is the kind of sketchy framing that makes it seem like it went up a bunch. In absolute terms it went from 0.036% to 0.040% of the atmosphere, an increase of 0.004 percentage points, not 25%. It is the same with temperature. What do you call it when someone uses Celsius instead of Kelvin in a heat transfer equation? A freshman mistake. Temperature, for heat transfer calculations, didn’t go from 20 C to 21.3 C, or whatever the jump was; it went from 293 K to 294.3 K, a ~0.4% change, not a 6.5% change. You could refer to it that way, but it would be inaccurate for scientific discussions.
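    The percent-versus-percentage-point distinction above is easy to verify. A minimal Python sketch (an illustrative editorial addition, using the figures from the comment):

```python
# Relative (percent) change vs absolute (percentage-point) change, and why
# temperature ratios are only meaningful on an absolute (Kelvin) scale.

def pct_change(old, new):
    """Relative change, in percent."""
    return 100.0 * (new - old) / old

# CO2: ~360 ppm -> 400 ppm
print(pct_change(360, 400))     # ~11.1% relative increase
print((400 - 360) / 1e6 * 100)  # 0.004 percentage points of the atmosphere

# Temperature: 20 C -> 21.3 C, i.e. roughly 293 K -> 294.3 K
print(pct_change(20, 21.3))     # 6.5%, but Celsius is not an absolute scale
print(pct_change(293, 294.3))   # ~0.44%, the physically meaningful ratio
```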

Comments are closed.