Hansen’s 1988 Predictions Redux

Guest Post by Willis Eschenbach

Over in the Tweeterverse, someone sent me the link to the revered climate scientist James Hansen’s 1988 Senate testimony and told me “Here’s what we were told 30 years ago by NASA scientist James Hansen. It has proven accurate.”

I thought … huh? Can that be right?

Here is a photo of His Most Righteousness, Dr. James “Death Train” Hansen, getting arrested for civil disobedience in support of climate alarmism …

I have to confess, I find myself guilty of schadenfreude in noting that he’s being arrested by … Officer Green …

In any case, let me take as my text for this sermon the aforementioned 1988 Epistle of St. James To The Senators, available here. I show the relevant part below, his temperature forecast.

ORIGINAL CAPTION: Fig. 3. Annual mean global surface air temperature computed for trace gas scenarios A, B, and C described in reference 1. [Scenario A assumes continued growth rates of trace gas emissions typical of the past 20 years, i.e., about 1.5% yr^-1 emission growth; scenario B has emission rates approximately fixed at current rates; scenario C drastically reduces trace gas emissions between 1990 and 2000.] The shaded range is an estimate of global temperature during the peak of the current and previous interglacial periods, about 6,000 and 120,000 years before present, respectively. The zero point for observations is the 1951-1980 mean (reference 6); the zero point for the model is the control run mean.

I was interested in “Scenario A”, which Hansen defined as what would happen assuming “continued growth rates of trace gas emissions typical of the past 20 years, i.e., about 1.5% yr^-1”.

To see how well Scenario A fits the period after 1987, which is when Hansen’s observational data ends, I took a look at the rate of growth of CO2 emissions since 1987. Figure 2 shows that graph.

Figure 2. Annual increase in CO2 emissions, percent.

This shows that Hansen’s estimate of future CO2 emissions growth was quite close, although the actual annual increase in CO2 emissions was roughly 25% LARGER than he assumed. As a result, had his model been run with the actual emissions, its Scenario A curve would show slightly more warming than the one in Figure 1 above.
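If you want to reproduce that check, the arithmetic is simple. Here is a minimal sketch, assuming a hypothetical two-column CSV of annual emissions (the file name, column names, and units are placeholders, not the actual data behind Figure 2):

```python
# Minimal sketch: compound annual growth rate of CO2 emissions since 1987,
# compared with Hansen's Scenario A assumption of ~1.5% per year.
# Assumes a hypothetical file "co2_emissions.csv" with columns: year, emissions_gtc
import csv

years, emissions = [], []
with open("co2_emissions.csv") as f:
    for row in csv.DictReader(f):
        years.append(int(row["year"]))
        emissions.append(float(row["emissions_gtc"]))

# Keep only 1987 onward, where Hansen's observational record ends
pairs = [(y, e) for y, e in zip(years, emissions) if y >= 1987]
(y0, e0), (y1, e1) = pairs[0], pairs[-1]

# Compound annual growth rate over the whole period
cagr = (e1 / e0) ** (1.0 / (y1 - y0)) - 1.0
print(f"Observed emissions growth {y0}-{y1}: {cagr * 100:.2f}% per year")
print("Hansen Scenario A assumption:       1.50% per year")
```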

Next, I digitized Hansen’s graph to compare it to reality. To start with, here is what is listed as “Observations” in Hansen’s graph. I’ve compared Hansen’s observations to the Goddard Institute for Space Studies Land-Ocean Temperature Index (GISS LOTI) and the HadCRUT global surface temperature datasets.

Figure 3. The line marked “Observations” in Hansen’s graph shown as Figure 1 above, along with modern temperature estimates. All data is expressed as anomalies about the 1951-1980 mean temperature.
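Putting Hansen’s digitized line and the modern datasets on the same 1951-1980 zero point is the one step that matters for this comparison. Here is a minimal sketch of that re-baselining, with made-up numbers standing in for the real series:

```python
# Minimal sketch: re-express a temperature series as anomalies about the
# 1951-1980 mean, so digitized and modern datasets share a common zero point.

def to_anomalies(series, base_start=1951, base_end=1980):
    """series: dict of {year: value}. Returns the series re-zeroed on the
    mean of the base period."""
    base = [v for y, v in series.items() if base_start <= y <= base_end]
    baseline = sum(base) / len(base)
    return {y: round(v - baseline, 3) for y, v in series.items()}

# Invented values for illustration only -- not GISS or HadCRUT data
example = {1951: 13.90, 1965: 13.95, 1980: 14.10, 2000: 14.45, 2017: 14.85}
print(to_anomalies(example))
```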

OK, so now we have established that:

• Hansen’s “Scenario A” estimate of future growth in CO2 emissions was close, albeit a bit low, and

• Hansen’s historical temperature observations agree reasonably well with modern estimates.

Given that he was pretty accurate in all of that, albeit a bit low on CO2 emissions growth … how did his Scenario A prediction work out?

Well … not so well …

Figure 4. The line marked “Observations” in Hansen’s graph shown as Figure 1 above, along with his Scenario A, and modern temperature estimates. All observational data is expressed as anomalies about the 1951-1980 mean temperature.

So I mentioned this rather substantial miss (predicted warming twice the actual warming) to the man on the Twitter-Totter, the one who’d said that Hansen’s prediction had been “proven accurate”.

His reply?

He said that Dr. Hansen’s prediction was indeed proven accurate—he’d merely used the wrong value for the climate sensitivity, viz: “The only discrepancy in Hansen’s work from 1988 was his estimate of climate sensitivity. Using best current estimates, it plots out perfectly.”

I loved the part about “best current estimates” of climate sensitivity … here are the current estimates, from my post on The Picasso Problem:

Figure 5. Changes over time in the estimate of the climate sensitivity parameter “lambda”. “∆T2x(°C)” is the expected temperature change in degrees Celsius resulting from a doubling of atmospheric CO2, which is assumed to increase the forcing by 3.7 watts per square metre. FAR, SAR, TAR, AR4, and AR5 are the first through fifth UN IPCC Assessment Reports, each giving an assessment of the state of climate science as of its publication date. Red dots show recent individual estimates of the climate sensitivity.
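For readers who want the caption’s arithmetic spelled out: the warming for doubled CO2 is simply the sensitivity parameter times the assumed 3.7 W/m² forcing. A minimal sketch, with illustrative lambda values (not taken from any particular study):

```python
# Minimal sketch: convert the sensitivity parameter lambda (degC per W/m^2)
# into the doubled-CO2 warming dT2x, using the 3.7 W/m^2 forcing from the caption.
F_2X = 3.7  # W/m^2 per doubling of CO2, as assumed in Figure 5

for lam in (0.4, 0.8, 1.1):  # illustrative lambda values only
    dT2x = lam * F_2X
    print(f"lambda = {lam:.1f} degC/(W/m^2)  ->  dT2x = {dT2x:.1f} degC")
```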

While giving the Tweeterman zero points for accuracy, I did have to applaud him for sheer effrontery and imaginuity. It’s a perfect example of why it is so hard to convince climate alarmists of anything—because to them, everything is a confirmation of their ideas. Whether it is too hot, too cold, too much snow, too little snow, warm winters, brutal winters, or disproven predictions—to the alarmists all of these are clear and obvious signs of the impending Thermageddon, as foretold in the Revelations of St. James of Hansen.

My best to you all, the beat goes on, keep fighting the good fight.

w.

207 Comments
Tom Halla
January 6, 2019 2:37 pm

So Hansen predicting only twice as much warming for a given amount of CO2 is “accurate”?

MarkW
Reply to  Tom Halla
January 6, 2019 2:44 pm

Let’s see if I have the technique down.

It doesn’t matter how much warming Hansen predicted.
He predicted that it would warm.
It did.
Therefore if we don’t stop producing CO2 we are all going to die.

bit chilly
Reply to  Willis Eschenbach
January 6, 2019 4:33 pm

Willis, the old adage applies, the one that goes something like “best not to argue with idiots as they will drag you down to their level and beat you with experience”.

I am amazed you have the patience for Twitter; it is an aptly named platform.

2hotel9
Reply to  bit chilly
January 6, 2019 4:39 pm

Really? It is misspelled constantly! The actual name is TWATTER, little Jackie Dorsey has been running from the real name since 2006 and has failed to escape it yet. That is why he looks so constipated all the time.

Joe Wagner
Reply to  MarkW
January 6, 2019 3:24 pm

I’m sorry- you’re wrong… its:

WE ARE ALL GOING TO DIEEEEEEE!

2hotel9
Reply to  Joe Wagner
January 6, 2019 3:38 pm

Sorry, you got it wrong, have to have “iiiiiiii” in front of the “eeeeeeee”s. Once a grammar nazi always a grammar nazi!

Kurt
Reply to  2hotel9
January 6, 2019 6:00 pm

“Nazi” should be capitalized.

Editor
Reply to  MarkW
January 6, 2019 4:13 pm

Sorry to have to disagree, but no I don’t think you have “Got it in one”.
The Tweeterverser said “The only discrepancy in Hansen’s work from 1988 was his estimate of climate sensitivity. Using best current estimates, it plots out perfectly.”.
So temperature doesn’t even have to go in the same direction as the original prediction – it didn’t even have to warm.
It’s difficult to put the Tweeterverser’s idea into simple words in a way that doesn’t make it sound as potty as it really is, but maybe something like this:

The forecast was made 30 years ago, so only an idiot would test it without first bringing it up to date using the known conditions over those 30 years. When you do this, the forecast is proven to be completely accurate.

We need that thinking program for schools, and we need it yesterday.

MarkW
Reply to  Mike Jonas
January 6, 2019 5:07 pm

Basically they are saying that if you take what we know now, and apply it to the predictions from back then, the predictions from back then would have looked like what we know now. Therefore Hansen’s predictions back then were spot on.

Circular math at its best.

Tim F
Reply to  Mike Jonas
January 6, 2019 5:08 pm

Everyone is 100% correct when they can change their prediction based on today’s knowledge. His hypothesis was falsified and he needs to start over. You are definitely right; they need to take a course in logic and the scientific method.

Paul
Reply to  Tim F
January 6, 2019 5:59 pm

As Yogi Berra said, “predictions are hard, especially when they are about the future”.

Jim Whelan
Reply to  Mike Jonas
January 7, 2019 8:51 am

There’s a much shorter word than “Tweeterverser”: “twit”.

Kone Wone
Reply to  MarkW
January 6, 2019 4:45 pm

Well, we are all going to die

2hotel9
Reply to  Kone Wone
January 6, 2019 4:57 pm

My standard operating principle is “you first”, then I will write the account of you going first and I am pretty sure “globall warmining” will not be the listed cause of death.

Eyal
Reply to  MarkW
January 7, 2019 12:29 am

“we are all going to die” anyways.

Neo
Reply to  MarkW
January 7, 2019 6:22 am

We’ve done so much tipping that it must have been a great meal.

AK in USA
Reply to  MarkW
January 7, 2019 10:25 am

When will science finally admit lambda = 0? Lambda = 0 makes perfect sense, and is consistent with every scrap of information we have, and exactly explains the warming to date. The only downside of admitting lambda = 0 is there suddenly is no crisis.

R Shearer
Reply to  Tom Halla
January 6, 2019 4:05 pm

Close enough for government (climate science) work.

commieBob
Reply to  Tom Halla
January 6, 2019 5:23 pm

Oh yes.

Of all the climate models, one is fairly close to the observed trend. Therefore, we have to believe that the most extreme models are credible as well. link

Dave Fair
Reply to  commieBob
January 6, 2019 5:48 pm

That one is a Russian hack of real climate science!

Louis Hooffstetter
Reply to  Tom Halla
January 6, 2019 8:41 pm

At this point, Hansen’s Scenario A prediction is only off by 2 standard deviations (and drifting further and further off every day).
So he’s only off by 95%+.

Louis Hooffstetter
Reply to  Louis Hooffstetter
January 6, 2019 8:45 pm

Technically, that’s not exactly correct.
What I should have said is that we can be 95%+ sure that Hansen’s Scenario A prediction is a boat load of crap.
Fixed.

RockyRoad
Reply to  Louis Hooffstetter
January 7, 2019 7:00 pm

Since when is “boat load” a scientific term? /s

Mark N
Reply to  Tom Halla
January 7, 2019 8:47 am

“The only discrepancy in Hansen’s work from 1988 was his estimate of climate sensitivity. Using best current estimates, it plots out perfectly.”

All that this really demonstrates is that the alarmists do not understand the skeptical point of view at all. They live in a bubble created by failing public schools and a mainstream media, both only capable of presenting one point of view. I’m so cynical about the future of our culture…

Peter Charles
January 6, 2019 2:41 pm

Indeed. “There are none so blind as those who will not see” is an eternal truth.

Reply to  Peter Charles
January 6, 2019 7:07 pm

People who are not introspective should take a long, hard look at themselves.

January 6, 2019 2:44 pm

“The only discrepancy in Hansen’s work from 1988 was his estimate of climate sensitivity.”

Oh, is that all? Well, we all know climate sensitivity isn’t that important.

LdB
Reply to  Roy Spencer
January 6, 2019 5:04 pm

Fudge factors are always good enough apparently.

Clyde Spencer
Reply to  Roy Spencer
January 6, 2019 5:30 pm

Roy,
And scenarios B and C included the effects of a postulated volcanic eruption in 2014 (which didn’t happen), which was the only reason those two scenarios were reduced enough to come close to the historical record. That is, even the scenario with extreme mitigation of CO2 emissions would have come out much hotter than what actually happened, had it not been for the cooling from the eruption that didn’t roar.

January 6, 2019 2:44 pm

“The only discrepancy in Hansen’s work from 1988 was his estimate of climate sensitivity. Using best current estimates, it plots out perfectly.”

That response is just beautiful!

MarkW
Reply to  Jimmy Haigh
January 6, 2019 5:08 pm

He was wrong back then; however, if we adjust his predictions to match what actually happened, then his predictions are correct now. And that is all that matters.

E
Reply to  MarkW
January 6, 2019 5:59 pm

“Predictions are hard, especially about the future.”, Yogi Berra, American philosopher

Editor
January 6, 2019 2:52 pm

Thank you, Willis. That was fun and well done.

Regards,
Bob

DaveR
January 6, 2019 2:55 pm

The real world data seems to better match somewhere between Hansen’s scenarios B and C, even though neither matches the trajectory along the way particularly well. If global temperatures continue to fall back to the trend after the El Nino peak of 2016, then Hansen’s scenario C will be closest.

But clearly there was no “drastic reduction of trace gases between 1990 and 2000” as required under Hansen’s scenario C, so something is dramatically wrong in his modelling.

iflyjetzzz
Reply to  DaveR
January 6, 2019 3:16 pm

Are there any climate alarmist models that have survived the test of time?

It’s been a while since I casually looked for a single model that was ‘close’ for longer than 5 years. I couldn’t find any.

Craig
January 6, 2019 3:03 pm

Kind of like this gem from last summer: “30 years later, deniers are still lying about Hansen’s amazing global warming prediction.”

The article details how Scenario B is spot on if you adjust it down 27% “to reflect the actual radiative forcing from 1984 to 2017.” Amazing.

https://www.theguardian.com/environment/climate-consensus-97-per-cent/2018/jun/25/30-years-later-deniers-are-still-lying-about-hansens-amazing-global-warming-prediction
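For what it’s worth, the kind of after-the-fact “adjustment” described above amounts to multiplying the predicted warming curve by a constant once the observations are already in hand. A minimal sketch of the arithmetic, with invented anomaly values rather than Hansen’s or the Guardian’s numbers:

```python
# Minimal sketch: "adjusting a scenario down 27%" is just scaling the predicted
# anomalies by 0.73 after the fact. The values below are invented for illustration.
scenario = {1990: 0.35, 2000: 0.60, 2010: 0.90, 2017: 1.10}  # hypothetical degC

adjusted = {year: round(0.73 * anomaly, 2) for year, anomaly in scenario.items()}
print(adjusted)  # a hindcast tuned with information that was unavailable in 1988
```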

iflyjetzzz
Reply to  Craig
January 6, 2019 3:18 pm

Kind of like stating that if my aunt had balls, she’d be my uncle.

Greg Cavanagh
Reply to  iflyjetzzz
January 6, 2019 4:08 pm

Wouldn’t that give you two uncles?
Something else has to change as well, not just your aunty.

Kurt
Reply to  Greg Cavanagh
January 6, 2019 6:08 pm

No – we’re talking about a parent’s sibling here. I think you’re assuming that the “aunt” is an “aunt” by marriage to an “uncle” by blood. If my mother is an only child and my father only has one female sibling, the statement “if my aunt had balls, she’d be my uncle” is perfectly logical and requires no “second uncle.”

Greg Cavanagh
Reply to  Kurt
January 6, 2019 9:07 pm

Yea, it took a while, but I finally got to the same conclusion. I somehow mixed up uncle and father. I’m a bit dizzy today. Monday…

JohnWho
Reply to  Craig
January 6, 2019 5:06 pm

So, it is “spot on” as long as you change it.

Huh?

old construction worker
Reply to  Craig
January 7, 2019 4:42 am

I see. The Texas sharpshooter method.

michael hart
January 6, 2019 3:05 pm

I still think he got the choice of hat about right though. I lost a similar one in Seattle back in 1996, but I expect they’re not related.

Steven Mosher
January 6, 2019 3:09 pm

standard approach is to re-run the model with the new sensitivity value.

Neville
Reply to  Steven Mosher
January 6, 2019 3:22 pm

You mean like the Russian model’s sensitivity, or perhaps Lindzen, or Curry, or Lewis, or Dr Spencer, or Dr Michaels, etc, etc?

Tom Abbott
Reply to  Neville
January 6, 2019 6:40 pm

I think Willis ought to do a comparison between Hansen’s predictions and the Russian climate model.

And with UAH.

What kind of scientific results can one get from using bastardized data like GISS and HadCrut? I would say bastardized results.

The Cob
Reply to  Tom Abbott
January 7, 2019 3:00 am

That’s it Tom. At this point in time with data being corrupted like it is, methinks UAH should be the main resource point.

Tom Abbott
Reply to  The Cob
January 8, 2019 11:12 am

I would like to see Willis do a comparison of all these charts with the Tulsa, Oklahoma surface temperature chart.

It won’t look like a Hockey Stick chart with the warming of the 1930’s removed, instead it will show 1936 as being the hottest year on record in Tulsa, warmer than subsequent years. Tulsa, Oklahoma has been in a temperature downtrend since 1936.

Let me tell you about the summer of 1936 in Tulsa. In the summer of 1936, Tulsa had about 60 days of over 100F, and 20 of those days were over 110F, and four of those days were 120F. And the surrounding states were just as hot. If we had weather like that today, the Alarmists would go crazy with fear. But we don’t have weather like that today, instead we have some of the most benign weather in memory. The Alarmist are not describing this reality.

The decades of the satellite era (1979 to present) don’t even come close to the extreme weather and temperatures of Tulsa, Oklahoma in the 1930’s. And the rest of the United States shows the same high temperatures versus today’s temperatures, if you go by city and state temperature charts. NASA/NOAA have managed to bastardize even the US surface temperature chart now, but the individual city and state temperature charts show a different story than what NASA and NOAA are telling us, and the individual charts are the real reality.

Compare those Global Climate Models and Hockey Sticks to some real data, local data that hasn’t been tampered with.

New Orleans has a nice, long surface temperature record. Let’s see if it looks anything like a Hockey Stick.

Reply to  Steven Mosher
January 6, 2019 3:26 pm

The problem wouldn’t exist if they had used the standard approach from the start. But they wanted to prove CO2 was the problem and they were successful, based on the bad policies implemented and the trillions of dollars wasted. They wanted and got a predetermined result for a political agenda; in that, Hansen and the gang were successful. Accurate climate forecast? Not so much.

Santa
Reply to  Tim Ball
January 6, 2019 11:27 pm

They have taken and politicized an idea of NATURE and use that, a Nature dictatorship, to dominate Man and capitalism. Their real aim is a radical change of society.

Santa
Reply to  Santa
January 6, 2019 11:37 pm

In policy-based “science” a study only has to fit the political story, here the UNFCCC.

Tim F
Reply to  Steven Mosher
January 6, 2019 3:31 pm

The standard approach when your hypothesis is falsified is to go back and start over.

LdB
Reply to  Steven Mosher
January 6, 2019 4:22 pm

Put enough guesses out there one of them has to be right eventually.

MarkW
Reply to  LdB
January 6, 2019 5:13 pm

It’s not so much that the predictions were wrong, it’s that there were bad assumptions built into the model.
Replace the bad assumptions and the previous model does better.
Ergo, you never have to admit that you were wrong, you just disavow all previous work.
I wonder if the trillions of dollars wasted because of the now-disavowed predictions can be clawed back as easily?

Writing Observer
Reply to  Steven Mosher
January 6, 2019 4:36 pm

Quite so. Re-run the model with the new (lower) sensitivity.

WHOOPS! Now the model agrees (within reason) with more recent observations – but the modeled temperatures are way out of line with earlier observations; the model is far COLDER.

No problemo! Just “adjust” (the NewSpeak word for “fake”) the earlier observations so that they are colder than real history said they were. The year 1984 is a good place to start…

Editor
Reply to  Steven Mosher
January 6, 2019 4:45 pm

Steven Mosher: The statement “standard approach is to re run the model with the new sensitivity value” is seriously deranged. I suppose it comes from the same school of post-modern science as the need to protect a ruling paradigm: “The failure of a result to conform to the paradigm is seen not as refuting the paradigm, but as the mistake of the researcher.”.
Look, this thing is really simple. To test a prediction, you compare results with predicted results. Period. If you update using new values, you are making a new prediction, and it does absolutely nothing to the old prediction. The old prediction is still “out there” for testing.
Regrettably, many in science don’t seem to understand simple basics. Science, or at least some of it, would appear to be in a very sorry state.

StephenP
Reply to  Mike Jonas
January 7, 2019 12:45 am

Richard Feynman summed it up brilliantly in his talk to some students that you can see on YouTube, saying that if the observations do not match the hypothesis then the hypothesis is WRONG.
No matter how clever you are, how clever the hypothesis is, it is WRONG.

Louis Hooffstetter
Reply to  Mike Jonas
January 7, 2019 8:18 am

“The failure of nature to conform to the General Circulation Models is seen not as refuting the models, but as errors of reality and mistakes of the researchers.”
Generic IPCC Climate Scientist

MarkW
Reply to  Steven Mosher
January 6, 2019 5:11 pm

If my latest prediction is correct, that proves that all my previous, bad predictions, are also correct.
That may be the standard approach in climate science, however in actual science, scientists own up to their mistakes and move forward with new data and new knowledge.

Derg
Reply to  Steven Mosher
January 6, 2019 5:19 pm

And change another value and re-run the model and then re-run again.

How many models do we have? Why do we have so many models?

Settled science indeed

Tarquin Wombat-Carruthers
Reply to  Derg
January 6, 2019 11:45 pm

My model is correct, and so are all the others! It’s that bloody reality that’s the problem! See my most-recent grant-funded paper!/(sarc)

Reed Coray
Reply to  Steven Mosher
January 6, 2019 5:29 pm

Let’s see. I have a model that predicts the yearly-average temperature for the next 30 years. My “model” contains beaucoup degrees of freedom (independent variables). I set a numerical value for each independent variable and plot the model output temperature for the next 30 years. Over that time, measurements are made of the yearly-average temperature. The measurements don’t come close to agreeing with my model predictions; but using the “standard approach” I adjust a few of the independent variable values, and voila, my model matches the measurements quite well. From this I conclude that I had a good model. Give me a break. My model wasn’t just the selection of a set of independent variables; my model also contained a numerical value for each independent variable. To claim that I had a good model all along is a joke. With enough degrees of freedom, I can make a model that will fit any finite set of measurements. By this “standard approach” line of reasoning, there is no such thing as a “bad model,” only models with too few degrees of freedom.
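Reed’s point about degrees of freedom is easy to demonstrate. A minimal sketch, with invented “measurements”: give a polynomial as many coefficients as there are data points and it will fit them exactly, which says nothing about predictive skill.

```python
# Minimal sketch of the degrees-of-freedom point: with enough free parameters,
# a model can be made to fit any finite set of measurements exactly.
import numpy as np

temps = np.array([0.31, 0.27, 0.39, 0.33, 0.12,   # ten arbitrary "measurements"
                  0.17, 0.25, 0.38, 0.30, 0.40])  # (invented numbers)
x = np.linspace(-1.0, 1.0, temps.size)            # scaled "time" axis

# A degree-9 polynomial has 10 coefficients -- one per data point.
coeffs = np.polyfit(x, temps, deg=temps.size - 1)
fitted = np.polyval(coeffs, x)

print(np.allclose(fitted, temps))  # True: a perfect in-sample "fit", no skill implied
```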

Reed Coray
Reply to  Reed Coray
January 6, 2019 6:21 pm

As an example, imagine the following conversation between a broker and an investor he advises.

Investor: “What happened? In addition to the 10 grand I paid you because you convinced me you had a can’t fail stock-price-prediction model, I lost my shirt making investments in line with your model.”

Broker: “My model is actually very good. I just used the wrong values for a few of the model parameters. I fixed that problem by inserting new values into my model. My model with the new values with 100% certainty tells you how you should have invested to make a fortune. So obviously my model was and is good. It just didn’t accurately predict the future, but it will now. Given that, when can I expect another $10,000 check from you for my advice?”

Investor: “Just a sec, I’ll have to check my model to predict what your worth is to me. It doesn’t look good. My model says you can expect a check from me sometime between when hell freezes over and when pigs fly. See, I too have a pretty good model.”

Dave Fair
Reply to  Steven Mosher
January 6, 2019 5:56 pm

Mr. Mosher, apparently you didn’t get the consensus memo; climate sensitivity is an emergent property of the models, not an input to the models. Just ask Gavin or any of the rest of the gang.

This is all so much BS for inquiring (susceptible) minds.

Matthew R Marler
Reply to  Steven Mosher
January 6, 2019 6:23 pm

Steven Mosher: standard approach is to re-run the model with the new sensitivity value.

Which new sensitivity value?

Of possible sensitivity values, should we regard the one that best corrects the model-data fit as a new estimate?

Dave Fair
Reply to  Matthew R Marler
January 6, 2019 6:36 pm

Again, the model calculates the climate sensitivity. One must fundamentally change the model to get a different ECS.

Matthew R Marler
Reply to  Dave Fair
January 7, 2019 8:59 am

Dave Fair: Again, the model calculates the climate sensitivity. One must fundamentally change the model to get a different ECS.

Steven Mosher: standard approach is to re-run the model with the new sensitivity value.

What Steven Mosher recommended (or says is a standard approach) is something that can not be done. When a bunch of parameter estimates and other model details are changed, then there isn’t an entity “the model” to which you could amend one sensitivity value.

Gerald Machnee
Reply to  Steven Mosher
January 6, 2019 7:00 pm

**standard approach is to re-run the model with the new sensitivity value.**
I thought it was “standard approach is to re-FUDGE the model with the new sensitivity value.”

Dave Fair
Reply to  Gerald Machnee
January 6, 2019 8:19 pm

Again, sensitivity values come out of models; they are not put in. Modelers dick around with math and parameters until they get something that “seems right” to them. At the time, Hansen liked his then-model because it gave him a sensitivity value greater than 4.

He was hoping for 5, but couldn’t dick around too much because of those darned historical values. It took NOAA’s Karl to get around a lot of history.

People got tired of model-seances for sensitivity values and went about using empirical methods to come in with values somewhat less. See Lewis, especially.

Louis Hooffstetter
Reply to  Steven Mosher
January 6, 2019 8:54 pm

Yay, Moshpit comes to the rescue with another obtuse, obfuscating drive by comment!

Moshpit means re-run the model with whichever sensitivity value is necessary to make Hansen’s boat load of crap prediction match reality.

That’s how climate “scientists” roll.

Greg Cavanagh
Reply to  Louis Hooffstetter
January 6, 2019 9:10 pm

I still get flashbacks of the Ozzy Osborne mosh pit experience. I wouldn’t have missed it for the world, but man, it comes at a cost.

Dave Fair
Reply to  Louis Hooffstetter
January 6, 2019 11:27 pm

The all-knowing Mr. Mosher forgot that climate sensitivity is an emergent phenomenon of the models, not an input. But the modelers do dick around with the math and parameters to get the sensitivity that “sounds about right.” Hansen was shooting for 5, but just couldn’t tweak fast enough for Big Al’s climate circus.

hunter
Reply to  Steven Mosher
January 7, 2019 12:02 am

But his prediction was for terrible things to happen due to *less* CO2 than was actually emitted.
He was wrong.

Reply to  Steven Mosher
January 7, 2019 2:41 am

Willis, did you try to recalculate the TCR from the observations? In your figure 5 you showed essentially the ECS bandwidth; however the TCR (which is smaller than the ECS) is more appropriate in this case IMO. I tried it elsewhere and found about 1.3 °C/doubling of CO2, which is also the result of Lewis/Curry 2018. Can you confirm this value?
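For anyone wanting to try the same back-of-envelope check, the usual observational (energy-budget) estimate is TCR ≈ F2x × ΔT / ΔF. A minimal sketch with round, illustrative numbers (not the Lewis & Curry 2018 inputs):

```python
# Minimal sketch of an energy-budget TCR estimate: TCR = F_2x * dT / dF,
# with dT and dF the changes between a base period and a recent period.
# The inputs below are round, illustrative numbers only.
F_2X = 3.7   # W/m^2 forcing per CO2 doubling
dT   = 0.80  # degC warming between the two periods (illustrative)
dF   = 2.30  # W/m^2 forcing change between the two periods (illustrative)

tcr = F_2X * dT / dF
print(f"TCR ~ {tcr:.2f} degC per doubling of CO2")  # ~1.3 degC with these inputs
```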

Mark Hansford
Reply to  Steven Mosher
January 7, 2019 3:32 am

if you feed in the values founded on actual data it is no longer a forecast or prediction, it’s a hindcast. Hardly the same thing, is it?

John Endicott
Reply to  Steven Mosher
January 7, 2019 5:13 am

standard approach is to re-run the model with the new sensitivity value.

That may be the standard approach for what passes as science where you come from Mosh, but in real science you don’t get do-overs on your predictions. Real science is predictive, and your predictions stand or fall on what they originally predicted. If your predictions fail, then you start over as your hypothesis that made those predictions is FALSIFIED. You don’t get to rejig your predictions and then claim your predictions were actually accurate.

Newminster
Reply to  Steven Mosher
January 7, 2019 10:51 am

But that doesn’t make the original prediction right, does it? Hansen said that given scenario A, X would happen … given scenario C, Z would. We have had scenario A and Z happened!

Yes, by all means redo the figures but you then have to wait 30 years before that is proved right or wrong and meanwhile we have lives to lead and no reason to suppose Hansen’s second attempt at glorified guesswork would be any better than his first.

Not forgetting that “the climate system is a coupled non-linear chaotic system, and therefore the long-term prediction of future climate states is not possible.” Or put another way 3 metres of snow in 24 hours in the Bavarian Alps may be freak weather or some side-effect of global warming or the first sign of an impending (little) ice age. And we don’t know which.

Phil's Dad
Reply to  Steven Mosher
January 7, 2019 6:48 pm

“standard approach is to re-run the model with the new sensitivity value.”

… following which all the “tipping points” and “end-of-the-world” scenarios fade away to nothing (on business as usual growth).

Graemethecat
Reply to  Steven Mosher
January 8, 2019 6:33 am

What if the value of climate sensitivity that gives the result closest to observations is zero? I.e., carbon dioxide has no effect on climate?

Neville
January 6, 2019 3:17 pm

Willis, why is the Russian model the closest to observations over the last 31 years? Were they just lucky or perhaps not hampered by the competitive need to alarm us all?

LdB
Reply to  Neville
January 6, 2019 5:10 pm

Someone has to guess right eventually; in a couple of years it will be out 🙂

No model will hold up long term; it’s the same problem as weather forecasting.

Reply to  Neville
January 6, 2019 5:21 pm

Neville, here is what you need to know about the Russian model, INMCM5 (previous version was 4).

https://rclutz.wordpress.com/2018/11/16/latest-results-from-first-class-climate-model-inmcm5/

Short answer: Compared to other models INMCM5 has lower CO2 sensitivity, extremely high ocean inertia, and lower H2O feedback in atmosphere. The latest version reproduces HADCRUT4 quite well.


Figure 1. The 5-year mean GMST (K) anomaly with respect to 1850–1899 for HadCRUTv4 (thick solid black); model mean (thick solid red). Dashed thin lines represent data from individual model runs: 1 – purple, 2 – dark blue, 3 – blue, 4 – green, 5 – yellow, 6 – orange, 7 – magenta. In this and the next figures numbers on the time axis indicate the first year of the 5-year mean.

jim
January 6, 2019 3:20 pm

Isn’t this a method to determine climate sensitivity – compare Hansen’s projections under various emissions amounts, and the one that matches observations gives Hansen’s estimate of sensitivity?

thanks
JK

LdB
Reply to  jim
January 6, 2019 5:42 pm

You are trying to determine Radiative Transfer sensitivity with classical physics measurements … good luck with that; we can’t even do that in a lab setting because you are dealing with EM waves/particles.

Try the most basic experiment, called the coloured cup experiment; this is what you do with the kids.

Gather five coffee cups, identical except for color. Run hot water from a tap for a minute or two, until it reaches its maximum temperature. Fill the cups with hot water and move them to a dark, cool room. Place a thermometer in each one and wait 20 minutes. Read the thermometer in each cup and compare the temperatures and colors. The darkest colors should read the coolest … now explain why?

Classical physics says all the cups cool at the same rate, and that is just one of the QM effects at play that you are trying to cover with a single sensitivity number for the Earth.

Irritable Bill
January 6, 2019 3:29 pm

Come on…it’s perfectly simple…he had the sheet of paper tilted at the wrong angle…otherwise perfect!
Much in the same way Obama and the head of NOAA accidentally tilted the entirety of the twentieth century in the writing of the infamous “Pausebuster Paper.” A perfectly understandable misunderstanding. Taking the Pausebuster Paper to the UN Paris blatherfest and representing it as fact when it was complete bullshit…was a simple miscalculation. Not the greatest fraud in human history…costing the world trillions of dollars ongoing and being responsible for hundreds of thousands of deaths due to pneumonia caused by power bill hikes etc. They just accidentally tilted the page…a simple mistake anyone could make, surely?
By the way, I’m guessing you all know about that? Whistleblowers from within NOAA went to the Senate and blew the whistle…NOAA belatedly apologized and were awfully sorry that “unfortunately” their computer broke and they couldn’t show how they came to these findings, so changed them, and the publication Nature changed their criteria for accepting papers, saying they wouldn’t publish papers in the future that couldn’t be replicated, and then absolutely nothing happened!? WTFH is that? Does anyone know what is going on with the investigation into the greatest fraud in human history? My best guess is…zip. Evidently Trump’s tax returns from decades ago are a far more compelling way to spend investigative resources than the current ongoing greatest fraud in human history. Meanwhile the countries that are paying vast sums of money based on the greatest fraud in human history…are still paying. Even though Obama’s fraudulent lies were actually exposed in front of the Senate and were proven to be lies.
If I were running the joint things would be very different.

Mike H
January 6, 2019 3:37 pm

If only I had multiplied instead of divided, my answer would have been correct.

LdB
Reply to  Mike H
January 6, 2019 5:46 pm

Even a stopped clock tells the right time, twice a day.

Reply to  LdB
January 7, 2019 12:25 am

LdB

Not if it’s a digital display.

John Endicott
Reply to  HotScot
January 7, 2019 5:51 am

Depends on how the digital display “stopped”. If it’s still getting power, just not updating the display (i.e. a “frozen” display), then the “right time twice a day” phenomenon holds. If it’s lost power/showing a blank display then you are correct.

That has been your daily dose of pedanticism, you are welcome.

Reply to  John Endicott
January 7, 2019 4:59 pm

John Endicott

Unless it’s showing a 24 hour clock. Once a day then.

pedanticism cubed………you are also welcome. 🙂

Neville
January 6, 2019 3:37 pm

Dr Curry expects the AMO to change to its cool phase sometime in the not too distant future. So what will be their excuse if/when this happens and temps start to drop or pause in the NH?
The NH and Arctic temp increase has been so important for them to drive the debate about their so-called CAGW. Not so much the SH or Antarctica.
Perhaps the see-saw effect will help them out as we’ve seen in the past? Who knows, but their CO2 must act like a sort of pixie dust if that is the case.

DWR54
Reply to  Neville
January 7, 2019 2:11 am

Dr Curry expects the AMO to change to cool phase sometime in the not too distant. So what will be their excuse if/when this happens and temps start to drop or pause in the NH?

About 10 years ago we were being told on this very blog that the world was about to cool rapidly (in fact, had already started to do so) because of PDO/AMO fluctuations and below average sunspot numbers in solar cycle 23 (Don Easterbrook, David Archibald). Instead what followed was the warmest decade on record according to every data set we have, including UAH satellite.

So perhaps the question to ask is what will be the excuse if/when temps *don’t* start to drop, again.

Richard M
Reply to  DWR54
January 7, 2019 8:03 am

DWR54. the AMO has remained positive. I have no idea who expected it to go negative 10 years ago. I suspect you are making that up.

BTW, the temperature did follow most expectations (flat) for several years until the super El Nino. You really aren’t still one of those who thinks that is climate, are you? You must be extremely disappointed with the cooling over the past 3 years.

DWR54
Reply to  DWR54
January 7, 2019 11:59 am

Richard M

DWR54. the AMO has remained positive. I have no idea who expected it to go negative 10 years ago. I suspect you are making that up.

I didn’t say anyone expected it to go negative. I said that “fluctuations” in AMO, PDO and solar activity were being used to produce future cooling projections. From Don Easterbrook’s WUWT post from Dec 2008:-

Comparisons of historic global climate warming and cooling, glacial fluctuations, changes in warm/cool mode of the Pacific Decadal Oscillation (PDO) and the Atlantic Multidecadal Oscillation (AMO), and sun spot activity over the past century show strong correlations and provide a solid data base for future climate change projections.

https://wattsupwiththat.com/2008/12/29/don-easterbrooks-agu-paper-on-potential-global-cooling/

Easterbrook forecast cooling beginning by 2007 (± 3-5 yrs) of about 0.3-0.5° C, lasting until ~2035. How come no one here ever compares ‘that’ prediction against Hansen’s?

BTW, the temperature did follow most expectations (flat) for several years until the super El Nino.

The forecast wasn’t for “(flat) for several years”, it was for cooling of about 0.3-0.5° C which “seems to have already begun” (Easterbrook, 2008, linked to above).

You really aren’t still one of those who thinks that is climate, are you? You must be extremely disappointed with the cooling over the past 3 years.

On the contrary, the “cooling” over the past few years was entirely to be expected and was widely predicted. The 2015/16 el Nino pushed temperatures above the long term warming rate, as el Ninos tend to do. This so-called “cooling” is simply ‘reversion to the mean’; the ‘mean’ being the long term warming trend of ~ +0.2C/dec.

Kent Noonan
January 6, 2019 3:40 pm

Willis, Can you tell us what climate sensitivity Hansen used to make his graph back then? Knowing that allows comparison with the “best current estimates”. I don’t see a red dot for 1988 in fig.5.

Reply to  Kent Noonan
January 6, 2019 3:52 pm

I would like to see Hansen’s estimate for climate sensitivity as well.

And for those of us who don’t remember, what were scenarios B and C and how do they compare?

Derg
Reply to  Peter Sable
January 6, 2019 5:27 pm

Peter for settled science why are there 3 scenarios in the first place?

Climate sensitivity factor seems to me like someone doesn’t understand their variables

fah
Reply to  Kent Noonan
January 6, 2019 4:34 pm

Hansen’s 1988 testimony says he used the model given in Hansen, Fung, Rind, Lebedeff, Ruedy and Russell. (1988) “Global Climate Changes as Forecast by Goddard Institute for Space Studies Three-Dimensional Model.” Journal of Geophysical Research, 93 D8: 9341-9364. August 20, 1988. At the time of his testimony, the paper was in press. The paper is paywalled and some pay access does not go back that far. However, in case you can’t get access, the relevant text is on page 9342 under section 2 Climate Model. There they say:

“The equilibrium sensitivity of this model for doubled CO2 (315 ppmv -> 630 ppmv) is 4.2 degC for global mean surface air temperature (Hansen et. al. 1984). This is within, but near the upper end of the range 3 degC +- 1.5 degC estimated for climate sensitivity by National Academy of Sciences committees (Charney, 1979; Smagorinsky, 1982), where their range is a subjective estimate of the uncertainty based on climate-modeling studies of the empirical evidence for climate sensitivity. The sensitivity of our model is near the middle of the range obtained in recent studies with general circulation models (GCMs) (Washington and Meehl, 1984; Hansen et. al 1984; Manabe and Wetherald, 1987; Wilson and Mitchell, 1987).”
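As a cross-check on those numbers, the commonly quoted 3.7 W/m² per doubling comes from the later simplified forcing expression ΔF ≈ 5.35 ln(C/C0) W/m² (a post-1988 approximation, not necessarily the formula used in Hansen’s paper). A minimal sketch tying it to the 315→630 ppmv doubling and the 4.2 °C sensitivity quoted above:

```python
# Minimal sketch: the simplified CO2 forcing expression dF = 5.35 * ln(C/C0) W/m^2
# gives ~3.7 W/m^2 for the 315 -> 630 ppmv doubling quoted above; dividing the
# model's 4.2 degC doubled-CO2 warming by that forcing gives its implied lambda.
import math

dF = 5.35 * math.log(630.0 / 315.0)   # W/m^2 for a doubling of CO2
lam = 4.2 / dF                        # implied sensitivity parameter, degC per W/m^2

print(f"Forcing for doubled CO2: {dF:.2f} W/m^2")                # ~3.71
print(f"Implied sensitivity parameter: {lam:.2f} degC/(W/m^2)")  # ~1.13
```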

Reply to  fah
January 6, 2019 5:14 pm

thank you fah

The equilibrium sensitivity of this model for doubled CO2 (315 ppmv -> 630 ppmv) is 4.2 degC for

pronouns are the bane of science and engineering. I can’t tell whether “this” applies to scenario A, B, or C.

fah
Reply to  Peter Sable
January 6, 2019 5:33 pm

It looks like, from Section 4 of the paper, that the scenarios A, B, and C were designed to refer only to variations in the trace gas composition of the atmosphere and not other aspects of the models, particularly any hydro or thermodynamics.

Further down, in Section 6.1 of the paper, they say
“The climate model we employ has a global mean surface air equilibrium sensitivity of 4.2°C for doubled CO2. Other recent GCMs yield equilibrium sensitivities of 2.5°-5.5°C, and we have presented empirical evidence favoring the range 2.5°-5°C (paper 2). Reviews by the National Academy of Sciences [Charney, 1979; Smagorinsky, 1982] recommended the range 1.5°-4.5°C, while a more recent review by Dickinson [1986] recommended 1.5°-5.5°C.
Forecast temperature trends for time scales of a few decades or less are not very sensitive to the model’s equilibrium climate sensitivity [Hansen et al., 1985]. Therefore climate sensitivity would have to be much smaller than 4.2°C, say 1.5-2°C, in order to modify our conclusions significantly. Although we have argued [paper 2] that such a small sensitivity is unlikely, it would be useful for the sake of comparison to have GCM simulations analogous to the ones we presented here, but with a low climate sensitivity. Until such a study is completed, we can only state that the observed global temperature trend is consistent with the “high” climate sensitivity of the present model. However, extraction of the potential empirical information on climate sensitivity will require observations to reduce other uncertainties, as described below. The needed observations include other climate forcings and key climate processes such as the rate of heat storage in the ocean.”

When they say [paper 2] they refer to Hansen et. al. 1984.

Reply to  fah
January 6, 2019 5:43 pm

fah:
most helpful, thank you!

Dave Fair
Reply to  fah
January 6, 2019 6:25 pm

Since climate sensitivity to CO2 is an emergent property of the model, one cannot change the sensitivity of the model without going in and fundamentally changing the model’s guts. Show me a quote that contradicts that; ECS is an emergent property of the models, according to the modeling experts.

Steve Heins
Reply to  Dave Fair
January 6, 2019 6:41 pm

You are 100% correct Dave Fair. Unfortunately many people on this site will not understand your point.

Dave Miller
Reply to  Steve Heins
January 7, 2019 9:23 am

Not only that, but the climate sensitivity as defined would NOT be constant over time, given the nature of the myriad modes of energy transport (see non-linearity, capacitance).

There is absolutely no basis upon which to make that assumption, or even that its phase angle would not change with time (relative to some putative “independent” variable). Except maybe averaged over geologic time periods. And we know how that relationship looks. Atmospheric CO2 LAGS temperature.

Some of us understand.

fah
Reply to  Dave Fair
January 6, 2019 7:56 pm

This is not my field but I do have access to most journals and it looks like you are quite correct. I have been mistaken for quite some time on that point. One of the references in Hansen et. al. 1988 had a nice explanation (at least to me) of the origins of climate modeling. They discuss climate sensitivity and from whence it comes at some length. I was particularly interested in the derivation by analogy to electrical engineering rather than thermodynamics, since it accounts for and makes more understandable (to me) the peculiar terminology used, from the perspective of physics. The reference is available scanned on some Brit’s web site:

http://www.350.me.uk/TR/Hansen/Hansenetal84-climatesensitivityScan.pdf

fah
Reply to  fah
January 6, 2019 8:06 pm

Sorry, I meant to say the nice explanation was in Hansen et. al. 1984, not 1988.

Reply to  fah
January 7, 2019 7:55 am

fah, I join with others in expressing great thanks to you for digging up this 30-year-old information in such detail.

I take particular note of this portion of the quoted statements “. . . the range 3 degC +- 1.5 degC estimated for climate sensitivity by National Academy of Sciences committees (Charney, 1979; Smagorinsky, 1982), where their range is a subjective estimate of the uncertainty based on climate-modeling studies of the empirical evidence for climate sensitivity.”

So, decoded, it says NAS committees (note plural) reviewed mathematical models that incorporated some undefined amount of “empirical evidence for climate sensitivity”, and based on that process they SUBJECTIVELY ESTIMATED the “range” (was it one-sigma? two sigma ? 0.1 sigma?) of ECS uncertainty to 0.1 degC precision. Yeah, right.

I seriously doubt any of the “empirical evidence” used in the mentioned climate model “studies” had data accuracies and consistencies/repeatabilities of even +/- 0.5 degC, let alone +/- 0.1 degC. Then you run that empirical data through some climate models (or maybe 10-plus different climate models . . . how many sophisticated global climate models did they have in the mid-1980’s?) and expect the data accuracy to improve? And then you run those model outputs through the admittedly “subjective” minds of those serving on the NAS committees (How many? What were their qualifications to evaluate the model output results?) and expect whatever “data” accuracy to not be further degraded? Finally, what process did they use to reach a “consensus” that the ECS uncertainty range was no greater than +/- 1.5 degC, all things considered?

The stupidity in above-quoted NAS ECS uncertainty assertion . . . it burns! And of course, this BS went unchallenged at the time by a great number of so-called “climate scientists” because it served so well to demonstrate the accuracy they had in determining ECS back then.

fah
Reply to  Gordon Dressler
January 7, 2019 9:08 am

The later NAS document on CO2 and climate is available online at

https://www.nap.edu/catalog/18524/carbon-dioxide-and-climate-a-second-assessment

This is the later reference Hansen et. al. 1988 use for climate sensitivity. It has a good bit of discussion of their thoughts on it at that time. Hansen was a participant in the assessment. Those who are knowledgeable in this area might find it interesting reading. I only skimmed it a bit and spent a little time on the sensitivity section. One thing that jumped out at me was how much time and effort they spent on attacking estimates made by S. B. Idso. It looks like they felt a fair amount of effort was warranted doing so. I vaguely recall seeing Idso mentioned off and on in the blogs. A quick look at the desmog blog indicates he is not in favor in those circles.

Bruce of Newcastle
January 6, 2019 3:45 pm

Is the West Side Highway underwater yet? That was supposed to happen by last Monday. Or maybe he meant by 31 Dec 2019?

ShanghaiDan
Reply to  Bruce of Newcastle
January 6, 2019 8:44 pm

Well, it did rain on December 31st, 2018 – so it was underwater. Just not the water most people assumed…

Roger Knights
Reply to  Bruce of Newcastle
January 6, 2019 11:34 pm

His West Side Highway scario (Hey—autocorrect finally did something right!) depended on large chunks of Antarctic ICE falling into the ocean, and he’s said his timeline for that extends to 2040.

Dave Fair
Reply to  Roger Knights
January 6, 2019 11:37 pm

Climate scare science follows the Sen. Harry Reid dictum: It worked, didn’t it?

Kurt
January 6, 2019 3:56 pm

Hansen later argued that his model was demonstrated to be reasonably accurate with regards to climate sensitivity because what his model missed on was not emissions, or climate sensitivity, but the amount of greenhouse gasses remaining in the atmosphere. Essentially he was saying that the real world CO2 concentrations followed his scenario B, as did temperatures. This twitter guy you were talking to was just winging his response, and doesn’t know what he’s talking about.

The problem with Hansen’s defense was that, in that revisionist history, he was ignoring the difference between correlation and causation. The only way to demonstrate your understanding of causal relationships in a system via a prediction is to accurately predict the result of causing a change in an input; simply predicting corresponding values of two variables only shows correlation between the two and not causation. In 1988 Hansen was clearly using his model to show the causal effect of emissions on temperatures – that’s why he laid out three “emissions” scenarios, testified under oath that Scenario A was the “business as usual scenario” and tried to advocate for emissions reductions.

But when the world kept emitting as usual, and his doomsday Scenario A was way off, Hansen judged his model in retrospect by cheating; he pretended that his model only set out three different possible scenarios of future CO2 concentrations and argued that the scenario with the CO2 concentration closest to reality also had temperatures closest to reality. But if that was the original purpose of his model it would have only been designed to test the correlation between temperatures and atmospheric CO2 level – not at all testing whether temperatures follow changes in CO2 concentration, or whether CO2 concentration follows changes in temperature, or some combination of the two.

This is one of the reasons I think Hansen and his myriad sycophants are less scientists than they are propagandists that believe global warming as a matter of dogma and just rig the procedures and mathematics of their published papers so that they conform to the dogma.

Rich Davis
Reply to  Kurt
January 6, 2019 5:05 pm

Are you saying that Hansen predicted future CO2 concentration in the atmosphere based on an estimate of future emissions but overestimated the fraction that would remain in the atmosphere? (In which case, he was far off the mark as to the harmful impact of continued use of fossil fuels which was his main point?)

Are you also arguing, as I would, that seeing CO2 concentration be a function of temperature is exactly what we would expect if the climate is warming naturally and CO2 has little effect on temperature?

For the sake of argument, if ECS were zero (which I’m not claiming), and therefore temperature would not even be weakly a function of CO2 concentration, we would still expect CO2 concentration to rise as temperature rises.

If ECS is about 1.3-1.5K, then the observed rise in temperature is partly due to CO2, but must also be largely due to other factors.

By contrast, Hansen and the CAGW believers say that temperature is strongly a function of CO2 concentration and virtually all temperature rise has been caused by CO2.

Willis asserts that Hansen underestimated the emissions as well. So that implies that if he had accurately predicted emissions, he would have predicted even worse consequences.

Kurt
Reply to  Rich Davis
January 6, 2019 5:54 pm

“Are you saying that Hansen predicted future CO2 concentration in the atmosphere based on an estimate of future emissions but overestimated the fraction that would remain in the atmosphere? ”

That’s essentially what Hansen was trying to claim in 2006 when he testified to Congress and wrote a follow-up research paper evaluating the accuracy of his original 1988 paper. Hansen’s follow-up testimony and corresponding 2006 paper tried to gloss over the fact that his original paper and original testimony presented his emissions scenario A as the “business as usual scenario.” Instead, he tried to pretend in 2006 that his original forecast was accurate because both CO2 concentration and temperatures in the real world were close to the Scenario B curve, even though actual emissions more closely followed scenario A.

When making this presentation, Hansen deceitfully mischaracterized his earlier paper as presenting a “worst case scenario,” saying that his original paper described the scenario as being on the “high side of reality.” That quote was taken grossly out of context; scenario A was described in the original paper as the consequence of “continued exponential growth” in GHG emissions and was qualified by the caveat that, since it assumed exponential growth, it “MUST EVENTUALLY BE on the high side of reality” since fossil fuel supplies must at some point start to run out.

My post above merely points out that this historical revisionism is silly; Hansen originally used his model forecast as a causal prediction of the consequences of three fossil fuel EMISSIONS scenarios (not concentration scenarios) in an effort to convince Congress to curtail fossil fuel use. But in 2006 he was trying to pretend that he could validate that causal prediction simply by showing that both the actual temperatures and actual GHG concentrations somewhat matched the temperatures and GHG concentrations predicted for Scenario B, even though emissions followed Scenario A. But that kind of prediction would be useless for policy purposes because even if it did hold true, it would not show a causal relationship that rising CO2 concentration causes temperature to rise – it would only prove a correlation.

I don’t know whether naturally rising temperatures would cause an increase in CO2. I don’t have any idea about how you would even begin to scientifically test that. But I do know that if you want to test whether increasing CO2 concentrations cause an increase in temperatures by making a prediction, you have to predict future temperatures as a function of how much CO2 is emitted, and not just show some relationship between CO2 concentration and temperatures.

Rich Davis
Reply to  Kurt
January 6, 2019 6:28 pm

CO2 concentration in sea water is a function of temperature easily measured in the laboratory. Outside the laboratory you can easily observe this basic fact if you have a bottle of cola sitting in the hot sun. If you open it while it is hot, the CO2 will rapidly come out of solution and the mass of bubbles will cause the bottle to overflow. The same bottle allowed to cool down in the refrigerator will only release a small amount of effervescence when opened. This is due to the fact that CO2 solubility in water is dependent on temperature in an inverse relationship.
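The inverse relationship Rich describes is just Henry’s law with a van ’t Hoff temperature correction. A minimal sketch using typical literature constants for CO2 in water (approximate values, for illustration only):

```python
# Minimal sketch: Henry's-law solubility of CO2 in water versus temperature,
# k_H(T) = k_H(298 K) * exp(B * (1/T - 1/298.15)), using typical constants
# (k_H ~ 0.034 mol/(L*atm) at 25 degC, B ~ 2400 K). Approximate values only.
import math

def co2_solubility(temp_c, kh_ref=0.034, b=2400.0, t_ref=298.15):
    """Return approximate CO2 solubility in mol/(L*atm) at temp_c degrees C."""
    t = temp_c + 273.15
    return kh_ref * math.exp(b * (1.0 / t - 1.0 / t_ref))

for t in (5, 15, 25, 35):
    print(f"{t:2d} degC: {co2_solubility(t):.3f} mol/(L*atm)")
# Solubility falls as temperature rises: warmer water holds less dissolved CO2.
```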

Paleoclimate evidence shows that in fact CO2 concentration lags temperature change over all time periods.

Kurt
Reply to  Rich Davis
January 7, 2019 2:32 pm

Just because the capacity of sea water to hold CO2 is a function of temperature does not mean that an increase of temperature over interval X was a cause of a corresponding rise in CO2 concentrations in sea water over that interval.

And to rely on the paleoclimate record to prove that changes in temperature caused a corresponding change in CO2 concentration you first have to assume that the proxy reconstruction methods can match the time scales of CO2 changes to temperature changes to an accuracy smaller than the time lag you are relying on. You also have to assume that there is no other common variable that is causing both temperatures and CO2 to rise in concert.

Rich Davis
Reply to  Rich Davis
January 7, 2019 5:31 pm

Yes, I agree that it is impossible to separate all the factors and I do not claim that we can. I am only saying that a warmer ocean, and for that matter, a warmer land mass, will outgas CO2, which can be empirically demonstrated.

As for paleoclimate data having insufficient temporal resolution to prove conclusively that CO2 concentration changes lag temperature changes, you may be right. Certainly the point is in dispute. Yet the correlation between CO2 and temperature is trumpeted by no less than Al Gore (though he has causality reversed). While there is a strong correlation between CO2 concentration changes and temperature changes as must be the case if my hypothesis is to be valid, there is a very poor correlation between temperature and CO2 concentration. (Not changes, but actual temperatures and concentrations). If the causation is from CO2 concentration to temperature, then there should be a strong correlation. It should not be possible to enter a glaciation at 4000ppm and be in an interstadial at 300ppm. But if CO2 has little to no impact on the ultimate temperature of the climate system, then there is no contradiction that glaciation may occur at any CO2 concentration while CO2 nevertheless increases with increasing temperature and decreases with decreasing temperature.

In other words, although we may not have sufficient data to prove my hypothesis, the warmist hypothesis should be considered falsified. As you rightly point out, this does not rule out a third factor causing both temperature and CO2 to rise and fall in concert. But actually, although CO2 lags temperature when there is a correlation, it is also the case that CO2 can change independently of temperature. When the total amount of CO2 in the atmosphere-ocean system is roughly unchanging, CO2 concentration in the atmosphere rises with rising temperature and falls with falling temperature. If there is a change in the total quantity of CO2 such as through extensive volcanism or long-term carbonate formation, then CO2 can vary independent of temperature. This is again consistent with my hypothesis and contradicts the idea that CO2 drives temperature.

Roger Knights
Reply to  Kurt
January 6, 2019 11:38 pm

IIRC, Hansen’s defenders on Skeptical Science claim that Hansen wasn’t talking about just CO2 emissions, but all GHG emissions, and that he over-estimated either the amount of them or their sensitivity, so if they are removed his predictions look better.

Dave Fair
Reply to  Roger Knights
January 6, 2019 11:41 pm

Current IPCC climate model estimates of sensitivity cluster around 3 °C per doubling of CO2, and the models have been shown to run way too hot. Hansen’s was over 4. All the activists’ Hansen sophistry can’t change that.

Rich Davis
Reply to  Roger Knights
January 7, 2019 3:21 am

Except that according to Willis, Hansen underestimated CO2 emissions, so how does that add up?

Plus if the whole point was to predict the effect on temperature of continued fossil fuel burning, and his temperature estimate was way too high at the underestimated CO2 emission rate, it would have been an even bigger failure at an accurate CO2 emission rate.

As Dave points out, the ultimate question is how the observed apparent ECS compares with Hansen’s assumed ECS. No amount of “the dog ate my homework” is going to fix his mess if the sensitivity he assumed was three times the real one.

Dr. S. Jeevananda Reddy
January 6, 2019 4:02 pm

From Figure 5 it is clear that the global warming component flattens with time [which I stated more than ten years back], as seen from the climate sensitivity factor. If so, there is no chance of getting a 1.5-2.0 °C rise in global average temperature, as projected in the recent IPCC report.

Dr. S. Jeevananda Reddy

Admin
January 6, 2019 4:15 pm

Don’t forget, Willis, global warming is a very flexible discipline where scientists get to retrofit their predictions to observations while drawing wild future scenarios.

Kurt
Reply to  Eric Worrall
January 6, 2019 4:41 pm

That’s exactly right. It’s what happens when you define the physical system you are studying as an open-ended set of statistics that can be mined for whatever type of curve you want to show. “Climate” is itself defined as an average of something over a sufficient number of years. An average of what? Whatever strikes the fancy of the researcher – temperature, maximum daily temperature, minimum daily temperature, precipitation, daily precipitation relative to annual precipitation – the list is endless, limited only by the imagination of the “researcher.”

An average over how many years? Well, that again depends on what the researcher wants. If it’s establishing a base period for showing temperature anomalies in a graph, it’s 30 or more years. But if it’s showing how “climate” changes over time relative to the base period in that same graph, thirty-year averages are way, way too long and just won’t do. Maybe a five-year running average is good enough. Maybe it’s ten years. Maybe there is no running average shown and they just put a linear-fit trend line through annual average temperatures. The fact that, when they need to, they say that five-year averages and ten-year averages are just “noise” in comparison to the longer-term “climate” should never be taken as any kind of inconsistency at all. Maybe for global temperatures “climate” is defined as a thirty-year average while for precipitation it’s a fifty-year average. Again, the “researcher” can just wing it and use whatever metrics are needed to get the right “look” for the pictures they want to show the politicians.

The idea of setting a uniform set of metrics to define climate and to measure the change in those metrics, the way the rest of the scientific world defines a single standard for what a kilogram is or what a volt is, just never occurred to the climate science community, because, I suspect, they want the flexibility to shape their results so that they fit the story they want to tell.
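
As a concrete example of how much the choice of averaging window matters, here is a short sketch on a made-up annual series (a small trend plus a cycle plus noise). The series and the window lengths are mine, chosen only to show that the apparent “recent trend” shifts with the smoothing choice.

```python
# Sketch: the same synthetic annual series gives different apparent recent
# trends depending on the running-mean window chosen.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1900, 2020)
series = (0.01 * (years - 1900)                                # slow trend
          + 0.3 * np.sin(2 * np.pi * (years - 1900) / 60)      # 60-yr cycle
          + rng.normal(0, 0.15, years.size))                   # weather noise

def running_mean(x, w):
    """Running mean over a window of w points (valid portion only)."""
    return np.convolve(x, np.ones(w) / w, mode="valid")

for w in (5, 10, 30):
    smoothed = running_mean(series, w)
    recent = smoothed[-30:]                                     # last 30 smoothed points
    slope = np.polyfit(np.arange(recent.size), recent, 1)[0] * 10
    print(f"{w:2d}-yr window: apparent recent trend ~ {slope:+.3f} per decade")
```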

Dr. S. Jeevananda Reddy
Reply to  Kurt
January 6, 2019 8:36 pm

The World Meteorological Organization [WMO] of the United Nations [UN] defined 30-year periods (1931-60, 1961-90, and so on), decided by experts from the national meteorological departments; these serve to describe the general climate condition: averages, extremes, etc.

To get the trend and the natural variability, the WMO in 1966 brought out a manual on “Climate Change”. It presents methods for determining whether the data series of a meteorological parameter follows random variation or a cyclic pattern, and then gives the trend. Methods were also proposed to obtain the periodicity, amplitude, and phase angles. At IITM, Pune, the late Dr. B. Parthasarathy applied these to precipitation data series [in 1995 he presented meteorological sub-division-wise yearly, monthly, and seasonal rainfall series for 1871 to 1994]. He prepared programmes in the Fortran IV language, compiled from punched cards on an IBM 1600.

I learned these methods from him, and also from my boss, who was a co-author of the WMO (1966) “Climate Change” manual.

At that time we did not have even a simple calculator.

Dr. S. Jeevananda Reddy

Kurt
Reply to  Dr. S. Jeevananda Reddy
January 6, 2019 11:28 pm

Climate scientists do not limit themselves to the WMO standard, which defines climate only for meteorological purposes. The IPCC defines it as “in a narrow sense . . . ‘average weather,’ or more rigorously, as the statistical description in terms of the mean and variability of relevant quantities over a period RANGING FROM MONTHS TO THOUSANDS OR MILLIONS OF YEARS,” and says the 30-year WMO definition is only the “classical” interval over which to average your statistic. (And note the use of the vague phrase “relevant quantities.”) NASA’s website states that “scientists define climate as the average weather for a particular region and time period, USUALLY taken over 30-years.” Another source states that “changes in weather patterns that persist OVER A DECADE OR MORE is defined as climate change.” I’ve seen sites of scientific organizations say that the relevant interval separating climate from weather will change based on what region you’re measuring the climate of, or on what variable or statistic you’re measuring.

The way that climate researchers quantify climate, in order to determine how much it is changing, is an amorphous mess that belies any claim that it is scientific at all. They just analyze data using whatever ad hoc procedure they choose to adopt for the particular paper they are writing at the time.

Dr. S. Jeevananda Reddy
Reply to  Kurt
January 7, 2019 2:22 am

In my observation above I mentioned two aspects. The second one is with reference to climate change, namely natural variability: it may be the 11-year sunspot cycle and its multiples, or rainfall cycles with different periods. This is different from the 30-year climate normal. By eliminating the cyclic part, we get the trend. This trend is practically zero in rainfall, except for local rainfall where there are drastic changes in the climate system as defined by the IPCC. But temperature presents a trend associated with several components. The 1880 to 2010 global average temperature anomaly presents a 60-year cycle varying between -0.3 and +0.3 °C, with a trend of 0.91 °C from 1951 to 2100. This is not global warming. Also, if the data start at 1850, this is 0.80 °C. Truncated data from a natural cycle lead to different conclusions. Also, the trend may not be linear; it may be non-linear, which is what I said in my first comment above.

Dr. S. Jeevananda Reddy
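
To put a number on the “truncated cycle” point, here is a small sketch: a series built from a modest linear trend plus a 60-year cycle yields noticeably different fitted slopes depending on where the fit starts. The trend and amplitude are invented for illustration and are not meant to reproduce the actual temperature record.

```python
# Sketch: linear trends fitted to different truncations of (trend + 60-yr cycle).
import numpy as np

years = np.arange(1850, 2011)
series = 0.005 * (years - 1850) + 0.3 * np.sin(2 * np.pi * (years - 1850) / 60)

for start in (1850, 1880, 1910, 1951, 1975):
    mask = years >= start
    slope_per_century = np.polyfit(years[mask], series[mask], 1)[0] * 100
    print(f"linear trend from {start}: {slope_per_century:+.2f} per century")
# The underlying trend is 0.5 per century, yet the fitted value swings with the start year.
```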

Randy Stubbings
Reply to  Dr. S. Jeevananda Reddy
January 7, 2019 12:54 am

If I have a graph of temperature for some place over the last 31 years that follows a linear trend perfectly, then the 31-year average temperature will be the temperature that existed in the 16th year (and in no other year). The “climate normal” will therefore be what existed for one out of those 31 years. Averages from nonstationary time series are not terribly useful benchmarks and the concept of a “temperature anomaly” for that site is essentially useless.
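
Randy’s arithmetic is easy to verify. This quick check uses an arbitrary made-up linear series; the 31-year mean matches the value in year 16 and in no other year.

```python
# Check: the 31-year mean of a perfectly linear series equals the year-16 value.
import numpy as np

years = np.arange(1, 32)                      # years 1..31
temps = 10.0 + 0.03 * (years - 1)             # any linear trend will do
mean = temps.mean()
print(mean, years[np.isclose(temps, mean)])   # mean matches year 16 only
```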

Reply to  Kurt
January 7, 2019 8:24 am

Kurt, just so. Look at how NASA presently defines “climate change” at its most basic level:

“Climate change is a change in the usual weather found in a place. This could be a change in how much rain a place usually gets in a year. Or it could be a change in a place’s usual temperature for a month or season. Climate change is also a change in Earth’s climate. This could be a change in Earth’s usual temperature. Or it could be a change in where rain and snow usually fall on Earth.
Weather can change in just a few hours. Climate takes hundreds or even millions of years to change.”
— source: https://www.nasa.gov/audience/forstudents/k-4/stories/nasa-knows/what-is-climate-change-k4.html

MarkW
Reply to  Eric Worrall
January 6, 2019 5:16 pm

Eric, it’s more accurate to say that they retrofit the observations to fit their predictions.

Admin
Reply to  MarkW
January 6, 2019 5:18 pm

Ha – the future is certain, it’s only the past which changes.

Rod Evans
Reply to  Eric Worrall
January 6, 2019 11:17 pm

The future is certain, providing we continue to change the past.

toorightmate
January 6, 2019 4:18 pm

Give me a call when they start growing pineapples in Labrador City. Then I might start thinking about the effects of climate change.

Rob_Dawg
January 6, 2019 5:09 pm

Hansen should go to Las Vegas and try to collect on his bet. He can tell us how that went after his knees heal.

Neville
January 6, 2019 5:11 pm

At least Dr Hansen told us the truth about the Paris COP 21 mitigation BS and fraud. Those were Dr Hansen’s words, not mine, but I fully agree with his very accurate description in this Guardian interview.

Just a pity that Pelosi and the other Dem donkeys can’t add up simple sums or follow simple logic and reason. Their virtue signaling leads them to seek to waste endless trillions of dollars into the future with no return on the investment at all. Oh, but they will stuff up the US electricity grid and hurt the poor because of their belief in pixie-dust science.

Here’s Dr Hansen’s BS-and-fraud interview in the Guardian. Nancy, are you listening? Apparently not, and China and India etc. are laughing all the way to their banks.

https://www.theguardian.com/environment/2015/dec/12/james-hansen-climate-change-paris-talks-fraud

January 6, 2019 5:12 pm

“The only discrepancy in Hansen’s work from 1988 was his estimate of climate sensitivity. Using best current estimates, it plots out perfectly.”

Is surface temperature sensitive to atmospheric CO2 concentration?

https://tambonthongchai.com/2018/09/25/a-test-for-ecs-climate-sensitivity-in-observational-data/

https://tambonthongchai.com/2018/11/26/ecsparody/

Kat Phiche
January 6, 2019 5:17 pm

I don’t see how 1.5% per year can be considered close to 1.9% per year. Over 30 years, 1.5%/yr gives a 56% cumulative increase, while 1.9%/yr gives almost a 76% increase. Not an order-of-magnitude difference, but not close.
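
The compounding arithmetic is simple to confirm, for example:

```python
# Cumulative growth after 30 years of compounding at 1.5%/yr and 1.9%/yr.
for rate in (0.015, 0.019):
    growth = (1 + rate) ** 30 - 1
    print(f"{rate:.1%}/yr over 30 years -> {growth:.0%} cumulative increase")
# 1.5%/yr gives about 56%; 1.9%/yr gives about 76%.
```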

January 6, 2019 5:19 pm

“perfect example of why it is so hard to convince climate alarmists of anything—because to them, everything is a confirmation of their ideas”

Yes sir. Much of climate science, and specifically the so-called Event Attribution Science, is driven by a combination of activism needs and confirmation bias. The science is thus confounded and corrupted by activism.

https://tambonthongchai.com/2018/08/03/confirmationbias/
