In an attempt to discredit Judith Curry, Gavin at RealClimate shows how bad climate models really are

From the “whoopsie, that’s not what I meant” department

[Header image: RC-titanic_header]

Guest essay by Thomas Wiita

A recent poster here wrote that they had stopped looking at the Real Climate web site, and good for them. It has become a sad, inwardly focused group. It’s hard to see anyone in the Trump Administration thinking they’re getting value for money from their support of that site.

I still check in there occasionally and just now I found something too good not to share with the readers at WUWT.

Gavin has a post up in which he rebuts Judith Curry’s response to comments about her testimony at the Committee hearing. Let me step aside – here’s Gavin:

“Following on from the ‘interesting’ House Science Committee hearing two weeks ago, there was an excellent rebuttal curated by ClimateFeedback of the unsupported and often-times misleading claims from the majority witnesses. In response, Judy Curry has (yet again) declared herself unconvinced by the evidence for a dominant role for human forcing of recent climate changes. And as before she fails to give any quantitative argument to support her contention that human drivers are not the dominant cause of recent trends.

Her reasoning consists of a small number of plausible sounding, but ultimately unconvincing issues that are nonetheless worth diving into. She summarizes her claims in the following comment:

… They use models that are tuned to the period of interest, which should disqualify them from being used in an attribution study for the same period (circular reasoning, and all that). The attribution studies fail to account for the large multi-decadal (and longer) oscillations in the ocean, which have been estimated to account for 20% to 40% to 50% to 100% of the recent warming. The models fail to account for solar indirect effects that have been hypothesized to be important. And finally, the CMIP5 climate models used values of aerosol forcing that are now thought to be far too large.

These claims are either wrong or simply don’t have the implications she claims. Let’s go through them one more time.

1) Models are NOT tuned [for the late 20th C/21st C warming] and using them for attribution is NOT circular reasoning.

Curry’s claim is wrong on at least two levels. The “models used” (otherwise known as the CMIP5 ensemble) were *not* tuned for consistency for the period of interest (the 1950-2010 trend is what was highlighted in the IPCC reports, about 0.8ºC warming) and the evidence is obvious from the fact that the trends in the individual model simulations over this period go from 0.35 to 1.29ºC! (or 0.84±0.45ºC (95% envelope)).”

[Figure: histogram of 1950-2010 temperature trends in individual CMIP5 simulations, copied from RealClimate]

The figure was copied straight from RC. There is one wonderful thing about Gavin’s argument, and one even more wonderful thing.

The wonderful thing is that he is arguing that Dr. Curry is wrong about the models being tuned to the actual data during the period because the models are so wrong (!).

The models were not tuned to consistency with the period of interest as shown by the fact that – the models are not consistent with the period of interest. Gavin points out that the models range all over the map, when you look at the 5% – 95% range of trends. He’s right, the models do not cluster tightly around the observations, and they should, if they were modeling the climate well.

Here’s the even more wonderful thing. If you read the relevant portions of the IPCC reports, looking for the comparison of observations to model projections, each is a masterpiece of obfuscation on this same point. You never see a clean, clear, understandable presentation of the models-to-actuals comparison. But look at those histograms above, direct from the hand of Gavin. It’s the clearest presentation I’ve ever run across that the models run hot. Thank you, Gavin.

I compare the trend-weighted area of the three right-hand bars to that of the two left-hand bars, which centre around the tall bar at the mode of the projections. There is far more area under those three bars to the right, an easy way to see that the models run hot.
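For anyone who wants to put a number on that eyeball comparison, here is a minimal Python sketch of the idea: draw an ensemble of trends from the 0.84±0.45ºC (95% envelope) spread quoted above and ask what fraction exceeds an observed trend. The observed value used here is only a placeholder for illustration, not an actual HadCRUT or GISTEMP figure.

import numpy as np

# Hypothetical ensemble: mean 0.84 C with a 95% envelope of +/-0.45 C
# (sigma of roughly 0.23 C), matching the spread quoted in the post.
np.random.seed(1)
model_trends = np.random.normal(loc=0.84, scale=0.23, size=10000)

# Placeholder "observed" 1950-2010 trend, for illustration only.
observed_trend = 0.65

frac_hotter = (model_trends > observed_trend).mean()
print(f"{frac_hotter:.0%} of simulated model trends exceed the placeholder observation")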

If you have your own favorite example that shows that the models run hot, share it with the rest of us, and I hope you enjoyed this one. And of course I submitted a one sentence comment at RC to the effect that the figure above shows that the models run hot, but RC still remembers how to squelch all thoughts that don’t hew to the party line so it didn’t appear. Some things never change.

April 26, 2017 12:58 am

It is not for Dr Curry to prove the models wrong. It is for their designers to prove them right, if they want to turn them from hypothesis to theory – if they are right. Some cannot be right; they are too different from the others. Geoff.

Reply to  Geoff Sherrington
April 26, 2017 1:57 am

It is not for Dr Curry to prove the models wrong. The real world data already has done that.

Duane
Reply to  Geoff Sherrington
April 26, 2017 5:55 am

Exactly, Geoff … and this illustrates why the so-called “science believers” (as opposed to the so-called “science deniers”) aren’t practicing science, they’re practicing theology. In theology, in religion, and in politics, the opponents have to prove your position wrong. In science, the burden of proof is always on those who do the theorizing and testing, not on those who question and criticize, and practice true scientific skepticism.

MarkW
Reply to  Geoff Sherrington
April 26, 2017 6:28 am

If the models are not trained using recent data, what are they trained with? Future data?

Tom O
Reply to  MarkW
April 26, 2017 8:43 am

“Modeled data,” of course! Since the output of the model is controlled by the premise upon which it is built in the first place, the only possible output from the model is the data upon which it is based. Does sound a bit “circular,” doesn’t it?

GogogoStopSTOP
Reply to  MarkW
April 26, 2017 10:14 am

The models include algorithms that make temperature proportional to the water vapor in the atmosphere. AND… the water vapor content is proportional to the CO2 content.
So all models must predict higher temperatures because their input is higher CO2.
“Why do we need models,” you ask? So the government, the left-wing media and NGOs can spread the propaganda!

john harmsworth
Reply to  MarkW
April 26, 2017 10:51 am

Tuned to tomorrow’s propaganda actually!

Anto
Reply to  MarkW
April 26, 2017 1:49 pm

It’s the same trick played by charlatans who develop software claiming to be able to predict future movements in the stockmarket. Guess what they use as the input? Guess how they “train”/backtest their software?
The difference is, those guys get put in jail for fraud. Call essentially the same thing a “climate model”, however, and you get a billion dollars in government grants.

MarkW
Reply to  MarkW
April 26, 2017 2:06 pm

Tom O: When it comes to the recent revisions of the ground based temperature data, isn’t past data pretty much modeled data anymore?
PS: What’s the difference between modeled data and made up data?

Steve M. from TN
Reply to  MarkW
April 27, 2017 9:30 am

“PS: What’s the difference between modeled data and made up data?”
Modeled data is tortured real data, and made up data is just…well, made up.

meltemian
Reply to  Geoff Sherrington
April 26, 2017 7:34 am

+1

Stu
Reply to  Geoff Sherrington
April 26, 2017 7:50 am

You can’t prove them wrong. You just fail to accept their hypothesis.

rocketscientist
Reply to  Stu
April 26, 2017 8:39 am

Um…the DATA proves the hypothesis wrong, not the scientist. Contradictory observations will determine the failure of a hypothesis.

Reply to  Stu
April 26, 2017 9:15 am

Right, and the hypothesis has to predict a unique outcome, not a splatter of possibilities.

Reply to  Geoff Sherrington
April 26, 2017 8:38 am

Isn’t the scientific method that others look for evidence that the hypothesis is wrong, so that neither right nor wrong is ever ‘proved’, but what survives repeated attempts to demonstrate wrongness slowly rises toward rightness, i.e. theory (though never absolute or total rightness)? That the ‘models run too hot’ has been demonstrated numerous times through observational evidence and really by now should have crushed the model claims. As RC haven’t dropped those claims, their stance is nothing but blind dogma.

tim maguire
Reply to  ilma630
April 26, 2017 10:35 am

It’s the role of the scientist to prove the hypothesis wrong. The more attempts to prove it wrong fail, the more likely it is to be true. Therein lies one of the many problems with global warming theory–its proponents do not try to prove it wrong, they try to maintain it in the face of contradictory evidence.

MarkW
Reply to  ilma630
April 26, 2017 11:15 am

Why should I give you my data when your only desire is to find something wrong with it?

Reply to  ilma630
April 26, 2017 12:53 pm

Without access to the data set, testing the accuracy of the model is not possible. One needs to read the brilliant explanation of the Scientific Method as presented by Dr. Richard Feynman. The AGW mob completely fails at this.

Moderately Cross ofEast Anglia
April 26, 2017 1:04 am

Elephant in the pathetic Real Climate room in one, Geoff and Thomas. Back to a freezing end of April in the U.K. (clue: it’s called weather, Real Climate), but no doubt it’s still too hot for the terminally alarmist.

Reply to  Moderately Cross ofEast Anglia
April 26, 2017 9:45 am

We are at least a month too long still in winter conditions on the NA West coast. I believe it snowed on the prairies this week. Hardly in keeping with models.

Reply to  Robert Wager
April 26, 2017 1:03 pm

The current weak solar magnetic field is allowing more cosmic rays to hit the Earth’s atmosphere, creating more clouds, which means wetter, colder weather.

Moa
Reply to  Robert Wager
April 26, 2017 8:05 pm

La Nina amigo. This is expected. Natural variability dominates the climate – as Dr Curry points out.

Frizzy
Reply to  Robert Wager
April 26, 2017 9:30 pm

Curiously, Moa, WUWT’s ENSO meter is currently showing +0.5, i.e. right on the boundary between neutral and El Niño conditions, and has been trending upwards. Maybe one of our resident experts can explain that.

April 26, 2017 1:06 am

… I submitted a one sentence comment at RC to the effect that the figure above shows that the models run hot, but … it didn’t appear.
Right there is the important part of your post.

mothcatcher
April 26, 2017 1:06 am

Yes – I read that post too, Thomas, and came to a similar conclusion.
To claim that the models are not ‘tuned’ or ‘trained’ to be reasonably consistent with recent historical data is disingenuous to say the least. They have to be – if they were not, no one would take any notice of them. To suggest that the parameters are devised purely from physical principles without any reference (or knowledge) of that data just isn’t believable. The better climatologists all know this and I think they are getting rather embarassed by some of Gavin’s pronouncements.

Nick Stokes
Reply to  mothcatcher
April 26, 2017 2:53 am

“They have to be – if they were not, no one would take any notice of them.”
No, that is saying they must be tuned because they are right. They could be right because they are based on sound physical principles without need for tuning. And a further counter, as Gavin made: if they were so tuned, they wouldn’t show this variability.
“To suggest that the parameters are devised purely from physical principles without any reference (or knowledge) of that data just isn’t believable.”
I don’t think people who say these things have a very clear idea of what parameters they are talking about. It is true that there are parameters that are not well established a priori, and are pinned down by tuning to some particular observation. One notable one relates to cloud reflection, which is tuned relative to TOA radiation balance. The conditions for this to work are:
1. You have an extra constraint, as here where you know there should be balance
2. You have a parameter known only within wide limits, to which that constraint is sensitive. They need to be linked by the physics.
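As a rough illustration of the kind of single-parameter tuning described here: take a toy zero-dimensional energy balance and adjust a poorly constrained cloud-albedo parameter until the top-of-atmosphere budget closes. The numbers are made up for illustration; this is a sketch of the idea, not any modelling centre’s actual procedure.

S = 1361.0       # solar constant, W/m^2
sigma = 5.67e-8  # Stefan-Boltzmann constant
T = 288.0        # fixed surface temperature for the toy balance, K
eps = 0.61       # effective emissivity, illustrative value only

def toa_imbalance(albedo):
    # absorbed shortwave minus emitted longwave at the top of the atmosphere
    absorbed = S / 4.0 * (1.0 - albedo)
    emitted = eps * sigma * T ** 4
    return absorbed - emitted

lo, hi = 0.2, 0.4            # wide prior limits on the tunable parameter
for _ in range(60):          # bisection: shrink the interval until balance
    mid = 0.5 * (lo + hi)
    if toa_imbalance(mid) > 0.0:
        lo = mid             # too much absorbed -> need more reflection
    else:
        hi = mid

print("tuned albedo:", round(0.5 * (lo + hi), 4))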

Reply to  Nick Stokes
April 26, 2017 3:10 am

There’s another possibility you leave out: the models ARE tuned, and they are so bad they STILL can’t match reality.
Have to consider all the angles.

David Wells
Reply to  Nick Stokes
April 26, 2017 3:34 am

https://wattsupwiththat.com/2017/01/02/john-christy-climate-video/
https://wattsupwiththat.com/2015/02/20/believing-in-six-impossible-things-before-breakfast-and-climate-models/
https://wattsupwiththat.com/2016/12/29/scott-adams-dilbert-author-the-climate-science-challenge/
https://wattsupwiththat.com/2017/01/10/the-william-happer-interview/
http://www.thebestschools.org/special/karoly-happer-dialogue-global-warming/william-happer-interview/
Nick, now we have 7 impossible things to believe before breakfast. Do you not understand how climate modelling works? I had an email exchange with Joanna Haigh of Reading University during which she tried to persuade me to believe what you want me to believe. She said that it is amazing how good climate models are with just a few tweaks. What she meant was parameterisation, which in simple terms means you fiddle with the knobs until the model gives the answer that you believe is the right answer. But if you investigate exactly how a computer functions, because of the basic physics of how a computer works it cannot do what the IPCC or Gavin Schmidt needs it to do anyway. The grid size used is too large to cope with aerosols, but if you reduce the grid size to include lightning, thunderstorms and aerosols, just one computer run would take 4.5 billion years.
Why is it impossible to believe that what is supposed to be happening is not happening?
Why is it impossible to believe that what is supposed to be happening is not happening?

Jared
Reply to  Nick Stokes
April 26, 2017 4:32 am

The models have completely failed to properly predict anything, which means they are nothing but junk. I could make a model to accurately reconstruct the stock market and have it project ahead to the year 2100. Whether my model is junk or accurate depends on how well it predicts the future, not how well I designed it to reconstruct the known past.

jfpittman
Reply to  Nick Stokes
April 26, 2017 4:44 am

James, even more important is to ask the question “Can they be linked by the physics?” At RC, Dr. Browning took the models to task over this and showed where the problem is with Nick’s claim. The use of hyperviscosity and other restrictions meant the physics could not behave the way that the modellers were claiming. His comments were based on the peer-reviewed works that he and Kreiss published on these problems. They were physics papers.
It is important to remember that, though they are claimed to be physics models, they are actually engineering models. This is because if the “parameters that are not well established a priori, and are pinned down by tuning to some particular observation”, this precludes the physics and the relationships are determined by bulk considerations and not a physics package. Though this may be appropriate for a one-dimensional model, it has real problems with 3D time-stepping sequences using Navier-Stokes on coarse gridding, which was the point of several of Browning and Kreiss’s works.

Adam_0625
Reply to  Nick Stokes
April 26, 2017 5:21 am

The models start with the assumption that the enhanced greenhouse effect is true (as shown by the models having a CO2 coefficient greater than 1). Yet, there is no empirical data to prove that such an effect exists to the extent assumed. The only way to continue to use such an unverified assumption is to modify the weight given to other variables to support it. But, in the world of alarmism, modifying the weight given to other variables to support it is not the same as tuning.

Reply to  Nick Stokes
April 26, 2017 5:24 am

Gavin admitted himself that what they choose to do decides the results. Artefact.

Reply to  Nick Stokes
April 26, 2017 5:34 am

Nick writes

They could be right because they are based on sound physical principles without need for tuning.

Could be…but they aren’t. They aren’t actually based on sound physical principles, because they are full of approximations and are iterative.

One notable one relates to cloud reflection, which is tuned relative to TOA radiation balance.

This is a new level of spin I see here from you, Nick. Tuning “relative to” the radiative imbalance is simply hilarious. The radiative imbalance is set by tuning; it’s not some sort of harmonious relationship between fitted clouds and fed-back CO2.
Here from Mauritsen et al
http://onlinelibrary.wiley.com/doi/10.1029/2012MS000154/full
In addition to targeting a TOA radiation balance and a global mean temperature, model tuning might strive to address additional objectives, such as a good representation of the atmospheric circulation, tropical variability or sea-ice seasonality. But in all these cases it is usually to be expected that improved performance arises not because uncertain or non-observable parameters match their intrinsic value – although this would clearly be desirable – rather that compensation among model errors is occurring. This raises the question as to whether tuning a model influences model-behavior, and places the burden on the model developers to articulate their tuning goals, as including quantities in model evaluation that were targeted by tuning is of little value. Evaluating models based on their ability to represent the TOA radiation balance usually reflects how closely the models were tuned to that particular target, rather than the models intrinsic qualities.

ferdberple
Reply to  Nick Stokes
April 26, 2017 5:56 am

the models are tuned during the hindcasting. that is why widely different assumptions regarding aerosols return roughly the same projections.
further, all models are tuned during development by the programmers. bugs that make the model return the answers the developers expect are never detected. bugs that make the model return the answers the developers did not expect are identified and corrected. as such, it is most likely the bugs in the models that are making them forecast too much warming, rather than the tuning exercise.
google “experimenter expectation effect”. it occurs in computer programming very frequently, because it is physically impossible to exhaustively test any non trivial computer code, which certainly includes climate models. based on 40+ years of professional programming, I would expect there to be dozens of undetected bugs in each climate model.

MarkW
Reply to  Nick Stokes
April 26, 2017 6:30 am

The models have to be tuned, because:
1) We don’t know all the factors that impact climate
2) Most of the factors that we do know about operate on scales too small for the models to handle so have to be parameterized.
3) The modelers themselves talk about the process of tuning the models.

Reply to  Nick Stokes
April 26, 2017 8:43 am

@ jfpittman
“Can they be linked by the physics?”
That’s the point I keep stressing .
I have yet to see any computable experimentally testable , even 1 dimensional , explication of how some spectral phenomenon , ie : “greenhouse gas” effect , traps heat in excess of that calculated for a planet’s , or simple ball’s , spectrum as see from its radiant sources and sinks .
I’ve had some extended exchanges at Real Climate reaching the point of agreement on the ~ 279K temperature of a gray ball in our orbit , but hit a brick wall in getting even a response to the next step , the generalization of the computation which produces the endlessly parroted 255K meme to arbitrary spectra .
I’m sorry I have no interest in anybody’s fancy Navier-Stokes models if they cannot present the experimentally testable and tested equations for the differential which is asserted to be “trapping” kinetic energy .
That’s why I am starting a fund at http://cosy.com/Science/ComputationalEarthPhysics.html for a prize for the best “YouTube” quantitative experimental test of one of the non-optional classical physical computations necessary to get from the Sun’s output to our mean surface temperature .
Join me to make it substantial .
This field is in desperate need of returning to the quantitative experimental analytical method of successful branches of applied physics and ending the decades of , as Nir Shaviv puts it , utter stagnation .

Clyde Spencer
Reply to  Nick Stokes
April 26, 2017 9:18 am

NS,
However, even knowing what needs to be done isn’t always possible with modern computers. While it would be ideal to link to physics and operate from ‘First Principles,’ it is well known that the energy exchanges involving clouds can’t be handled at the scale of the clouds. The computational cells are at a much coarser resolution and therefore “parameterized” cloud interactions are necessary. Therein is where some of the tuning occurs. And, I suspect, the ‘fudge factors’ that different modelers use are why all the models don’t give the same results. It is all a bit like dropping a fiberglass reproduction of a Ferrari body over a Volkswagen frame and motor. Oh yes, and ignore that man behind the curtain!

Reply to  Nick Stokes
April 26, 2017 9:22 am

Thanks for the link, TimTheToolMan — a very useful paper.

Michael Jankowski
Reply to  Nick Stokes
April 26, 2017 10:32 am

“…It is true that there are parameters that are not well established a priori, and are pinned down by tuning to some particular observation…”
You admit they are tuned. So what else do you have to argue about?

john harmsworth
Reply to  Nick Stokes
April 26, 2017 11:40 am

Nick-What exactly are you saying here? Is the science settled? Or is it not?

Nick Stokes
Reply to  Nick Stokes
April 26, 2017 12:54 pm

TTTM,
“Tuning “relative to” the radiative imbalance is simply hilarious.”
A very odd choice of Mauritsen quote, where he is describing what they do additional to targeting radiation imbalance. In the earlier para 4 they say
“Instead, the radiation balance is controlled primarily by tuning cloud-related parameters at most climate modeling centers “
Just as I said. And what is the issue with iterative? There is virtually no large scale operation that isn’t iterative. For loops are as old as Fortran.

Reply to  Nick Stokes
April 26, 2017 3:11 pm

Nick writes

And what is the issue with iterative?

Uncertainty accumulates to well beyond the point where the model output has meaning or value. This makes them useless for projection.
Beyond that and because of that, they’re an expression of how they were (built and) tuned.
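A toy illustration of that point: iterate a simple nonlinear update twice, with starting states differing by one part in a million, and watch the runs part company. The logistic map stands in here purely as an arbitrary chaotic example, not as a climate model.

# Minimal sketch of how a tiny initial uncertainty compounds under iteration.
def run(x0, steps=50):
    x = x0
    for _ in range(steps):
        x = 3.9 * x * (1.0 - x)   # logistic map: a standard chaotic toy
    return x

a = run(0.500000)
b = run(0.500001)                 # perturbed by one part in a million
print(a, b, abs(a - b))           # after 50 steps the two runs no longer agree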

Kurt
Reply to  Nick Stokes
April 26, 2017 3:15 pm

Nick writes:
“And a further counter as Gavin made – if they were so tuned they wouldn’t show this variability.”
This is not true – Gavin changed the goalposts so he could respond to a straw man. Curry was referencing an admission by modelers that any model that didn’t replicate the sharp uptick in temperatures beginning at the end of the 20th century would simply be discarded. This is what she refers to as “tuning” for her argument of circularity when using those models for attribution. That uptick in the empirical record starts in the early 1980s.
Gavin chose a wider interval, beginning in the 1950s, to show his purported model spread. He chose that interval because it “was highlighted in the IPCC reports” – something of no relevance at all to Curry’s point. The graphs shown by Gavin simply do not address Curry’s circularity argument. This is just another example of Gavin Schmidt demonstrating that he is a sloppy thinker.

Kurt
Reply to  Nick Stokes
April 26, 2017 3:28 pm

Also note that the large spread of the trend in individual model simulations doesn’t refute an assertion that the models were tuned so that the average trend around which the spread occurred matched observations.

Kurt
Reply to  Nick Stokes
April 26, 2017 3:37 pm

Here’s Curry’s explanation of her “tuning” argument from the post linked in Schmidt’s response.
“And yes, lets criticize my statement using climate model results, which are tuned to the recent warming as per several published papers and a blog post from Isaac Held.”

Tim Hammond
Reply to  Nick Stokes
April 27, 2017 12:33 am

But models built purely on the physics neither replicate the past nor predict the future.
That is the problem.
Climate scientists refuse to accept that this means we do not have sufficient understanding of the climate, which is the obvious conclusion.

JasG
Reply to  mothcatcher
April 26, 2017 5:26 am

Nick and Gavin rely on their own definition of ‘tuning’. However the word is used throughout the literature meaning that if the model parameters are nowhere near matching reality they are ‘tuned’ until they do. The temperatures themselves are never actually tuned. It’s a distinction without a difference and ‘lying by omission’. Many researchers admit to tuning even in their papers. Others have merely remarked how remarkably the different models match given their large differences in inputs and then wistfully pronounced it as ‘odd’.
In fact, a proper sensitivity analysis would show anything from massive cooling to massive warming due to the error margins of the aerosol parameter. Hence after tuning they are also culled to form a smaller cluster that always warms alarmingly to the pre-ordained 3.5K sensitivity. Using this pre-selected cluster for any kind of verification or attribution is just confirmation bias. The entire exercise is ruled by subjectivity and it’s plain dishonest to pretend otherwise. The best match to obs is made by having zero water vapour feedback, a small CO2 sensitivity and small aerosol effect.

ferdberple
Reply to  JasG
April 26, 2017 6:11 am

agreed. climate models would deliver much more value if they were used for sensitivity analysis of the input parameters, rather than trying to “project” the future of a chaotic system. “project” being an admission that the models are not capable of successful prediction.

Science or Fiction
Reply to  JasG
April 26, 2017 7:00 am

As the models have very different sensitivities for various input parameters, how would you select which model to use for your sensitivity analysis?

john harmsworth
Reply to  JasG
April 26, 2017 11:43 am

JasG- Well stated but you could shorten all that to,”it’s plain dishonest”.

higley7
Reply to  mothcatcher
April 26, 2017 5:34 am

It is my understanding that the climate computer modelers pretty much gave up a long time ago trying to program real scientific principles into their models and went to algorithms that are supposed to approximate the net effect of all the principles. Of course, they thus do not include over 50 major climate factors and magnify greenhouse gases from undetectable effects to drivers of the climate, and even the Universe, if you give them enough funding.

Latitude
Reply to  mothcatcher
April 26, 2017 6:16 am

not ‘tuned’ or ‘trained’ to be reasonably consistent with recent historical data…
..data that has been consistently adjusted….the models can only be as honest as the data put in them
It’s all hogwash…..if a model was accurate 5 years ago…it would not be accurate now

DaveK
Reply to  Latitude
April 26, 2017 8:41 am

An interesting discussion of parametric modeling. But what do we do when Artificial Intelligence is turned loose on weather and then climate forecasting? Say what you want about the usefulness of parametric modeling, but AI is different. The scary thing is that AI can make predictions that cannot be explained. That is, there is no direct correlation discernible between a prediction and the input data, and there is no way to tease out the why of it.
The best AI’s basically generate their own algorithms in a trial and error fashion, so we can’t even really know how they work. As things stand now, we have a basis of understanding the results of a weather or climate model. What will we do when the “model” is simply a black box that we feed tons of data into, gives us results that are reliably right, but we do not and cannot understand why?

Clyde Spencer
Reply to  Latitude
April 26, 2017 9:34 am

DaveK,
Actually, your AI vision isn’t all that different from the present situation. As I understand it, the GCMs have modules that are written in different programming languages, and the modelers are reluctant to share their code. Thus, for anyone outside the team of developers, you would have to get your hands on the source code, be fluent in all the languages employed, and have verbosely commented code and/or a manual of operation that explained all the steps and assumptions. It would help to have access to a supercomputer to test changes in code to address suspected ‘bugs.’ All of those are beyond any one person’s ability and would require a funded Red Team to explore. A step in the right direction would be an annotated flow chart of the workings for all the different models to see how they differ.

john harmsworth
Reply to  Latitude
April 26, 2017 12:47 pm

DaveK- Artificial intelligence in modelling can’t be worse than the current paradigm of Imitation Intelligence. The problem comes about when we think we can act on our models. I’m no computer expert but I would have thought that correct models could in fact be examined to determine what differentiates them from incorrect ones. It might be a slow process with multiple variables requiring a lot of follow on research, but that is science. It’s hard!

Kurt
Reply to  mothcatcher
April 26, 2017 3:27 pm

Also note that the large spread of the trend in individual model simulations doesn’t refute an assertion that the models were tuned so that the average trend around which the spread occurred matched observations.

Scottish Sceptic
April 26, 2017 1:09 am

A week or so ago, you made similar comments which I took to refer to Tony Heller’s site. I now see he’s realclimatescience.com – not RealClimate – so I think you must have been referring to Gavin’s thing. As such, apologies to you.

April 26, 2017 1:10 am

headline from today’s London Times:
“We’re all victims of the great green swindle”
(the headline tells it all, no need to read the article/src)

Eugene S Conlin
Reply to  vukcevic
April 26, 2017 2:22 am
Reply to  Eugene S Conlin
April 26, 2017 2:55 am

great green swindle
from the article: “The same mistake is now being made subsidising power stations to burn American wood pellets that are doing more harm to the climate than the coal they replaced, according to a recent Chatham House report. Drax in Yorkshire, once the largest, cleanest, most efficient coal-fired power station in Europe, has been converted to burn wood pellets with an annual £500 million public subsidy but it now pumps out more CO2. Wind farms are little better because we’ve had to build diesel power plants across the country to help on days when the wind doesn’t blow at the right speed.”

Gamecock
Reply to  Eugene S Conlin
April 26, 2017 3:48 am

“subsidising power stations to burn American wood pellets that are doing more harm to the climate than the coal they replaced”
“The climate” – whatever that is – doesn’t care.

dennisambler
Reply to  Eugene S Conlin
April 26, 2017 3:52 am

Unfortunately, the new memes, such as “all that nitrogen dioxide and those toxic pollutants”, quoted by the Dr as a reason for asthma, have become commonplace. The ludicrous “40,000 premature deaths” is also freely quoted in the media.
There is a good examination of it here:
http://euanmearns.com/mortality-from-diesel-car-pollution-in-the-uk/

andrewmharding
Editor
Reply to  vukcevic
April 26, 2017 3:19 am

This is what happens when vested “scientific” interests get overcome by common sense.

gnomish
April 26, 2017 1:14 am

good one, eh.
those chubby faces with the mephistophelian goatees and flamboyant vanities are practically vaudevillian villains.

Titus
April 26, 2017 1:23 am

A one sentence comment has appeared in Realclimate:
48 TW says:
22 Apr 2017 at 2:56 PM
The main thing that the figure in 1) in the post demonstrates is that the models predict more warming than the observations show.
Interesting to see if it gets responses.

Reply to  Titus
April 26, 2017 3:53 am

Something like Harry and Louise might work. But you’d need 20 million and left-wing media would very likely refuse to air it.

Reply to  Steve Case
April 26, 2017 3:59 am

Hmmm, looks like I posted on the wrong forum – sorry about that (-:

Santa Baby
April 26, 2017 1:55 am

If they are spaghetti-different, then just one or none can be correct. The spaghetti results show that this is not settled science.

RockyRoad
Reply to  Santa Baby
April 26, 2017 6:30 am

Use an axe for a hammer and you get about the same results.

April 26, 2017 2:07 am

Atmospheric and oceanic computational simulation models often successfully depict chaotic space–time patterns, flow phenomena, dynamical balances, and equilibrium distributions that mimic nature. This success is accomplished through necessary but nonunique choices for discrete algorithms, parameterizations, and coupled contributing processes that introduce structural instability into the model. Therefore, we should expect a degree of irreducible imprecision in quantitative correspondences with nature, even with plausibly formulated models and careful calibration (tuning) to several empirical measures.
http://www.pnas.org/content/104/21/8709.full
Each of the CMIP models has 1000’s of plausible solutions – one run from each model is chosen to join the opportunistic ensemble. Each run is a product of initial and boundary conditions, and of instabilities resulting from sensitivity to small changes in initial conditions and from structural instability. So the solutions are either tuned or imprecise. Neither has much scientific validity. For projections, James McWilliams goes on to say that the only way to define the imprecision of a single model is to systematically design model families in a perturbed-physics approach.

ferdberple
Reply to  Robert I. Ellison
April 26, 2017 6:17 am

exactly. the models return a solution “space”, showing the range of possible answers for a single combination of CO2. climate science then averages this space to a single answer, which is misleading in the extreme. the future is not an average of all possibilities. otherwise one could simply bet “7” each time on the “craps” table and win every roll of the dice.

john harmsworth
Reply to  ferdberple
April 26, 2017 1:41 pm

Don’t forget, any chance “correct” solution will only apply to that instant of time. The fundamental assumptions might change imperceptibly even while your model is running.

Moa
Reply to  ferdberple
April 26, 2017 8:24 pm

Although the public reasons for the Alarmists’ arguments are “but what if you roll snake eyes every time! it’s possible so we should spend a $40 Trillion to mitigate it”.
Of course the main private reason is: “Hey fellow rich investors, if you also invest in ‘green energy’ you will be able to use State and International bodies to extort taxes from people less well off than you – the ultimate in rent-seeking opportunities for redistribution of wealth from the poor to the rich !”

April 26, 2017 2:11 am

Yes and twice yes the models ARE tuned;
And the Trump administration is not as revolutionary as announced but is evasive and adaptive / affirmative.
The human condition.

HotScot
April 26, 2017 2:16 am

Vukcevic
It’s ‘The Times’, not the London Times. I don’t mean to be pedantic, but there are several Londons around the world and only one ‘The Times’. I did wonder if you meant the London Times of Canada, perhaps.
However, I found the article and it’s interesting. It attacks Gordon Brown’s promotion of diesel cars whilst both Chancellor and PM, all the time acknowledging there would be local pollution problems. In other words, the government knew the problems yet still told us all to buy diesel because it produced less CO2. The fact that diesel engines are some 25% more efficient than petrol cars, thereby saving the world’s dwindling oil resources, seems to pass everyone by. 61% of the cost of petrol (59% for diesel) is government taxes (the highest in Europe), but that remains ignored.
But the article goes further by condemning Drax, the power station here that imports wood pellets from America to burn. Drax was converted from the cleanest (in terms of CO2 emissions, which I don’t count as clean) power station in Europe to one producing more CO2, whilst burning wood, and receiving a £500M annual subsidy for doing so.
But there’s more. She goes on to condemn wind farms because of the diesel powered generators required when the wind doesn’t blow. It seems it’s OK to poison people to maintain a green fantasy, but not OK for consumers to run efficient cars where the only apparent health effects are to city people, whilst the rest of the country relies on diesel with no negative effects. Once again, the rest of the UK dictated to by a London elite.
Then the article points out a Scottish farmer boasting that he keeps his home heating on in summer because “he is paid more in subsidies to use “green” wood chips for fuel than he pays out in heating costs.”
Then onto Anaerobic digesters “turning huge quantities of crops into small quantities of methane for the national gas grid thanks to yet more subsidies costing £200 million a year.”
But the last paragraph offers the UK a glimmer of hope. “Downing Street policy advisers hint that Theresa May is on the side of the consumer, and sceptical of the latest money-spinning environmental fad. Last year, the prime minister’s joint chief of staff Nick Timothy described the Climate Change Act, which has been at the root of many of these misguided policies, as “a monstrous act of national self-harm”. He was right.”
The groundswell of opinion against the green monster is growing in the UK. Hopefully the Conservatives can win the coming general election with a sufficient majority and ensure we follow the US in targeting wasteful green policies that punish everyone here.

Reply to  HotScot
April 26, 2017 9:08 am

Grexit anyone?

HotScot
Reply to  Hoyt Clagwell
April 26, 2017 2:39 pm

@Hoyt
+++++++++++++Many upvotes.

johndo
April 26, 2017 2:17 am

A plot of the quality station “observational estimate” (for the USA) from the Watts et al paper on the same graph would be below all the individual simulations.
How is it that people still get away with using the fudged (much too high) estimates!

HotScot
April 26, 2017 2:29 am

Marginally off topic here, but if you value your health you might want to read this article by Dr. Malcolm Kendrick. There are striking similarities in the condemnation of sceptical science within the medical profession, over generations, as within the climate debate. Those of persecution, lies, conspiracies, bad science and misinformation (did I leave anything out?).
https://drmalcolmkendrick.org/2017/04/26/tim-noakes-found-not-guilty-of-something-or-other/

dennisambler
Reply to  HotScot
April 26, 2017 3:57 am

Check out medical doctor Malcolm Kendrick’s long standing challenge to the other great myth of our time, that of Cholesterol, only recently starting to fall apart in the media and thence in the public consciousness.
http://www.spiked-online.com/newsite/article/548#.WQB8PDFtns0

dennisambler
Reply to  dennisambler
April 26, 2017 4:04 am

This really isn’t far off topic. Having read the Tim Noakes piece I see Dr Kendrick features his long standing position on saturated fats. As it happens, bang up to date in the Guardian today we have:
https://www.theguardian.com/society/2017/apr/25/saturated-fats-heart-attack-risk-low-fat-foods-cardiologists?
The new paper is roundly attacked and there is a classic comment:
“Dr Gavin Sandercock, director of research at Essex University, rejected the trio’s claims about the benefit of “replacing refined carbohydrates with healthy high fat foods” as not true and not based on any existing evidence.
“We must continue to research the complex links between fat, cholesterol and heart disease but we must not replace one myth with another”, Sandercock said.
Thereby admitting that the Cholesterol story is a myth?

HotScot
Reply to  dennisambler
April 26, 2017 2:37 pm

I have been following his blog for a long time now. It’s all on there.

Ian W
Reply to  HotScot
April 26, 2017 5:50 am

You can read a very much more detailed account of this medical scam that is almost as large as the AGW hypothesis in: “Good Calories, Bad Calories: Fats, Carbs, and the Controversial Science of Diet and Health” Sep 23, 2008 by Gary Taubes

Frederic
April 26, 2017 2:33 am

“The models were not tuned to consistency with the period of interest as shown by the fact that – the models are not consistent with the period of interest. ”
——————-
Untrue. Models not consistent with the period of interest doesn’t mean they were not tuned. It simply means they are so bad that it was impossible to tune them, even with the period of interest.
It’s a known fact that when tuned with the period of interest, models tend to project stable temperatures and fail to show future warming, thus becoming useless for climate alarmists. We have ample evidence of this inconvenient fact from when Climateprediction.net’s data were openly published some years ago, before they were disappeared for being too “contrarian”.

Nick Stokes
April 26, 2017 2:38 am

“The wonderful thing is that he is arguing that Dr. Curry is wrong about the models being tuned to the actual data during the period because the models are so wrong (!).”
The even more wonderful thing is that he is right. Dr Curry is wrong. He rightly says:

1) Models are NOT tuned [for the late 20th C/21st C warming] and using them for attribution is NOT circular reasoning.

That is a simple matter of fact, as Gavin well knows. It is important to get simple facts right. Dr Curry was wrong. And he has shown it.
As to whether you think the models should cluster more around the observations in that period – well, the deviations are well known. Model runs vary because they generate synthetic weather, and the goal is to predict the climate. Runs are averaged to damp this variation. And yes, the models have run somewhat hot relative to observations of that period, as has been also well observed.

Reply to  Nick Stokes
April 26, 2017 2:54 am

Yes they are tuned and it has been demonstrated statistically.

Nick Stokes
Reply to  Javier
April 26, 2017 3:16 am

“it has been demonstrated statistically.”
Your “demonstration” seems to consist of (from below):
“Even if alternate explanations have been proposed and even if the results were not so straightforward for CMIP5 (cf. Forster et al. 2013), it could suggest that some models may have been inadvertently or intentionally tuned to the 20th century warming.”
Let’s unpack that:
1. Alternate explanations have been proposed.
2. The CMIP5 results are not so straightforward
3. it could suggest
4. may have been
Doesn’t sound like “demonstrated”

Reply to  Javier
April 26, 2017 3:35 am

Demonstrated for CMIP3. If you think that the same scoundrels who were cheating to get their models published have straightened up and are now honest about it in the next version, that is up to you.
This is all very human. Car companies were also cheating about car emissions to get their car models approved. But unlike car companies, scientists are not punished when they are found to be dishonest. They are given another chance to tweak their models.

Nick Stokes
Reply to  Javier
April 26, 2017 4:11 am

“Now you think that the same scoundrels that were cheating to get their models published”
A very large number of people have worked on these models. It is impossible to believe that they are all scoundrels. Some codes are published, and there must be many copies of the others in circulation. Massive cheating with so many people involved is unbelievable.
And it isn’t demonstrated for CMIP3 either.

Reply to  Javier
April 26, 2017 12:10 pm

“A very large number of people have worked on these models.”
That’s no obstacle. I have seen with my own eyes how scientific fraud develops. You have to get a certain result because otherwise there is no article, model, grant… So along the way, at every decision point, the one that moves you toward the desired result is taken. Every decision can be justified, but all together they constitute fraudulent bias. Very difficult to demonstrate, but then the results are irreproducible with a different approach. With a computer model it is even easier than with experimental science, and nobody in the future will be able to say that you did something wrong; the model will simply just perform badly because GIGO.
“And it isn’t demonstrated for CMIP3 either.”
Yes it is.
“the total forcing is inversely correlated to climate sensitivity.”
This just doesn’t happen by itself. The most parsimonious explanation is that the output (20th century warming) was targeted. You can argue with William of Occam if an explanation for each model is more appropriate, but he is likely to cut you with his razor.

John Bills
Reply to  Nick Stokes
April 26, 2017 2:55 am

Well Nick, they can always somewhat tune the data.

Nick Stokes
Reply to  John Bills
April 26, 2017 3:24 am

You can turn the logic of this post around. If they were tuning the models to the data, they would agree. If they were tuning the data to the models, they would agree. But in fact, for individual runs, they do not agree. So neither tuning is being done.

Reply to  John Bills
April 26, 2017 3:46 am

“So neither tuning is being done.”
You should correct your statement. The tuning to 20th century warming has been demonstrated.
Kiehl, J. T. (2007). Twentieth century climate model response and climate sensitivity. Geophysical Research Letters, 34(22).
http://onlinelibrary.wiley.com/doi/10.1029/2007GL031383/full

Nick Stokes
Reply to  John Bills
April 26, 2017 4:07 am

” The tuning to 20th century warming has been demonstrated.”
My “statement” was an expression of the logic of this post. If tuning (of model or data) was done to make the results agree, and they don’t agree, then that is evidence against use of tuning.
But Kiehl did not demonstrate tuning. He doesn’t mention the word. He says that there is an inverse relation between sensitivity and forcing. And he says that the uncertainty of forcing is large, especially aerosol forcing. There is some implication that high sensitivity models use lower forcing, probably with different aerosol assumptions. But that relates to the assumed input, not model tuning.
In any case, as your later quote says, that was for CMIP3, and the association is less clear for CMIP 5.

RockyRoad
Reply to  John Bills
April 26, 2017 6:34 am

Whoa, Nick…. How are models (and it doesn’t matter if they’re the General Circulation Model or some of my economic forecast models) “tuned” unless by adjusting the “assumed input”?
It’s either that or reconfiguring the algorithms that utilize the “assumed input”.

Reply to  John Bills
April 26, 2017 8:14 am

Nick, I think you know that the models are not tuned to the period 1950…2010 as mentioned in the cited figure. The truth is:
“A longer simulation with altered parameter settings obtained in step 1 and observed SST’s, currently 1976–2005 from the Atmospheric Model Intercomparison Project (AMIP), is compared with the observed climate.”
Source: http://onlinelibrary.wiley.com/doi/10.1029/2012MS000154/full .
It’s easy to show that the models (the mean of them) match the GMST of the time span 1976…2005 very well, but not the span 1950…2010 shown by Gavin, because they run hot outside of the tuning period. I think you know this, don’t you?

Crispin in Waterloo but really in BSD City
Reply to  John Bills
April 26, 2017 9:11 am

Nick, I think this is a case of special pleading:
“If tuning (of model or data) was done to make the results agree, and they don’t agree, then that is evidence against use of tuning.”
If models are tuned to actual radiation, actual planetary position, actual cloud reflection, actual ozone as inputs, then there is tuning of a sort. I believe that the models are tuned to real inputs, real starting conditions, and then run for a while. I believe that based on what I read about how they are tuned to initial conditions.
Tus to say that the predicted temperatures are proof they are not tuned is not on its own proof of anything. Everyone knows the model outputs are pretty useless for predicting future temperatures, or out of sample temperatures. They are very wrong most of the time over short time periods.
I have nothing to add about the quality of the models or how many parameters are or are not set. It is simply a fact that it is possible to tune a model based on actual data without using temperature as one of them, leaving it as an important output. Tuning to actual temperatures would require running the models backwards to get the initial conditions and forcing from CO2. That would quickly point out that the CO2 forcing is set too high. Pretty obvious, that one.
However they are tuned, most models tend to run hot most of the time and are therefore inadequate for our purpose, which is to inform policy. This is not a game. If it is not capable of informing policy, it is wasted money.

Reply to  John Bills
April 26, 2017 9:43 am

It’s easy to show that the models ( the mean of it) match very well the GMST of the time span 1976…

but only because they average out the horrible regional results they generate.

Nick Stokes
Reply to  John Bills
April 26, 2017 1:10 pm

frank
“It’s easy to show that the models ( the mean of it) match very well the GMST of the time span 1976…2005 “
No. Read your quote properly. It says they look at SST in that period.
But read the context. He’s describing how they tune for radiation balance. And they don’t do full runs and compare with target. He describes three steps:
“1. Short runs of single months, or if possible one or more years, with prescribed observed SST’s and sea ice concentration; first with reference parameter settings, and then altered parameter settings.”
“2. A longer simulation…” [your part-quote]
“3. Implement the changes in the coupled model setup to run under pre-industrial conditions and evaluate the altered climate. Frequently, we make small parameter changes in this step to fine-tune the climate, without first revisiting steps 1 and 2.”

This is typical of tuning. You run something for short periods, probably many times. It isn’t checking against final results. He doesn’t say how long that “longer simulation” is, but it’s pretty clear it isn’t much more than the 30 years. The purpose is to get the balance. Note step 3 – the eventual test is consistency for pre-industrial. They may not even check back to 1976.

Reply to  Nick Stokes
April 26, 2017 3:01 am

Can you explain the justification for averaging the models?
I get that it smooths out the ludicrous outliers – but they could just be rejected anyway and the accuracy of the starting parameters improved.
But I do not understand how one modeled scenario can be averaged with another.
It’s x° when it’s cloudier and y° when it’s sunnier so let us model both, average them, and say we’ve learnt if it is going to be overcast or not on average…
It makes no sense.

Nick Stokes
Reply to  M Courtney
April 26, 2017 3:21 am

“Can you explain the justification for averaging the models?”
Yes. The paradigm is that there is chaotic weather, but with an attractor, which is the climate. Individual trajectories follow divergent courses, but do cluster (think Lorenz butterfly). The way you ascertain the attractor is by averaging many trajectories.
In this paradigm, Earth’s weather is another trajectory.
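The paradigm described here can be sketched with the Lorenz system itself: individual runs started from nearly identical states diverge (the “weather”), but their time-mean statistics agree far more closely (the “climate”). A rough Python illustration, using simple Euler stepping and the classic Lorenz parameters; it is only a toy, not a GCM.

import numpy as np

def lorenz_step(x, y, z, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
    # one crude Euler step of the Lorenz equations
    return (x + dt * s * (y - x),
            y + dt * (x * (r - z) - y),
            z + dt * (x * y - b * z))

runs = []
for i in range(100):                      # ensemble of slightly perturbed runs
    x, y, z = 1.0 + 1e-6 * i, 1.0, 1.0
    zs = []
    for _ in range(5000):
        x, y, z = lorenz_step(x, y, z)
        zs.append(z)                      # track the z component
    runs.append(zs)

runs = np.array(runs)
print("spread of final z across runs:   ", runs[:, -1].std().round(2))
print("spread of time-mean z across runs:", runs.mean(axis=1).std().round(2))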

Mike Schlamby
Reply to  M Courtney
April 26, 2017 5:32 am

The usual excuse is that “well, some of them are too high and some are too low, so the ones that are too high will cancel those that are too low and so, voila, we get the correct answer”. Frequently it’s couched in terms of “chaos theory”.
Of course, the rather bold assumption is that whatever number-crunching goes on behind the scenes is somehow related to the natural processes that drive climate, and that the models approximate the behavior of the atmosphere in general, and the flow of heat in the atmosphere in particular. That’s the bit that I doubt — there’s no reason to believe that whatever the models do is an accurate simulation of the atmosphere, despite the claims of the modelers. For that to be plausible, the models would have to produce accurate forecasts, which they don’t.
Put another way, if I took the average of the amount of change in my pocket, IBM’s daily share price and the previous night’s point spreads in the hockey, the highs would probably cancel out the lows, and voila, I could predict the climate.

Reply to  M Courtney
April 26, 2017 5:39 am

But that assumes that there is only one strange attractor. It begs the very question we are trying to model.
It seems ridiculous to have ‘greater cloudiness’ and ‘more clear skies’ both leading to the same place. So how can models that project the former be averaged with those that project the latter?

Reply to  M Courtney
April 26, 2017 6:01 am

Interesting rationales for averaging results. None correctly predict, but if you average them you damp out variations, or something. Is there a standard for how many wrongs one must average to make a right?
It also seems strange that the models are not tuned but are based on sound physical principles which don’t do well matching real data. Well, back to averaging wrongs to make a right.

John Bills
Reply to  M Courtney
April 26, 2017 7:49 am

Nick,
Even Carl Mears says the models run too hot:
http://images.remss.com/figures/climate/RSS_Model_TS_compare_globe.png
And Santer proved that the warming from 1993 on was natural:
http://www.nature.com/ngeo/journal/v7/n3/fig_tab/ngeo2098_F1.html

George Daddis
Reply to  M Courtney
April 26, 2017 10:42 am

Nick, the concept of an attractor might make sense if you were averaging actual observations of chaotic weather. But in this instance these are runs of many very different models; and the focus of the discussion is whether they are or even can be representative of reality.
To assume the runs do represent reality (in order to argue for an attractor) is a great example of “begging the question”.

urederra
Reply to  M Courtney
April 26, 2017 10:44 am

Nick Stokes
April 26, 2017 at 3:21 am
“Can you explain the justification for averaging the models?”
Yes. The paradigm is that there is chaotic weather, but with an attractor, which is the climate. Individual trajectories follow divergent courses, but do cluster (think Lorenz butterfly). The way you ascertain the attractor is by averaging many trajectories.
In this paradigm, Earth’s weather is another trajectory.

I think you did not understand the Monte Carlo method.
https://en.wikipedia.org/wiki/Monte_Carlo_method
You can take one model, with 1 value for the climate sensitivity to CO2 doubling constant, execute it 200 times with random starting conditions and average the trajectories. If the average deviates compared to reality, then the model is wrong. If the average does not deviate then the model is accepted and the value you set for the climate sensitivity to CO2 doubling constant is acceptable. That would be a correct Monte Carlo Method.
But you cannot take 100 different models, with 100 different values for the climate sensitivity to CO2 doubling constant, and average the trajectories. That would be averaging the models, and that is wrong. Why? Because you are not randomly sampling the climate sensitivity constant or any other parameter. You are just averaging different models.
What would you do if the average deviates compared to reality? Discard all models? And if the average does not deviate? Accept all models? Then, if you accept all models, what would the value for the climate sensitivity constant be? The average? If the average is 3.2 degrees, then all the models that don’t use a value of 3.2 degrees will be wrong. It makes no sense to average models.
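The distinction drawn here can be made concrete with a toy “model” in which a sensitivity number sets the trend and a chaotic term supplies internal variability. Everything below is invented for illustration; the point is only the difference between Monte Carlo over initial conditions with one fixed sensitivity and averaging across models that each assume a different one.

import numpy as np
rng = np.random.default_rng(0)

def toy_model(sensitivity, x0, steps=100):
    # trend set by "sensitivity", plus chaotic-looking noise seeded by x0
    x, out = x0, []
    for t in range(steps):
        x = 3.9 * x * (1.0 - x)                 # stand-in for internal variability
        out.append(sensitivity * 0.01 * t + 0.2 * (x - 0.5))
    return np.array(out)

# (a) Monte Carlo over initial conditions, sensitivity fixed at 2.0
ensemble_a = np.mean([toy_model(2.0, rng.random()) for _ in range(200)], axis=0)

# (b) average over structurally different models (sensitivities 1.5 to 4.5)
ensemble_b = np.mean([toy_model(s, rng.random()) for s in np.linspace(1.5, 4.5, 100)], axis=0)

print("final value, fixed-sensitivity ensemble mean:", round(ensemble_a[-1], 3))
print("final value, multi-model mean:               ", round(ensemble_b[-1], 3))
# (b) trends roughly like a sensitivity-3.0 model, a value no member actually assumes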

urederra
Reply to  M Courtney
April 26, 2017 10:46 am

George Daddis (April 26, 2017 at 10:42 am)
You beat me by 2 minutes.

Nick Stokes
Reply to  M Courtney
April 26, 2017 1:20 pm

John Bills,
“Even Carl Mears says the models run too hot:”
No, that is a plot of RSS V3.3 TLT. And what RSS says about that is:

RSS TLT version 3.3 contains a known cooling bias.

And what Santer et al showed wasn’t that warming from 1993 on was natural at all. They showed that if you removed ENSO and volcano effects, the data followed the (rising) model results rather well. It drops below between about 2003 and 2012 (end of their data), which Lord M celebrated for years. But then it didn’t…

Nick Stokes
Reply to  M Courtney
April 26, 2017 1:24 pm

Urederra,
“I think you did not understand the Monte Carlo method.”
This is chaos, not a random process. The fact is that you have a number of trajectories, which cluster about an attractor. Averaging is a process that diminishes the variation about the attractor, and makes it clearer.
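For readers who want to see what "trajectories clustering about an attractor" looks like in practice, here is a minimal sketch using the textbook Lorenz-63 system with its standard parameters (a toy system, not a climate model): individual trajectories diverge completely, yet statistics averaged across them settle down.

```python
import numpy as np

def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 system (crude, but fine for illustration)."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

rng = np.random.default_rng(1)
n_traj, n_steps = 50, 5000

# Start 50 trajectories from slightly perturbed initial conditions.
states = np.array([[1.0, 1.0, 1.0] + 1e-3 * rng.normal(size=3) for _ in range(n_traj)])

history = np.empty((n_steps, n_traj, 3))
for k in range(n_steps):
    states = np.array([lorenz_step(s) for s in states])
    history[k] = states

# Individual trajectories decorrelate completely (chaos)...
print("spread of final x values:", history[-1, :, 0].std())
# ...but an average over many trajectories on the attractor is stable and repeatable.
print("ensemble-mean z over second half:", history[n_steps // 2:, :, 2].mean())
```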

Reply to  Nick Stokes
April 30, 2017 10:30 am

Nick, averaging hypothetical outcomes makes what clearer? You ceaselessly fail to understand a model's proper role. Relying on parameterized inputs to "line up nicely" with observational sciences is ludicrous. What happens when we don't fully grasp the attributions? You're a math guy, tell me how wrong the outcomes can be when the variables are off? Even "if" the models somehow resemble the observed, we are unable to validate the attributions therein. Am I wrong? How would you know?
Model runs do have a place; over time, proving our assumptions via observations, we can study and tune the variables. With this knowledge we can begin to understand complex relationships. But I live and breathe in the physical world, and my world cannot be described via modeling.
For those who imagine I'm wrong, consider the immense amount of information the sciences can glean from my comments… now what should my face look like? And how many friends do I have? What music do I listen to? What am I wearing right now… And how many goals will I score on Tuesday?

Duster
Reply to  M Courtney
April 27, 2017 10:53 am

… They showed that if you removed ENSO and volcano effects, the data followed the (rising) model results rather well.
Please think about that. ENSO IS climate. And the influence, if any, of volcanoes would be as well. So in effect, you are arguing that IF you disregard important climatic events, then the models are fine. But, Nick, we live in a real climate. None of your models models it, if you ignore influences like ENSO. You are arguing that a "spherical cow" on a frictionless surface will adequately model herd behaviour.

Reply to  Duster
April 27, 2017 11:33 am

Please think about that. ENSO IS climate. And the influence, if any, of volcanoes would be as well. So in effect, you are arguing that IF you disregard important climatic events, then the models are fine.

I think what they've done is just carve off all the other effects, leaving only CO2. It's just that CO2 is an insignificant fluctuation of climate.

Reply to  Nick Stokes
April 26, 2017 3:14 am

Oh, well he said she was wrong, so that proves it. That sounds a lot like the Demi Moore character in “A Few Good Men” strenuously objecting.

Reply to  James Schrumpf
April 26, 2017 9:24 am

+10

HotScot
Reply to  Nick Stokes
April 26, 2017 3:38 am

Nick,
“And yes, the models have run somewhat hot relative to observations of that period, as has been also well observed.”
I stand to be corrected here, and I'm not a scientist, but from what I have seen of model predictions of global temperatures over the last 30 or 40 years, it seems observed tropospheric temperatures are about to drop below the minimum, most conservative estimates published by the IPCC.
And I understand the argument about not living in the troposphere, but there is considerable, credible evidence to suggest that surface temperature measurement in the US is frequently subject to the urban heat island effect. Furthermore, the bulk of surface stations are based in the US/Europe whilst the rest of the world is badly provided for.
Judith Curry maintains, I believe, that the urban heat island effect isn't as important as it's made out to be as long as the anomaly is used rather than the actual temperature. Which it routinely is not, when scare stories of the hottest day/month/year etc. 'ever' are fed to the media.
My point is that even ignoring the deficiencies of the planet's surface station measurements, tropospheric temperatures are still well below where they were predicted to be, which makes the models just plain wrong.
What Gavin seems to be arguing for here is that the models were right, as models, “the fact that the trends in the individual model simulations over this period go from 0.35 to 1.29ºC!”, which is not the same as what the observed temperatures actually did. Tuned or not, the models seem wrong.
Have I got the wrong end of the stick here?

Nick Stokes
Reply to  HotScot
April 26, 2017 3:46 am

“tropospheric temperatures are still well below where they were predicted to be, which makes the models just plain wrong”
They may be getting tropospheric temperatures wrong. But the evidence is thin. V6 of UAH says they are. But V5.6 had them closer. And we still have only 38 years of satellite observations, dominated by big ENSO fluctuations.

John Bills
Reply to  HotScot
April 26, 2017 7:51 am
HotScot
Reply to  HotScot
April 26, 2017 2:35 pm

Nick,
hang on. Is the latest version of UAH V6 or V5.6?
In my simplistic, non scientific, addled mind, I would imagine V6 would be the latest figures, but you are quoting V5.6 as the one to refer to. Or is scientific counting different to mine?
As for the 38 years of satellite observations, I understood a reasonable period of observation to be 30 years, at least that’s what I’m told by innumerable climate alarmists when I quote the pause to them.
In any event, it seems to me that over 38 years the satellites have not simply disagreed with the alarmists' contention that the world's warming is out of control; they seem to blatantly contradict it.
Without wanting to be confrontational or rude, I can’t understand why there is no credible, empirical evidence that atmospheric CO2 causes global temperatures to rise. There should be hundreds, if not thousands of studies demonstrating that in the field, CO2 is a curse.
Nor do I understand why, whilst the planet has greened by 14% over the last 30 years, there is not a single negative effect of increased atmospheric CO2. Not even the collective supposed negative side effects would come close to the 14% benefit mankind has enjoyed so far.

HotScot
Reply to  HotScot
April 26, 2017 2:43 pm

Bills
Thank you, that's the kind of thing I meant. It's kind of obvious isn't it, even to a thicko like me. In fact it looks like the planet has already fallen below the lowest IPCC predictions, and I can only guess it has jumped back up to the lower reaches because of El Niño?

Nick Stokes
Reply to  HotScot
April 26, 2017 2:58 pm

“Is the latest version of UAH V6 or V5.6?”
Actually, they are both currently produced. But the point is that if V5.6 agrees with models and V6 doesn’t, it’s a weak basis for saying models are wrong. What might V6.1 say?

Newminster
Reply to  Nick Stokes
April 26, 2017 3:47 am

“The climate system is a coupled non-linear chaotic system, and therefore the long-term prediction of future climate states is not possible.”
Perhaps that simple statement ought to be pasted on the wall in foot-high letters above every modeller’s computer screen. It doesn’t matter if the models are “tuned” or not. They are being asked to do something they are inherently incapable of, as are the people who set them up.
Broad weather patterns we can project, provided we have the humility to look back in time and learn from history and not assume that everyone born before 1900 was a cretin. All that modellers are doing — badly — is pretending that they can predict the weather 50 years into the future and aggrandising themselves by claiming to be able to forecast what "the climate" will be like. (Hint: it will be pretty much as it has been for the last several thousand years, though probably a bit cooler, not enough to notice.)
As for climate “science”, science doesn’t come into it. This was environmental politics from the beginning and there is more than enough evidence to sustain that accusation beyond a reasonable doubt in any court of law. I wouldn’t stop anyone researching climate; with an open mind we might actually learn something useful about it but only with open minds which are in short supply today. Closed minds will get us nowhere — assuming that in climate research there is actually anywhere “to get to” which I beg leave to doubt.

DB
Reply to  Nick Stokes
April 26, 2017 3:55 am

Mauritsen et al. had a paper on the tuning of climate models. In it we read
“Climate models ability to simulate the 20th century temperature increase with fidelity has become something of a show-stopper as a model unable to reproduce the 20th century would probably not see publication, and as such it has effectively lost its purpose as a model quality measure.”
Tuning the climate of a global model
http://onlinelibrary.wiley.com/doi/10.1029/2012MS000154/abstract
http://www.mpimet.mpg.de/fileadmin/staff/klockedaniel/Mauritsen_tuning_6.pdf

Nick Stokes
Reply to  DB
April 26, 2017 4:32 am

“In it we read”
Yes. And they start with:
“In this paper we have attempted to illustrate the tuning process, as it is being done currently at our institute. Our hope is to thereby help de-mystify the practice, and to demonstrate what can and cannot be achieved.”
and then (para 65, the one before your quote):
“The MPI-ESM was not tuned to better fit the 20th century. In fact, we only had the capability to run the full 20th Century simulation according to the CMIP5-protocol after the point in time when the model was frozen”

Reply to  Nick Stokes
April 26, 2017 5:45 am

Nick writes

That is simple matter of fact, as Gavin well knows. It is important to get simple facts right. Dr Curry was wrong. And he has shown it.

More spin here, Nick. You choose to take the meaning that the model as a whole was not tuned, but that's not what Judith was saying. Every component of the model was tuned to be within observed ranges, and that's particularly true of, say, clouds, where they are not based in physics and are instead set to believable quantities based on observed values.
And during that process, all the components are tweaked within their believable ranges to make the model stable and give expected results.
So you can hold your head up and say that a model isn’t “tuned” [like you might fit an arbitrary function to model an elephant] but you’re not fooling everyone when you make those claims.

Nick Stokes
Reply to  TimTheToolMan
April 26, 2017 3:03 pm

“And during that process, all the components are tweaked within their believable ranges to make the model stable and give expected results.”
That’s actually not true, and not feasible. You should read Mauritsen’s paper carefully to see what is done. Tuning is usually a bootstrapping process where you do a little bit of a run, narrow the range of some parameter, do a longer check probably on something else, and so on.
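Purely as a schematic of that kind of iterative narrowing (a toy quadratic objective standing in for a model diagnostic; this is not the MPI-ESM procedure), the loop looks something like:

```python
import numpy as np

rng = np.random.default_rng(2)

def short_run(param):
    """Cheap, noisy diagnostic of a toy 'model' for a given parameter value
    (e.g. a few-year check on radiative balance). Purely illustrative."""
    return (param - 0.7) ** 2 + rng.normal(0, 0.05)

def long_run(param):
    """More expensive, less noisy diagnostic (e.g. a longer control run)."""
    return (param - 0.7) ** 2 + rng.normal(0, 0.01)

lo, hi = 0.0, 2.0
for iteration in range(4):
    # Step 1: scan the current range with cheap short runs.
    candidates = np.linspace(lo, hi, 9)
    scores = np.array([short_run(p) for p in candidates])
    best = candidates[scores.argmin()]
    # Step 2: narrow the range around the best candidate.
    width = (hi - lo) / 4
    lo, hi = best - width, best + width
    # Step 3: confirm with a longer run before the next round.
    print(f"iter {iteration}: best~{best:.3f}, long-run check={long_run(best):.4f}")
```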

Reply to  TimTheToolMan
April 26, 2017 3:17 pm

Nick writes

That’s actually not true, and not feasible. You should read Mauritsen’s paper carefully to see what is done.

I have read it carefully. You should understand what it is that they're doing! Mauritsen et al. aren't developing the model, they're running it. They have access to a number of parameters that the developers have carefully provided, which allow tuning without causing the model to explode (too much, in all likelihood).
Beneath what Mauritsen et al. see are a myriad of parameters that have been pre-chosen to represent the components modelled. These were also tuned.

jfpittman
Reply to  TimTheToolMan
April 27, 2017 6:05 am

Yes, TTTM, that is correct. Nick states as much at “Nick Stokes April 26, 2017 at 1:10 pm.” They run parts and small time lengths to tune. So, even if you parse the words such that Dr. Curry is wrong, she is still correct.
The other real problem people are generally, but not specifically, stating is that the Lorenz chaotic-average approach is a mapping of Y onto X independents numerous times. There is not even a theoretical confirmation, except for trivial solutions, that one can map one X onto numerous Y dependents and get a correct Lorenz chaotic average. This is their real circular argument, even though the actual X is showing they are wrong. The real breakdown occurs when the modellers use hyperviscosity and other constraints to keep the models from exploding. They are short-circuiting the butterfly effect that is the basis of their claims. One can use constraints, and they have been used successfully, for Y on X, but it cannot be shown for X on Y where there is one X.
If it is still in the archives, you really need to read the exchange that Gavin and Dr Browning had at RC. Gavin did a real good job of defending the models, but in doing so just highlighted the fact that they are engineering models, not physics models, and that is the rub. Engineering models work because they get to examine how the dependent and independent variables relate to each other. You can't do this with climate models except, as authors such as Tebaldi and Knutti have indicated, by assuming validity.

Reply to  Nick Stokes
April 26, 2017 7:05 am

The models are junk. It has been shown, as a matter of fact, that a simple extrapolation of the GHG forcing used in the ensembles can produce a noiseless result remarkably close to the expensive models that make up all these graphs.
Damn straight these models are tuned to do what they are doing.
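The "simple extrapolation" idea can be sketched in a few lines of Python (an energy-balance style emulator; the CO2 curve and the 0.5 K per W/m² response factor below are illustrative assumptions, not fitted values):

```python
import numpy as np

# Illustrative CO2 concentrations (ppm); a real series would come from observations.
years = np.arange(1950, 2011)
co2 = 310.0 * np.exp(0.0045 * (years - 1950))        # assumed smooth rise, ~310 -> ~406 ppm

forcing = 5.35 * np.log(co2 / co2[0])                # standard simplified CO2 forcing (W/m^2)
response_per_wm2 = 0.5                               # assumed transient response (K per W/m^2)
delta_t = response_per_wm2 * forcing                 # smooth, "noiseless" warming curve

print(f"emulated 1950-2010 warming: {delta_t[-1]:.2f} K")
```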

Reply to  Nick Stokes
April 26, 2017 7:08 am

You can’t predict the climate Nick, saying any different makes you a liar.
You are guessing without ever knowing if your model replicated climate even if the prediction happens to be close to reality
The spread of models is a net, and it is cast wide, so one or some models can be “said” to be better than others, but with no validation, that is just a meaningless claim

Graemethecat
Reply to  Mark - Helsinki
April 26, 2017 7:49 am

The mere fact that climate models have to be “tuned” to reproduce observed data demonstrates that they are scientifically vacuous and therefore worthless for prediction. I can’t think of any other scientific or engineering discipline in which this practice is acceptable.

Reply to  Graemethecat
April 26, 2017 8:20 am

I can’t think of any other scientific or engineering discipline in which this practice is acceptable.

Well, electronic simulations had lots of stimulus files and initialization files, but circuits need signals and power. Aerosols are supposed to be an external forcing (input), and we did have varying amounts of them. But until just recently the value was ambiguous, and that allowed liberal interpretation of what numbers to use; in typical climate science fashion, they applied the logic of “whatever makes the results better must be closer to the truth, because the models are correct”. But new estimates came out a few years ago, with lots of ripples since, IIRC, some models were a fair way off.

Reply to  Nick Stokes
April 26, 2017 8:07 am

Nick, a model is not a textbook of physical principles, it is a large mathematical equation. Like all equations, models are ABSOLUTELY tuned as they are developed. There is no other way to do it. They are still in the process of being tuned, and they will continue to be, until they get something besides the 1976-2000 warming right.
I recently did some work on a trivial Berkeley Earth model that claimed temperature can be described by CO2 and volcanism. The model proved bogus, as it was tuned by a couple of ad hoc parameters. At least they were honest enough to provide the parameters. I have never seen an honest accounting of GCM parameters or their values.
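For illustration, fitting such a trivial two-parameter-plus-offset model is a one-line least-squares problem; the sketch below uses synthetic placeholder series, not the Berkeley Earth data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-ins for the real series.
years = np.arange(1900, 2011)
log_co2 = np.log(295 + 1.0 * (years - 1900))                              # placeholder CO2 proxy
volcanic = np.where(np.isin(years, [1902, 1963, 1982, 1991]), 1.0, 0.0)   # eruption markers
temperature = 2.0 * (log_co2 - log_co2[0]) - 0.3 * volcanic + rng.normal(0, 0.1, years.size)

# "Tuning": choose a, b, c to minimise squared error against the record.
design = np.column_stack([log_co2, volcanic, np.ones_like(log_co2)])
(a, b, c), *_ = np.linalg.lstsq(design, temperature, rcond=None)

print(f"fitted parameters: a={a:.2f} (per ln CO2), b={b:.2f} (volcanic), c={c:.2f} (offset)")
```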

dp
Reply to  Nick Stokes
April 26, 2017 8:53 am

1) Models are NOT tuned [for the late 20th C/21st C warming] and using them for attribution is NOT circular reasoning.

How did this become the definition of Dr Curry’s “period of interest”?

Owenvsthegenius
Reply to  Nick Stokes
April 26, 2017 1:17 pm

Nick, “have run somewhat hot”. The kids are scared, Nick. People expect runaway warming; based on model runs. Model runs are convenient tools for policy makers and fear campaigns.

john harmsworth
Reply to  Nick Stokes
April 26, 2017 2:04 pm

Always reliable Nick! Quick to defend the minutiae of the process while staggering (forward?) utterly blind to academic misdeeds, outright fraud, obfuscations, faulty and non-existent logic, bullying, lousy math and the multitude of crooks and scam artists who feed off this disgusting beast.
Enabling if not quite supporting. No less dishonest.

Richard G.
Reply to  Nick Stokes
April 26, 2017 2:58 pm

Please somebody, explain how climate models that model temperature statistical anomalies:
http://images.remss.com/figures/climate/RSS_Model_TS_compare_globe.png
actually model Climates which are geographic biome distribution maps:
http://www.thesustainabilitycouncil.org/images/climates/biomes.gif
http://www.thesustainabilitycouncil.org/resources/the-koppen-climate-classification-system/

Duster
Reply to  Richard G.
April 27, 2017 10:58 am

Really, they don't. Not only that, when you consider that "climate" is an average of "weather", the idea of "modelling" climate to "predict" long-term weather trends is absurd, since climate is emergent from weather.

Solomon Green
Reply to  Nick Stokes
April 27, 2017 4:27 am

Mr. Stokes is, I believe, a useful mathematician, but did his course cover logic?
Assume that models are not tuned [for the late 20th C/21st C warming], as he and Gavin suggest, then the question arises “how well do they model that period?”
The answer, from empirical evidence, is badly. So badly in fact that in any discipline other than “climate science” they would be discarded and replaced.
Mr. Stokes runs a very informative website and is obviously well-versed in the subject. How many modelers have admitted that their earlier models were wrong and have then produced new ones? And by that I do not just mean tweaking the existing ones, but bringing in new parameters or even, heaven forbid, dropping some of their existing ones?

April 26, 2017 2:52 am

This is an issue that has been known for a very long time.
Gavin is being dishonest, because he knows how this works.
Models do not reproduce the actual temperature of the earth. If plotted as temperature instead of anomaly, they are all over the place like spaghetti thinly spread, with differences of up to 2°C, which are huge (4-5°C is the difference from the Last Glacial Maximum). As models cannot reproduce the temperature of the earth, this is not a requirement. The requirement is that they can reproduce the anomaly, otherwise they don't get published.
And that models are tuned to reproduce the measured anomaly is so well known that it has even made it into the published scientific literature:
Hourdin, Frederic, et al. “The art and science of climate model tuning.” Bulletin of the American Meteorological Society 2016 (2016).
http://journals.ametsoc.org/doi/pdf/10.1175/BAMS-D-15-00135.1
“6. Tuning to 20th century warming?
The reality of this paradigm is questioned by findings of Kiehl (2007) who discovered the existence of an anti-correlation between total radiative forcing and climate sensitivity in CMIP3 models: High sensitivity models were found to have a smaller total forcing and low sensitivity models a larger forcing, yielding less cross-ensemble variation of historical warming than otherwise to be expected. Even if alternate explanations have been proposed and even if the results were not so straightforward for CMIP5 (cf. Forster et al. 2013), it could suggest that some models may have been inadvertently or intentionally tuned to the 20th century warming.”

They even have been caught doing it.
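The Kiehl (2007) anti-correlation quoted above is easy to illustrate with a toy calculation (entirely synthetic numbers, used only to show how compensating forcing and sensitivity produce similar simulated 20th-century warming):

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy ensemble: sensitivity (K per W/m^2 of sustained forcing) varies widely...
sensitivity = np.linspace(0.4, 1.0, 15)
# ...but total historical forcing is assumed anti-correlated with it (larger aerosol
# offsets in the high-sensitivity models), as Kiehl found for CMIP3.
total_forcing = 1.6 / sensitivity + rng.normal(0, 0.05, sensitivity.size)

simulated_warming = sensitivity * total_forcing

print("correlation(forcing, sensitivity):", np.corrcoef(total_forcing, sensitivity)[0, 1])
print("spread of simulated 20th-century warming (K):", simulated_warming.std())
```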

Nick Stokes
Reply to  Javier
April 26, 2017 3:09 am

“If plotted at temperature instead of anomaly, they are all over the place like spaghetti thinly spread, with differences of up to 2°C which are huge”
Yes. But that is not fixed by tuning the models. It is accepted, and so the results are compared by anomaly.

Reply to  Nick Stokes
April 26, 2017 3:40 am

So how much faith, and how many sound policy decisions, should be placed on climate models that are not even capable of getting the temperature of the planet minimally correct? In any business area these results would be derided.

Nick Stokes
Reply to  Nick Stokes
April 26, 2017 4:57 am

“So how much faith and sound policy decision should be placed on climate models that are not even capable”
As Mauritsen et al say:
“A particular problem when tuning a coupled climate model is that it takes thousands of years for the deep ocean to be equilibrated. In many cases, it is not computationally feasible to redo such long simulations several times.”
The absolute temperature is determined by the equilibrium state of the ocean, which models can’t compute – the time scale is inaccessible to them. But the thing is, that state isn’t going to change any time soon. So an anomaly approach tells us what we need.
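A minimal sketch of the anomaly convention being described here (two invented "models" that share a trend but disagree by almost 2 K in absolute terms; the offset vanishes once each is referenced to its own baseline):

```python
import numpy as np

years = np.arange(1950, 2011)
trend = 0.013 * (years - 1950)          # same underlying warming in both "models" (K)

model_a = 287.1 + trend                  # one absolute global-mean baseline (K)
model_b = 288.9 + trend                  # a model running ~1.8 K warmer in absolute terms

def anomaly(series, years, base=(1961, 1990)):
    """Subtract the mean over a reference period, the usual anomaly convention."""
    mask = (years >= base[0]) & (years <= base[1])
    return series - series[mask].mean()

# The 1.8 K offset disappears; the anomalies (and trends) are identical.
print(np.allclose(anomaly(model_a, years), anomaly(model_b, years)))
```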

Reply to  Nick Stokes
April 26, 2017 5:48 am

Nick writes

But the thing is, that state isn’t going to change any time soon. So an anomaly approach tells us what we need.

Did you feel dirty writing that? I would have.

Reply to  Nick Stokes
April 26, 2017 5:50 am

From Mauritsen we have
Yet, the span between the coldest and the warmest model is almost 3 K, distributed equally far above and below the best observational estimates, while the majority of models are cold-biased. Although the inter-model span is only one percent relative to absolute zero, that argument fails to be reassuring. Relative to the 20th century warming the span is a factor four larger, while it is about the same as our best estimate of the climate response to a doubling of CO2, and about half the difference between the last glacial maximum and present. To parameterized processes that are non-linearly dependent on the absolute temperature it is a prerequisite that they be exposed to realistic temperatures for them to act as intended. Prime examples are processes involving phase transitions of water: Evaporation and precipitation depend non-linearly on temperature through the Clausius-Clapeyron relation, while snow, sea-ice, tundra and glacier melt are critical to freezing temperatures in certain regions.
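To see why such an absolute bias matters for the nonlinear processes Mauritsen et al. list, here is a minimal sketch using a common Magnus-type approximation for saturation vapour pressure (chosen only for illustration, not taken from the paper):

```python
import numpy as np

def saturation_vapor_pressure(t_celsius):
    """Magnus-type approximation for saturation vapour pressure over water (hPa)."""
    return 6.112 * np.exp(17.67 * t_celsius / (t_celsius + 243.5))

true_t = 15.0        # assumed "true" surface temperature (deg C), illustrative
biased_t = 13.5      # a model with a 1.5 K cold bias

e_true = saturation_vapor_pressure(true_t)
e_biased = saturation_vapor_pressure(biased_t)

# A modest absolute bias changes the moisture-related quantity by roughly 10%,
# because the dependence is exponential, not linear.
print(f"{e_true:.1f} hPa vs {e_biased:.1f} hPa ({100 * (e_biased / e_true - 1):.1f}%)")
```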

Reply to  Nick Stokes
April 26, 2017 9:58 am

I would be happier if instead of reporting an average temperature anomaly, we could see what the estimated daily maximum temperature anomaly and daily minimum temperature anomaly were.
You were saying

The paradigm is that there is chaotic weather, but with an attractor, which is the climate. Individual trajectories follow divergent courses, but do cluster (think Lorenz butterfly). The way you ascertain the attractor is by averaging many trajectories.

I would suggest there are multiple dimensions of attraction, so why should we limit our inquiries to one synthetic dimension?

Reply to  Paul Jackson
April 26, 2017 10:11 am

I would be happier if instead of reporting an average temperature anomaly, we could see what the estimated daily maximum temperature anomaly and daily minimum temperature anomaly were.

Your wish, including absolute values (math done while a vector, not as a field):

john harmsworth
Reply to  Nick Stokes
April 26, 2017 4:06 pm

We see anomalies, too, Nick! Where straightforward science would question the precepts and practices that generate a false result, climate science covers them up ( Mike Mann’s utterly bogus tree ring proxies as just one example). Where physics or chemistry research on fundamental science would publish working information for other researchers to attempt replication , climate scientists state the most outrageous misrepresentations of their findings as fact. Where historical information indicates previous warm periods driven by natural causes, climate science goes to extremely dishonest lengths to show we are imagining historical fact. Where biology shows that shelled creatures inhabited the world’s oceans during periods when CO2 was 10 times what it is today, climate/bio-warriors tell us life in the oceans is about to be destroyed due to utterly non-existent “ocean acidification”! There are other “anomalies” around current declarations of the state of the environment.
Our analysis of these "anomalies", Nick, is that we can only reconcile the truth of observation with the utter B.S. of climate science by applying a "fudge factor" (that's what it looks like, anyway) of fake science practiced by charlatans with ridiculous environmental and political agendas.

jfpittman
Reply to  Nick Stokes
April 27, 2017 6:19 am

Nick states: “Yes. But that is not fixed by tuning the models. It is accepted, and so the results are compared by anomaly.” This indicates another problem with average of models.
Please note that the TOA balance is defined in terms of absolute temperature, in kelvins (or degrees Rankine), raised to the fourth power (T^4). So, when integrating over the whole surface at TOA, the difference between 14°C and 16°C gives two worlds that are different from our world. At 14°C we have a world that is like the cold time of the 1950s but has the thermal solution of a hotter world with significant amounts of CO2. At 16°C we have a world that is hot, with even more heat retention to balance, and yet is somehow the same as the cold world.
Since these conditions describe two worlds obviously different from ours, why should these two different models be accepted? For those unfamiliar with how using anomalies hides problems with the worlds depicted, IIRC Lucia had some postings about that.
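A back-of-envelope illustration of the T^4 point (plain Stefan-Boltzmann arithmetic; the real planet is of course not a uniform blackbody):

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def blackbody_flux(t_kelvin):
    """Emitted flux of an ideal blackbody surface at temperature T."""
    return SIGMA * t_kelvin ** 4

cold_world = blackbody_flux(273.15 + 14.0)   # a 14 deg C surface
warm_world = blackbody_flux(273.15 + 16.0)   # a 16 deg C surface

# Roughly 11 W/m^2 difference in surface emission for a 2 K offset --
# much larger than the few W/m^2 scale of the forcing changes under discussion.
print(f"{warm_world - cold_world:.1f} W/m^2")
```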

Reply to  Javier
April 26, 2017 7:11 am

Nick unfortunately is immune to reality if he doesn’t like it.
They DO tune aspects of the models, it’s a fact

MarkW
Reply to  Mark - Helsinki
April 26, 2017 8:11 am

The people who write the dang things have talked about the process of tuning.

Nick Stokes
Reply to  Mark - Helsinki
April 26, 2017 3:08 pm

You seem to be immune to the reality that I agree. I said so in my first comment.

MarkW
Reply to  Mark - Helsinki
April 27, 2017 7:32 am

Now that’s funny, coming from the guy who claimed that climate models are the same as weather models.

son of mulder
April 26, 2017 2:54 am

The models are not running hot; they are simply another guide to the need for more adjustments to the temperature record. They have only recently been cooling the 1930s, so there is plenty of time to warm the early 21st century record. I have every confidence this will happen.

Janus100
April 26, 2017 2:55 am

Nick, I think that you are not right here.
Here is the actual situation:
-The models were tuned for the particular period and still were so wrong.
-And it is, in essence, a circular argument (and on top, wrong as well)

Editor
April 26, 2017 3:09 am

Curry’s claim is wrong on at least two levels. The “models used” (otherwise known as the CMIP5 ensemble) were *not* tuned for consistency for the period of interest (the 1950-2010 trend is what was highlighted in the IPCC reports, about 0.8ºC warming) and the evidence is obvious from the fact that the trends in the individual model simulations over this period go from 0.35 to 1.29ºC! (or 0.84±0.45ºC (95% envelope)).”

If the models aren't "tuned," what is the historical phase of the CMIP5 model ensemble?
When we model potential oil & gas reserve additions to evaluate drilling prospects, the individual model runs also reflect a wide probability distribution. However, if actual production results don't track the model mean (P50), our model inputs are seriously wrong.
The observed temperatures don't track anywhere near P50…
Note that during the 1998-99 El Niño, the observations spiked above P05 (less than 5% of the models predicted this). During the 2015-16 El Niño, HadCRUT only spiked to P55. El Niño events are not P50 conditions. Strong El Niño and La Niña events should spike toward the P05 and P95 boundaries.
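For readers unfamiliar with the P05/P50/P95 shorthand, here is a minimal sketch of how such bands are computed from an ensemble (synthetic random "runs" standing in for CMIP5 output; note the oil-industry convention that P05 is the high case exceeded by only 5% of runs):

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic ensemble: 100 "model runs", each a trend plus run-specific noise (made up).
years = np.arange(2006, 2031)
trends = rng.normal(0.025, 0.008, 100)                                   # K per year
runs = trends[:, None] * (years - years[0]) + rng.normal(0, 0.1, (100, years.size))

# In this convention P05 is the warm edge and P95 the cool edge (5% of runs above P05).
p05, p50, p95 = np.percentile(runs, [95, 50, 5], axis=0)

print(f"final-year band: P95={p95[-1]:.2f}  P50={p50[-1]:.2f}  P05={p05[-1]:.2f} K")
```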

dikranmarsupial
Reply to  David Middleton
April 26, 2017 3:32 am

We have observational estimates of the forcings for the historical period, but only scenarios (RCPs) for the future forcings.
“The observed temperatures don’t track anywhere near P50…”
Of course they don’t, this is entirely unsurprising. P50 is an estimate of the forced response of the climate systems, whereas the observations are a combination of the forced and unforced behaviour of the climate system. We would only expect the observations to track near P50 if the effects of unforced variability (such as ENSO) were close to zero, which obviously isn’t the case.

Reply to  dikranmarsupial
April 26, 2017 4:48 am

Total nonsense. The historical run tracks P50 with strong ENSO events pushing P05 and P95.
The CMIP5 ensemble is a range of RCP scenarios from 2.6 (strong mitigation) to 8.5 (bad science fiction, AKA business as usual). The “trend” of the observed temperatures has never tracked near P50 in the predictive run of the ensemble irrespective of the initiation point of the run.
The model mean (P50) is supposed to reflect the most likely temperature trend. The observed temperatures have consistently tracked close to the strong mitigation scenarios despite a general lack of mitigation.

There are several possible explanations for why the earlier observations are at the lower end of the CMIP5 range. First, there is internal climate variability, which can cause temperatures to temporarily rise faster or slower than expected. Second, the radiative forcings used after 2005 are from the RCPs, rather than as observed. Given that there have been some small volcanic eruptions and a dip in solar activity, this has likely caused some of the apparent discrepancy. Third, the real world may have a climate sensitivity towards the lower end of the CMIP5 range. Next, the exact position of the observations within the CMIP5 range depends slightly on the reference period chosen. Lastly, this is not an apples-with-apples comparison because it is comparing air temperatures everywhere (simulations) with blended and sparse observations of air temperature and sea temperatures. A combination of some of these factors is likely responsible.

https://www.climate-lab-book.ac.uk/comparing-cmip5-observations/
“The real world may have a climate sensitivity towards the lower end of the CMIP5 range.”
There’s no “may” involved here. The models have failed because they result in a climate sensitivity that is 2-3 times that supported by observations:
https://judithcurry.com/2015/12/17/climate-models-versus-climate-reality/
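One common way such "observational" sensitivity figures are derived is an energy-budget calculation of the Otto or Lewis-and-Curry type; the sketch below uses round illustrative numbers, not the values behind the linked chart:

```python
# Energy-budget estimate of equilibrium climate sensitivity (ECS), schematic numbers.
f_2x = 3.7        # forcing from doubled CO2, W/m^2 (standard round figure)
delta_t = 0.8     # observed warming between base and final periods, K (illustrative)
delta_f = 2.3     # change in total forcing over the same interval, W/m^2 (illustrative)
delta_q = 0.6     # change in ocean/planetary heat uptake, W/m^2 (illustrative)

ecs = f_2x * delta_t / (delta_f - delta_q)
tcr = f_2x * delta_t / delta_f      # transient response ignores the heat-uptake term

print(f"ECS ~ {ecs:.2f} K, TCR ~ {tcr:.2f} K")
```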

benben
Reply to  dikranmarsupial
May 3, 2017 6:53 am

Ha 🙂 well, fair enough. I’d rate CEI about as neutral as Breitbart, but sure. You know your legal system better than I do.
What I don’t understand is: on the american right there is this idea that China and India etc. are just in this for the money (you referred to that idea). So then I point out that in fact China is investing a huge amount of its own money. Then you say “I knew that”. Fine. But if you know that, how can you still believe China is only in it to get its hand on our chubby western moneypile? You can either believe one, or the other. Not both simultaneously.
“wait and see” is a fair reply, but they have already made great strides (see the well-documented reduction in Chinese coal demand over the past few years). And ‘wait and see’ most definitely does not mean that in the meantime you get to tell everyone that China is just in it to demand our money.
So how does that work?

Patrick B
Reply to  David Middleton
April 26, 2017 9:04 am

David Middleton,
Thank you for the use of the vertical line to denote when the models were tracking historical data versus attempting to predict the future. Such lines should be required in every graph with model results.

Reply to  Patrick B
April 26, 2017 9:12 am

The thanks go to Dr Ed Hawkins at the University of Reading. He ran the model ensembles.
These images are from his web page:
I just zoomed in on the image above and added some annotations:

benben
Reply to  Patrick B
April 26, 2017 11:30 am

David, wouldn’t it be more interesting to look at what the models are actually trying to do, rather than see if they clear your own bar? The goal of the models is not to be 100% accurate every year, but rather to support policy decision on whether to put effort into switching away from fossil fuels. It’s a very different question (you need better than order of magnitude accuracy, but not % point accuracy).
And anyway, the discussion is moot since – trump or no trump – the free market has decided to go for renewables in a very dramatic fashion:
https://www.bloomberg.com/news/articles/2017-04-26/the-cheap-energy-revolution-is-here-and-coal-won-t-cut-it
Cheers!
Ben

MarkW
Reply to  Patrick B
April 26, 2017 1:23 pm

benben, since when are massive government subsidies combined with purchase mandates “the free market”?

benben
Reply to  Patrick B
April 26, 2017 1:39 pm

aah… skeptics, always looking back, never forward. The foundations were built on subsidies (as with any industry, really; I don’t see the WUWT crowd complaining about the massive subsidies for nuclear). But 2017 is generally seen as the year that renewables became competitive on their own, as evidenced by basically all the news that you won’t read on WUWT. Because, you know, WUWT doesn’t care about the truth, but rather about pandering to the beliefs of people like you, MarkW. Enjoy your little corner of the internet 🙂 Just don’t be surprised when all of a sudden half your electricity comes from renewables without you even noticing it!
Cheers,

MarkW
Reply to  Patrick B
April 26, 2017 2:09 pm

benben, I see that your penchant for making it up as you go remains undiminished (that’s a polite way of saying you lie).
The idea that all industries are subsidized when they are young is a complete lie.
The notion that the nuclear industry is heavily subsidized was well and thoroughly shredded by David Middleton yesterday.
Any more lies you care to trot out?

benben
Reply to  Patrick B
April 26, 2017 9:58 pm

How exactly you want to spin the news that a couple of nuclear reactors are going to need 10 billion dollars in subsidy to stay in business as “the notion that the nuclear industry is heavily subsidized was well and thoroughly shredded” is rather strange. But hey, I’m on your turf, so you’re entitled to do whatever you want with it. But it’s pretty well established that nuclear only works in today’s economy with subsidies. As evidenced by exactly that post of David’s, and the various dramas surrounding the new nuclear capacity being built in Europe.
And subsidies in general… there was a pretty interesting book out recently that examined exactly the role of state subsidies and free markets, and found that most of the things we take for granted now are based on state-subsidized research (everything from the internet to plastics). But I understand that you’re not here on WUWT to learn something that goes against your beliefs, so let’s just leave it at that.
Have a good day MarkW!

hunter
Reply to  Patrick B
April 27, 2017 12:09 am

2017 is hardly begun and “renewables” are still unreliable, completely subsidized, environmentally destructive and ridiculously expensive. But some arrogant fool, drunk on climate koolaid, drops by to belch out climate hype. So predictable.

MarkW
Reply to  Patrick B
April 27, 2017 7:35 am

1) The 10 billion is a lie.
2) The existence of renewables and their unpredictable output makes all other forms of generation less efficient and more costly.
3) When adjusted for the amount of power produced, even if the 10 billion number were true, it would still be several orders of magnitude less than what your renewables receive.

benben
Reply to  Patrick B
April 27, 2017 8:18 am

1) oh my, Americans and their tendency to declare whatever they don’t like a lie. Alternative facts. Hey, it got your president elected, so it must work. Somehow.
2) Nuclear and coal are outcompeted by gas, not renewables. And your statement is true, but only if renewables are 30%+ of total capacity, which is not the case. You know this. I know this. So it is you who is lying, my friend MarkW. I don’t know why, because we both know it’s a lie.
3) true! but see my comment above. The age of massive subsidies for renewables is over. You like that, I like that. Why don’t we agree to be happy that renewables no longer require massive subsidies?

MarkW
Reply to  Patrick B
April 27, 2017 10:12 am

Nice of you to not even attempt to defend your lies.
As to the age of subsidies for renewables being over. That would be nice, if it were true.
Don’t forget to include the mandates to buy when talking about subsidies.

jfpittman
Reply to  Patrick B
April 27, 2017 10:55 am

benben:
1. The ten billion, if you are talking about the SC and GA reactors, was due to three items: a. the cost overruns required by regulators when, after 9/11, Union of Concerned Scientists lobbyists and Greenpeace fought to get more safety additions, which increased both the cost of building and the costs from delays; b. the contracts required completion by a certain date or penalties applied, which was impacted by the delay caused by the UCS and GP; and c. some of the money was borrowed, not a subsidy, and interest had to be paid. Further, there are questions as to whether the contractors spent the money to date appropriately, so there may be a fourth reason. Please note none of these are subsidies, and all confirmed to date are due to regulatory burden. The source is the bankruptcy filings.
2. At 30% of capacity with 20% utilization, that is 6% of generation. This agrees with the Wyoming study, which showed that somewhere around 7% costs started going up exponentially. From EIA, for 2015, the last year with complete data, 5% was produced by wind. More has been put on line, and yes, the Bloomberg piece even states that MarkW is correct: “With renewables entering the mix, even the fossil-fuel plants still in operation are being used less often.” Being used less puts the burning/boiling in less efficient regimes that cost more for the amount of fuel burned. Whether it is turning the burner down or switching it on and off, the units were not designed to run this way. It also affects nuclear, and even hydro when water has to be released for safety. Renewables are not competitive on their own simply because they are not dispatchable, and they are not made to pay for the balancing that the other electricity generators have to do to keep the system operating.
3. The subsidies have changed for some areas but not all. Currently, if made to pay for balancing in SC, some small units would have been required to pay the grid owners for using their renewable power. I can't find the source right now; the electric company in question supplies our electricity. The company with the 2 nukes being built in SC was being taken to task by advocates because it first refused to allow renewables on the grid, and then agreed only if the money required to balance the system was paid by the renewables. In particular, solar had to pay not only for producing more rapidly when the sun was best, but for the ramping up and down and system synchronizing, from both ramping up/down during the day and fluctuations during production. Since they did not pay for the service lines, they only got bulk rates, and had to pay for their share of line transmission costs. So the number and category of subsidies matter. Total cost accounting indicated that renewables in some cases should be paying for putting power online.

benben
Reply to  Patrick B
April 27, 2017 11:38 am

Hi jfpittman,
Thanks for your response! Always nice to talk to someone who can communicate without being snarky (hello MarkW :p )
1) I obviously don’t know the details, but nuclear not being economically feasible is pretty common across the world. See for example the Hinkley reactor in the UK: http://eciu.net/blog/2016/hinkley-what-if-it-all-goes-wrong
Or really, any other reactor built in the last decade or so. The only reason to build one is because you want nuclear bombs (China, Iran, N Korea, Russia, the UK, France… see a pattern?). Now if that isn’t a massive subsidy I don’t know what is.
2) hey, who am I to dispute that. Maybe the Wyoming grid is just particularly shit. Wouldn’t be too surprising considering the general state of the infrastructure in the US. But what you are doing is cherry picking. Wyoming is pretty irrelevant on the larger scale. On average there is no impact, as evidenced from the last two graphs of the Bloomberg article. Massive increase in renewables on the grid, and also lower electricity prices. How does that add up to renewables being ‘exponentially more expensive’ beyond 6%? Most western countries are already beyond that. Sure, you can cherry pick Wyoming. But nobody outside of the WUWT comment crew will fall for that.
3) sure, there are a thousand types of subsidies. So what? No subsidies means no subsidies. You can quibble about the details, but it’s pretty clear that renewables are on the path to becoming incredibly cheap. Which is something to be happy about, not be in denial about.

johnfpittman
Reply to  Patrick B
April 27, 2017 2:45 pm

benben: The units in SC and GA are for the reduction of GHG. Your statement that nukes are built only for the production of war materials is incorrect. Weapons-grade reactors are built with at least 4% and preferably 6% enrichment. They are also run hot. Gen III+ designs like those in SC and GA are too low and run too cool; they cannot be used for this. Weapons reactors have specific requirements that are obvious from their tube configuration, moderation, and cooling systems. One of these Gen III+ units is being started in Britain to help supply Britain with electricity to fight GHG production, and at least one in China for reducing GHG. In March 2017 the first CAP1400 reactor pressure vessel passed pressure tests. The two types are the AP1000 (GA, SC) and the C(hina)AP1400 (CAP1400), which are about the same size.
You also state: “Wyoming is pretty irrelevant on the larger scale. On average there is no impact, as evidenced from the last two graphs of the bloomberg article. Massive increase in renewables on the grid, and also lower electricity prices. How does that add up to renewables being ‘exponentially more expensive’ beyond 6%?” Wyoming is not a cherry pick, nor irrelevant. The Wyoming study was a feasibility study of wind, to determine what it took to keep a grid stable with increasing use of wind. What it took was 3 times the electric supply area. This translates to an inability to be stable past 25% without load shedding, which increased it to 33%. This is being demonstrated by both Denmark and Australia. Another criterion was that it be cost effective without subsidy. The study indicated that at 5 to 7% this threshold would start being crossed, due to the costs of not running the needed dispatchable sources in their most efficient range. Yet another of the reasons for the 3-times electric supply area (load sharing) was to prevent the cost of required dispatchable backup from reaching levels where wind was no longer economical. Lower energy prices without levelized costs have not been shown, and in the Bloomberg article the backing down of dispatchables indicates that this point is just now being reached. Accurate numbers are not available for 2016 and 2017, which are estimated and projected by EIA. It is premature to claim otherwise. SOURCE: EIA.
So to cap, the reason cost increases exponentially is the criterion of a stable grid without load shedding or load sharing. As renewables penetrate more and more, load sharing will not be possible. At that point dispatchables will tend to be at maximum cost, and that cost is not being attributed to its cause. As more comes on line, in order to avoid billions of dollars of loss per overload, safety protection not yet incorporated in the grid has to be installed. The hardware and software did not exist in 2007 at the time of the study, and I have not seen that they exist today; the operation of Denmark and Australia indicates they do not. As penetration increases, the study showed that to avoid $10 to $100 billion more in losses with each overload, the hardware and software in each unit (refrigerator, computer, AC) had to have smart linkage. Neither the software nor the hardware exists at this point in the types of units commercially available.
Wind and solar are only cheap because they are not having to pay their share.

benben
Reply to  Patrick B
April 28, 2017 10:29 am

Hi johnfpittman,
Regarding the nukes, you’re being pedantic. It’s pretty well known that the civilian nuclear industry exists mostly in order to provide an industrial base for the military applications of nuclear. Sure, the materials coming out of that particular reactor aren’t easily converted into a bomb, but that is not the point. The UK, for example, has been pretty open about the fact that it needs the Hinkley reactor if it wants to be able to maintain its nuclear deterrence, because otherwise it will have no industrial base to work from. I’m not saying that this is good or bad, but let us at least be honest about why things are the way they are, shall we?
With regards to Wyoming, you’re not understanding my point. You can’t take a study in Wyoming and apply it to a different grid, no matter how well done the study of that particular grid is. Especially when the results fly in the face of the observations from other countries. (One of my best friends is Danish; they most certainly are not experiencing ‘exponentially increasing electricity costs’.) So taking results from that study and pretending it is somehow globally relevant is cherry picking. Anyway, it’s becoming more and more clear by the day that renewables are out-competing the rest. See for example the most recent EIA report, which you seem to trust because you reference them yourself!
Nice write up of the conclusions: https://arstechnica.com/business/2017/04/the-economics-of-energy-generation-are-changing-more-metrics-favor-solar-wind/
Cheers,

johnfpittman
Reply to  Patrick B
April 28, 2017 1:38 pm

benben, your assertion about the use of the USA's new nuclear facilities is untrue. From the agency that oversees nuclear weapons: “Weapons dismantlement [1] and disposition are major parts of NNSA’s stockpile work and significant elements of NNSA’s effort to transform the nuclear weapons complex and stockpile. By taking apart weapons and ensuring that they can never be used again, NNSA is playing an active role in helping the United States reduce the overall size of its nuclear weapons stockpile and ensuring that the United States meets its international nonproliferation commitments. NNSA’s dismantlement program is supporting the President’s goal of reducing the stockpile. Currently, the overall stockpile, both operationally deployed and those held in reserve, is the smallest it has been since the Eisenhower administration.” The other item that shows your assertion is incorrect is that tons of nuclear materials are in storage waiting for Yucca Mountain. Both of these indicate a surplus of materials in the USA. I don't know the case in Britain, but you might want to check that it concerns the correct isotope of hydrogen, not nuclear materials for energy production.
You state: “With regards to Wyoming, you’re not understanding my point. You can’t take a study in Wyoming and apply it to a different grid, no matter how well done the study of that particular grid is. Especially when the results fly in the face of the observations from other countries.” You are not understanding my point. The Wyoming study was about a stable grid with no load shedding and no load sharing; it is not about Wyoming, except that is where it was done. Denmark has been doing, and is doing, load sharing. They pay other areas to use their excess and then later buy back; also, they put the costs in the tariff section of the industry and not in the cost column, the last time I checked. Load sharing works as long as the renewable in question, such as wind, is not at equal or greater penetration levels in both areas. It is as penetration of the only area serviced approaches a certain level that the costs start increasing. It is not the Wyoming results that fly in the face of observations. The observations are that load sharing, different renewable sources, and load shedding are occurring, and the costs in Denmark and Australia indicate that having a stable grid, with wind penetration serving only the area intended, substantially increases cost. Australia in particular is load sharing, load shedding, having blackouts, and still has expensive electricity. The last blackout caused their standard cost unit to go from $280/unit to $14,000/unit. This does not include the monies lost by industry. A sister plant of ours is investing millions due to the problems of load shedding and the fact that the corridors are undersized for the load sharing South Australia is trying to draw from other areas. The costs are generally infrastructure costs.
From your link: “The report, released this month, looks at the cost of generation resources if they were to come online in 2019, 2022, and 2040…Because builders can still take advantage of federal tax credits, onshore wind, and solar photovoltaic (PV) resources are dirt cheap, at $39.30/MWh and $58.80/MWh respectively.” I use EIA, but I also know that they do not do standard LCOE apples to apples, such as counting subsidies as a cost rather than subtracting them from the cost. I don't know if they are doing that for items in the future. Note that a tax credit is far better than typical subsidies or tax breaks. The last data from which approximations can be made and checked is 2015; 2016 is not yet available. The projections are for events that have not occurred, and if you can show me where the infrastructure costs are accounted for, I would appreciate it. I did not find them. From the Ars Technica link:
“The more interesting figures are found for 2022 and beyond. Five years in the future, the EIA thinks the most expensive energy resources will include:
Solar thermal plants at $184.40 per MWh
Offshore wind at $145.90 per MWh
Coal plants with 30 percent carbon removal capability at $140 per MWh
Coal plants with 90 percent carbon removal at $123.20 per MWh
Geothermal: $43.30 per MWh
Onshore wind: $52.20 per MWh
Advanced combined-cycle natural gas-burning plants: $56.50 per MWh
Solar PV: $66.80 per MWh”
If you estimate backwards from 2022, advanced generation by fossil fuels are still the most economical. Not a surprise, since this agrees with current cost estimates.
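On the "apples to apples" point, a plain levelized-cost calculation, and the place where a per-MWh subsidy can enter, are easy to sketch (all inputs below are invented round numbers, not EIA figures):

```python
def lcoe(capital, annual_om, annual_fuel, annual_mwh, years=25, discount=0.07, subsidy=0.0):
    """Levelized cost of electricity: discounted lifetime costs / discounted lifetime energy.
    `subsidy` is a per-MWh credit; subtracting it (rather than ignoring it) changes the answer."""
    disc = [(1 + discount) ** -t for t in range(1, years + 1)]
    costs = capital + sum((annual_om + annual_fuel) * d for d in disc)
    energy = sum(annual_mwh * d for d in disc)
    return costs / energy - subsidy

# Invented example plant: $1,500/kW wind farm, 100 MW, 35% capacity factor.
capital = 1500 * 100_000                    # $
annual_mwh = 100 * 8760 * 0.35              # MWh per year
print(f"unsubsidized: ${lcoe(capital, 3_000_000, 0, annual_mwh):.2f}/MWh")
print(f"with a $23/MWh credit: ${lcoe(capital, 3_000_000, 0, annual_mwh, subsidy=23):.2f}/MWh")
```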

benben
Reply to  Patrick B
April 29, 2017 10:27 am

Hi johnfpittman,
Well, I think we are actually mostly in agreement. Or at least, we can both look at the same data and come to a shared understanding. You’re just focused on different aspects, but that is fine. Nuclear: I’m talking about a broader picture, the entire industrial base. You’re not. But we would agree if we looked at the exact same system boundaries, I am sure. (Also note that nuclear is VERY EXPENSIVE in the report we discussed above.)
With regards to costs, I think you’re being overly pessimistic. Problems are there, nobody is pretending they aren’t, but they’re also being solved at record speed. I just have a bit more faith in our engineering capacities, I guess. Anyway, my country is going to vastly expand its renewables in the next decade, and I voted for that, so I’ll be eating my own dogfood and paying the price, literally, if I’m wrong!
I think it is important to note that our discussion here is rational and interesting. The main problem with WUWT, and the reason why it’s so marginalized, is that it carries a lot of bizarre ‘the UN is out to destroy western civilization’ stuff. That is just so far removed from reality that it’s pretty difficult to take anything else posted here seriously. Entertaining though!
Cheers,
Ben

Reply to  benben
April 29, 2017 12:04 pm

I just have a bit more faith in our engineering capacities I guess.

It’s not an engineering problem!
It’s a physics problem. There are limits, absolute limits, to photon conversion in silicon, and there’s not much left to gain. There’s no Moore’s Law here; that is about shrinking feature size, while photodiodes are a case of the bigger the better. And they still have to go through wafer ovens, multiple times. They can go to bigger-diameter wafers, but those might have to be thicker to be strong enough, plus they will suffer more negative effects from thermal expansion and contraction. And 12″ fabs are expensive.

johnfpittman
Reply to  Patrick B
April 29, 2017 1:02 pm

benben, I am an engineer, and because of my job I am reasonably current on the energy market. If these items exist as you claim, they have not made the mainstream. Our recent purchases at our plant and our sister plant indicate the cost of solar is off by about a factor of two. Think about it. If they were available, South Australia would be using them and our company would not be spending millions on a technology that will have to be paid for and whose CO2 production has to be accounted. They are in Australia and the corporation is British.
Nuclear is about 25% less than solar and wind and is dispatchable. It is not a contest if you want to limit CO2 production. If that is not a concern, fossil fuel is best by far: a lot more options and versatility. To date, the small niche solar has is home markets, to reduce costs. The payback depends on lots of different conditions. I have seen some data suggesting a 10-year ROI in Florida, Texas, etc., but also indications that, with efficiency loss, they are just a little better than breakeven before replacement, or have diminishing utility. Wind has so much variability that not only are good numbers hard to get, but from a diminishing-returns aspect it will take an evolutionary improvement in storage to make wind actually viable at a penetration of 25%. To make sure you understand, that is 25% of a grid's annual production. Nameplate means almost nothing for both solar and wind. That is another problem with data you find: you have to make sure they are not prorating costs and switching between installed capacity and used capacity. Often, when you run the numbers and they don't make sense, switching from one set to the other will.

johnfpittman
Reply to  Patrick B
April 29, 2017 1:25 pm

As to the UN and M. Strong: we are all victims of our times. And yes, you can find quotes from others who believe that CC action is an opportunity to address other real and perceived problems. It is just that I don't think of the socialist aspect as a conspiracy of power elites so much as a confederacy of dunces. One aspect that is true and documented, by the UN itself, is that the IPCC is an intergovernmental entity, and that does mean the scientists were selected and a window set for its purview. That does not make the science wrong. They included known lukewarmers. It does not make the actions worthless from that aspect alone. It does mean that persons have a legitimate right to question the slant and the policy.
One thing that Big Green just doesn't seem to understand is that they have won the global warming argument, and what the people of the world have indicated they are willing to pay: not very much, or very fast. The poor ones pushing for the socialist part are in it for the money. I would be too. China is in it for itself, and that describes almost everybody else as well, especially the US.
I think Donald Trump has the chance to be the most unlikely CC hero in history. The PA is a failure at preventing 2C, for monetary and practical reasons. By withdrawing from the PA and making the world agree that each and every CO2 molecule is as guilty as the next, he will force the world to agree to take CC seriously, or not. Either way, that will reap more benefits for humans than the failed PA.
[“Purview”? Or preview? Or pervert view? .mod]

benben
Reply to  Patrick B
April 30, 2017 1:14 pm

Ha, another engineer. You are aware that the renewable field is just in a different state than your, presumably much more mature, industry? May I offer an observation? Just because you know your thermodynamics, it doesn't make you a qualified expert in another field. I'm a chemical engineer, yet I would never pretend to be able to criticize airplane design, even though that is largely done by engineers. Before you start trashing an industry employing millions of people, maybe take a few introductory courses. Or not, but then don't pretend to know more than someone who did his PhD in it.
To the topic of discussion: nobody said that solar (especially non-utility scale, as you seem to be talking about) is currently going to compete on an even footing with gas at this moment everywhere on the planet. The key element is that the renewables industry is about long-term trajectories. All of the key technologies are exactly on or above the trajectory that the renewable industry needs to be on to hit its targets a decade or so from now. You can complain about how the wind turbines of yesterday can't compete with the natural gas of today. But who cares? That is not the point. The fact that WUWT can only keep its cognitive dissonance in check by invoking a massive global conspiracy of epic proportions should give you pause for thought.

jfpittman
Reply to  Patrick B
May 1, 2017 7:40 am

benben, it is not I who is challenging someone who did their PhD in it. I have presented where things currently stand; not my observations, but those of PhD persons. I am not trashing an industry when I point out the problems of today. And not even someone with a PhD and many postdocs can tell the future reliably. I am a chemical engineer and a biologist. I have been dealing with power since the 1990s, when the CAAA came into play in the USA. Your strawman of criticizing airplane design falls apart if I am complaining about the uncomfortable seat in the plane I flew in, or how much the ticket cost. You do a better job in the second paragraph, where you make a substantive comment. But for your information, I was challenged by a PhD engineer who thought my support of renewables was questionable. So he gave me some data and asked me to show where it was economical. So I did. He responded that the local power company couldn’t make money off of it. And I pointed out to him that was his strawman. My claim was that under certain conditions solar was economical.
To your second comment. You previously stated, “But 2017 is generally seen the year that renewables have become competitive on their own.” But today you state, “nobody said that solar (especially non-utility scale as you seem to be talking about) is currently going to compete on an even footing with gas at this moment everywhere on the planet.” Perhaps you can help me with the apparent disagreement. I challenged the 2017 date with the comments I made. It may be that I have myopic vision, since what the IPCC and governments around the world are talking about is replacement of large-scale dispatchable generation with renewables. But I came in on the discussion of nuclear, which as far as I know is all large utility-scale units, except perhaps a few research units.
You did recognize, in a response to MarkW, that around 30% penetration there are problems. I recognized in my posts that at low levels of penetration renewables are not a problem. I did not point to the studies which show that overall costs can go down at about 5% renewable penetration. I posted about how higher levels increase costs, not about what future technology can accomplish.
I presented known problems with the industry. That is not trashing. The comments I brought to the table challenge what constitutes economical. At present, the known solutions for renewables penetration raise the cost to the level that nuclear and fossil fuels are more economical. This is not trashing. It is not thermodynamics. It is an evaluation that does not require a PhD, only working knowledge. I also presented that the known problems have to do not with the efficiency of the unit or the cost of generation, but with the infrastructure cost due to variability and non-dispatchability.
benben, I would have thought a comment about the economics of nuclear that did not include fossil fuel would be considered an incomplete evaluation. It is good that we agree on the basics. I do not know if the trajectories will continue. I do know that if penetration continues and costs are passed to the generating units, wind and solar will have to meet the challenges of their intermittent and non-dispatchable energy costs. Or not. Some nations are apparently passing them to nuclear and fossil, then using that unlevelized cost as a reason to get rid of the dispatchable source.
I have the same concern for WUWT’s cognitive dissonance that I have for the USA’s Left’s. In the Left’s case, they ARE supporting WUWT’s global-conspiracy thoughts by throwing every real or perceived social-injustice cause into the mix while promoting an anti-capitalist stance that can only be described as a communist-style takeover of fundamental choice. Not just energy, but lifestyles, and even the justice system. I typically try to understand both sets of concerns without letting the rhetoric get in my way.

benben
Reply to  Patrick B
May 1, 2017 3:25 pm

Hmmm, well jfpittman, I will admit, I might be arguing with the general WUWT readership as much as with you. Like I said before, we are pretty much on the same page; we’re just talking about different things. I have full faith that by the time we need to go beyond 30% grid penetration those problems will be solved. And if not, we just stay there, so no harm done. As a European with hardly any fossil fuel reserves, I am very, very happy to finally have the technology to become independent from a whole bunch of countries that we should really not be throwing money at. I don’t care if it costs a little bit more money if that money stays (to a larger degree) within the Dutch economy. The climate change thing is a nice bonus.
I agree with you: the American left has this weird tendency to throw identity politics into everything. I don’t agree with that. But so what? Why is it so hard to separate the weird extremes of one political party in one country from what is being discussed here? It’s global warming, not American warming. I’m not American, and neither is the vast majority of the UN, the IPCC, most of the scientists working on the topic, or the rest of the world for that matter.
Cheers!
Ben
PS I just spent a year living in the United States and it’s blindingly obvious that this right-wing American 100%-anti-anything-socialist attitude is at times pretty damaging and irrational. Newly enrolled in my socialized healthcare system, I’m paying $110/month for healthcare, no strings attached, and free choice of whatever doctor or hospital I want. And hardly a homeless person on the streets! Viva la revolucion 😉

jfpittman
Reply to  Patrick B
May 2, 2017 5:35 am

benben, I agree that nations should do what is best for their country. I understand your optimism. My POV is different in that a lot of industry needs reliable electricity and reliable fuel sourcing for boilers. We even have penalty contracts and insurance for outages or interruption of supply. The Australian system would not and does not work for similar industry. I had a conversation with several Danes about what was going on in Denmark when I studied the Wyoming study. They were the ones who directed me to how to get an apples-to-apples comparison, so I could be reasonably sure that the Wyoming study’s results were relevant to Denmark’s experience. As for the US, with fracking putting so many of the old coal boilers out, the USA has done as well as or better than most in prorated emissions that account for offshored high-energy manufacturing losses.
The problem with the far right and the far left is that their numbers and their power in the party primaries mean that only the most central politicians who are very popular will support the other side. That is enough to keep the government going, but progress is hard to make in new legislation or in fixing old legislation. Not being a US citizen, you may not realize that the welfare system was designed in a way that discourages getting off it. Until Bush made some improvements, it made more sense to be totally dependent on the welfare system than to use it to get back on one’s feet. The problem with our health care is that the whole welfare system needs a major overhaul. It is a sacred cow for the Democrats, as much as tax breaks for the rich are for Republicans.
Unfortunately, the IPCC has set up a system that is contrary to US law and custom. Whether one considers strict liability or general liability, all humans have benefited from CO2 production. The best example is China. By not having to go through the technology learning process of the 1800s and 1900s, they get to jump-start their industry. It is both true that they pay for the technology and that they have benefited from others’ CO2 production. Climate justice is no such thing. A realistic appraisal would recognize that China has benefited. It would also recognize population justice: why should the US, or anybody else for that matter, pay for however much reproduction other countries support? The moral position of the UN matches M. Strong’s vision of wealth redistribution by force, not by application and development. In the US, that is an illegal taking. Until the blame game stops, little progress will be made on the real problem. We are all guilty of being the progeny of survivors.
All considered, I think Trump should present the Paris Agreement to the Senate for approval with the statement, “Please note that agreeing with this treaty is admitting guilt of climate injustice.” Otherwise, it is only a vehicle to do an end run around US law and custom as to what constitutes a taking. When you add in that almost half of US legislators want to throw in all sorts of injustices and forced takings of persons’ wealth, it should be understandable why the issue cannot be separated. It is not being presented by the UN IPCC nor by the left as a separable entity.
What we need is a plan that addresses CO2 and nothing else. I am not very optimistic. I can understand the poor not wanting to divert funds, but that is true of the rich as well. We need to grow the tent bigger, not exclude persons. We especially should not want to exclude the rich, since they have the most wherewithal to accomplish the goal.

benben
Reply to  Patrick B
May 2, 2017 1:24 pm

So many assumptions! Just to be clear: the IPCC doesn’t actually do any science or make policy. It just summarizes the state of research and makes some recommendations that nobody has to follow (in fact, nobody follows it).
It’s pretty bizarre to claim that the Paris agreement forces the USA to do something illegal, since a) the USA had a veto and b) the agreement is non-binding. What you’re saying is that you don’t like it. That is not the same as it being illegal. I know this is not in the spirit of the America of 2017, but try to keep your opinions and your facts separate 😉
China is definitely putting its money where its mouth is, and is on track to spend $360 billion on renewables between now and 2020. That is a HUGE investment. I really don’t understand how you can claim that China is somehow trying to be a free rider.
http://www.reuters.com/article/us-china-energy-renewables-idUSKBN14P06P
Did you know that China is investing $100bn+/year of its own money in renewables? Most probably not. Why did you not know this? Because you’re confining yourself to a tiny little corner of the web that serves you mostly false or slanted news that panders to your worldview. Sad, but true.
Now, if you were honest you would say ‘hmmm I have been falsified, my worldview needs to be changed’. But that is not going to happen. You’re not going to change your mind based on any of this. Also sad but true. Oh well! Let’s call it a day on this discussion, I have trouble finding this article on the WUWT page 🙂
Cheers!
Ben

johnfpittman
Reply to  Patrick B
May 2, 2017 4:50 pm

benben, the illegality is that many consider it a treaty that has to be approved by our Senate, which Obama and the COP lawyered around. Just to be clear, yes, I understand that it was the COP of the United Nations Framework Convention on Climate Change. The IPCC does not make policy; it makes policy recommendations. Sorry for my lack of specificity; it sent you down an unnecessary rabbit hole.
The claim is not bizarre but is US-law-centric. See, for example, https://cei.org/content/options-addressing-president-trumps-paris-climate-pact-promise for the discussion. There is an article, based on a leak, that lawyers in the White House are discussing that under US law acceptance would have binding consequences. I can’t find it, and there is not much time left for me to address your other points.
You state: “I know this is not in the spirit of the America of 2017, but try to keep your opinions and your facts separate ;)” I remind you that I allowed your optimism and did not try to conflate it with the arguments you made about that optimism.
You state, “China is definitely putting its money where its mouth is, and is on track to spend $360 billion on renewables between now and 2020. That is a HUGE investment. I really don’t understand how you can claim that China is somehow trying to be a free rider.” I also note that the promised coal plants were revised downward, and yet to date the amount of coal and CO2 that China has stated is part of its plans far exceeds renewables. If they put them in, they put them in. When they do, then the percentage they represent will be important. But it has not happened yet.
You ask, “Did you know that China is investing $100bn+/year of its own money in renewables?” and then answer for me: “Most probably not. Why did you not know this? Because you’re confining yourself to a tiny little corner of the web that serves you mostly false or slanted news that panders to your worldview. Sad, but true.” I don’t know how I could convince you otherwise, since you did not ask and let me answer. Though I would have to admit, I have read about China’s investments in renewables, if for no other reason than that it is a topic of note in the EIA, WUWT, Climate Etc, and a whole host of articles at the NY Times, HuffPo, and just about every “liberal”-slanted news outlet in the US. Not to mention how often Bloomberg gets all excited about proposed China renewables and what a great investment it is. WAIT, WAIT, I can’t be doing that; you have already judged me and somehow know what I read or don’t read, even when we discuss such items that came from Bloomberg and the EIA.
Let me end with your comment, “Now, if you were honest you would say ‘hmmm I have been falsified, my worldview needs to be changed’.” But with this difference: I don’t know what is going to happen; you’re going to have to determine what happens. Not sad, just true.
Cheers!
John

April 26, 2017 3:23 am

Someone is fibbing here, and I do not believe it is Dr. Curry.
Models are written using basic physics principles AS WE CURRENTLY UNDERSTAND THEM; the model is run and we check the output. If the output is not as per observations, we look into the model internals to see where we need to improve our understanding, make a change, and run again.
I do not know about some climatologists, but comparing model output to observations, making model changes and rerunning is called tuning. We adjust the model until the model output matches observations.
What else would you do? Match no observations?
Climatologists may call this method of working something else but non-climatologists will have a difficult time calling it something other than tuning!
How do climatologists know when a model produces a ‘good’ output?
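For readers who want the iterate-compare-adjust loop described above made concrete, here is a minimal sketch. The “model”, its single tunable parameter, and the “observations” are hypothetical stand-ins, not anything from an actual GCM:

```python
import numpy as np

def toy_model(sensitivity, forcing):
    """Hypothetical stand-in for a model run: warming = sensitivity * forcing."""
    return sensitivity * forcing

observed = np.array([0.1, 0.3, 0.5, 0.8])   # made-up "observations"
forcing  = np.array([0.2, 0.5, 0.9, 1.4])   # made-up forcing history

best_s, best_err = None, np.inf
for s in np.linspace(0.1, 1.0, 91):         # trial-and-error over the parameter
    err = np.sqrt(np.mean((toy_model(s, forcing) - observed) ** 2))
    if err < best_err:
        best_s, best_err = s, err

print(f"tuned sensitivity ~ {best_s:.2f}, RMS misfit ~ {best_err:.3f}")
```

Whatever label one prefers, the loop is the same: run, compare to a target, adjust, repeat. The argument in this thread is over which targets are used and for which period.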

Nick Stokes
Reply to  steverichards1984
April 26, 2017 3:39 am

“How do climatologists know when a model produces a ‘good’ output?”
Well, for a start, the models are basically weather forecasting programs. And the general test there is getting the forecast right.
But it’s basically Computational Fluid Dynamics, and the test is whether the solution satisfies the input equations. People often ask for more, but you can’t. It just won’t work. If you try to tweak something, something else goes wrong. It usually crashes.
The persistent notion of tuning to the results takes a very simplistic view of “results”. It isn’t just surface temperature. Output includes winds, humidity, rainfall, a spectrum of radiation, at all kinds of pressure levels.
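As a minimal illustration of “the test is whether the solution satisfies the input equations”, here is a toy residual check on a finite-difference solution; the equation and grid are chosen only for illustration:

```python
import numpy as np

# Toy residual check: solve u'' = 0 on [0,1] with u(0)=0, u(1)=1 by finite
# differences, then verify the discrete equations are actually satisfied.
n = 51
A = np.zeros((n, n))
b = np.zeros(n)
A[0, 0] = A[-1, -1] = 1.0   # boundary conditions
b[-1] = 1.0
for i in range(1, n - 1):
    A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0

u = np.linalg.solve(A, b)
residual = np.abs(A @ u - b).max()
print(f"max residual: {residual:.2e}")  # ~machine precision if the solve "works"
```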

Reply to  Nick Stokes
April 26, 2017 3:56 am

“the models are basically weather forecasting programs.”
No, they are not. We have real weather forecasting programs, and they are ECMWF ERA-Interim and NCEP GFS. Those won’t let Tom Karl fiddle with their workings.
Of course they didn’t show significant warming during the early 21st century prior to the 2015-16 El Niño.
http://www.ecmwf.int/sites/default/files/styles/large/public/av_surface_temp.png

Nick Stokes
Reply to  Nick Stokes
April 26, 2017 4:41 am

“Those won’t let Tom Karl fiddle with their workings.”
I doubt if Tom Karl is coding GCMs. But in fact NCEP GFS does use the NOAA GFDL ocean model MOM3.

AndyG55
Reply to  Nick Stokes
April 26, 2017 5:08 am

“And the general test there is getting the forecast right.”
MASSIVE FAIL !!

Reply to  Nick Stokes
April 26, 2017 5:55 am

Nick writes

Well, for a start, the models are basically weather forecasting programs. And the general test there is getting the forecast right.

The heart of a GCM, as opposed to a weather forecast, is that at every time step a tiny amount of energy is accumulated. It is calculated and fed back and accumulated, and the new state is used in the next timestep…
That is NOT what a weather forecast does, and it is impossible for the GCMs to do with their crude approximations and fittings. Weather forecasts don’t attempt to accumulate tiny changes over millions of iterations.
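To give a feel for the scale of accumulation being argued about, here is a back-of-envelope sketch. The timestep and the per-step imbalance are illustrative assumptions, not numbers from any actual model:

```python
# Toy illustration: a tiny, persistent energy imbalance, integrated over a
# century of half-hour timesteps, becomes a large accumulated total.
seconds_per_step = 1800                                   # assumed 30-minute timestep
steps_per_century = 100 * 365.25 * 24 * 3600 / seconds_per_step
imbalance_w_m2 = 0.01                                     # hypothetical persistent bias (W/m^2)

accumulated_j_m2 = imbalance_w_m2 * seconds_per_step * steps_per_century
print(f"timesteps in a century: {steps_per_century:,.0f}")
print(f"accumulated energy:     {accumulated_j_m2:.2e} J/m^2")
```

Roughly 1.75 million timesteps; even a hundredth of a W/m^2 that never cancels integrates to about 3e7 J/m^2 over the run, which is why both sides in this exchange care about how that bookkeeping is handled.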

MarkW
Reply to  Nick Stokes
April 26, 2017 6:43 am

For years they have been telling us that weather forecast models and the climate models are completely different things.
Now Nick is telling us that climate models are essentially weather forecast models.
Minor disconnect here.

RockyRoad
Reply to  Nick Stokes
April 26, 2017 6:44 am

Like I said above, Nick (but the repetition is apparently necessary since you’re convinced otherwise), an axe should never be used as a hammer.
Of what use is a GCM in which the error accumulated over every one of the millions of iterations overwhelms the phenomenon being modeled?
(It reminds me of the error-prone iterations of your argument that prove worthless because an axe makes a terrible hammer.)

Clyde Spencer
Reply to  Nick Stokes
April 26, 2017 10:06 am

NS,
And as I understand the situation, different modelers have different priorities and they attempt to optimize the results of their model to address their priority. This TUNING helps explain some of the variance between the different models.
If the situation were as you claim, there would only be a need for one model — based entirely on First Principles — and it could be ‘frozen’ as soon as all the linked modules worked as intended. We would then have an untuned model that all climatologists and atmospheric physicists could use. That obviously isn’t the situation!

Nick Stokes
Reply to  Nick Stokes
April 26, 2017 12:38 pm

TTTM,
“The heart of a GCM as opposed to a weather forecast is that at every time step a tiny amount of energy is accumulated.”
It isn’t the heart of the GCM. It’s minor housekeeping. The program conserves energy as well as possible, and that means that any discrepancies, up or down, accumulate. So they correct. You have to do this in any explicit CFD program that runs for a long time. Implicit methods just build it in.
Weather forecasting runs for thousands of timesteps. I’d be very surprised if they can avoid fixing total energy.
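A minimal sketch of the kind of global “fixer” being described, assuming the simplest possible correction (spread the residual uniformly); actual GCM fixers are more elaborate, so treat this only as an illustration of the idea:

```python
import numpy as np

def apply_energy_fixer(field, expected_total):
    """Hypothetical global fixer: spread the conservation residual uniformly
    so the domain total matches the expected budget."""
    residual = expected_total - field.sum()
    return field + residual / field.size

# Toy grid whose total has drifted slightly from the budget through round-off.
rng = np.random.default_rng(0)
field = np.full((4, 8), 10.0) + rng.normal(0, 1e-3, (4, 8))
fixed = apply_energy_fixer(field, expected_total=320.0)
print(round(field.sum(), 6), "->", round(fixed.sum(), 6))
```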

Nick Stokes
Reply to  Nick Stokes
April 26, 2017 12:42 pm

“For years they have been telling us that weather forecast models and the climate models are completely different things.
Now Nick is telling us that climate models are essentially weather forecast models.”

As too often at WUWT, “they” have been telling us. No quotes, no cites, no inkling as to who “they” might be. WND?
GCMs evolved from NWP programs, and some are still used for both purposes. Their structure and functions are very similar. No-one has claimed differently.

Reply to  Nick Stokes
April 26, 2017 1:27 pm

Nick writes

It isn’t the heart of the GCM. It’s minor housekeeping.

LOL Nick.

hunter
Reply to  Nick Stokes
April 27, 2017 12:11 am

Climate hypesters know from the empirical evidence that the models are awful; they are, however, wonderful at tapping huge wells of endless grant money.

Reply to  Nick Stokes
April 27, 2017 3:22 pm

Nick I don’t understand your tenacity about “tuning”, but more to the point I can’t understand your defense of computational fluid dynamics; we know, provably, that computational fluid dynamic models do not work; that’s the classic Navier-Stokes problem and it’s well recognized as intractable. It’s a limit of mathematics, not a problem of “unknown forces” or Maxwell’s demon.
I believe this thread started as a criticism of the attribution studies currently making the rounds as a method to “empirically” demonstrate the effects of CO2 on AGT through the use of GCMs, which many of us find repulsive. The problem Professor Curry identifies is correct; a model that has been designed to incorporate the concept of CO2 as a force driving temperature at its most fundamental level will of course show that relationship to the extent it has been designed to do. It is in fact circular reasoning.
A recent example of this pseudo-scientific chicanery can be found in “On the causal structure between CO2 and global temperature”, Adolf Stips, Diego Macias, Clare Coughlan, Elisa Garcia-Gorriz & X. San Liang, Nature Scientific Reports, 2016.
How this report garnered interest by Nature is difficult for me to understand, but I’d have to guess the editorial team responsible share your misunderstanding.
Certainly, if the GCMs in use were predictive to some high degree of certainty, using a mathematical model to demonstrate a causal relationship might be appropriate as a decision support tool, however that is clearly not the case in this example. Any report based on the predictions and assumed relationships intentionally cast into a model that demonstrably fails to predict is pure hogwash. It can serve no purpose other than to baffle the uninformed. In my opinion it’s a disgusting perversion of science and I can’t understand how you could support it.

Nick Stokes
Reply to  Nick Stokes
April 27, 2017 6:16 pm

Bartleby,
“more to the point I can’t understand your defense of computational fluid dynamics; we know, provably, that computational fluid dynamic models do not work; that’s the classic Navier-Stokes problem”
I spent a good part of my working life solving the Navier-Stokes equations. CFD works. It is standard, main-stream engineering. Here is a recent Boeing perspective.
GCMs are very similar to numerical weather forecasting programs. They work.
My tenacity about tuning is because I know how and why it is done, and it is nothing like what people here think.

Reply to  Nick Stokes
April 27, 2017 8:10 pm

Nick writes

GCMs are very similar to numerical weather forecasting programs.

No, they’re not; they’re fundamentally different.
If a GCM were simply running lots of weather, then the weather forecasting models could do that. The GCMs are fundamentally different because they attempt to account for the tiny changes that happen due to… well, primarily the influence of CO2 over time.
That happens at every single time step, Nick. It’s not magic. At every time step the state of the model must reflect a change to the climate that was different from the previous step. But the GCMs simply aren’t capable of doing that.

Reply to  Nick Stokes
April 27, 2017 8:53 pm

Nick also wrote

My tenacity about tuning is because I know how and why it is done, and it is nothing like what people here think.

My tenacity with these model comments, and with many of Mosher’s comments, is that they don’t distinguish between an instantaneous effect, for example a wing’s performance, and a long-term projection.
A CFD model of a wing “works” because it is calculating how the wing responds at the time. There is no accumulation of error from environmental change. How accurate would a CFD projection be if, instead of simply calculating lift and drag, it had to accumulate the energy lost to drag over 100 years of flight at varying but unknown altitudes and velocities?
Weather models work because they’re much closer to instantaneous than GCM projections. About 4 orders of magnitude difference when compared to a 100-year GCM projection.
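The orders-of-magnitude comparison can be checked on the back of an envelope; the timestep and run lengths below are assumptions chosen for illustration:

```python
import math

# Back-of-envelope: how many more timesteps a century-scale projection takes
# than a ~10-day forecast, assuming the same 30-minute timestep for both.
step_hours = 0.5
forecast_steps = 10 * 24 / step_hours              # ~10-day forecast
projection_steps = 100 * 365.25 * 24 / step_hours  # ~100-year projection

ratio = projection_steps / forecast_steps
print(f"forecast:   {forecast_steps:,.0f} steps")
print(f"projection: {projection_steps:,.0f} steps")
print(f"ratio: {ratio:,.0f}x  (~{math.log10(ratio):.1f} orders of magnitude)")
```

With these assumed numbers the ratio comes out near 3,650, i.e. between three and four orders of magnitude, in line with the rough figure quoted above.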

Kaiser Derden
Reply to  steverichards1984
April 26, 2017 6:00 am

a good output is one that gets them a big grant …

Reply to  steverichards1984
April 26, 2017 7:56 am

steverichards1984:
You say

Some one is fibbing here and I do not believe it is Dr. Curry.

and

I do not know about some climatologists, but comparing model output to observations, making model changes and rerunning is called tuning. We adjust the model until the model output matches observations.
What else would you do? match no observations?

Oh! That does remind me of something!
Long ago, in 2000, I was one of a group of 18 scientists invited from around the world to give a briefing on climate science at the US Congress in Washington DC. There were three briefing sessions that were provided by three panels.
Session 1 was on climate data and its panel was chaired by Fred Singer.
Session 2 was on climate models and its panel was chaired by me.
Session 3 was on climate policy and its panel was chaired by David Wojick.
In each Session each member of its panel gave a presentation and questions from the floor were invited when those presentations were all completed.
The first questioner of Session 2 stood and said in an aggressive manner,
The first Session said we cannot trust the climate data. Now this session says we cannot trust the models. Where do we go from here?
Gert Rainer-Webber started to stand to reply but as chairman I signaled him to stay seated and I turned to face the questioner. I said,
Sir,
either the climate data are right or they are not.
If the climate data are right then the climate models cannot emulate past climate.
If the climate data are not right then we have nothing with which to assess the climate models.
In either case, we cannot trust the climate models to project future climate.
So, I agree with your question, Sir, “Where do we go from here?”

The questioner remained silent and studied his shoes so I asked for the next question.
Richard

April 26, 2017 3:41 am

Congress Should Investigate the Claim of Scientific Consensus
For there to be any real scientific “consensus” one would need models that accurately define the factors impacting global temperature. The model the IPCC has chosen claims CO2 is the most significant factor, yet all their models fail to demonstrate the validity of that theory. No real scientist would ever go on record defending the results of the IPCC models. The models do more to discredit the theory than to validate it.
https://co2islife.wordpress.com/2017/04/24/congress-should-investigate-the-claim-of-scientific-consensus/

Berényi Péter
April 26, 2017 3:50 am

Tuning in to climate models, according to RealClimate (gavin) on 30 October 2016.

Reply to  Berényi Péter
April 26, 2017 4:20 am

“The basic thrust of the article is that climate modeling groups are making significant efforts to increase the transparency and availability of model tuning processes for the next round of intercomparisons.” (Gavin)

Berényi Péter
Reply to  pstevens2
April 26, 2017 5:32 am

Exactly.
J.Curry:

… They use models that are tuned to the period of interest, which should disqualify them from be used in attribution study for the same period (circular reasoning, and all that).

Gavin (now) says “Curry’s claim is wrong”, because

Models are NOT tuned and using them for attribution is NOT circular reasoning.

Still, one needs “to increase the transparency and availability of model tuning processes”, as Gavin said half a year ago.
Makes perfect sense. Not.

Dr. S. Jeevananda Reddy
April 26, 2017 3:55 am

The global warming part starts from 1951 as per the IPCC. Yet the entire trend from 1951 is not global warming; only part of it (more than half, as reported by the IPCC) is. The other human factor was present even before 1951. 1880 to 2010 presented a trend of 0.6 °C per century; 1950 to 2010 presented 0.8 °C per century. The difference is 0.2 °C per century. Then what is the sensitivity factor of the global warming component?
Dr. S. Jeevananda Reddy

Gamecock
April 26, 2017 3:56 am

We don’t understand the earth’s atmosphere well enough to model it.
That’s it. There is nothing more.
‘only simulation ensembles made across systematically designed model families allow an estimate of the level of relevant irreducible imprecision.’
Analysis of junk results in junk.
“I do not believe in the collective wisdom of individual ignorance.” – Thomas Carlyle

April 26, 2017 4:08 am

” He’s right, the models do not cluster tightly around the observations, and they should, if they were modeling the climate well.”
Err, no. There will always be structural uncertainty, especially in models of large complex systems.
Next, the models run a little hot; that makes them PERFECT for establishing policy with a safety zone or buffer.
Long ago I built a model of how far a plane could fly with the fuel remaining. Lots of unknowns, lots of structural uncertainty. The model would always underestimate the distance the plane could fly. This was a great feature: you never ran out of gas and crashed.
In general the models get the temperature right within 10-15%, and they get trends right to about the same degree. If you are going to miss a prediction (well, predictions are ALWAYS wrong to some degree), it’s good that the IPCC models miss on the high side and predict too much warming. It’s a good upper bound.

Newminster
Reply to  Steven Mosher
April 26, 2017 7:54 am

Brilliant! Models that overestimate are good because with any luck we can force governments into wasting even more taxpayers’ money than reality justifies. Unless you have another interpretation up your sleeve!
How about: observations show that climate, and its main component weather, is doing nothing out of the ordinary except in the models so we have decided to stop pretending there is some existential problem and go off and get useful jobs!
Nah!

John Bills
Reply to  Steven Mosher
April 26, 2017 8:00 am

Mosher,
Surface temperature groups use models too and they tune temperatures.

Kermit Johnson
Reply to  Steven Mosher
April 26, 2017 8:03 am

What BS . . .
“Structural uncertainty” – wow.
“the models run a little hot, that makes them PERFECT for establishing policy” – yes, if you *assume* that temperatures will continue to get hotter! It’s (nearly) always that it is our assumptions that get us in trouble.
“models get temperature right within 10-15%” – what does this mean? What values for temperature are you talking about?

Nick Werner
Reply to  Kermit Johnson
April 26, 2017 9:16 am

KJ… I think he’s referring to the modern Percentigrade scale.

Nick Werner
Reply to  Steven Mosher
April 26, 2017 9:09 am

Given that real-world examples of ‘PERFECT for establishing policy’ already include diesel-powered cars in England (increasing real pollutants relative to gasoline) and idle desalination plants in Australia (badly allocated capital), I hope that you (SM) are able to see the weakness of your argument.

tonyM
Reply to  Steven Mosher
April 26, 2017 9:12 am

Science isn’t about wobbly overestimating based on some self-proclaimed precautionary principle as a rationalization.
A similar principle seems to apply to the T databases. According to Tony Heller, his study shows the adjustments to the temperature record correlate with increased CO2 with an R² = 0.98 (a sketch of how such an R² is computed follows below). Has anyone ever observed real statistical data to have such a perfect measure!?
Seems this field has a lot of ostriches posing as scientists.
Geesh, go tell the astronauts going to Mars that they will have plenty of fuel; pity if they overshoot the planet somewhat on their way to heaven.
This CO2 scam is a dead duck once Trump puts the boot in. China, India, and Russia (whose models seem to track the best) could not care less about CO2 unless there is money coming their way. The UK seems likely to throw off the CO2 yoke as well. Here’s to more sanity! Finally!
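For anyone who wants to check an R² claim like the one above against the published series, the computation itself is one line; the arrays here are placeholders, not Heller’s data:

```python
import numpy as np

# Placeholder series -- substitute the actual adjustment and CO2 records.
co2        = np.array([315.0, 330.0, 345.0, 360.0, 380.0, 400.0])   # ppm
adjustment = np.array([0.02, 0.05, 0.09, 0.12, 0.17, 0.21])         # degC

r = np.corrcoef(co2, adjustment)[0, 1]   # Pearson correlation
print(f"R^2 = {r ** 2:.3f}")
```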

Reply to  tonyM
April 26, 2017 10:07 am

Has anyone ever observed real statistical data to have such a perfect measure!?

Well, the correlation between min temps and dew points is in the upper 97% range (and cross-correlation has dew points leading min temps by a month or so).
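A lead/lag relationship like that can be checked with a simple lagged correlation; the two monthly series below are synthetic, built so that one echoes the other a month later, purely to show the mechanics:

```python
import numpy as np

def lagged_corr(lead, follow, lag):
    """Correlate 'follow' against 'lead' shifted forward by 'lag' samples."""
    if lag == 0:
        return np.corrcoef(lead, follow)[0, 1]
    return np.corrcoef(lead[:-lag], follow[lag:])[0, 1]

# Synthetic monthly series: 'follow' echoes 'lead' one month later, plus noise.
rng = np.random.default_rng(1)
lead = np.sin(np.linspace(0, 20 * np.pi, 240)) + rng.normal(0, 0.1, 240)
follow = np.roll(lead, 1) + rng.normal(0, 0.1, 240)

for lag in range(4):
    print(f"lag {lag} month(s): r = {lagged_corr(lead, follow, lag):.3f}")
```

The correlation peaks at a one-month lag, which is the kind of signature being described for dew points leading minimum temperatures.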

tonyM
Reply to  tonyM
April 26, 2017 5:33 pm

micro6500
Thanks for your graphs. I wonder if there is any similar localized correlation; it would prove very useful to growers.
It was really left unsaid that the T adjustments tracking so perfectly with CO2 changes implies the T data are “tuned” to that relationship. The discrepancy between satellite and “ground” T measures is becoming larger, yet there are no reports on lapse-rate changes to my knowledge. In any case it puts a further nail in the CO2 coffin, in that this runs counter to the hypothesized hotspot warming in the upper tropical troposphere.

Clyde Spencer
Reply to  Steven Mosher
April 26, 2017 10:22 am

SM,
You are basically arguing the Precautionary Principle. If there were no costs involved in taking a conservative approach, then it would be acceptable. However, the world is being asked to turn its energy policy on its head without proper concern for the costs.
Let’s take a look at your plane simulation. While there is merit to being sure a plane never runs out of gas, it is important to be sure that one doesn’t err too much on the side of caution, because in the real world a plane that lands with too much fuel can be a safety hazard. It also means that a larger-than-necessary inventory of fuel needs to be kept on hand if the planes are refueled as recommended by your simulation.
I question your claim that the GCMs get the temperature “right within 10-15%,” when they appear to be running about three times the observed temperatures.

jfpittman
Reply to  Clyde Spencer
April 27, 2017 6:47 am

I agree with Mosher, such can be a safety factor. I disagree, in this case, that it has been “Great”.
Setting a safety factor is a judgement call. Not knowing the safety factor is a tragedy waiting to happen. Whether it is a budget, or a pilot ejecting from a plane, safety factors and knowledge of capability are a prerequisite to a good decision. It costs to overpay or abandon a plane unnecessarily.
The scientists are not claiming it as a safety factor. The politicians are claiming it is science. So, in this respect, Mosher is pointing out that both are wrong.
Worse, advocates are using this to shut down disagreement with policy, while vilifying, “based on the science”, persons who disagree with proposed timelines, threat, or harm.
The reason that vilification should be thoroughly condemned is that you and I should have input into this. That is why the current tactics are not only harming people, but deceiving people as to what the real arguments are, and whether or not the actions proposed should be taken.
Wanting to do something other than wasting money in a futile gesture of virtue signalling should not cause persons to be vilified.

ferdberple
Reply to  Steven Mosher
April 26, 2017 4:21 pm

The model would always underestimate the distance the plane could fly.
===========
what a surprise! A model that always gives the wrong answer, coupled with an excuse as to why that is better than a model that gets the right answer.
why not build a model that gets the right answer then add in a known safety factor? because then it would be called engineering, not climate science.

Reply to  Steven Mosher
April 27, 2017 2:10 am

Steve,
Models that consistently “cry wolf” are not good for *sound* policy decisions.
Clear and consistent exaggerations of the risk are the best evidence against the need for urgent action to mitigate that risk.

Reply to  Steven Mosher
April 27, 2017 2:39 pm

Next, the models run a little hot, that makes them PERFECT for establishing policy with a safety zone or buffer.

No, that destroys credibility. If you are consistently wrong, no one relies on you. And they are consistently wrong. Keep betting on 13 coming up on the roll of the dice.

Reply to  Steven Mosher
April 27, 2017 3:52 pm

Steve –
Err no. Carrying excess fuel on any sort of aircraft is a very bad idea for many, many reasons. My guess is your simulation wasn’t actually used by any commercial carrier if it consistently overestimated provisioning by any significant factor? It’s not just about economics, Steve, it’s also about safety.

O R
April 26, 2017 4:10 am

Well, this blog post’s conclusion, “It’s the clearest presentation I’ve ever run across that the models run hot,” is simply wrong.
You can’t do statistics with individual model runs, because some models have 10 runs and others only one.
Doing the stats properly, by use of the KNMI Climate Explorer, shows that the average SAT trend over 1951-2010 of all 39 CMIP5 rcp8.5 models is 0.138 C/decade.
The corresponding trend of GISTEMP loti is 0.136 C/decade.
However, GISTEMP loti is blended SAT/SST, not global SAT like the models. If we blend the models likewise, the average trend decreases to about 0.12 C/decade.
Hence, models have on average a slightly lower trend than GISTEMP loti over 1951-2010.
The average trend of five global observational datasets is spot on that of models, 0.12 C/ decade
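The “one run per model” point can be made concrete with a short sketch; the per-run trend numbers are invented, purely to show why weighting by run and weighting by model give different ensemble means:

```python
import numpy as np

# Illustrative 1951-2010 trends (degC/decade), grouped by model -- made-up numbers.
runs_by_model = {
    "model_A": [0.10, 0.12, 0.11, 0.13],   # four runs
    "model_B": [0.18],                     # a single run
    "model_C": [0.14, 0.15],
}

all_runs  = [t for runs in runs_by_model.values() for t in runs]
per_model = [np.mean(runs) for runs in runs_by_model.values()]

print(f"run-weighted mean:   {np.mean(all_runs):.3f}")    # models with many runs dominate
print(f"model-weighted mean: {np.mean(per_model):.3f}")   # each model counted once
```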

Clyde Spencer
Reply to  O R
April 26, 2017 10:29 am

O R,
And are the 8 and 6 significant figures or just displayed because the trends were arbitrarily rounded to 3 significant figures as a habit from slide rule days?

Editor
April 26, 2017 4:26 am

You have to watch the pea under the shell very carefully. Gavin quotes Dr. Curry as saying that the models are “tuned to the period of interest”.
However, Gavin changes this. He says they are not tuned to “the period of interest”, but he defines the period of interest as “(the 1950-2010 trend is what was highlighted in the IPCC reports, about 0.8ºC warming)”.
I know of nobody who claims that the models are tuned to that short 60 year period starting in 1950 and ending in 2010. The “period of interest” to which they are tuned is generally the period 1850-2000. And while they do perform poorly during the period 1950-2010, in part that’s because the 21st century is out-of-sample, and in part because they have trouble replicating the temperature drop from 1945 to 1965 or so. Since these two periods are at the beginning and the end of the 1950-2010 interval, this leads to trends all over the map.
NONE OF THIS, however, negates Dr. Judith’s point. Gavin is falsely claiming that the models are not tuned to the historical record. This is errant nonsense that can only be maintained by ruthlessly censoring opposition viewpoints. There is no question that the models are tuned; there have been entire scientific seminars and journal articles on the subject. See Dr. Judith’s excellent post on the subject.
Best to all,
w.

Nick Stokes
Reply to  Willis Eschenbach
April 26, 2017 4:48 am

“The “period of interest” to which they are tuned is generally the period 1850-2000.”
Do you have a reference for that? It seems unlikely to me. Generally for tuning you need a short period with an unambiguous result. That’s partly because full runs are expensive and tuning requires trial and error. Mauritsen et al, quoted above, say:
“The MPI-ESM was not tuned to better fit the 20th century. In fact, we only had the capability to run the full 20th Century simulation according to the CMIP5-protocol after the point in time when the model was frozen”
and later
“To us, a global mean temperature in close absolute agreement with observations is of highest priority because it sets the stage for temperature-dependent processes to act. For this, we target the 1850–1880 observed global mean temperature of about 13.7°C [Brohan et al., 2006].”

Reply to  Nick Stokes
April 26, 2017 6:49 am

Nick writes

Do you have a reference for that? It seems unlikely to me. Generally for tuning you need a short period with an unambiguous result.

Mauritsen et al is a description of running a model, not developing one. Once the model is “complete” there is scope to tune parameters to get better looking results in one area but that’d probably be at the expense of another. Mauritsen et al was an exercise in tweaking all the tunable model parameters to get the best result overall.
And when models are developed, each component must be compared to what is known. Gavin is simply wrong about this or doing Mannian spin.

Nick Stokes
Reply to  Nick Stokes
April 26, 2017 5:14 pm

“You should understand what it is that they’re doing! Mauritsen aren’t developing the model, they’re running it.”
I understand it very well. Unlike people here, I have actually done tuning, for CFD programs. I’ve tried to explain upthread when it is called for. You don’t do it with full program runs; you have to be able to do trial and error. I wrote out the three tuning steps they use for TOA balance. That is a development matter; you start with a very brief run to see what you can get out of one association, then with that knowledge try for a longer sequence, probably having most variables follow a prescribed rather than a solved trajectory.

Kurt
Reply to  Nick Stokes
April 26, 2017 6:40 pm

Nick:
“Mauritsen et al, quoted above, say: ‘The MPI-ESM was not tuned to better fit the 20th century. In fact, we only had the capability to run the full 20th Century simulation according to the CMIP5-protocol after the point in time when the model was frozen.’”
Note that the passage you quote goes on to say that they already knew that the model would match well with the 20th century when they were developing it, i.e. “Yet, we were in the fortunate situation that the MPI-ESM-LR performed acceptably in this respect, and we did have good reasons to believe this would be the case in advance because the predecessor was capable of doing so.”
The paper also concedes that “[e]valuating models based on their ability to represent the TOA radiation balance usually reflects how closely the models were tuned to that particular target, rather than the models (sic) intrinsic qualities.” If models not producing the 20th century warming are winnowed from publication and use by the IPCC, which the paper also states as happening, does it not stand to reason that this article you quote actually backs up Curry’s point that these models cannot be used to attribute that 20th century warming to any particular cause?

Nick Stokes
Reply to  Nick Stokes
April 26, 2017 7:27 pm

Kurt,
“Note that the passage you quote goes on to say that they already knew that the model would match well with the 20th century”
So? We are talking about tuning here. They have “admitted” that they knew the model matched fairly well in the past. So what are they supposed to do? Throw it out?
As to winnowing, that is a different issue. It isn’t tuning. But what the whole discussion lacks is any evidence of what actually happens. Is it really the case that models were winnowed? How many?

jfpittman
Reply to  Nick Stokes
April 27, 2017 7:03 am

Nick states : “As to winnowing, that is a different issue. It isn’t tuning. But what the whole discussion lacks is any evidence of what actually happens. Is it really the case that models were winnowed? How many?”
Great question. But what it does tell us is that your justification of the average as being acceptable has been invalidated, and worse, you cannot tell how badly. Great own goal, Nick.

Kurt
Reply to  Willis Eschenbach
April 26, 2017 6:10 pm

Willis:
When Curry says that the models are tuned to the period of interest, I think she is referring to modelers’ admissions that they discard models that do not replicate the abrupt warming shown in the 20th century. She has said this in both her congressional testimony and the blog post that Schmidt ostensibly replies to. In other words, she’s not referring to the selection of values for parameters in any individual model, but instead to the selection that goes on when models that don’t match that upswing are just never published or used by the IPCC. Because of this selection bias, she argues (correctly, in my view) that using those models to attribute the recent warming to CO2 is circular reasoning, since the selection process weeded out any model that didn’t show the uptick.
Here’s the quote she uses from an article describing the tuning of models:
“Climate models ability to simulate the 20th century temperature increase with fidelity has become something of a show-stopper as a model unable to reproduce the 20th century would probably not see publication.” Note the specification of the 20th century temperature “increase” as the characteristic used to selectively weed out models.
Gavin therefore does indeed respond to a straw man by shifting the “period of interest” to begin in 1950, but I think the real “period of interest” is from 1980-2000 since that’s when the instrumental record really starts to take off. If a model shows that “hockey stick” all you need to do is line up the model with the record by selecting the base period to measure your anomalies.
The other problem I have with Gavin’s purported rebuttal is his assumption that you can show that models were not tuned to reproduce an empirical trend by showing the variance in the trends of individual model runs. If you look at Gavin’s graph, the center (average) of all the model runs is right around the GISS values. How precisely does this graph refute the premise that the models used to generate these individual runs were tuned so that the average trend centered around the observed trend? The average of the model runs is, after all, what the IPCC points to as validating the models.
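On the base-period point above, a small sketch shows how the choice of anomaly baseline shifts a model series relative to observations without changing either trend; the series are invented linear ramps, used only to show the mechanics:

```python
import numpy as np

def to_anomaly(series, years, base_start, base_end):
    """Express a series as anomalies relative to its mean over a base period."""
    mask = (years >= base_start) & (years <= base_end)
    return series - series[mask].mean()

years = np.arange(1950, 2011)
model = 14.5 + 0.015 * (years - 1950)   # made-up absolute temperatures (degC)
obs   = 13.9 + 0.013 * (years - 1950)

# Different base periods shift the curves vertically; the trends are untouched.
for base in [(1951, 1980), (1981, 2000)]:
    gap_2010 = to_anomaly(model, years, *base)[-1] - to_anomaly(obs, years, *base)[-1]
    print(f"base {base}: model minus obs in 2010 = {gap_2010:+.3f} degC")
```

Choosing a later base period pulls the two curves together at the end of the record, which is the alignment effect described above.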

Nick Stokes
Reply to  Kurt
April 26, 2017 7:36 pm

Kurt,
There is a lot of goal-post shifting here. Most people like Willis are saying he chose too short a period. You are saying too long.
But in fact the bit you quote does say 20th century.
In fact, the primary test of models is whether they give a reasonable solution in the no forcing case. If they do that, then it is almost inevitable that they will produce a long term rise with GHGs. Not tuning, just physics. There are weather fluctuations which can mask this for a while. That is not a fault in the model.

Kurt
Reply to  Kurt
April 26, 2017 9:15 pm

“In fact, the primary test of models is whether they give a reasonable solution in the no forcing case. If they do that, then it is almost inevitable that they will produce a long term rise with GHGs. Not tuning, just physics.”
That’s a really clear explanation of why modeled output is of no practical value. The forced response is a characteristic programmed into the model, and the test used to supposedly validate the model is not the accuracy of the future forced model output against the future forced climate response, but instead just whether the theoretical unforced response shown by the model can be judged as “reasonable.”
And whether the period chosen by Schmidt was too long, or too short is not germane. The point is that Curry’s description of how models were tuned mentioned nothing that could be interpreted as being narrowly directed to the linear trend from 1950 to 2010. Schmidt arbitrarily chose this interval and then bizarrely thought that it refuted Curry’s argument.

Kurt
Reply to  Willis Eschenbach
April 26, 2017 8:54 pm

Tuning/winnowing is a distinction without a difference, and it seems clear to me that the argument raised by Judith Curry, and disputed by Gavin Schmidt, was that climate modelers discard models that don’t show the rise in temperatures at the end of the 20th century. She cites direct quotes from the modelers themselves that say that any model not having this feature won’t see the light of day, and accordingly argues that the models cannot logically be used to attribute the 20th century rise in temperatures to anything because that feature was baked into the models by the procedure used to select them.
Choosing to adopt a picayune interpretation of the word “tuning” avoids her argument; it doesn’t refute it at all. And when the modelers themselves admit to the selection process that forms the factual basis for her reasoning, I think it’s unreasonable to demand that she, or I, or anyone else come up with the data on whether or how often it happens.

Reply to  Kurt
April 27, 2017 4:38 pm

Kurt writes: “the models cannot logically be used to attribute the 20th century rise in temperatures to anything because that feature was baked into the models by the procedure used to select them.”
Exactly.
Let’s assume I know nothing of the intentions of our model builders or the relationships assumed by them. I use various inputs (low CO2, high, etc) to measure the model’s response. I conclude via legitimate statistical procedures that CO2 has an effect on climate, which is of course exactly what the modeler intended. It’s basic “black box” testing.
What have I proven? Only that the modeler believes CO2 has an effect on climate. There’s no attribution based on empirical evidence, but that’s exactly what’s being claimed.

jfpittman
Reply to  Willis Eschenbach
April 27, 2017 7:12 am

Forrest, “Remarkably, he says in the comments that ‘Everyone understands that tuning in general is a necessary component of using complex models. … But the tunings that actually happens – to balance the radiation at the TOA, or an adjustment to gravity waves to improve temperatures in the polar vortex etc. have no obvious implication for attribution studies.’” This contradicts his claims when arguing with Dr. Browning about the exponential growth of potential errors from the coarse grid and time steps WRT N-S and atmospheric physics. In that conversation the implication was that balancing things such as the TOA and the poles and getting the correct profile, while getting flatter profiles when CO2 was not added, proved the attribution. So there is a conflict here.

charles nelson
April 26, 2017 4:34 am

For some reason as I read Nick Stokes’ comments, I am reminded of a man who has walked into quicksand…he started off just up to his ankles, but as he struggles and wriggles to get out, he just sinks deeper and deeper!
His next comment will be the equivalent of a bubble…and the one after that will be like a grasping hand disappearing below the surface.
Shame, I was really quite enjoying it.

Nick Stokes
Reply to  charles nelson
April 26, 2017 4:50 am

Charles,
When it comes to scientific matters, you never seem to dip your toe in.

Reply to  Nick Stokes
April 26, 2017 5:51 am

I think Nick is ready to finally give up in the face of the evidence.
Andrew

MarkW
Reply to  Nick Stokes
April 26, 2017 6:46 am

BA, that would be a first.

Kurt
Reply to  Nick Stokes
April 26, 2017 6:51 pm

Got to come to Nick’s defense on this one – Charles is using movie theater science instead of real science. You won’t ever become totally submerged in quicksand since the density of the sand-water liquid is higher than that of the human body.

April 26, 2017 4:40 am

Sad that there is no science in the consensus “Climate science”. Probably why all those marchers had no clue why they were marching. Just derelicts picked up from under the Mayo bridge.

ned
April 26, 2017 4:51 am


https://www.youtube.com/watch?v=tWr39Q9vBgo
Each year, Earth Day is accompanied by predictions of doom. Let’s take a look at past predictions to determine just how much confidence we can have in today’s environmentalists’ predictions.

https://www.lewrockwell.com/2017/04/walter-e-williams/environmentalists-dead-wrong/
Wackoism didn’t end with Carson’s death. Dr. Paul Ehrlich, Stanford University biologist, in his 1968 best-selling book, The Population Bomb, predicted major food shortages in the United States and that “in the 1970s … hundreds of millions of people are going to starve to death.” Ehrlich saw England in more desperate straits, saying, “If I were a gambler, I would take even money that England will not exist in the year 2000.” On the first Earth Day, in 1970, Ehrlich warned: “In ten years all important animal life in the sea will be extinct. Large areas of coastline will have to be evacuated because of the stench of dead fish.” Ehrlich continues to be a media and academic favorite.
https://www.lewrockwell.com/2013/05/walter-e-williams/bring-back-ddt/

TA
Reply to  ned
April 26, 2017 8:15 am

“Wackoism”
We should probably use this description more. It is descriptive of a lot of what is going on in our world today.

Clyde Spencer
Reply to  ned
April 26, 2017 10:36 am

ned,
I started teaching in 1971 and I accepted the claims by Ehrlich and others as being true, and I passed them on to my students as gospel. I’m now doing penance for the damage I did.

April 26, 2017 4:55 am

Once the Federal spigot is turned off for this nonsense, it will just die a natural death. No need to debate them. Turn off their air supply.

commieBob
April 26, 2017 5:02 am

Climate models are tuned.

The process of parameter estimation targeting a chosen set of observations is an essential aspect of numerical modeling. This process is usually named tuning in the climate modeling community. BULLETIN OF THE AMERICAN METEOROLOGICAL SOCIETY

Trying to assert otherwise is either dishonest or ignorant.

Nick Stokes
Reply to  commieBob
April 26, 2017 5:30 am

“Climate models are tuned.”
Yes, but the claim was:
“They use models that are tuned to the period of interest, which should disqualify them from be used in attribution study for the same period “

commieBob
Reply to  Nick Stokes
April 26, 2017 6:02 am

They use models that are tuned to the period of interest, which should disqualify them from be used in attribution study for the same period.

Well DUH!
Are you asserting that the models can be tuned against a certain period and then can claim accuracy based on that same period? That’s the Texas Sharpshooter Fallacy.

The name comes from a joke about a Texan who fires some gunshots at the side of a barn, then paints a target centered on the tightest cluster of hits and claims to be a sharpshooter. link
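To make the in-sample/out-of-sample point concrete, here is a minimal toy sketch in Python. All numbers are invented and this is not any actual GCM tuning procedure: a single knob is fitted to one target period, the fit over that period is then good by construction, and only a period that never entered the tuning can say anything about skill.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy "observations" (all numbers made up): steady warming 1976-2000,
# then a much flatter stretch 2001-2015.
yrs_tune = np.arange(1976, 2001)
yrs_test = np.arange(2001, 2016)
obs_tune = 0.020 * (yrs_tune - 1976) + rng.normal(0, 0.05, yrs_tune.size)
obs_test = obs_tune[-1] + 0.004 * (yrs_test - 2000) + rng.normal(0, 0.05, yrs_test.size)

def toy_model(knob, years):
    # One adjustable parameter standing in for a tunable forcing/feedback term.
    return knob * (years - 1976)

# "Tune" the knob against the 1976-2000 target period.
knobs = np.linspace(0.0, 0.05, 501)
rmse = [np.sqrt(np.mean((toy_model(k, yrs_tune) - obs_tune) ** 2)) for k in knobs]
best = knobs[int(np.argmin(rmse))]
print(f"tuned knob = {best:.4f} K/yr, in-sample RMSE = {min(rmse):.3f} K")
# The in-sample fit is good by construction: the target period defined the fit,
# so it cannot double as evidence that the model got the physics right.

out_rmse = np.sqrt(np.mean((toy_model(best, yrs_test) - obs_test) ** 2))
print(f"out-of-sample RMSE = {out_rmse:.3f} K  (the only test that says anything)")
```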

Reply to  Nick Stokes
April 26, 2017 8:27 am

The period of interest they are predominantly tuned to is the only period they match: 1976-2000.

Clyde Spencer
Reply to  Nick Stokes
April 26, 2017 10:47 am

NS,
At the beginning of this thread, mothcatcher claimed that the GCMs are tuned. You challenged his claim. [Nick Stokes April 26, 2017 at 2:53 am] Now you are arguing that the essence of the dispute is about the period of time to which they are tuned. Can you say, “Sophistry?”

Reply to  Nick Stokes
April 26, 2017 12:41 pm

“They use models that are tuned to the period of interest, which should disqualify them from be used in attribution study for the same period”
Nick, are you trying to say:
1. The models are not tuned,
2. The models are tuned to a period “not of interest”, or
3. The models are not disqualified even though they are tuned to the period of interest?

Nick Stokes
Reply to  Nick Stokes
April 26, 2017 1:42 pm

Clyde,
“At the beginning of this thread, mothcatcher claimed that the GCMs are tuned. You challenged his claim.”
I said in that initial comment:
“It is true that there are parameters that are not well established a priori, and are pinned down by tuning to some particular observation. “
No-one disputes that tuning is done and is necessary for various specific issues. Many have quoted the paper by Mauritsen et al, and there have been others. Judith made a specific claim that they tuned to the period of interest (GHG induced warming) and then used that for attribution. Gavin said that isn’t true. Neither he nor I said that no-one ever tunes anything.
What he was pointing to was a common disconnect in sceptic argument:
“We know they must have rigged it to get that agreement, and besides, it doesn’t agree.”

commieBob
Reply to  Nick Stokes
April 26, 2017 3:42 pm

Nick Stokes April 26, 2017 at 1:42 pm
… Judith made a specific claim that they tuned to the period of interest (GHG induced warming) and then used that for attribution.

According to Gavin, what she actually said was:

They use models that are tuned to the period of interest, which should disqualify them from be used in attribution study for the same period (circular reasoning, and all that).

That statement, as it stands, is absolutely true. Unless he can produce others of Judith’s statements which elaborate on the above statement, I am led to conclude that Gavin has concocted a straw man.
You also said:

Many have quoted the paper by Mauritsen et al (many) and there have been others.

Indeed. I linked to a paper on which Mauritsen was second author. He’s widely cited. I’d say he knows what he’s talking about.
Not everyone is sanguine about tuning. Edward Lorenz, arguably the most influential meteorologist in history, and certainly a pioneer climate modeller, said the following:

This would be the case, for example, if the models had been tuned to fit the observed course of the climate. Provided, however, that the observed trend has in no way entered the construction or operation of the models, the procedure would appear to be sound. link

That appears to set the bar pretty high.

Reply to  Nick Stokes
April 27, 2017 2:16 am

commieBob quotes

This would be the case, for example, if the models had been tuned to fit the observed course of the climate. Provided, however, that the observed trend has in no way entered the construction or operation of the models, the procedure would appear to be sound.

Yes, there is an immediate and very obvious failure to achieve this when the TOA imbalance is tuned.

Reply to  Nick Stokes
April 27, 2017 10:27 am

You really stunk up the thread this time with your attempt at spin! Curry is correct. Gavin tried the spin and got nailed for it, and you are trying to spin the spin!
The first rule of laundry, is you do not go into a spin cycle until AFTER the wash cycle! You forgot that.

Tom in Florida
April 26, 2017 5:11 am

But isn’t that the point of the author of this post: by saying the models are so varied that they cannot have been tuned, Gavin, by his own words, gives us a reason not to pay any attention to them in the first place?

Mike Schlamby
April 26, 2017 5:15 am

Sounds to me like Gavin is trying to rebut the argument that the models are invalid in a specific case by saying “no, they’re wrong in general”.

Reply to  Mike Schlamby
April 26, 2017 8:32 am

This is called induction.

April 26, 2017 5:22 am

Gavin is in a desperate position. He is likely to lose his job as both NASA and NOAA are reorganized to work more efficiently toward their primary missions. Is he making these kinds of statements in hopes that he will be forced to retire and become a well paid activist like his former boss?

Richard M
Reply to  fhhaynie
April 26, 2017 11:29 am

So when is this going to happen? I’ve seen no evidence that Trump is working to reorganize either one.

Reply to  Richard M
April 26, 2017 1:45 pm

It has started with the political appointees that occupy the top levels of each organization. These individuals are there to assure administration policies are followed. Congress controls their budgets. I did research at EPA for over 20 years and we reorganized about every 3 to 5 years. Sometimes these reorganizations were used to move some individuals out of positions that could have an effect on policy.

JasG
April 26, 2017 5:41 am

Of course in Gavin-world, larger model error margins clip the observation error margins, thereby “proving” that the models are OK; in short, worse = better. That it is unjustifiable anyway to use frequentist stats unless the model inputs were randomly selected and all output runs retained is just another nitpicky detail to him.

April 26, 2017 5:42 am

“the models are basically weather forecasting programs”
Just a moment, Mr. Stokes. What about climate?
Andrew

April 26, 2017 6:07 am

Exactly, Thomas, my reaction was the same. I was telling myself: that’s quite a weird strategy, to argue that the models predict what are effectively random numbers, mostly very far from each other – and from reality – and then to use this observation as evidence that the climate alarmists are doing something right.
I think that Gavin addressed this own goal to those skeptics or undecided folks who are highly allergic to any “tuning” and who think it’s right for models and scientists not to pay attention to the empirical evidence. Well, I don’t think that sensible people hate “tuning” this much because this opposition is equivalent to throwing the empirical character of science out of the window.
Science should choose theories and models that do a very good job in explaining and predicting natural phenomena and what Gavin showed is another piece of evidence that the climate alarmists aren’t doing anything like that at all.

angech
April 26, 2017 6:08 am

Wonderful post by Nick Stokes (April 26, 2017 at 4:48 am) quoting Mauritsen et al above. So much in it contradicts his assertions, such as:
“details such as the years of cooling following the volcanic eruptions, e.g., Krakatau (1883) and Pinatubo (1991), are found in both the observed record and most of the model realizations.”
Even better than Gavin.
This comment shows that some limited tuning has been built into most models when they back cast.
Because there is no way a model can know when to predict a volcano in the past or the future.
Hence a historical framework has been incorporated into most models. Right, Nick?

Mark T
April 26, 2017 6:18 am

For a mathematician, Gavin’s certainly not very good with mathematical analyses.

Reply to  Mark T
April 26, 2017 1:56 pm

Good grief, you’re right, Schmidt isn’t a scientist. Now I have a mental image of Mann treating Schmidt like Sheldon Cooper treats Wolowitz!

angech
April 26, 2017 6:19 am

Same article
“models are generally able to reproduce the observed 20th century warming of about 0.7 K,”
is completely at odds with
The “models used” (otherwise known as the CMIP5 ensemble) were *not* tuned for consistency for the period of interest (the 1950-2010 trend is what was highlighted in the IPCC reports, about 0.8ºC warming) and the evidence is obvious from the fact that the trends in the individual model simulations over this period go from 0.35 to 1.29ºC! (or 0.84±0.45ºC (95% envelope)).”
and
“Yet, the span between the coldest and the warmest model is almost 3 K, distributed equally far above and below the best observational estimates, while the majority of models are cold-biased [for the observed 20th century warming of about 0.7 K only]”
So even though the models have a fitted temperature increase range and known volcano eruptions “fitted in” [impossible, Nick, for an untuned model, by the way], they are still all over the shop as Gavin and E Smith say.
Amazing.

April 26, 2017 6:20 am

They have in the past used aerosols to tune the runs. Do all the model runs use the same or nearly identical aerosol input files?

MarkW
Reply to  micro6500
April 26, 2017 6:49 am

From what I have read, not even close.

Reply to  micro6500
April 26, 2017 7:14 am

micro6500:
You ask

They have in the past used aerosols to tune the runs. Do all the model runs use the same or nearly identical aerosol input files?

No, they use values that differ by a factor of 2.4.
Please read my post below for data, references and explanation.
Richard

Reply to  richardscourtney
April 26, 2017 7:25 am

Thanks Richard. Answered 2 questions at once. They do tune the runs with aerosols, but because the models are different, or tuned differently, the aerosol correction factors are changed to get the correct prior temperature trends.
Thanks.

MarkW
Reply to  richardscourtney
April 26, 2017 8:14 am

If two models use aerosol numbers that differ by a factor of 2.4, then that completely blows out of the water the claim that the models are making predictions from first principles.

Reply to  richardscourtney
April 26, 2017 8:31 am

MarkW:
Of course the climate models are not derived from first principles. A model becomes a curve fitting exercise when it uses any parametrisation.
Of more importance is the invalidity of climate model projections which I relate in my above anecdote.
Richard

Frederic
Reply to  micro6500
April 26, 2017 10:16 am

Even Hansen, in some lapses of scientific honesty, admitted publicly that aerosol data are “out of the hat”:
“Even if we accept the IPCC aerosol estimate, which was pretty much pulled out of a hat, it leaves the net forcing almost anywhere between zero and 3 watts”
source: http://www.columbia.edu/~jeh1/2009/Copenhagen_20090311.pdf

Reply to  Frederic
April 26, 2017 10:21 am

It was a few years later that someone came up with aerosol data that was hard to discount, and that upset the apple cart. I’m wondering if that was a little before some of the newer generations of models were introduced.

Thomas Homer
April 26, 2017 6:38 am

Can we run these models with current Mars’ parameters and see how well they can predict a more stable and less complex atmosphere?
I expect that many of the parameters would be negligible, except for the 95% CO2 content of the atmosphere. That one would be much more exaggerated and we’d readily see that the models don’t reflect reality in terms of “Greenhouse Gases”.

RACookPE1978
Editor
Reply to  Thomas Homer
April 26, 2017 6:44 am

Heck.
I’d settle for any model that could run the Moon’s measured surface rock temperatures correctly.
Then Mercury’s and Pluto’s assumed “surface temperatures”, based on their total albedo and (lack of) gases over a simple rock surface.
Then the simpler, no-water, no-ice, no-oceans, no seasonal (plants) albedo changes, high-CO2 atmosphere of Mars.

April 26, 2017 6:43 am

Nick Stokes,
as always, you’re nothing but a retarding element,
happy to receive attention by holding a blog hostage.
What’s your contribution?
Think.

mothcatcher
Reply to  kreizkruzifix
April 26, 2017 2:51 pm

Wouldn’t be much of a discussion here if Nick hadn’t been around, would it? We ought to thank him for his contribution.
But defending Gavin Schmidt on this must surely tax even his considerable ingenuity….

ccscientist
April 26, 2017 6:48 am

It is correct that there is not a master tuning knob for matching the data, but it is not true that the models are not tuned. There is leeway in choice of forcing data–Kiehl showed a tradeoff that implies a tuning. Those models using more aerosol forcing had higher GHG forcing (to balance out).
Kiehl J (2007) Twentieth century climate model response and climate sensitivity. Geophys Res Lett 34:L22710.
The tuning of clouds and albedo and convection and all the rest is not done in isolation–there is always an eye on how it makes the model perform.

ccscientist
April 26, 2017 6:54 am

Gavin himself has admitted that different models incorporate different (or competing as he says) physics. If different physics in the models still matches the data (sort of) then something somewhere has been tuned and one cannot infer that the models are right because they are based on physics. Clouds are a key factor that even the IPCC admits cannot be modeled at present.
Schmidt GA, Sherwood S (2015) A practical philosophy of complex climate modelling. Eur J Philos Sci 5:149-169.

ccscientist
April 26, 2017 6:56 am

Nick says: “If they were tuning the models to the data, they would agree. If they were tuning the data to the models, they would agree. But in fact, for individual runs, they do not agree. So neither tuning is being done.” This does not follow. If the overall framework is wrong, the N-S eqns can’t be solved correctly, and some things are just unknown (clouds), you can tune all day and not get good agreement. This is particularly so because it would take thousands of runs to tune all variables at once, and this is computationally impossible.
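The “thousands of runs” point is easy to quantify. Here is a back-of-envelope sketch with purely hypothetical figures for the number of uncertain parameters and the cost of a run:

```python
# Hypothetical figures, purely to show the scale of the problem ccscientist raises.
n_parameters = 25            # uncertain parameterization constants
values_per_parameter = 3     # even a coarse low/mid/high sweep of each
runs_for_full_sweep = values_per_parameter ** n_parameters

hours_per_run = 24 * 30      # suppose one coupled run occupies a machine for a month
print(f"runs needed:   {runs_for_full_sweep:.2e}")       # ~8.5e11
print(f"machine-years: {runs_for_full_sweep * hours_per_run / (24 * 365):.2e}")
```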


April 26, 2017 7:07 am

Listen, everyone: Stokes is PAID to do this. He will never stop, never speak clearly, never admit he is a shill for GCM’s. His definition of “tuning” is what it must be so that he can deny that the models are tuned.
Obviously the models cannot match past temperatures without “parameterization,” another word for tuning, but just try to get Stokes to agree to that…

April 26, 2017 7:08 am

Whoever responded to my contribution, be assured I take your opinion seriously.
Sole problem is I stumble through an unmanageable WordPress.com jungle.
Best regards – Hans

TA
Reply to  kreizkruzifix
April 26, 2017 8:27 am

kreizkruzifix, use the Firefox browser with the NoScript add-on and it will completely eliminate your problems with WordPress by blocking everything WordPress is trying to display in your browser. If you want to allow some function to increase useability, you can easily allow any of the scripts that are trying to run.
There are other fixes to this problem, but I found NoScript is the easiest for me. It’s practically bullet-proof and very easy to use.

April 26, 2017 7:08 am

Thomas Wiita:
You ask

If you have your own favorite example that shows that the models run hot, share it with the rest of us, and I hope you enjoyed this one.

I think I need to post the following again because it explains the correct interpretation which James Schrumpf provides in this thread, where he writes to Nick Stokes: “There’s another possibility you leave out: the models ARE tuned, and they are so bad they STILL can’t match reality.”
I write to again explain why that is.
None of the climate models – not one of them – could match the change in mean global temperature over the past century if it did not utilise a unique value of assumed cooling from aerosols. So, inputting actual values of the cooling effect (such as the determination by Penner et al.
http://www.pnas.org/content/early/2011/07/25/1018526108.full.pdf?with-ds=yes )
would make every climate model provide a mismatch of the global warming it hindcasts and the observed global warming for the twentieth century.
This mismatch would occur because all the global climate models and energy balance models are known to provide indications which are based on
1. the assumed degree of forcing resulting from human activity that produces warming, and
2. the assumed degree of anthropogenic aerosol cooling input to each model as a ‘fiddle factor’ to obtain agreement between past average global temperature and the model’s indications of average global temperature.
In 1999 I published a peer-reviewed paper that showed the UK’s Hadley Centre general circulation model (GCM) could not model climate and only obtained agreement between past average global temperature and the model’s indications of average global temperature by forcing the agreement with an input of assumed anthropogenic aerosol cooling.
The input of assumed anthropogenic aerosol cooling is needed because the model ‘ran hot’; i.e. it showed an amount and a rate of global warming which was greater than was observed over the twentieth century. This failure of the model was compensated by the input of assumed anthropogenic aerosol cooling.
And my paper demonstrated that the assumption of aerosol effects being responsible for the model’s failure was incorrect.

(ref. Courtney RS An assessment of validation experiments conducted on computer models of global climate using the general circulation model of the UK’s Hadley Centre Energy & Environment, Volume 10, Number 5, pp. 491-502, September 1999).
More recently, in 2007, Kiehl published a paper that assessed 9 GCMs and two energy balance models.
(ref. Kiehl JT, Twentieth century climate model response and climate sensitivity. GRL vol. 34, L22710, doi:10.1029/2007GL031383, 2007).
Kiehl found the same as my paper except that each model he assessed used a different aerosol ‘fix’ from every other model. This is because they all ‘run hot’ but they each ‘run hot’ to a different degree.
Kiehl says in his paper:

One curious aspect of this result is that it is also well known [Houghton et al., 2001] that the same models that agree in simulating the anomaly in surface air temperature differ significantly in their predicted climate sensitivity. The cited range in climate sensitivity from a wide collection of models is usually 1.5 to 4.5 deg C for a doubling of CO2, where most global climate models used for climate change studies vary by at least a factor of two in equilibrium sensitivity.
The question is: if climate models differ by a factor of 2 to 3 in their climate sensitivity, how can they all simulate the global temperature record with a reasonable degree of accuracy.
Kerr [2007] and S. E. Schwartz et al. (Quantifying climate change–too rosy a picture?, available at http://www.nature.com/reports/climatechange, 2007) recently pointed out the importance of understanding the answer to this question. Indeed, Kerr [2007] referred to the present work and the current paper provides the ‘‘widely circulated analysis’’ referred to by Kerr [2007]. This report investigates the most probable explanation for such an agreement. It uses published results from a wide variety of model simulations to understand this apparent paradox between model climate responses for the 20th century, but diverse climate model sensitivity.

And, importantly, Kiehl’s paper says:

These results explain to a large degree why models with such diverse climate sensitivities can all simulate the global anomaly in surface temperature. The magnitude of applied anthropogenic total forcing compensates for the model sensitivity.

And the “magnitude of applied anthropogenic total forcing” is fixed in each model by the input value of aerosol forcing.
Kiehl’s paper can be read here.
Please note its Figure 2 which is for 9 GCMs and 2 energy balance models.

It shows that
(a) each model uses a different value for “Total anthropogenic forcing” that is in the range 0.80 W/m^2 to 2.02 W/m^2
but
(b) each model is forced to agree with the rate of past warming by using a different value for “Aerosol forcing” that is in the range -1.42 W/m^2 to -0.60 W/m^2.
In other words the models use values of “Total anthropogenic forcing” that differ by a factor of more than 2.5 and they are ‘adjusted’ by using values of assumed “Aerosol forcing” that differ by a factor of 2.4.
So, each climate model emulates a different climate system. Hence, at most only one of them emulates the climate system of the real Earth because there is only one Earth. And the fact that they each ‘run hot’ unless fiddled by use of a completely arbitrary ‘aerosol cooling’ strongly suggests that none of them emulates the climate system of the real Earth.
Richard
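To see the compensation Richard and Kiehl describe in the simplest possible arithmetic, here is a minimal zero-dimensional sketch. The equilibrium scaling and the 0.8 K target are only illustrative (it ignores ocean lag entirely); the forcing endpoints are the ones quoted above from Kiehl’s Figure 2.

```python
# A zero-dimensional sketch of the compensation Kiehl describes.  dT ~ S*F/F_2x
# is a crude equilibrium scaling used only to expose the arithmetic (it ignores
# ocean lag); the forcing endpoints are the ones quoted above from Kiehl's Fig. 2.
F_2X = 3.7      # W/m^2 per CO2 doubling (standard approximate value)
DT_OBS = 0.8    # K, roughly the 20th-century warming every model must reproduce

for f_total in (0.80, 2.02):                # W/m^2, "total anthropogenic forcing" endpoints
    s_implied = DT_OBS * F_2X / f_total     # sensitivity needed to hit the same warming
    print(f"total forcing {f_total:.2f} W/m^2 -> implied sensitivity ~{s_implied:.1f} K per doubling")

# Both ends reproduce the same 0.8 K even though the implied sensitivities differ
# by a factor of ~2.5: the aerosol term is what lets the forcing, and hence the
# sensitivity, float while the hindcast stays pinned to the observations.
```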

Kermit Johnson
Reply to  richardscourtney
April 26, 2017 8:42 am

Thank you for that explanation.
One of the times I lost my posting privileges at ARS Technica was when I responded to someone who claimed to be working on climate models. He disputed my claim that the models were curve-fitted to the rather poor historical data by saying that the models were based on first principles of physics. I asked a simple question about whether the historical data was used in the modeling process, and he went on for a few pages trying to muddy the waters. I finally got to the point and asked if the models were ever tested against the historical data to see how accurately they matched that data. Finally he said that, OF COURSE, the models were tested against that data. I then asked, since the 1980s, how many times the models had been tested on this data? This, of course, is a perfect example of “curve-fitting” – or “tuning.”
Anyone who models financial data knows how a computer model that is “curve-fitted” or “tuned” to the historical data will invariably lose your money. These models are also trying to model non-linear, coupled, chaotic systems.
One other time I was cut off from posting on ARS Technica was when I pointed out that the evolution of the “fudge factors” (sensitivity factors) in climate models was very much like what Richard Feynman wrote about in “Cargo Cult Science.” The trends have been slowly moving to lower and lower values. (Look up Feynman’s comments about Robert Millikan’s oil drop experiment. The regulars there at ARS Technica have some pretty elevated views of their own opinions, and they do not take kindly to these types of questions.)

Reply to  richardscourtney
April 26, 2017 8:48 am

Kermit Johnson:
Thanks for that. I make two responses, which are both addressed by copying here one of my above posts and the link it contains to another post above.
I wrote,
Of course the climate models are not derived from first principles. A model becomes a curve fitting exercise when it uses any parametrisation.
Of more importance is the invalidity of climate model projections which I relate in my above anecdote.
Richard

Clyde Spencer
Reply to  richardscourtney
April 26, 2017 10:58 am

Richard,
I have remarked before that, logically, there can only be one best climate model. Averaging its results with all the poor results only dilutes the best model and arrives at some value in between the best and the worst. What should be done is to see if there is some ‘structural’ or tuning difference between the best model and the others and use that as a guideline as to how to modify the others.

Reply to  Clyde Spencer
April 26, 2017 1:18 pm

Clyde Spencer:
You go to the heart of the problem with the climate models when you say

I have remarked before that, logically, there can only be one best climate model. Averaging its results with all the poor results only dilutes the best model and arrives at some value in between the best and the worst. What should be done is to see if there is some ‘structural’ or tuning difference between the best model and the others and use that as a guideline as to how to modify the others.

OK, but how can one know which is the “best” model?
Hindcasting doesn’t tell the good from the bad.
And fitting to ‘adjusted’ climate data says nothing because the data frequently changes.
Average wrong is wrong so – as you say – averaging model outputs is pointless.
Pseudoscientists excuse the total failure of climate models as predictive tools by pretending that “All models are wrong but some models are useful.”
Scientists know a model is right when it makes predictions that agree with the predicted parameter to within the inherent error range of the predicted parameter.
A model is wrong when it fails to make predictions that agree with the predicted parameter to within the inherent error range of the predicted parameter.

But there is no clear parameter with known inherent error that the climate models are required to predict. Indeed, global temperature anomaly has no agreed definition which is why its historic values change almost every month.
In other words,
climate models are and can only be useless: they are not even wrong.
Richard
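Richard’s validation criterion can be written down in a couple of lines. The sketch below only illustrates the test itself, with invented numbers:

```python
def validates(predicted, observed, inherent_error):
    """Richard's criterion in one line: the prediction must agree with the
    observed parameter to within that parameter's inherent error range."""
    return abs(predicted - observed) <= inherent_error

# Illustrative numbers only (e.g. a decadal warming-rate prediction vs. observation).
print(validates(predicted=0.21, observed=0.12, inherent_error=0.05))   # False -> model is wrong
print(validates(predicted=0.21, observed=0.12, inherent_error=0.15))   # True only if the error bar is that wide
```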

Reply to  richardscourtney
April 27, 2017 5:43 pm

Richard Courtney writes: “So, each climate model emulates a different climate system.”
Richard –
Though I’ve been involved in atmospheric science in the past (1978-82, NASA/NOAA), my interests moved on; however, I retained my skills in statistics and, more specifically, in the analysis and modeling of experimental/observational data. One of the things hammered into me as a child was that one never extrapolates from empirical data. Not ever. Very big no-no.
It seems this truth has been lost on climate modelers. This entire thread seems centered around the idea climate models are derived theoretically, then when that doesn’t work they’re “tuned”. What this boils down to is an empirical rather than theoretical model, and that can never work.
I don’t understand why this conversation is even happening. You seem to be a person with some experience in the field, can you explain why this entire mess hasn’t already been tossed into the waste bin of history?

Reply to  Bartleby
April 28, 2017 12:20 am

Bartleby:
You ask me

can you explain why this entire mess hasn’t already been tossed into the waste bin of history?

I have not researched that so I can only give you my opinion based on my experiences.
I think the reason is basically political. Governments fund the work and workers don’t want their careers to be defunded. None of this is science. I explain this opinion as follows.

Science
is a method which seeks the closest possible approximation to ‘truth’ by seeking information which refutes existing understanding(s) then amending or rejecting existing understanding(s) in light of found information.

Pseudoscience
is a method that decides existing understanding(s) to be ‘truth’ then seeks information to substantiate those understandings.
There is no empirical evidence for anthropogenic (i.e. man-made) global warming (AGW); n.b. no evidence, none, zilch, nada. In the 1990s Ben Santer claimed to have found some, but that was almost immediately revealed to be an artifact of his improper data selection. Since then, research to find some – any – evidence for the existence of AGW has been conducted worldwide at a cost of more than $2.5 billion p.a.
That is ‘big business’ and it is pure pseudoscience which has been a total failure: nothing to substantiate AGW has been found.
And the politicians who provide the research funds agree there has been NO scientific advance in the field.

Theoretical climate sensitivity was estimated to be between ~2°C and ~4.5°C for a doubling of CO2 equivalent at the start, and the UN Intergovernmental Panel on Climate Change (IPCC) now says it is estimated to be 2.1°C to 4.4°C (with a mean value of 3.2°C).
But politicians promote the ‘big business’ of so-called ‘climate science’ as justification for political policies they excuse by pretending AGW is a real threat.
In these circumstances only the output of computer models is available as justification for the ‘big business’. Hence, the computer model projections are promoted as being indications of ‘truth’ about planet Earth when in reality the projections are merely not-validated functions of computer programming.
Richard

Reply to  richardscourtney
April 28, 2017 11:37 am

Thanks Richard, I appreciate the sentiments and of course I agree with everything you write, but it still remains a mystery that this model-based fanaticism has survived as a “science” for as long as it has when it openly admits to extrapolating from empirical data. It’s such a fundamental flaw, but it goes completely unchallenged as far as I know. Maybe it’s been challenged and hasn’t gained any traction with the media? I was hoping for insights from an “insider”.

Reply to  Bartleby
April 28, 2017 1:05 pm

Bartleby:
The “fundamental flaw” may be obvious to you and me but it certainly is not to laymen such as journalists.
Please remember that “extrapolating from empirical data” is the future prediction method most used by most people, and everyone who has played a ballgame knows the method works for short time scales most times.
Try explaining the “fundamental flaw” to a journalist if you want to see eyes glaze over. An exceptionally good journalist may check the matter by questioning an expert (i.e. a climate modeller) and be reassured by BS (e.g. ‘the models are of basic physics’).
The matter is “challenged” by some (e.g. Richard Lindzen, Pielke Sr., and me). Lindzen uses even stronger language than me about it. But I would welcome advice on how to excite the mass media about it.
Richard

katherine009
April 26, 2017 7:25 am

I think these guys are starting to understand how the coal miners felt.

April 26, 2017 7:34 am

Was it some Schmidt code doing the rounds lately that had the word “fudge” in it? Can’t remember

Kaiser Derden
April 26, 2017 7:42 am

why is anyone debating liars and cheats like Gavin S. ?

April 26, 2017 8:12 am

Gavin’s comparison chart, I think, used a time period that is particularly favorable to the models. Starting, say, in 1970 would yield a less favorable result.

TA
April 26, 2017 8:29 am

Lots of good comments in this thread. Thanks to all.

Steve Oregon
April 26, 2017 8:47 am

Nick Stokes said something earlier I think needs highlighting.
But first, Willis wrote, “This is errant nonsense that can only be maintained by ruthlessly censoring opposition viewpoints.”
Years ago, when I frequented and engaged RC because I had enough intellectual curiosity to take in their advocacy side, the censoring became severe and then worse.
Not only were comments removed or blocked, some were edited by moderators to change their meaning and make them easy to vilify, while I was prohibited from responding.
What kind of people do this?
Nick Stokes wrote,
“A very large number of people have worked on these models. It is impossible to believe that they are all scoundrels. Some codes are published, and there must be many copies of the others in circulation. Massive cheating with so many people involved is unbelievable.”
No Nick, it is not impossible to believe. There are all kinds of scoundrels. Some worse than others.
But you and Gavin are exhibit A & B.
Your own behavior puts you solidly in the category you claim is impossible to believe.
Of course you claim otherwise. That’s what scoundrels do.

gbdixon
Reply to  Steve Oregon
April 27, 2017 1:40 pm

I attended a lecture from a well-known alarmist last Tuesday. He was pleasant, articulate and entertaining. His arguments were poor, and in at least one case clearly dishonest, but masked in an excellent presentation. He fits solidly among the scientists being called scoundrels here, but he was clearly viewed as a hero by most in the on-campus audience.
There is no doubt most of these scientists are decent folk who truly believe they are on the right side of the argument, and view us ‘deniers’ as the scoundrels. One reason is the academic echo chamber they occupy. But since their research grants depend on defining a problem to be researched and the models are so easy to run in such a way that potential problems emerge, they not only rely on the models for their academic standing and income, they have come to truly believe their veracity…even trumping observations in some cases.
So richardscourtney is too mild in his scorn of models: They are not just useless, they have become dangerous because they have created an alternate universe the scientists live in.
We should call for withdrawal of all climate model-based papers at every turn.

April 26, 2017 8:52 am

Why, oh why, do they run hot?
Not because arbitrary aerosol damping is insufficient. Not because “unforced” rascals like PDO intervene.
No. It is because the fundamental assumptions of CO2 radiative forcing used are incorrect. This set of assumptions is considered sacrosanct, but the models can never be fixed until these values are “tuned”.

Reply to  gymnosperm
April 26, 2017 9:11 am

Why, oh why, do they run hot?

Here is where they do it.
http://www.cesm.ucar.edu/models/atm-cam/docs/description/node13.html#SECTION00736000000000000000
They preserve mass in the air/water boundary layer during evaporation. This encourages the water feedback. A long time ago I read that early models did not warm enough, until they parameterized this layer, and then all the models which included such a function, all based on real physics of course, warmed up, and then they had to play with aerosols to turn it down.

Reply to  micro6500
April 26, 2017 10:59 am

If you set MODTRAN to 1 meter altitude, and vary only CO2 ppm, you find that there is no change in the upward radiance from 100 to 3000 ppm. At 10 meters it is 100 to 600 ppm. At 100 meters it is 230 to 4000 ppm…
Thanks for the link.

Reply to  gymnosperm
April 26, 2017 11:05 am

If I understand how MODTRAN works, that gives garbage. No, it gives you a single moment of atmospheric radiative physics. The atmosphere just changes a lot at night during the cooling cycle, and a single sample of an average atmosphere is worthless. What needs to be done is a run for each change in temperature as it cools overnight, as relative humidity changes. Doing this is on my long list of things I’d like to do, but I would rather talk someone else into doing it so I can get back to rewriting my report code to do all the temperature math as a vector instead of a field.

jfpittman
Reply to  micro6500
April 27, 2017 7:37 am

That is interesting. One of the other unstated attributes of water vapor is that for near surface conditions water vapor is a negative feedback for temperature increase. I do mean that as a feedback and not the dissemblers who call gain or attenuation feedback. The physics is the Stokes-Einstein theorem of diffusing fluids of differing specific gravity and viscosity. It is relevant for Microfluidics and Nanofluidics.
Looks like that bulk property parameterization, strikes again.

April 26, 2017 9:00 am

Everyone who thinks that “temperatures” are important to deciding anything about the IR/energy balance of the atmosphere SHOULD HAVE THEIR PHD REMOVED. Without evaluating the HUMIDITY, and calculating the energy content per volume of the air, all other “average temperature” numbers are just that: GARBAGE. Worthless.

Reply to  Max Hugoson
April 26, 2017 9:15 am

Enthalpy follows temperatures pretty well. I include that, dry enthalpy, and wet (just the energy from water vapor), plus clear-sky surface solar, in the beta reports here: http://sourceforge.net/projects/gsod-rpts/
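For readers who want the arithmetic behind the dry/wet split micro6500 mentions, here is a minimal sketch using standard psychrometric approximations (the Tetens saturation-pressure formula and the usual moist-air enthalpy relation). The sample temperature and humidities are illustrative and not taken from his reports:

```python
import math

def saturation_vapor_pressure_kpa(t_c):
    """Tetens approximation for saturation vapour pressure over water (kPa)."""
    return 0.6108 * math.exp(17.27 * t_c / (t_c + 237.3))

def moist_air_enthalpy(t_c, rel_humidity, pressure_kpa=101.325):
    """Specific enthalpy of moist air, kJ per kg of dry air.
    Dry term: cp_air * T.  Wet term: w * (latent heat + cp_vapour * T)."""
    p_w = rel_humidity * saturation_vapor_pressure_kpa(t_c)
    w = 0.622 * p_w / (pressure_kpa - p_w)      # humidity ratio, kg vapour / kg dry air
    dry = 1.006 * t_c
    wet = w * (2501.0 + 1.86 * t_c)
    return dry + wet, dry, wet

# The same 30 C thermometer reading carries very different energy content:
for rh in (0.20, 0.90):
    total, dry, wet = moist_air_enthalpy(30.0, rh)
    print(f"30 C at {rh:.0%} RH: h = {total:5.1f} kJ/kg (dry {dry:4.1f}, wet {wet:4.1f})")
```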

April 26, 2017 9:05 am

Gavin is knowingly misrepresenting. Reprehensible. The CMIP5 experimental design was published in 2009 (Taylor) and finalized in 2011 (Meehl). Available online at cmip.pcmdi.llnl.gov. The second mandatory run is a 30-year hindcast from YE 2005. The parameterizations were tuned to best hindcast this period. Curry is exactly correct.

Reply to  frankclimate
April 26, 2017 11:53 am

Did not see it before posting. Skipped to the bottom and provided the reference to the ‘experimental design’. (Running models are ironically not ‘experiments’ in the real world, only in the climate science alternate reality.) Your reference is excellent direct evidence. TY

Reply to  frankclimate
April 26, 2017 12:49 pm

I’m in suspense about what Nick will try to respond. It’s very clear that the argumentation (“I’ll show that the models are not in line for 1950…2010”) is flawed because this is a strawman argument. Try better, Nick!

Nick Stokes
Reply to  frankclimate
April 26, 2017 2:49 pm

“I’m in suspense what Nick shall try to respond.”
Just the obvious. They are not tuning to GMST, but to SST. But you should read the associated methodology. They don’t do a full run. They just compute for the relevant period, and do the comparison. In fact, as they say, they then do some other modifications, and often don’t come back to check that they still have correspondence 1975-2005. That wasn’t the point. It’s a step in a bootstrapping process.

Reply to  frankclimate
April 26, 2017 11:46 pm

Nick, thanks. They tune to the SST of 1976…2005, there is no doubt. And what does Gavin’s figure for the span 1950…2010 say about the evidence for Curry’s argument? Nothing at all, IMO.

johnfpittman
Reply to  ristvan
April 27, 2017 3:00 pm

Nick apparently is misrepresenting or making claims he knows have dubious assumptions. He stated upstream:
“‘You should understand what it is that they’re doing! Mauritsen aren’t developing the model, they’re running it.’
I understand it very well. Unlike people here, I have actually done tuning, for CFD programs.”
If he understands, has done tuning, etc., he knows or should know that CFD software was developed using literally tens of thousands of independent measurements of phenomena. There is but one x for the weather.

RHS
April 26, 2017 9:07 am

I can’t believe that Gavin and friends let the following link stand which criticizes Gavin:
http://www.wsj.com/video/opinion-journal-how-government-twists-climate-statistics/80027CBC-2C36-4930-AB0B-9C3344B6E199.html

Linnea Lueken
April 26, 2017 9:25 am

As Michael Crichton said (State of Fear), models can never be proof. They’re models: a crapshoot at worst and educated guesses at best.

Reply to  Linnea Lueken
April 27, 2017 7:58 pm

Linnea quotes: “models can never be proof.”
And of course Crichton is likely right about that, but predictive models are useful and can demonstrate the theorist’s understanding. These models aren’t predictive though and that’s the crux of the problem; the developers seem unwilling to acknowledge that.

Michael Jankowski
April 26, 2017 10:36 am

Gavin has freely admitted that models fail badly on continental and regional scales. Even if the global average temperature tracked well between models and observations, it is the sum of failures. In what scientific realm is that justifiable?

basicstats
April 26, 2017 10:53 am

This seems to depend upon the meaning of “tuned to the period of interest”. Dr Schmidt suggests a very narrow interpretation that this means fitting the model to temperatures over this period. Dr Curry’s interpretation corresponds to my (limited) understanding of tuning – adjusting parameterizations of subgrid and other processes, using evolving data for variables mostly not temperatures (eg aerosols, humidity etc). In fact, Dr Curry seems to get to the issue of wholly inadequate GCM validation. When models are being updated all the time, there can be no meaningful out-of-sample evaluation (and no model-based attribution).

Reply to  basicstats
April 27, 2017 8:02 pm

BasicStats writes: “and no model-based attribution”
And that’s really what the entire debate is about I think. Why this isn’t obvious to everyone participating escapes me completely. If we build models to tell us what we want to hear, and those models do that, we’ve learned absolutely nothing about the world.

Joel Snider
April 26, 2017 12:27 pm

‘Shell games – you got to learn how to play – Shell games’.
Sung to the tune of ‘Foreigner’.

Phil Cartier
April 26, 2017 1:15 pm

If you want to learn more from (sur)Real Climate, read the Borehole part of the comments. If it’s still around, that is where all the real comments are. Much better than the illogical mess in the posted comments, as this post so ably documents.

April 26, 2017 1:59 pm

Thanks to all for so many wonderful comments, keep them coming, I learned a lot. The intellectual vibrancy of this site is what keeps me coming back. I was amazed at the flow of comments, and I kind of think that part of what happened here is that this group of commenters did the back-and-forth discussion that should have taken place at RC, if they didn’t always prevent that kind of thing from occurring.
A special thank you to John Bills, David Middleton and Richard Courtney for helping assemble in one thread several great models-to-observations comparisons, and also to Rud Istvan, micro6500 and others for links to other sites and posts, several new to me.
Every new AR, we get new spaghetti graphs and, superimposed on them, observations that bump along moving towards the bottom of the envelope of the spaghetti graph before finally punching out through the bottom of the envelope. Next AR we do it all over again, and the last batch of spaghetti goes in the memory hole. Maybe in a big El Nino temperature spike the observations jump up near the middle of the spaghetti, and you can see an example of that above, but then the observations drop right back down again. How the practitioners in this field continue to think that’s okay, how they continue to believe these spaghetti graphs constitute some kind of accurate forecast, how they vociferously defend these projections, and why no one managing or funding this reins them in, baffles the mind.
And finally, a special shout out to Mosher (a great career as a Literature PhD wasted, that one) for a) agreeing that the models run hot and b) saying he thinks that’s a good thing. Here I think that, as a Berkeley Earth team member, he speaks for the sentiments of the alarmist climate establishment. Scaring the rest of us with exaggerated projections isn’t a bug, it’s a feature. The mask slips.

April 26, 2017 3:56 pm

I seem to recall a paper describing problems assigning the droplet size at initial formation of water from vapour. The modelers varied the size, somewhere from 2-10 um I think, but the best-fit model was at an unrealistic size… sounded like tuning to me. I can probably drag out that paper…

Editor
April 26, 2017 4:32 pm

The circularity critique can also be formulated at a more macro level than what Curry articulates here. As I put it in a recent comment:
“The IPCC’s method for ‘estimating’ water vapor feedbacks is to ASSUME that all late 20th century warming was caused by Co2, then calculate how many times the tiny Co2 forcing effect would have to be multiplied up by feedback effects to have created the observed warming. Purely circular scientific fraud. Their claim that Co2 warming effects are strong enough to have caused recent warming is based entirely on the assumption that recent warming WAS caused by Co2.”
Curry is taking a narrower view, criticizing the consensoids for calibrating their models over the same period the models claim to explain (late 20th century warming). She is referring to the same estimation scheme, but we are offering different critiques of it.
The estimation scheme starts with a bunch of highly contentious assumptions about forcings, assumptions which leave Co2 as the only possible explanation for late 20th century warming (Curry mentions the omission of indirect solar effects), leading to the estimation that these Co2-warming effects must be super-powerful (getting multiplied up as much as several times by water vapor feedback effects), if they are the only thing that could have caused the observed warming.
Not sure that what Curry is critiquing is actually circularity. If the model does yield a good fit to the data over the entire calibration period, that would provide some evidence for it (keeping in mind Von Neumann’s warning that with three degrees of freedom he could wiggle an elephant’s trunk, while climate models have endless degrees of freedom and parameterizations up the wazoo). The evidence would be better if the models could make a prediction that is borne out, but as this post points out, they are running dramatically hot. If they have not already been completely falsified by The Pause they are on the verge of it. The weakness of the consensus position here is not from logical circularity but from empirical falsification.
My critique of circularity is based on the larger shape of the consensus argument. In order to support their grand claim that late 20th century warming was caused mostly by human increments to atmospheric Co2 they assume, in their claims about forcings, that it was caused by Co2, then they derive their estimate of water vapor feedback effects from this assumption.
That is a logically circular argument. If they were not being circular they would estimate water vapor feedback effects by the direct evidence about water vapor feedbacks. Is the increase in Co2 causing an upper tropospheric hotspot, as positive water vapor feedbacks would produce? No. Is warming accompanied by constant or rising relative humidity? No. Lindzen, Eschenbach, etcetera? No no no.
A non-circular analysis would then take the discrepancy between what water vapor feedbacks are directly estimated to be and what they would have to be for the claimed forcings to have created the observed warming and use this discrepancy to estimate how far off the claimed total forcing is from the actual total forcing, then try to figure out how to account for that discrepancy. It could be in the forcing estimates. It could be in the direct estimate of the feedback, but these have to both be estimated directly from their own available evidence. Using one to estimate the other is using circularity to jump over the discrepancy. Not logically allowed.
The IPCC shortcuts the whole scientific process of estimating the discrepancy and trying to account for it, replacing it with an obvious circularity, using their assumption that Co2 has been the dominant forcing to justify their conclusion that Co2 has been the dominant cause of warming. The two are the same thing, translated only by the simple warming=forcing x feedback formulation that the IPCC employs.
Curry may well have been meaning to allude to the same thing. She only mentioned circularity parenthetically, but it does need elaboration. There is a whole normal scientific process that is being elided, short circuited, omitted, by the logical circularity of the IPCC argument.
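The circular step Alec describes can be written out in a few lines. The sketch below uses illustrative numbers only; the point is the structure of the argument, not the particular values:

```python
# All numbers illustrative.  Step 1: assume the observed warming was produced by
# the CO2 forcing alone, and back out the feedback multiplier that assumption requires.
LAMBDA_0 = 0.30     # K per W/m^2, approximate no-feedback (Planck) response
DF_CO2   = 1.0      # W/m^2, assumed late-20th-century CO2 forcing
DT_OBS   = 0.55     # K, observed warming over the same stretch

gain = DT_OBS / (LAMBDA_0 * DF_CO2)
print(f"feedback multiplier implied by the assumption: {gain:.2f}")

# Step 2 (the circular move): quote that multiplier back as evidence that CO2
# forcing is strong enough to explain the warming.
print(f"'reconstructed' warming: {gain * LAMBDA_0 * DF_CO2:.2f} K  (identical by construction)")
```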

Chris Hanley
Reply to  Alec Rawls
April 26, 2017 6:46 pm

“In order to support their grand claim that late 20th century warming was caused mostly by human increments to atmospheric Co2 they assume, in their claims about forcings, that it was caused by Co2, then they derive their estimate of water vapor feedback effects from this assumption …”.
===========================
Exactly: they first use a premise to prove a conclusion, then use the conclusion to prove the premise. The amazing thing is that they fail to recognise it — or do they?

GregB
April 26, 2017 4:33 pm

Mosher, your airplane fuel model shows you use contemptible science – it’s not science, it’s not math, it’s religion – and you’re trying to save me. There could not be a clearer post on fraud and condoning it.

Editor
April 27, 2017 5:45 am

If you have your own favorite example that shows that the models run hot, share it with the rest of us, and I hope you enjoyed this one. And of course I submitted a one sentence comment at RC to the effect that the figure above shows that the models run hot, but RC still remembers how to squelch all thoughts that don’t hew to the party line so it didn’t appear. Some things never change.

I ran a normal distribution and the models definitely run “hot,” even compared to GISTEMP…
All of the temperature series fall within 1 standard deviation of the model mean, which they should, because these are historical model runs. HadCRUT4 and Cowtan & Way barely fall within 1 standard deviation; 75-80% of the models are “hotter.” In terms of a “hindcast,” the models aren’t very good.
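One way to put a number on “how much of the ensemble runs hotter” is to treat the envelope Gavin quotes upthread (0.84 ± 0.45 °C over 1950-2010, 95%) as roughly Gaussian and ask where a given observed trend falls in it. The observed values plugged in below are placeholders, not figures read off David’s chart:

```python
from statistics import NormalDist

# Approximate the ensemble spread from the figures Gavin quotes upthread:
# 1950-2010 trends of 0.84 +/- 0.45 C, the +/-0.45 being a 95% envelope.
ens_mean = 0.84
ens_sigma = 0.45 / 1.96            # ~0.23 C if the envelope is treated as Gaussian

def share_of_models_warmer(observed_trend_c):
    """Fraction of the (assumed-normal) ensemble trending warmer than the observations."""
    return 1.0 - NormalDist(ens_mean, ens_sigma).cdf(observed_trend_c)

# Placeholder observed trends - substitute the dataset you prefer:
for obs in (0.65, 0.80):
    print(f"observed trend {obs:.2f} C -> {share_of_models_warmer(obs):.0%} of models run warmer")
```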

John Stover
April 27, 2017 12:06 pm

Once upon a time, when I was a junior Army intelligence officer stationed in Korea, I was sent to a 155mm howitzer battery to seek their advice on answering a question from the US Forces Korea commander. He wanted to know just how accurate/effective the North Korean artillery was likely to be if employed against our forces. Their tube artillery outnumbered ours by something like 35 to one, so you could understand his concern. I had tons of data on NKA artillery practices and sat down with the Fire Direction Center chief and his team to compare and contrast theirs and ours. We spent three days going over everything and concluded that the CEP (circular error probable – radius) of their artillery fire was roughly the same as ours. Not what the General, and my own commander, wanted to hear. (Lots of other things impact effectiveness, but the weapons’ performance was roughly similar.)
Why is that problem applicable to the climate data discussion? The FDC chief explained it very simply: when you come down to it, all we are doing is applying double precision arithmetic operations against highly estimated data. All I really know with any accuracy is the outside air temperature and the pressure at my guns when we pull the lanyard. I can only estimate those and similar factors en route to the target. Map coordinates (pre-GPS days) are off 80-120 meters horizontally and up to twenty meters vertically. Forward observer range estimates and azimuths vary 10% or more. Powder performance varies by 3-5% between bags. Ogive shapes vary, detonator reaction times vary. Okay, you see the problem. All of these unconstrained variables, and there are dozens more, make the “perfect” solution impossible.
GPS and laser rangefinders make those measurements a little more accurate but friendly fire incidents still occur. And weather still surprises us on occasion.
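John’s point about double-precision arithmetic on loosely estimated inputs can be illustrated with a naive uncertainty combination. The percentages below are rough readings of the figures he gives (one of them assumed), not his FDC’s actual error budget:

```python
import math

# Back-of-envelope: combine John's rough input uncertainties in quadrature and
# compare the result with double-precision round-off.  The percentages are loose
# readings of the figures he lists, treated as independent 1-sigma relative errors.
relative_errors = {
    "map coordinates (~100 m at ~10 km)": 0.01,
    "observer range / azimuth":           0.10,
    "powder performance":                 0.04,
    "met conditions en route (assumed)":  0.03,
}
combined = math.sqrt(sum(e * e for e in relative_errors.values()))
print(f"combined input uncertainty : ~{combined:.1%}")      # ~11%
print(f"double-precision round-off : ~{2 ** -52:.1e}")      # ~2.2e-16
# The arithmetic carries sixteen digits; the inputs justify barely two.
```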

April 27, 2017 2:07 pm

Gavin Schmidt complains that Judith Curry “…fails to give any quantitative argument to support her contention that human drivers are not the dominant cause of recent trends.” Here, Gavin, I present such a quantitative argument. It is your boys who were involved, and they are hardly the human drivers you are looking for. The incident I have in mind happened about 2008. I was working on my book “What Warming” and noticed that temperature in the eighties and nineties was flat, what we now would call a hiatus. It was an 18-year stretch of temperature. On top of it was a wave train created by ENSO, comprised of five El Nino peaks with La Nina valleys in between. I put a yellow dot in the middle of each line connecting an El Nino peak with a neighboring La Nina valley. These dots lined up in a straight horizontal line, which tells us two things. First, the ENSO oscillation was not warming up the world as an idiotic pseudo-scientist has claimed; and second, the wave train was on level ground. I used satellite data from both UAH and RSS and made it part of figure 15 in my book. But before it went to press, this temperature section was mysteriously transmogrified into a warming curve whose temperature rose at the rate of 0.06 degrees Celsius per decade. Worse yet, they extended this fake warming to the twenty-first century that followed, in a desire to create more warming. I protested but was ignored. The only thing I could do under the circumstances was to put a notice about it into the preface of my book. That, too, was ignored, and the fake warming even now is part of their official temperature curve. That is where the matter would have rested, but luckily one of my readers unearthed the following NASA document from 1997:
“…. Unlike the surface based temperatures, global temperature measurements of the earth’s lower atmosphere obtained from satellites reveal no definitive warming trend over the past two decades. The slight trend that is in the data appears to be downward. The largest fluctuations in the satellite temperature data are not from any man-made activity, but from natural phenomena such as large volcanic eruptions from Mt. Pinatubo, and from El Nino. So the programs which model global warming in a computer say the temperature of the Earth’s lower atmosphere should be going up markedly, but actual measurements of the temperature of the lower atmosphere reveal no such pronounced activity.”
This leaves no doubt that originally there was no warming and that the current warming in official temperature curves is a fake. At the time this fake warming was created, James Hansen was still in charge of NASA-GISS. He transferred out to Columbia University and Gavin Schmidt took over. Schmidt is well aware of my objections but refuses to do anything about it. I found a clue to his co-conspirators when I discovered that NASA-GISS, NOAA, and the Met Office in the UK had all been subject to computer cleaning that, unbeknownst to them, left identical sharp spikes on top of that section of their temperature curves. The computer cleaning would only make sense if you are trying to hide something, like incompatible data. All this is sufficient to show that what Gavin Schmidt is complaining about in Judith Curry is wrong: the quantitative basis is there. It should be sufficient to justify an investigation into his shadowy dealings with global temperature curves. A large amount of public money may depend upon it.

jstanley01
April 28, 2017 1:30 pm

Whoa! Back up a second!…
“It’s hard to see anyone in the Trump Administration thinking they’re getting value for money from their support of that site.”
Say what? That site is getting taxpayer funding?

MikeN
April 29, 2017 9:45 am

> “Models are NOT tuned” – Gavin of RealClimate
“based on simulations with the U. of Victoria climate/carbon model tuned to yield the mid-range IPCC climate sensitivity. ”
http://www.realclimate.org/index.php/archives/2011/11/keystone-xl-game-over/

April 30, 2017 9:14 pm

But then they also claim the models can, in fact, hindcast. The whole thing makes three-card monte look honest.