From the “whoopsie, that’s not what I meant” department

Guest essay by Thomas Wiita
A recent poster here wrote that they had stopped looking at the Real Climate web site, and good for them. It has become a sad, inwardly focused group. It’s hard to see anyone in the Trump Administration thinking they’re getting value for money from their support of that site.
I still check in there occasionally and just now I found something too good not to share with the readers at WUWT.
Gavin has a post up in which he rebuts Judith Curry’s response to comments about her testimony at the Committee hearing. Let me step aside – here’s Gavin:
“Following on from the ‘interesting’ House Science Committee hearing two weeks ago, there was an excellent rebuttal curated by ClimateFeedback of the unsupported and often-times misleading claims from the majority witnesses. In response, Judy Curry has (yet again) declared herself unconvinced by the evidence for a dominant role for human forcing of recent climate changes. And as before she fails to give any quantitative argument to support her contention that human drivers are not the dominant cause of recent trends.
Her reasoning consists of a small number of plausible sounding, but ultimately unconvincing issues that are nonetheless worth diving into. She summarizes her claims in the following comment:
… They use models that are tuned to the period of interest, which should disqualify them from being used in attribution studies for the same period (circular reasoning, and all that). The attribution studies fail to account for the large multi-decadal (and longer) oscillations in the ocean, which have been estimated to account for 20% to 40% to 50% to 100% of the recent warming. The models fail to account for solar indirect effects that have been hypothesized to be important. And finally, the CMIP5 climate models used values of aerosol forcing that are now thought to be far too large.
These claims are either wrong or simply don’t have the implications she claims. Let’s go through them one more time.
1) Models are NOT tuned [for the late 20th C/21st C warming] and using them for attribution is NOT circular reasoning.
Curry’s claim is wrong on at least two levels. The “models used” (otherwise known as the CMIP5 ensemble) were *not* tuned for consistency for the period of interest (the 1950-2010 trend is what was highlighted in the IPCC reports, about 0.8ºC warming) and the evidence is obvious from the fact that the trends in the individual model simulations over this period go from 0.35 to 1.29ºC! (or 0.84±0.45ºC (95% envelope)).”
The figure was copied straight from RC. There is one wonderful thing about Gavin’s argument, and one even more wonderful thing.
The wonderful thing is that he is arguing that Dr. Curry is wrong about the models being tuned to the actual data during the period because the models are so wrong (!).
The models were not tuned to consistency with the period of interest as shown by the fact that – the models are not consistent with the period of interest. Gavin points out that the models range all over the map, when you look at the 5% – 95% range of trends. He’s right, the models do not cluster tightly around the observations, and they should, if they were modeling the climate well.
Here’s the even more wonderful thing. If you read the relevant portions of the IPCC reports, looking for the comparison of observations to model projections, each is a masterpiece of obfuscation on this same point. You never see a clean, clear, understandable presentation of the models-to-actuals comparison. But look at those histograms above, direct from the hand of Gavin. It’s the clearest presentation I’ve ever run across that the models run hot. Thank you, Gavin.
I compare the trend-weighted area of the three right-hand bars to that of the two left-hand bars, which sit around the tall bar at the mode of the projections. There is far more area under those three bars to the right, an easy way to see that the models run hot.
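If you want to play with that comparison yourself, here is a minimal sketch in Python. The bin centres, counts and observed value are illustrative placeholders of my own, not numbers read off Gavin's figure.

```python
# Sketch of the "which side of the observations has more weight" check.
# Bin centres, counts and the observed trend are ILLUSTRATIVE placeholders,
# not values taken from the RealClimate figure.
bin_centres = [0.5, 0.7, 0.9, 1.1, 1.3]   # degC of 1950-2010 warming per bin (hypothetical)
counts      = [10, 25, 40, 20, 5]         # number of model runs in each bin (hypothetical)
observed    = 0.8                          # approx. observed warming quoted in the post

total  = sum(counts)
hotter = sum(c for b, c in zip(bin_centres, counts) if b > observed)
cooler = sum(c for b, c in zip(bin_centres, counts) if b < observed)
print(f"Runs warmer than observed: {hotter}/{total} ({100 * hotter / total:.0f}%)")
print(f"Runs cooler than observed: {cooler}/{total} ({100 * cooler / total:.0f}%)")

# The essay's 'trend-weighted area': weight each bin's count by its trend.
weighted_hot  = sum(b * c for b, c in zip(bin_centres, counts) if b > observed)
weighted_cool = sum(b * c for b, c in zip(bin_centres, counts) if b < observed)
print(f"Trend-weighted area, warm side vs cool side: {weighted_hot:.1f} vs {weighted_cool:.1f}")
```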
If you have your own favorite example that shows that the models run hot, share it with the rest of us, and I hope you enjoyed this one. And of course I submitted a one-sentence comment at RC to the effect that the figure above shows that the models run hot, but RC still remembers how to squelch all thoughts that don’t hew to the party line, so it didn’t appear. Some things never change.
Listen, everyone: Stokes is PAID to do this. He will never stop, never speak clearly, never admit he is a shill for GCMs. His definition of “tuning” is what it must be so that he can deny that the models are tuned.
Obviously the models cannot match past temperatures without “parameterization,” another word for tuning, but just try to get Stokes to agree to that…
Whoever responded to my contribution, be assured I take your opinion seriously.
My sole problem is that I stumble through an unmanageable WordPress.com jungle.
Best regards – Hans
kreizkruzifix, use the Firefox browser with the NoScript add-on and it will completely eliminate your problems with WordPress by blocking everything WordPress is trying to display in your browser. If you want to allow some function to increase usability, you can easily allow any of the scripts that are trying to run.
There are other fixes to this problem, but I found NoScript is the easiest for me. It’s practically bullet-proof and very easy to use.
Thomas Wiita:
You ask for favourite examples showing that the models run hot.
I think I need to post the following again because it explains the correct interpretation which James Schrumpf provides in this thread where he writes to Nick Stokes: “There’s another possibility you leave out: the models ARE tuned, and they are so bad they STILL can’t match reality.”
I write to again explain why that is.
None of the climate models – not one of them – could match the change in mean global temperature over the past century if it did not utilise a unique value of assumed cooling from aerosols. So, inputting actual values of the cooling effect (such as the determination by Penner et al.
http://www.pnas.org/content/early/2011/07/25/1018526108.full.pdf?with-ds=yes )
would make every climate model show a mismatch between the global warming it hindcasts and the observed global warming for the twentieth century.
This mismatch would occur because all the global climate models and energy balance models are known to provide indications which are based on:
1. the assumed degree of forcing resulting from human activity that produces warming, and
2. the assumed degree of anthropogenic aerosol cooling input to each model as a ‘fiddle factor’ to obtain agreement between past average global temperature and the model’s indications of average global temperature.
In 1999 I published a peer-reviewed paper that showed the UK’s Hadley Centre general circulation model (GCM) could not model climate and only obtained agreement between past average global temperature and the model’s indications of average global temperature by forcing the agreement with an input of assumed anthropogenic aerosol cooling.
The input of assumed anthropogenic aerosol cooling is needed because the model ‘ran hot’; i.e. it showed an amount and a rate of global warming which was greater than was observed over the twentieth century. This failure of the model was compensated by the input of assumed anthropogenic aerosol cooling.
And my paper demonstrated that the assumption of aerosol effects being responsible for the model’s failure was incorrect.
(ref. Courtney RS, ‘An assessment of validation experiments conducted on computer models of global climate using the general circulation model of the UK’s Hadley Centre’, Energy & Environment, Volume 10, Number 5, pp. 491-502, September 1999).
More recently, in 2007, Kiehl published a paper that assessed 9 GCMs and two energy balance models.
(ref. Kiehl JT, ‘Twentieth century climate model response and climate sensitivity’, GRL, vol. 34, L22710, doi:10.1029/2007GL031383, 2007).
Kiehl found the same as my paper except that each model he assessed used a different aerosol ‘fix’ from every other model. This is because they all ‘run hot’ but they each ‘run hot’ to a different degree.
Kiehl reports in his paper that the models reproduce the twentieth-century warming despite using widely different values of total anthropogenic forcing and, importantly, that the magnitude of the applied forcing varies inversely with each model’s climate sensitivity.
And the “magnitude of applied anthropogenic total forcing” is fixed in each model by the input value of aerosol forcing.
Kiehl’s paper can be read here.
Please note its Figure 2 which is for 9 GCMs and 2 energy balance models.
It shows that
(a) each model uses a different value for “Total anthropogenic forcing” that is in the range 0.80 W/m^2 to 2.02 W/m^2
but
(b) each model is forced to agree with the rate of past warming by using a different value for “Aerosol forcing” that is in the range -1.42 W/m^2 to -0.60 W/m^2.
In other words the models use values of “Total anthropogenic forcing” that differ by a factor of more than 2.5 and they are ‘adjusted’ by using values of assumed “Aerosol forcing” that differ by a factor of 2.4.
So, each climate model emulates a different climate system. Hence, at most only one of them emulates the climate system of the real Earth because there is only one Earth. And the fact that they each ‘run hot’ unless fiddled by use of a completely arbitrary ‘aerosol cooling’ strongly suggests that none of them emulates the climate system of the real Earth.
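As a quick sanity check on the arithmetic above, here is a minimal sketch using only the two ranges quoted from Kiehl’s Figure 2; the closing comment restates Kiehl’s trade-off in words rather than computing it.

```python
# The two ranges quoted above from Kiehl (2007), Figure 2.
total_forcing   = (0.80, 2.02)    # W/m^2, "Total anthropogenic forcing" across the models
aerosol_forcing = (-1.42, -0.60)  # W/m^2, "Aerosol forcing" across the same models

print(f"Total forcing differs by a factor of {total_forcing[1] / total_forcing[0]:.1f}")        # ~2.5
print(f"Aerosol forcing differs by a factor of {aerosol_forcing[0] / aerosol_forcing[1]:.1f}")  # ~2.4

# Kiehl's finding, restated: because each model must reproduce the same observed
# twentieth-century warming, a model with a larger total forcing needs a more
# negative aerosol input (and/or a lower sensitivity) to compensate.
```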
Richard
Thank you for that explanation.
One of the times I lost my posting privileges at Ars Technica was when I responded to someone who claimed to be working on climate models. He disputed my claim that the models were curve-fitted to the rather poor historical data by saying that the models were based on first principles of physics. I asked a simple question about whether the historical data was used in the modeling process, and he went on for a few pages trying to muddy the waters. I finally got to the point and asked if the models were ever tested against the historical data to see how accurately they matched that data. Finally he said that, OF COURSE, the models were tested against that data. I then asked how many times, since the 1980s, the models had been tested against this data. This, of course, is a perfect example of “curve-fitting” – or “tuning.”
Anyone who models financial data knows how a computer model that is “curve-fitted” or “tuned” to the historical data will invariably lose your money. These models are also trying to model non-linear, coupled, chaotic systems.
One other time I was cut off from posting on Ars Technica was when I pointed out that the evolution of the “fudge factors” (sensitivity factors) in climate models was very much like what Richard Feynman wrote about in “Cargo Cult Science.” The trends have been slowly moving to lower and lower values. (Look up Feynman’s comments about Robert Millikan’s oil drop experiment.) The regulars at Ars Technica have some pretty elevated views of their own opinions, and they do not take kindly to these types of questions.
Kermit Johnson:
Thanks for that. I make two responses that are both addressed by copying here one of my above posts and the link it contains to another above post.
I wrote,
Of course the climate models are not derived from first principles. A model becomes a curve-fitting exercise when it uses any parametrisation.
Of more importance is the invalidity of climate model projections which I relate in my above anecdote.
Richard
Richard,
I have remarked before that, logically, there can only be one best climate model. Averaging its results with all the poor results only dilutes the best model and arrives at some value in between the best and the worst. What should be done is to see if there is some ‘structural’ or tuning difference between the best model and the others and use that as a guideline as to how to modify the others.
Clyde Spencer:
You go to the heart of the problem with the climate models when you say that, logically, there can only be one best climate model.
OK, but how can one know which is the “best” model?
Hindcasting doesn’t tell the good from the bad.
And fitting to ‘adjusted’ climate data says nothing because the data frequently changes.
Average wrong is wrong so – as you say – averaging model outputs is pointless.
Pseudoscientists excuse the total failure of climate models as predictive tools by pretending that “All models are wrong but some models are useful.”
Scientists know a model is right when it makes predictions that agree with the predicted parameter to within the inherent error range of the predicted parameter.
A model is wrong when it fails to make predictions that agree with the predicted parameter to within the inherent error range of the predicted parameter.
But there is no clear parameter with known inherent error that the climate models are required to predict. Indeed, global temperature anomaly has no agreed definition which is why its historic values change almost every month.
In other words,
climate models are and can only be useless: they are not even wrong.
Richard
Richard Courtney writes: “So, each climate model emulates a different climate system.”
Richard –
Though I was involved in atmospheric science in the past (1978-82, NASA/NOAA), my interests moved on; however, I retained my skills in statistics and, more specifically, in the analysis and modeling of experimental/observational data. One of the things hammered into me as a child was that one never extrapolates from empirical data. Not ever. Very big no-no.
It seems this truth has been lost on climate modelers. This entire thread seems centered on the idea that climate models are derived theoretically, and then, when that doesn’t work, they’re “tuned”. What this boils down to is an empirical rather than a theoretical model, and that can never work.
I don’t understand why this conversation is even happening. You seem to be a person with some experience in the field, can you explain why this entire mess hasn’t already been tossed into the waste bin of history?
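A crude, purely illustrative sketch of Bartleby’s point about tuning and extrapolation follows; it is a toy polynomial fit to synthetic data, not anything taken from an actual GCM.

```python
# Toy illustration of tuning vs. extrapolation (NOT a climate model): fit a
# flexible curve to a short, noisy synthetic record and then extrapolate it.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1950, 2011)
true_trend = 0.01                                   # degC per year, purely illustrative
record = true_trend * (years - 1950) + rng.normal(0.0, 0.1, years.size)

# "Tune" a high-degree polynomial to the calibration period (x scaled to [-1, 1]).
x = (years - 1980) / 30.0
coeffs = np.polyfit(x, record, deg=9)

inside  = np.polyval(coeffs, (2005 - 1980) / 30.0)  # within the calibrated record
outside = np.polyval(coeffs, (2030 - 1980) / 30.0)  # 20 years beyond it

print(f"Fitted value in 2005:   {inside:+.2f} degC")
print(f"Extrapolated to 2030:   {outside:+.2f} degC")
print(f"Underlying trend gives: {true_trend * (2030 - 1950):+.2f} degC")
```

The fit tracks the noise inside the record; outside it, the curve is unconstrained and wanders far from the underlying trend, which is the sense in which extrapolating a tuned empirical model is hazardous.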
Bartleby:
You ask me
I have not researched that so I can only give you my opinion based on my experiences.
I think the reason is basically political. Governments fund the work and workers don’t want their careers to be defunded. None of this is science. I explain this opinion as follows.
Science is a method which seeks the closest possible approximation to ‘truth’ by seeking information which refutes existing understanding(s) then amending or rejecting existing understanding(s) in light of found information.
Pseudoscience is a method that decides existing understanding(s) to be ‘truth’ then seeks information to substantiate those understandings.
There is no empirical evidence for anthropogenic (i.e. man-made) global warming (AGW); n.b. no evidence, none, zilch, nada. In the 1990s Ben Santer claimed to have found some, but that was almost immediately revealed to be an artifact of his improper data selection. Since then, research to find some – any – evidence for the existence of AGW has been conducted worldwide at a cost of more than $2.5 billion per year.
That is ‘big business’ and it is pure pseudoscience which has been a total failure: nothing to substantiate AGW has been found.
And the politicians who provide the research funds agree there has been NO scientific advance in the field.
Theoretical climate sensitivity was estimated to be between ~2°C and ~4.5°C for a doubling of CO2 equivalent at the start, and the UN Intergovernmental Panel on Climate Change (IPCC) now says it is estimated to be 2.1°C to 4.4°C (with a mean value of 3.2°C).
But politicians promote the ‘big business’ of so-called ‘climate science’ as justification for political policies they excuse by pretending AGW is a real threat.
In these circumstances only the output of computer models is available as justification for the ‘big business’. Hence, the computer model projections are promoted as being indications of ‘truth’ about planet Earth when in reality the projections are merely not-validated functions of computer programming.
Richard
Thanks Richard, I appreciate the sentiments and of course I agree with everything you write, but it still remains a mystery that this model-based fanaticism has survived as a “science” for as long as it has when it openly admits to extrapolating from empirical data. It’s such a fundamental flaw, but it goes completely unchallenged as far as I know. Maybe it’s been challenged and hasn’t gained any traction with the media? I was hoping for insights from an “insider”.
Bartleby:
The “fundamental flaw” may be obvious to you and me but it certainly is not to laymen such as journalists.
Please remember that “extrapolating from empirical data” is the future prediction method most used by most people, and everyone who has played a ballgame knows the method works for short time scales most times.
Try explaining the “fundamental flaw” to a journalist if you want to see eyes glaze over. An exceptionally good journalist may check the matter by questioning an expert (i.e. a climate modeller) and be reassured by BS (e.g. ‘the models are of basic physics’).
The matter is “challenged” by some (e.g. Richard Lindzen, Pielke Sr., and me). Lindzen uses even stronger language than I do about it. But I would welcome advice on how to excite the mass media about it.
Richard
I think these guys are starting to understand how the coal miners felt.
Was it some Schmidt code doing the rounds lately that had the word “fudge” in it? Can’t remember
Why is anyone debating liars and cheats like Gavin S.?
He doesn’t dare:
https://wattsupwiththat.com/2015/05/19/nasas-dr-gavin-schmidt-goes-into-hiding-from-seven-very-inconvenient-questions/
Gavin’s comparison chart, I think, used a time period that is particularly favorable to the models. Starting in, say, 1970 would yield a less favorable result.
Lots of good comments in this thread. Thanks to all.
Nick Stokes said something earlier that I think needs highlighting.
But first, Willis wrote, “This is errant nonsense that can only be maintained by ruthlessly censoring opposition viewpoints.”
Years ago, when I frequented and engaged RC because I had enough intellectual curiosity to take in their advocacy side, the censoring became severe and then worse.
Not only were comments removed or blocked, some were edited by moderators to change their meaning and make them easy to vilify, while I was prohibited from responding.
What kind of people do this?
Nick Stokes wrote,
“A very large number of people have worked on these models. It is impossible to believe that they are all scoundrels. Some codes are published, and there must be many copies of the others in circulation. Massive cheating with so many people involved is unbelievable.”
No Nick, it is not impossible to believe. There are all kinds of scoundrels. Some worse than others.
But you and Gavin are exhibit A & B.
Your own behavior puts you solidly in the category you claim is impossible to believe.
Of course you claim otherwise. That’s what scoundrels do.
I attended a lecture from a well-known alarmist last Tuesday. He was pleasant, articulate and entertaining. His arguments were poor and in at least one case clearly dishonest but masked in an excellent presentation. He fits solidly among the scientists being called scoundrels here, but he was clearly viewed a hero by most in the on-campus audience.
There is no doubt most of these scientists are decent folk who truly believe they are on the right side of the argument, and view us ‘deniers’ as the scoundrels. One reason is the academic echo chamber they occupy. But since their research grants depend on defining a problem to be researched and the models are so easy to run in such a way that potential problems emerge, they not only rely on the models for their academic standing and income, they have come to truly believe their veracity…even trumping observations in some cases.
So richardscourtney is too mild in his scorn of models: They are not just useless, they have become dangerous because they have created an alternate universe the scientists live in.
We should call for withdrawal of all climate model-based papers at every turn.
Why, oh why, do they run hot?
Not because arbitrary aerosol damping is insufficient. Not because “unforced” rascals like PDO intervene.
No. It is because the fundamental assumptions of CO2 radiative forcing used are incorrect. This set of assumptions is considered sacrosanct, but the models can never be fixed until these values are “tuned”.
Here is where they do it.
http://www.cesm.ucar.edu/models/atm-cam/docs/description/node13.html#SECTION00736000000000000000
They preserve mass in the air/water boundary layer during evaporation. This encourages water feedback. A long time ago I read that early models did not warm enough, until they parameterized this layer, and then all the models which included such a function, all based on real physics of course, warmed up, and then they had to play with aerosols to turn it down.
If you set MODTRAN to 1 meter altitude, and vary only CO2 ppm, you find that there is no change in the upward radiance from 100 to 3000 ppm. At 10 meters it is 100 to 600 ppm. At 100 meters it is 230 to 4000 ppm…
Thanks for the link.
If I understand how MODTRAN works, that gives garbage. No, it gives you a single moment of atmospheric radiative physics. The atmosphere changes a lot at night during the cooling cycle, and a single sample of an average atmosphere is worthless. What needs to be done is a run for each change in temperature as it cools overnight, as relative humidity changes. Doing this is on my long list of things I’d like to do, but I would rather talk someone else into doing it so I can get back to rewriting my report code to do all the temperature math as a vector instead of a field.
That is interesting. One of the other unstated attributes of water vapor is that, for near-surface conditions, water vapor is a negative feedback for temperature increase. I do mean that as a feedback, not in the sense of the dissemblers who call gain or attenuation “feedback.” The physics is the Stokes-Einstein relation for diffusing fluids of differing specific gravity and viscosity. It is relevant for microfluidics and nanofluidics.
Looks like that bulk property parameterization strikes again.
Everyone who thinks that “temperatures” are important to deciding anything about the IR/Energy balance of the atmosphere, SHOULD HAVE THEIR PHD REMOVED. Without evaluating the HUMIDITY, and calculating the energy content per volume of the air, all other “average temperature” garbage, is just that. GARBAGE. Worthless.
Enthalpy follows temperature pretty well. I include that, dry enthalpy, and wet (just the energy from water vapor), plus clear-sky surface solar, in the beta reports here: http://sourceforge.net/projects/gsod-rpts/
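For anyone curious how much difference the humidity term makes, here is a minimal sketch using standard psychrometric formulas; the pressure and the example conditions are illustrative assumptions of mine, not values taken from the linked reports.

```python
# Moist-air enthalpy depends on humidity as well as temperature, so two air
# samples at the same temperature can carry quite different amounts of energy.
# Standard psychrometric formulas; pressure and example numbers are illustrative.
import math

def saturation_vapour_pressure_hpa(t_c):
    """Magnus approximation for saturation vapour pressure over water (hPa)."""
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

def mixing_ratio(t_c, rh_percent, p_hpa=1013.25):
    """Water-vapour mixing ratio (kg water per kg dry air)."""
    e = (rh_percent / 100.0) * saturation_vapour_pressure_hpa(t_c)
    return 0.622 * e / (p_hpa - e)

def moist_enthalpy_kj_per_kg(t_c, rh_percent, p_hpa=1013.25):
    """Enthalpy of moist air per kg of dry air: dry term + water-vapour term."""
    w = mixing_ratio(t_c, rh_percent, p_hpa)
    dry = 1.006 * t_c                  # sensible heat of dry air
    wet = w * (2501.0 + 1.86 * t_c)    # latent heat + sensible heat of the vapour
    return dry + wet

# Same 30 degC temperature, very different energy content:
print(moist_enthalpy_kj_per_kg(30.0, 20.0))   # dry, desert-like air  (~44 kJ/kg)
print(moist_enthalpy_kj_per_kg(30.0, 90.0))   # humid, tropical-like air (~92 kJ/kg)
```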
Gavin is knowingly misrepresenting. Reprehensible. The CMIP5 experimental design was published in 2009 (Taylor) and finalized in 2011 (Meehl). Available online at cmip.pcmdi.llnl.gov. The second mandatory run is a 30-year hindcast from YE 2005. The parameterizations were tuned to best hindcast this period. Curry is exactly correct.
See this comment: https://wattsupwiththat.com/2017/04/26/in-an-attempt-to-discredit-judith-curry-gavin-at-realclimate-shows-how-bad-climate-models-really-are/#comment-2485952
Did not see it before posting. I skipped to the bottom and provided the reference to the ‘experimental design’. (Running models are ironically not ‘experiments’ in the real world, only in the climate-science alternate reality.) Your reference is excellent direct evidence. TY
I’m in suspense about what Nick will try to respond. It’s very clear that the argumentation (“I’ll show that the models are not in line for 1950…2010”) is flawed, because this is a strawman argument. Try better, Nick!
“I’m in suspense what Nick shall try to respond.”
Just the obvious. They are not tuning to GMST, but to SST. But you should read the associated methodology. They don’t do a full run. They just compute for the relevant period, and do the comparison. In fact, as they say, they then do some other modifications, and often don’t come back to check that they still have correspondence over 1975-2005. That wasn’t the point. It’s a step in a bootstrapping process.
Nick, thanks. They tune to the SST of 1976…2005, there is no doubt. And what does Gavin’s figure for the span 1950…2010 say about the evidence for Curry’s argument? Nothing at all, IMO.
Nick apparently is misrepresenting or making claims he knows have dubious assumptions. He stated upstream
“You should understand what it is that they’re doing! Mauritsen aren’t developing the model, they’re running it.”
“I understand it very well. Unlike people here, I have actually done tuning, for CFD programs.”
If he understands and has done tuning, etc., he knows or should know that CFD software was developed using literally tens of thousands of independent measurements of phenomena. There is but one x for the weather.
I can’t believe that Gavin and friends let the following link stand which criticizes Gavin:
http://www.wsj.com/video/opinion-journal-how-government-twists-climate-statistics/80027CBC-2C36-4930-AB0B-9C3344B6E199.html
Makes you wonder if they’d let this one stand:
http://www.wsj.com/video/opinion-journal-the-climate-change-debates-you-never-hear-about/64410386-C38D-4902-A53A-1E73B0468D4C.html
As Michael Crichton has said (State of Fear), models can never be proof. They’re models: a crapshoot at worst and educated guesses at best.
Linnea quotes: “models can never be proof.”
And of course Crichton is likely right about that, but predictive models are useful and can demonstrate the theorist’s understanding. These models aren’t predictive though and that’s the crux of the problem; the developers seem unwilling to acknowledge that.
Gavin has freely admitted that models fail badly on continental and regional scales. Even if the global average temperature tracked well between models and observations, it is the sum of failures. In what scientific realm is that justifiable?
This seems to depend upon the meaning of “tuned to the period of interest”. Dr Schmidt suggests a very narrow interpretation that this means fitting the model to temperatures over this period. Dr Curry’s interpretation corresponds to my (limited) understanding of tuning – adjusting parameterizations of subgrid and other processes, using evolving data for variables mostly not temperatures (eg aerosols, humidity etc). In fact, Dr Curry seems to get to the issue of wholly inadequate GCM validation. When models are being updated all the time, there can be no meaningful out-of-sample evaluation (and no model-based attribution).
BasicStats writes: “and no model-based attribution”
And that’s really what the entire debate is about I think. Why this isn’t obvious to everyone participating escapes me completely. If we build models to tell us what we want to hear, and those models do that, we’ve learned absolutely nothing about the world.
‘Shell games – you got to learn how to play – Shell games’.
Sung to the tune of ‘Foreigner’.
If you want to learn more from (sur)Real Climate, read the Borehole part of the comments. If it’s still around, that is where all the real comments are. Much better than the illogical mess in the posted comments, as this post so ably documents.
Thanks to all for so many wonderful comments, keep them coming, I learned a lot. The intellectual vibrancy of this site is what keeps me coming back. I was amazed at the flow of comments, and I kind of think that part of what happened here is that this group of commenters did the back-and-forth discussion that should have taken place at RC, if they didn’t always prevent that kind of thing from occurring.
A special thank you to John Bills, David Middleton and Richard Courtney for helping assemble in one thread several great models-to-observations comparisons, and also to Rud Istvan, micro6500 and others for links to other sites and posts, several new to me.
Every new AR, we get new spaghetti graphs and, superimposed on them, observations that bump along moving towards the bottom of the envelope of the spaghetti graph before finally punching out through the bottom of the envelope. Next AR we do it all over again, and the last batch of spaghetti goes in the memory hole. Maybe in a big El Nino temperature spike the observations jump up near the middle of the spaghetti, and you can see an example of that above, but then the observations drop right back down again. How the practitioners in this field continue to think that’s okay, and that these spaghetti graphs constitute some kind of accurate forecast, how they vociferously defend these projections, and why no one managing or funding this reins them in, baffles the mind.
And finally, a special shout out to Mosher (a great career as a Literature PhD wasted, that one) for a) agreeing that the models run hot and b) saying he thinks that’s a good thing. Here I think that, as a Berkeley Earth team member, he speaks for the sentiments of the alarmist climate establishment. Scaring the rest of us with exaggerated projections isn’t a bug, it’s a feature. The mask slips.
I seem to recall a paper describing problems assigning the droplet size at initial formation of water from vapour. The modelers varied the size, somewhere from 2-10 µm I think, but the best-fit model was at an unrealistic size… sounded like tuning to me. I can probably drag out that paper…
The circularity critique can also be formulated at a more macro level than what Curry articulates here. As I put it in a recent comment:
“The IPCC’s method for ‘estimating’ water vapor feedbacks is to ASSUME that all late 20th century warming was caused by CO2, then calculate how many times the tiny CO2 forcing effect would have to be multiplied up by feedback effects to have created the observed warming. Purely circular scientific fraud. Their claim that CO2 warming effects are strong enough to have caused recent warming is based entirely on the assumption that recent warming WAS caused by CO2.”
Curry is taking a narrower view, criticizing the consensoids for calibrating their models over the same period the models claim to explain (late 20th century warming). She is referring to the same estimation scheme, but we are offering different critiques of it.
The estimation scheme starts with a bunch of highly contentious assumptions about forcings, assumptions which leave CO2 as the only possible explanation for late 20th century warming (Curry mentions the omission of indirect solar effects), leading to the estimation that these CO2-warming effects must be super-powerful (getting multiplied up as much as several times by water vapor feedback effects), if they are the only thing that could have caused the observed warming.
Not sure that what Curry is critiquing is actually circularity. If the model does yield a good fit to the data over the entire calibration period, that would provide some evidence for it (keeping in mind Von Neumann’s warning that with a few extra degrees of freedom he could wiggle an elephant’s trunk, while climate models have endless degrees of freedom and parameterizations up the wazoo). The evidence would be better if the models could make a prediction that is borne out, but as this post points out, they are running dramatically hot. If they have not already been completely falsified by The Pause, they are on the verge of it. The weakness of the consensus position here is not from logical circularity but from empirical falsification.
My critique of circularity is based on the larger shape of the consensus argument. In order to support their grand claim that late 20th century warming was caused mostly by human increments to atmospheric CO2, they assume, in their claims about forcings, that it was caused by CO2, then they derive their estimate of water vapor feedback effects from this assumption.
That is a logically circular argument. If they were not being circular, they would estimate water vapor feedback effects from the direct evidence about water vapor feedbacks. Is the increase in CO2 causing an upper tropospheric hotspot, as positive water vapor feedbacks would produce? No. Is warming accompanied by constant or rising relative humidity? No. Lindzen, Eschenbach, etcetera? No, no, no.
A non-circular analysis would then take the discrepancy between what water vapor feedbacks are directly estimated to be and what they would have to be for the claimed forcings to have created the observed warming and use this discrepancy to estimate how far off the claimed total forcing is from the actual total forcing, then try to figure out how to account for that discrepancy. It could be in the forcing estimates. It could be in the direct estimate of the feedback, but these have to both be estimated directly from their own available evidence. Using one to estimate the other is using circularity to jump over the discrepancy. Not logically allowed.
The IPCC shortcuts the whole scientific process of estimating the discrepancy and trying to account for it, replacing it with an obvious circularity, using their assumption that CO2 has been the dominant forcing to justify their conclusion that CO2 has been the dominant cause of warming. The two are the same thing, translated only by the simple warming = forcing x feedback formulation that the IPCC employs.
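To make that bookkeeping concrete, here is a minimal sketch of the back-calculation being described, with rough illustrative numbers of my own (approximate 1950 and 2010 CO2 concentrations, the ~0.8ºC warming quoted in the post, the standard 5.35 ln(C/C0) forcing expression, and a ~0.3ºC per W/m² no-feedback response). It shows the arithmetic shape of the argument, not the IPCC’s actual procedure.

```python
# Back out the feedback multiplier implied by attributing ALL of the observed
# warming to CO2. All inputs are rough, illustrative values.
import math

co2_start_ppm = 310.0      # ~1950, illustrative
co2_end_ppm   = 390.0      # ~2010, illustrative
observed_warming = 0.8     # degC over the same period, per the figure quoted in the post

# Standard simplified expression for CO2 radiative forcing (Myhre et al. 1998).
delta_f = 5.35 * math.log(co2_end_ppm / co2_start_ppm)   # W/m^2

# No-feedback (Planck-only) response, roughly 0.3 degC per W/m^2.
no_feedback_warming = 0.3 * delta_f

implied_multiplier = observed_warming / no_feedback_warming
print(f"CO2 forcing:             {delta_f:.2f} W/m^2")
print(f"No-feedback warming:     {no_feedback_warming:.2f} degC")
print(f"Implied feedback factor: {implied_multiplier:.1f}x")
```

With these inputs the implied multiplier comes out at roughly 2x, which is the step the comment objects to: the size of the feedback is inferred from the attribution assumption rather than estimated from independent evidence.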
Curry may well have been meaning to allude to the same thing. She only mentioned circularity parenthetically, but it does need elaboration. There is a whole normal scientific process that is being elided, short circuited, omitted, by the logical circularity of the IPCC argument.
“In order to support their grand claim that late 20th century warming was caused mostly by human increments to atmospheric CO2, they assume, in their claims about forcings, that it was caused by CO2, then they derive their estimate of water vapor feedback effects from this assumption …”.
===========================
Exactly: they first use a premise to prove a conclusion, then use the conclusion to prove the premise. The amazing thing is that they fail to recognise it — or do they?
Mosher, your airplane fuel model shows you use contemptible science – it’s not science, it’s not math, it’s religion, and you’re trying to save me. There could not be a clearer post on fraud and the condoning of it.
I ran a normal distribution and the models definitely run “hot,” even compared to GISTEMP…
All of the temperature series fall within 1 standard deviation of the model mean, which they should, because these are historical model runs. HadCRUT4 and Cowtan & Way barely fall within 1 standard deviation; 75-80% of the models are “hotter.” In terms of a “hindcast,” the models aren’t very good.
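For readers who want to repeat that kind of check without downloading the model archive, here is a minimal sketch that treats the ensemble spread as roughly normal. It uses the 0.84 ± 0.45ºC (95% envelope) and the ~0.8ºC observed warming quoted earlier in the post rather than the commenter’s own series, so the percentage will differ from the 75-80% figure above.

```python
# Where does an observed 1950-2010 trend sit in the CMIP5 trend distribution?
# Assumes the spread of model trends is roughly normal, with the mean and 95%
# envelope quoted earlier in the post; swap in your preferred observed series.
from scipy.stats import norm

model_mean = 0.84              # degC, ensemble-mean 1950-2010 trend (quoted above)
model_sigma = 0.45 / 1.96      # convert the quoted 95% envelope to one standard deviation
observed_trend = 0.80          # degC, approximate observed warming over the same period

z = (observed_trend - model_mean) / model_sigma
fraction_hotter = 1.0 - norm.cdf(z)

print(f"Observed trend sits {z:+.2f} sigma from the model mean")
print(f"Share of models with a larger trend: {100 * fraction_hotter:.0f}%")
```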
Once upon a time, when I was a junior Army intelligence officer stationed in Korea, I was sent to a 155mm howitzer battery to seek their advice on answering a question from the US Forces Korea commander. He wanted to know just how accurate/effective the North Korean artillery was likely to be if employed against our forces. Their tube artillery outnumbered ours by something like 35 to one, so you could understand his concern. I had tons of data on NKA artillery practices and sat down with the Fire Direction Center chief and his team to compare and contrast theirs and ours. We spent three days going over everything and concluded that the CEP (circular error probable, a radius) of their artillery fire was roughly the same as ours. Not what the General, and my own commander, wanted to hear. (Lots of other things impact effectiveness, but the weapons’ performance was roughly similar.)
Why is that problem applicable to the climate data discussion? The FDC chief explained it very simply: when you come down to it, all we are doing is applying double-precision arithmetic operations to highly estimated data. All I really know with any accuracy is the outside air temperature and the pressure at my guns when we pull the lanyard. I can only estimate those and similar factors en route to the target. Map coordinates (pre-GPS days) are off 80-120 meters horizontally and up to twenty meters vertically. Forward observer range estimates and azimuths vary 10% or more. Powder performance varies by 3-5% between bags. Ogive shapes vary, detonator reaction times vary. Okay, you see the problem. All of these unconstrained variables, and there are dozens more, make the “perfect” solution impossible.
GPS and laser rangefinders make those measurements a little more accurate but friendly fire incidents still occur. And weather still surprises us on occasion.
Gavin Schmidt complains that Judith Curry “…fails to give any quantitative argument to support her contention that human drivers are not the dominant cause of recent trends.” Here, Gavin, I present such a quantitative argument. It is your boys who were involved, and they are hardly the human drivers you are looking for. The incident I have in mind happened about 2008. I was working on my book “What Warming” and noticed that temperature in the eighties and nineties was flat, what we now would call a hiatus. It was an 18-year stretch of temperature. On top of it was a wave train created by ENSO, composed of five El Nino peaks with La Nina valleys in between. I put a yellow dot in the middle of each line connecting an El Nino peak with a neighboring La Nina valley. These dots lined up in a straight horizontal line, which tells us two things. First, the ENSO oscillation was not warming up the world as an idiotic pseudo-scientist has claimed; and second, the wave train was on level ground. I used satellite data from both UAH and RSS and made it part of figure 15 in my book. But before it went to press, this temperature section was mysteriously transmogrified into a warming curve whose temperature rose at the rate of 0.06 degrees Celsius per decade. Worse yet, they extended this fake warming to the twenty-first century that followed, in a desire to create more warming. I protested but was ignored. The only thing I could do under the circumstances was to put a notice about it into the preface of my book. That, too, was ignored, and the fake warming even now is part of their official temperature curve. That is where the matter would have rested, but luckily one of my readers unearthed the following NASA document from 1997:
“…. Unlike the surface based temperatures, global temperature measurements of the earth’s lower atmosphere obtained from satellites reveal no definitive warming trend over the past two decades. The slight trend that is in the data appears to be downward. The largest fluctuations in the satellite temperature data are not from any man-made activity, but from natural phenomena such as large volcanic eruptions from Mt. Pinatubo, and from El Nino. So the programs which model global warming in a computer say the temperature of the Earth’s lower atmosphere should be going up markedly, but actual measurements of the temperature of the lower atmosphere reveal no such pronounced activity.”
This leaves no doubt that originally there was no warming and that the current warming in official temperature curves is a fake. At the time this fake warming was created, James Hansen was still in charge of NASA-GISS. He transferred out to Columbia University and Gavin Schmidt took over. Schmidt is well aware of my objections but refuses to do anything about it. I found a clue to his co-conspirators when I discovered that NASA-GISS, NOAA, and the Met Office in the UK had all been subject to computer cleaning that, unbeknownst to them, left identical sharp spikes on top of that section of their temperature curves. The computer cleaning would only make sense if you are trying to hide something, like incompatible data. All this is sufficient to show that what Gavin Schmidt is complaining about in Judith Curry is wrong: the quantitative basis is there. It should be sufficient to justify an investigation into his shadowy dealings with global temperature curves. A large amount of public money may depend upon it.