Escape from model land

Reposted from Dr. Judith Curry’s Climate Etc.

Posted on October 29, 2019 by curryja

by Judith Curry

“Letting go of the phantastic mathematical objects and achievables of model-land can lead to more relevant information on the real world and thus better-informed decision-making.” – Erica Thompson and Lenny Smith

The title and motivation for this post comes from a new paper by Erica Thompson and Lenny Smith, Escape from Model-Land. Excerpts from the paper:

“Model-land is a hypothetical world (Figure 1) in which mathematical simulations are evaluated against other mathematical simulations, mathematical models against other (or the same) mathematical model, everything is well-posed and models (and their imperfections) are known perfectly.”

“It also promotes a seductive, fairy-tale state of mind in which optimising a simulation invariably reflects desirable pathways in the real world. Decision-support in model-land implies taking the output of model simulations at face value (perhaps using some form of statistical processing to account for blatant inconsistencies), and then interpreting frequencies in model-land to represent probabilities in the real-world.”

“It is comfortable for researchers to remain in model-land as far as possible, since within model-land everything is well-defined, our statistical methods are all valid, and we can prove and utilise theorems. Exploring the furthest reaches of model-land in fact is a very productive career strategy, since it is limited only by the available computational resource.”

“For what we term “climate-like” tasks, the realms of sophisticated statistical processing which variously “identify the best model”, “calibrate the parameters of the model”, “form a probability distribution from the ensemble”, “calculate the size of the discrepancy” etc., are castles in the air built on a single assumption which is known to be incorrect: that the model is perfect. These mathematical “phantastic objects”, are great works of logic but their outcomes are relevant only in model-land until a direct assertion is made that their underlying assumptions hold “well enough”; that they are shown to be adequate for purpose, not merely today’s best available model. Until the outcome is known, the ultimate arbiter must be expert judgment, as a model is always blind to things it does not contain and thus may experience Big Surprises.”

The Hawkmoth Effect

The essential, and largely unrecognized, problem with global climate models is model structural uncertainty/error, which is referred to by Thompson and Smith as the Hawkmoth Effect. A poster by Thompson and Smith provides a concise description of the Hawkmoth effect:

“The term “butterfly effect”, coined by Ed Lorenz, has been surprisingly successful as a device for communication of one aspect of nonlinear dynamics, namely, sensitive dependence on initial conditions (dynamical instability), and has even made its way into popular culture. The problem is easily solved using probabilistic forecasts.

“A non-technical summary of the Hawkmoth Effect is that “you can be arbitrarily close to the correct equations, but still not be close to the correct solutions”.

“Due to the Hawkmoth Effect, it is possible that even a good approximation to the equations of the climate system may not give output which accurately reflects the future climate.”
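
To make the butterfly effect concrete, here is a minimal Python sketch (my own illustration, not code from Thompson and Smith) iterating the chaotic logistic map from two initial conditions that differ by about one part in a billion; within a few dozen steps the two trajectories are unrelated.

```python
# Minimal illustration of sensitive dependence on initial conditions
# (the "butterfly effect") using the chaotic logistic map.
# Illustrative sketch only -- not code from Thompson & Smith.

def logistic_trajectory(x0, r=3.9, steps=60):
    """Iterate x_{n+1} = r * x_n * (1 - x_n) and return the trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.400000000)   # reference initial condition
b = logistic_trajectory(0.400000001)   # perturbed by ~1 part in 10^9

for n in (0, 10, 20, 30, 40, 50):
    print(f"step {n:2d}: |difference| = {abs(a[n] - b[n]):.6f}")
# The difference grows from ~1e-9 to order 1: a tiny initial-condition error
# is amplified until the two trajectories are unrelated.
```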

From their (2019) paper:

“It is sometimes suggested that if a model is only slightly wrong, then its outputs will correspondingly be only slightly wrong. The Butterfly Effect revealed that in deterministic nonlinear dynamical systems, a “slightly wrong” initial condition can yield wildly wrong outputs. The Hawkmoth Effect implies that when the mathematical structure of the model is only “slightly wrong”, then even the best formulated probability forecasts will be wildly wrong in time. These results from pure mathematics hold consequences not only for the aims of prediction but also for model development and calibration, ensemble interpretation and for the formation of initial condition ensembles.”

“Naïvely, we might hope that by making incremental improvements to the “realism” of a model (more accurate representations, greater details of processes, finer spatial or temporal resolution, etc.) we would also see incremental improvement in the outputs. Regarding the realism of short-term trajectories, this may well be true. It is not expected to be true in terms of probability forecasts. The nonlinear compound effects of any given small tweak to the model structure are so great that calibration becomes a very computationally-intensive task and the marginal performance benefits of additional subroutines or processes may be zero or even negative. In plainer terms, adding detail to the model can make it less accurate, less useful.”

JC note: This effect relates to the controversy surrounding the very high values of ECS in the latest CMIP6 global model simulations (see section 5 in What’s the worst case?), which is largely related to incorporation of more sophisticated parameterizations of cloud-aerosol interactions.
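
A companion sketch of the Hawkmoth point (again my own toy illustration, using a small change to the functional form of the logistic map as a stand-in for model-structure error): even when started from the exact initial condition, a model whose equations are within about 1% of the true system's loses track of the true trajectory, and its ensemble frequencies need not match the true ones.

```python
import numpy as np

# Toy illustration (mine, not Thompson & Smith's) of the Hawkmoth effect:
# a model whose equations are only "slightly wrong" in structure, started
# from the exact initial condition, still loses track of the true system,
# and its ensemble (probability) forecast can also be off.

r, eps = 3.9, 0.01

def true_map(x):        # stand-in for the real system
    return r * x * (1.0 - x)

def model_map(x):       # structurally "slightly wrong" model (~1% change)
    return r * x * (1.0 - x) * (1.0 + eps * np.sin(5.0 * np.pi * x))

# 1) Single trajectory, identical initial condition.
xt = xm = 0.4
for n in range(1, 61):
    xt, xm = true_map(xt), model_map(xm)
    if n % 20 == 0:
        print(f"step {n}: |true - model| = {abs(xt - xm):.4f}")

# 2) Ensemble ("probability") forecast after 100 steps.
rng = np.random.default_rng(0)
ens = rng.uniform(0.3, 0.5, size=10_000)
et, em = ens.copy(), ens.copy()
for _ in range(100):
    et, em = true_map(et), model_map(em)
print("P(x > 0.9): true system =", (et > 0.9).mean(), ", model =", (em > 0.9).mean())
# Being arbitrarily close to the correct equations does not guarantee being
# close to the correct solutions, or to the correct forecast frequencies
# (how different they are depends on the details).
```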

Fitness for purpose

From the Thompson and Smith paper:

“How good is a model before it is good enough to support a particular decision – i.e., adequate for the intended purpose (Parker, 2009)? This of course depends on the decision as well as on the model, and is particularly relevant when the decision to take no action at this time could carry a very high cost. When the justification of the research is to inform some real-world time-sensitive decision, merely employing the best available model can undermine (and has undermined) the notion of the science-based support of decision making, when limitations like those above are not spelt out clearly.”

“Is the model used simply the “best available” at the present time, or is it arguably adequate for the specific purpose of interest? How would adequacy for purpose be assessed, and what would it look like? Are you working with a weather-like task, where adequacy for purpose can more or less be quantified, or a climate-like task, where relevant forecasts cannot be evaluated fully? How do we evaluate models: against real-world variables, or against a contrived index, or against other models? Or are they primarily evaluated by means of their epistemic or physical foundations? Or, one step further, are they primarily explanatory models for insight and understanding rather than quantitative forecast machines? Does the model in fact assist with human understanding of the system, or is it so complex that it becomes a prosthesis of understanding in itself?”

“Using expert judgment, informed by the realism of simulations of the past, to define the expected relationship of model with reality and critically, to be very clear on the known limitations of today’s models and the likelihood of solving them in the near term, for the questions of interest.”

My report Climate Models for Laypersons addressed the issue of fitness for purpose of global climate models for attribution of 20th century global warming:

“Evidence that the climate models are not fit for the purpose of identifying with high confidence the relative proportions of natural and human causes to the 20th century warming is as follows:

  • substantial uncertainties in equilibrium climate sensitivity (ECS)
  • the inability of GCMs to simulate the magnitude and phasing of natural internal variability on decadal-to-century timescales
  • the use of 20th century observations in calibrating/tuning the GCMs
  • the failure of climate models to provide a consistent explanation of the early 20th century warming and the mid-century cooling.”

From my article in the CLIVAR Newsletter:

“Assessing the adequacy of climate models for the purpose of predicting future climate is particularly difficult and arguably impossible. It is often assumed that if climate models reproduce current and past climates reasonably well, then we can have confidence in future predictions. However, empirical accuracy, to a substantial degree, may be due to tuning rather than to the model structural form. Further, the model may lack representations of processes and feedbacks that would significantly influence future climate change. Therefore, reliably reproducing past and present climate is not a sufficient condition for a model to be adequate for long-term projections, particularly for high-forcing scenarios that are well outside those previously observed in the instrumental record.”

With regards to 21st century climate model projections, Thompson and Smith make the following statement:

“An example: the most recent IPCC climate change assessment uses an expert judgment that there is only approximately a 2/3 chance that the actual outcome of global average temperatures in 2100 will fall into the central 90% confidence interval generated by climate models. Again, this is precisely the information needed for high-quality decision support: a model-based forecast, completed by a statement of its own limitations (the Probability of a “Big Surprise”).”

While the above statement is mostly correct, the IPCC does not provide a model-based forecast, since they admittedly ignore future volcanic and solar variability.
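
One way to read that expert judgment numerically is as a back-of-envelope sketch (mine, assuming a normal predictive distribution purely for illustration; the IPCC makes no such assumption): if the models' central 90% interval is given only a 2/3 chance of containing the outcome, the implied 'real' spread is roughly 1.7 times the model spread, and the implied probability of a Big Surprise is 1/3 rather than the nominal 0.1.

```python
from scipy.stats import norm

# Back-of-envelope sketch (mine; assumes a normal predictive distribution
# purely for illustration): if the models' central 90% interval is judged
# to have only a 2/3 chance of containing the real outcome, how much wider
# is the implied "real" uncertainty than the model spread?

z_model = norm.ppf(0.95)               # half-width of a 90% interval, in model sigmas
z_real = norm.ppf(0.5 + (2 / 3) / 2)   # same half-width covers only 2/3 of the real pdf

print(f"implied sigma_real / sigma_model = {z_model / z_real:.2f}")   # ~1.7
print(f"implied P(Big Surprise) = {1 - 2 / 3:.2f} vs the nominal 0.10")
```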

Personally I think that the situation with regards to 21st century climate projections is much worse. From Climate Models for Laypersons:

“The IPCC’s projections of 21st century climate change explicitly assume that carbon dioxide is the control knob for global climate. The CMIP climate model projections of the 21st century climate used by the IPCC are not convincing as predictions because of:

  • failure to predict the warming slowdown in the early 21st century
  • inability to simulate the patterns and timing of multidecadal ocean oscillations
  • lack of account for future solar variations and solar indirect effects on climate
  • neglect of the possibility of volcanic eruptions that are more active than the relatively quiet 20th century
  • apparent oversensitivity to increases in greenhouse gases”

With regards to fitness for purpose of global/regional climate models for climate adaptation decision making, there are two particularly relevant articles:

“When a long-term view genuinely is relevant to decision making, much of the information available is not fit for purpose. Climate model projections are able to capture many aspects of the climate system and so can be relied upon to guide mitigation plans and broad adaptation strategies, but the use of these models to guide local, practical adaptation actions is unwarranted. Climate models are unable to represent future conditions at the degree of spatial, temporal, and probabilistic precision with which projections are often provided which gives a false impression of confidence to users of climate change information.”

Pathways out of model land and back to reality

Thompson and Smith provide the following criteria for identifying whether you are stuck in model land with a model that is not adequate for purpose:

“You may be living in model-land if you…

  • try to optimize anything regarding the future;
  • believe that decision-relevant probabilities can be extracted from models;
  • believe that there are precise parameter values to be found;
  • refuse to believe in anything that has not been seen in the model;
  • think that learning more will reduce the uncertainty in a forecast;
  • explicitly or implicitly set the Probability of a Big Surprise to zero; that there is nothing your model cannot simulate;
  • want “one model to rule them all”;
  • treat any failure, no matter how large, as a call for further extension to the existing modelling strategy.”

“Where we rely more on expert judgment, it is likely that models with not-too-much complexity will be the most intuitive and informative, and reflect their own limitations most clearly.”

“In escaping from model-land do we discard models completely? Rather, we aim to use them more effectively. The choice is not between model-land or nothing. Instead, models and simulations are used to the furthest extent that confidence in their utility can be established, either by quantitative out-of-sample performance assessment or by well-founded critical expert judgment.”

Thompson and Smith focus on the desire to provide probabilistic forecasts to support real-world decision making, while at the same time providing some sense of uncertainty/confidence about these probabilities. IMO once you start talking about the ‘probability of the probabilities,’ then you’ve lost the plot in terms of anything meaningful for decision making.

Academic climate economists seem to want probabilities (with or without any meaningful confidence in them), as do some in the insurance sector and the broader financial sector. Decision makers that I work with seem less interested in probabilities. Those in the financial sector want a very large number of scenarios (including a plausible worst case) and are less interested in actual probabilities of weather/climate outcomes. In non-financial sectors, they mostly want a ‘best guess’ with a range of uncertainty (nominally the ‘very likely’ range); this is to assess to what degree they should be concerned about local climate change relative to other concerns.

As argued in my paper Climate Change: What’s the Worst Case?, model inadequacy and an inadequate number of simulations in the ensemble preclude producing unique or meaningful probability distributions from the frequency of model outcomes of future climate. I further argued that statistical creation of ‘fat tails’ from limited information about a distribution can produce very misleading information. I argued instead for creating a possibility distribution of scenarios, which can be generated in a variety of ways (including global climate models), with a ‘necessity’ function describing the level and type of justification for each scenario.
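
A toy illustration of the ‘fat tails from limited information’ point (my own sketch, with made-up numbers): fit the same dozen data points with a thin-tailed and a fat-tailed distribution and compare the probability each assigns to an extreme outcome; the answer is driven by the assumed tail shape, not by the data.

```python
import numpy as np
from scipy import stats

# Toy sketch (mine, made-up numbers): with limited information, the assumed
# tail shape -- not the data -- largely determines the probability assigned
# to an extreme outcome.

rng = np.random.default_rng(1)
sample = rng.normal(loc=3.0, scale=1.0, size=12)   # a dozen hypothetical "estimates"

mu, sd = sample.mean(), sample.std(ddof=1)
extreme = mu + 5 * sd                              # an "extreme outcome" threshold

p_thin = stats.norm.sf(extreme, loc=mu, scale=sd)      # thin-tailed assumption
p_fat = stats.t.sf(extreme, df=3, loc=mu, scale=sd)    # fat-tailed assumption (t, 3 dof)

print(f"P(outcome > {extreme:.1f}) assuming normal tails      : {p_thin:.1e}")
print(f"P(outcome > {extreme:.1f}) assuming Student-t(3) tails: {p_fat:.1e}")
# Same twelve data points, tail probabilities differing by orders of magnitude:
# the "fat tail" is a modelling choice the data cannot pin down.
```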

Expert judgment is unavoidable in dealing with projections of future climates, but expert judgment on model adequacy for purpose is arguably more associated with model ‘comfort’ than with any rigorous assessment (see my previous post Culture of building confidence in climate models).

The ‘experts’ are currently stymied by the latest round of CMIP6 climate model simulations, where about half of them (so far) have equilibrium climate sensitivity values exceeding 4.7C – well outside the bounds of the long-established likely range of 1.5-4.5C. It will be very interesting to see how this plays out – do you toss out the climate model simulations, or the long-standing range of ECS values that is supported by multiple lines of evidence?

Application of expert judgment to assess the plausibility of future scenario outcomes, rather than assessing the plausibility of climate model adequacy, is arguably more useful.

Alternative scenario generation methods

An earlier paper by Smith and Stern (2011) argues that there is value in scientific speculation on policy-relevant aspects of plausible, high-impact scenarios, even though we can neither model them realistically nor provide a precise estimate of their probability. A surprise occurs if a possibility that had not even been articulated becomes true. Efforts to avoid surprises begin with ensuring there has been a fully imaginative consideration of possible future outcomes.

For examples of alternative scenario generation that are of particular relevance to regional climatic change (which is exceptionally poorly simulated by climate models), see these previous posts:

Historical and paleoclimate data, statistical forecast models, climate dynamics considerations and simple climate models can provide the basis for alternative scenario generation.

Given the level and types of uncertainty, efforts to bound the plausible range of future scenarios make more sense for decision making than assessing the probability of probabilities and statistically manufacturing ‘fat tails.’

Further, this approach is a heck of a lot less expensive than endless enhancements to climate models to be run on the world’s most powerful supercomputers that don’t address the fundamental structural problems related to the nonlinear interactions of two chaotic fluids.

Kudos to Thompson and Smith for their insightful paper and drawing attention to this issue.

Comments
Mike McHenry
October 30, 2019 6:35 pm

A non-scientist family member said to me recently about climate change, “But scientists say…” I said no, computer models say, and climate scientists just regurgitate what the models say.

Loydo
Reply to  Mike McHenry
October 31, 2019 12:56 am

No, scientists say because that is what is observed.
[chart image]

(I see that you started the off topic statement cascade that has nothing to do with the posted article, then you persist in it over and over. You should get on topic or your comments will get snipped out) SUNMOD

MarkH
Reply to  Loydo
October 31, 2019 2:55 am

Did you start that chart in the late 1970s for any particular reason? Why not go back to the early 20th century?

That cherry has been picked too many times to count.

Loydo
Reply to  MarkH
October 31, 2019 5:18 am

If you’d been bothered to do even the most cursory of checks you’d have realised it’s all the satellite data. So instead of blindly rejecting anything that hints at challenging your prejudice, maybe you should find out the facts. You’ll see how the volume – a metric many consider more important because it is a thermal buffer to faster warming – has dropped even more precipitously. In other words, what remains is fragile, mostly 1st year ice, highly prone to further area reduction.

Reply to  Loydo
October 31, 2019 7:33 am

Presenting some random graph of sea ice delta % from a baseline period has nothing to do with the discussion of climate model skill. Loydo is just throwing out random crap like an ignorant troll with no technical grasp of the discussion.

Moa
Reply to  Loydo
October 31, 2019 7:32 pm

Loydo, what you are showing is completely consistent with the natural warming that has been going on since the end of the Little Ice Age. It is not evidence for Anthropogenic Global Warming.

There is a test that discriminates between natural and anthropogenic effects. This is the warming differential between the Lower Tropical Troposphere (LTT) and the surface. The faster the surface warms relative to the LTT, the more likely it is that the effect is natural (which is consistent with the fact that we see correlated warming in other solar system bodies).

Here is the Danish graph of ice cover for your convenience (NOAA is far too corrupted by Lysenkoism and targeted grants to be objective):
http://ocean.dmi.dk/arctic/icecover.uk.php

If you think ice cover is a litmus test of AGW, it appears you don’t know the basics of the claims made by the IPCC. We’d be happy to get you up to speed on that and the observational evidence that falsifies the IPCC AGW hypothesis – if you are interested.

Loydo
Reply to  Loydo
October 31, 2019 9:45 pm

Moa: “We’d be happy to get you up to speed on that and the observational evidence that falsifies the IPCC AGW hypothesis”

The data at your link doesn’t refute the AGW hypothesis – it shows ice currently at or close to its lowest.

Jake
Reply to  Loydo
October 31, 2019 5:51 am

Hey Loydo, what would the slope of that graph look like from 1700-present, since we’ve emerged from the LIA… or better still, from 1700-1850, before the internal combustion engine?
C’mon man ……

Reply to  Loydo
October 31, 2019 6:49 am

You have to have more than stamp collecting and numerology to have a scientific theory.
What you have there is local numerology, which by itself does not say much.

Joe Campbell
Reply to  Loydo
October 31, 2019 7:47 am

Loydo: What in the hell is presented there?…

Captain Climate
Reply to  Joe Campbell
October 31, 2019 12:17 pm

The number of women he’s slept with since he went vegan. I think. (Sarcasm)

Reply to  Mike McHenry
October 31, 2019 9:10 am

Mr. McHenry:

Climate models are the opinions of the person, or team, who program the computer.

These personal opinions are disguised with complex math, but that does not change what they are.

Climate model predictions are so far from reality that the personal opinions about climate physics that serve as the foundation for the models have been falsified.

That’s why I call climate models “computer games”.

Unfortunately, the word “falsified” does not apply to junk climate science, where a CO2 – average temperature theory from the 1970s seems to drive the 30 years of wrong computer game predictions.

One Russian model seems to make good predictions, but it appears to be a “more of the same” model — extrapolating past climate change rates into the future (at least it seems to be based on real climate observations, rather than being based on a theory completely unrelated to past climate change that leads to consistently wrong predictions).

Of course the modest warming predicted by the Russian model can’t be trusted, because it is obviously colluding with Donald Trump (I was told this by Adam “shifty” Schiff).

I’m not sure why this seems so complicated to so many people, but there are no real climate models, nor could one be constructed.

A real climate model must be based on a thorough understanding of climate physics.

No such understanding exists.

Therefore, a real climate model can not exist … until our understanding of climate physics has improved a lot.

Calling something a “climate model” does not make it a real climate model, of the climate change process on our planet.

Having a science degree does not mean the work you do is real science.

Repeated, wrong, wild-guess predictions of the future climate are not real science.

They are climate astrology.

Real science requires repeated RIGHT predictions of the future climate.

And that has not happened yet.

If you MUST have a prediction of the future climate, I suggest examining the past 100 years of humans adding CO2 to the atmosphere, and assuming “more of the same” (mild, harmless, intermittent global warming, mainly in the northern half of the Northern Hemisphere, mainly during the six coldest months of the year, and mainly at night).

The real “existential threat” is a gross overreaction to a nonexistent climate change problem, such as the very expensive Green New Deal (ordeal?) publicized by Alexandria Occasionally Coherent and her mentor, Al “The Climate Blimp” Gore.

FormerPE
Reply to  Richard Greene
November 4, 2019 11:53 am

Richard:

Regarding the trust (or lack thereof) one has in climate computer models, I was struck by your comment: “Climate models are the opinions of the person, or team, who program the computer” as well as by this section of the original post:

“In escaping from model-land do we discard models completely? Rather, we aim to use them more effectively. The choice is not between model-land or nothing. Instead, models and simulations are used to the furthest extent that confidence in their utility can be established, either by quantitative out-of-sample performance assessment or by well-founded critical expert judgment.”

In a “former life” back in the 1980s I acquired an MS degree in groundwater hydrology, a specialty of civil engineering at the school I attended. For several years I made a good living creating and running computer models of contaminant transport in groundwater (this being the era of EPA Superfund and the like). Your statement is spot on. I knew the simulations I generated reflected my personal biases regarding uncertainties in the physical parameters that had to be specified for the model domain, as well as the initial conditions. I was intimately aware of the shortcomings of the simulations I was running, especially the increasing lack of confidence in the accuracy of the results with increasing simulation time. Yet I still found the simulations to be useful for a few reasons:
1. I was able to get a handle on the sensitivity of the system to parameter variability. If the system wasn’t particularly sensitive (only small changes in results for “normal” variability in one parameter), it didn’t make sense to spend time and money trying to gather more field data for that parameter. High sensitivity might warrant an effort to get a better handle on the input values for certain parameters in certain portions of the domain. In other words, it was a way to check the potential ramifications of my biases and which ones could have a serious detrimental impact if I was too far off (a minimal sketch of this kind of check appears after this list).
2. The computer modeling programs generated the data needed to make visual/graphic output to illustrate the impacts of the proposed solution (usually extraction wells, treatment system, and either re-injection of treated groundwater or removal via discharge to surface water). It greatly accelerated and automated the process of generating illustrations comparing various scenarios and thereby allowed a greater amount of virtual experimentation with different configurations than one could do without the model. However, we never lost sight of the fact that these simulations were all subject to great uncertainty and had to be evaluated in light of “well-founded critical expert judgment.”
3. In spite of my knowledge of the limitations of the simulations, I have to admit that I took advantage of the perception that any number coming out of a computer must be right. Most of the facility managers and financial decision makers funding the work my firm was doing seemed to trust computer output more heavily than expert opinions, even if the expert opinions were quantitative. The computer simulations made it easier to get buy-in on the cost of installing and operating the systems. I think the simulations accurately reflected my critical expert judgment regarding which solution was best overall. But I’m glad I didn’t have to compare actual field data to the simulated values 10, 15 or 20 years later. By that time often some other firm had the operational contract.
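
A minimal sketch of the kind of one-at-a-time sensitivity check described in point 1 (illustrative only: hypothetical parameter values, and a simple advective travel-time formula standing in for a real contaminant-transport code):

```python
# Minimal one-at-a-time sensitivity check (illustrative only: hypothetical
# values, and a simple advective travel-time formula in place of a real
# contaminant-transport model).

def travel_time_years(K, n, i, L):
    """Advective travel time = distance / (average linear velocity)."""
    v = K * i / n            # average linear velocity, m/day
    return (L / v) / 365.0   # years

base = dict(K=5.0,    # hydraulic conductivity, m/day (hypothetical)
            n=0.30,   # effective porosity (hypothetical)
            i=0.004,  # hydraulic gradient (hypothetical)
            L=500.0)  # travel distance, m (hypothetical)

t0 = travel_time_years(**base)
print(f"base-case travel time: {t0:.1f} years")

# Perturb each parameter by +/-20% in turn and see how much the output moves.
for name in ("K", "n", "i", "L"):
    for factor in (0.8, 1.2):
        perturbed = dict(base, **{name: base[name] * factor})
        t = travel_time_years(**perturbed)
        print(f"{name} x {factor:.1f}: {t:6.1f} years ({(t - t0) / t0:+.0%})")
```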

I think the uncritical perception that laypeople have toward computer simulations definitely is still around and explains some of the false confidence in climate computer models. I’m more puzzled by the false confidence placed on them by people with technical backgrounds. My guess is that healthy skepticism has been tossed aside because of poor education or training, or because of the confirmation bias inherent in seeking grant funding.

Reply to  Mike McHenry
October 31, 2019 9:51 am

Ms. Curry’s discussion of the study is quite complicated. That’s because she has a PhD. I only have a Masters Degree, so I have trouble following her writing.

On Monday I summarized the 16-page “Model-land” economics study on my blog.

I tried to pick quotes related to climate change and climate models.

My short summary (of quotes) is much easier to read than Ms. Curry’s discussion.

And I included a link to the actual paper, if you wanted to read all 16 pages.

https://elonionbloggle.blogspot.com/2019/10/escape-from-model-land-quotes-from-very.html

philf
Reply to  Richard Greene
November 1, 2019 2:21 pm

Your web comments are good. But the following 6 links best describe the whole global warming situation.
————-
IPCC Intergovernmental Panel On Climate Change
GCM General Circulation Model (many, based on IPCC CO2 assertions)
—————————-
Pangburn
Shows that temperature change over the last 200 years is due to 3 things: 1) cycling of the ocean temperature, 2) sun variations and 3) moisture in the air. There is no significant dependence of temperature on CO2.
https://globalclimatedrivers2.blogspot.com/
—————————–
Connolly father & son
Shows the vertical temperature profile follows the ideal gas laws and is not caused by CO2. Millions of weather balloon scans and trillions of data points have been analyzed to come to these conclusions. One important conclusion is that there is no greenhouse gas effect.
https://globalwarmingsolved.com/2013/11/summary-the-physics-of-the-earths-atmosphere-papers-1-3/
YouTube:
https://www.youtube.com/watch?v=XfRBr7PEawY
——————————
Pat Frank
Shows that GCM results cannot be extrapolated a few years, let alone 50 or 100.
https://www.frontiersin.org/articles/10.3389/feart.2019.00223/full
and
https://wattsupwiththat.com/2019/10/15/why-roy-spencers-criticism-is-wrong/
———————————
Joe Postma
Shows that the “flat earth model” of the IPCC is too simple. Their real models are built into the GCMs, which don’t fit the real data.
https://climateofsophistry.com/2019/10/19/the-thing-without-the-thing/

Reply to  Mike McHenry
October 31, 2019 10:18 am

Many things “Confucius say,” he no say. Others say and then say he say.

chaamjamal
October 30, 2019 6:37 pm

“Assessing the adequacy of climate models for the purpose of predicting future climate is particularly difficult and arguably impossible”

particularly so for climate impacts, as for example the dramatic failure of decades of escalating predictions about the collapse of the Himalayan glacial system that would cause the Ganges, the Brahmaputra, and the Mekong to run dry.

https://tambonthongchai.com/2010/06/17/the-glaciers-in-tibet-are-melting/

October 30, 2019 6:53 pm

The ‘experts’ are currently stymied by the latest round of CMIP6 climate model simulations, where about half of them (so far) have equilibrium climate sensitivity values exceeding 4.7C – well outside the bounds of the long-established likely range of 1.5-4.5C.

Could this be the first sign of a whole new round of “worse than we thought”? Will there be a barrage of “New studies reveal……….”?

Reply to  Smart Rock
October 31, 2019 4:24 am

That has been the case since the 1980s with James Hansen.
And Hansen started off with urgent terrifying scenarios and claims.

Reply to  Smart Rock
October 31, 2019 7:47 am

At some point an increasing ECS will end up with a measurable change in the atmosphere, such as water vapor (think clouds). Unless this can be physically seen, so-called climate scientists will need to admit that the models are wrong and that the science is NOT settled.

Richard
October 30, 2019 7:08 pm

The ‘tags’ list on the headlines teaser page references ‘#Hawkmouth Effect’. You might want to amend that.

markl
October 30, 2019 7:23 pm

IPCC instilled the narrative of AGW/Global Warming/Climate Change with CO2 as the bogeyman. MSM supported it (who supported them is more important). Modelling climate became the accepted norm because MSM supported it, not because it was accurate. The Left knows that, and denies it. Too many years have passed without meeting predictions and the people are skeptical. That skepticism will only grow.

pete m
October 30, 2019 7:28 pm

Not even wrong.

There, I saved mosher some time.

Hokey Schtick
Reply to  pete m
October 30, 2019 10:50 pm

+1. Except he wouldn’t have capitalized the initial letter, as his work and time are too important to waste on such trivialities, especially for an audience who fail to adequately appreciate his extraordinary grasp of climate science anyway.

RicDre
October 30, 2019 7:44 pm

“… sensitive dependence on initial conditions (dynamical instability), and has even made its way into popular culture. The problem is easily solved using probabilistic forecasts.”

Is anyone aware of a study that proves that “[sensitive dependence on initial conditions] is easily solved using probabilistic forecasts”?

Reply to  RicDre
October 30, 2019 8:24 pm

Yep,

that statement is equivalent to: 2 + 2 = 5(±2).

Technically it’s not wrong, but utterly useless for claiming to know anything.
Even a 1st grader probably knows that’s hogwash.
But that is today’s modelling world of GCM climate science.

Loydo
Reply to  Joel O'Bryan
October 30, 2019 9:58 pm

But when there are multiple lines of data to verify, even a 1st grader can read the writing on the wall.
[chart image]

Reply to  Loydo
October 30, 2019 10:37 pm

Loydo, you are apparently like Greta… she only knows what she’s been told to know.
No critical thinking required. Which is also why the Libs are steadily dumbing down the public education system and indoctrinating College kids in socialism and snowflakeology.

The middle class became too educated, too affluent, and now, with the internet destroying the MSM liberal one voice, too uncontrollable.

So soon the elites will be the only ones able to send their kids to private schools to get proper educations.

Loydo
Reply to  Joel O'Bryan
October 31, 2019 12:58 am

Wave data, not your arms.

Reply to  Joel O'Bryan
October 31, 2019 4:28 am

Bingo!
On target Joel!

Reply to  Joel O'Bryan
October 31, 2019 7:05 am

Loydo,

Joel made a fool of you, and it flew over your head.

Your chart is hilarious junk, since they use a very short anomaly baseline, which doesn’t address the different resolutions of the proxies, then apply it to a 2,000 year period.

Not only that, they graft yearly temperature data onto a series of proxies that have different resolution baselines.

You seem oblivious to your stupidity.

Reply to  Joel O'Bryan
October 31, 2019 1:07 pm

Joel, “snowflakeology” is priceless! Thank you.

Michael Jankowski
Reply to  Joel O'Bryan
October 31, 2019 5:47 pm

Sunsettommy, to be fair, I don’t think Joel made him a fool. I think Loydo achieved that long ago on his own.

Reply to  Loydo
October 31, 2019 4:26 am

“Loydo October 30, 2019 at 9:58 pm
But when there are multiple lines of data to verify, even a 1st grader can read the writing on the wall.”

Again, lolly makes startlingly absurd claims from the world of make believe.

Loydo
Reply to  ATheoK
October 31, 2019 5:21 am

(SNIPPED, no more off topic comments) SUNMOD

Reply to  ATheoK
October 31, 2019 7:19 am

“Loydo October 31, 2019 at 5:21 am
A graph is not a claim.”

Your words are empty and meaningless, especially when you try to make them a distraction from the topic discussed.

Which suggests, unsurprisingly, that you do not read comments.

(I wonder if he read the posted article, not a single comment of his has been on topic. He is being watched now as he has been doing this for a while) SUNMOD

Frenchie77
Reply to  Loydo
October 31, 2019 4:37 am

Loydo, you must work for “Big Pirate” as you seem intent on causing their return.

[image]

I congratulate you on your uncanny ability to see what you like. Imagine the sounds of my jazz hands* celebrating your success!

*is clapping banned where you live yet?

Loydo
Reply to  Frenchie77
October 31, 2019 5:22 am

(SNIPPED, no more off topic replies) SUNMOD

TheLastDemocrat
Reply to  Loydo
October 31, 2019 7:54 am

Loydo: what you are showing here are data with a crucial weakness:

When you depend upon those various sources of data for the recent 2,000 years, you in effect have a “low-pass filter” operating that fails to include high values.

“Low pass filter” is on Wikipedia.

The measurement sources beyond 100 or 200 years ago are not capable of catching the high temps, as the more recent measurement sources are.

If my local high temps last week were 85, 83, 95, 78, and 84, and I use grass growth or moisture in the soil as a measure of temp, my measure will reflect an average for the week around 82 or 83, because the measurement is not good at capturing the anomalous 95 degree day.

If you try to measure how loud the noise is from a firearm, and your device measures each half second, in decibels, you will fail to measure how loud the firearm sound is, because the peak of the sound is so brief that the half-second time frame is not adequate to capture it. Sampling in the time frame of milliseconds, not half seconds, would be needed to come close.

Stoughton R: Measurements of small-caliber ballistic shock waves in air. J Acoust Soc Am 1997; 102(2): 781–7.
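
A small numerical sketch of the low-pass-filter point (mine, using the made-up temperatures above): a smoothed or slow-responding “proxy” never records the 95-degree spike that the daily record sees.

```python
import numpy as np

# Sketch (mine) of the "low-pass filter" point above: a smoothed or
# slow-responding record cannot capture a brief spike that a
# high-resolution record sees.

daily_highs = np.array([85, 83, 95, 78, 84], dtype=float)   # the example above

peak = daily_highs.max()                                    # daily thermometer: 95
weekly_mean = daily_highs.mean()                            # integrating "proxy": 85
proxy_missing_spike = daily_highs[daily_highs < 90].mean()  # ~82.5, as in the comment

print(f"peak in the daily record        : {peak:.0f}")
print(f"simple weekly average           : {weekly_mean:.0f}")
print(f"proxy that never sees the spike : {proxy_missing_spike:.1f}")
# Splicing a high-resolution modern record onto smoothed low-resolution
# proxies compares two different kinds of measurement.
```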

Moa
Reply to  Loydo
October 31, 2019 7:37 pm

Hi Loydo,
Here are the Danish data:
http://ocean.dmi.dk/arctic/icecover.uk.php

Note, according to the UN’s AGW hypothesis, when the surface warms at the same rate or faster than the Lower Tropical Troposphere, that is evidence that humans are NOT the cause of the warming. Did you know this aspect of their hypothesis?

You have presented evidence that, when coupled with the LTT measurements from satellites, actually falsifies the UN’s hypothesis – but because you don’t understand the UN’s predictions, you believe that decreased ice cover is evidence of AGW when it is insufficient.

Pittsburgh
October 30, 2019 7:48 pm

To paraphrase my brother the jet engine engineer, predicting the flow and response of a single turbine blade is unreliable without empirical verification. And he’s been working in CFD and multiphysics simulation for 30 years for major engine manufacturers. Imagine them trying to predict climate. A fool’s errand indeed. As an engineer myself, I am completely disgusted by the mockery of science by these so-called climate “scientists.”

J Mac
Reply to  Pittsburgh
October 30, 2019 9:04 pm

This engineer concurs. +100%

Reply to  Pittsburgh
October 31, 2019 7:51 am

ditto –> This engineer concurs. +100%

Frenchie77
Reply to  Jim Gorman
October 31, 2019 8:39 am

ditto, and I routinely have to count on the output from thermal models in my work which, btw, always run too hot as compared to test results. At least I do get test results to directly correlate back into the thermal models.

Unfortunately, GCMs do not correlate real results into their models in order to predict. Instead they “tune” their models; after all, it is hard to correlate with real data after it has been abused so much that even the unicorns are crying.

Owen Suppes
Reply to  Frenchie77
October 31, 2019 12:17 pm

Even the act of tuning, while it might help with fitting, can struggle to properly assign values to variables. Here’s where the error begins to propagate. Maybe we bump up ECS and underestimate natural variability; maybe we underestimate aerosol cooling and slightly overestimate cloud and water vapor feedbacks. Great, we have a nice fit, but what does it mean? How useful is it to achieve fit?

To have outputs which are fit for purpose we need to isolate the known variables, and we have to constrain their values and the subsequent relationships.

Accepting a range of ECS between 1.5 and 4.5 C should make clear to everyone that we have a lot of work to do before we can confidently use model outputs to craft good policy.

And while we work to constrain the myriad values and relationships foundational to models, there is still chaos in the system to deal with.

DMA
October 30, 2019 7:58 pm

A new analysis of radiosonde data shows there is no greenhouse effect in our atmosphere. See ( https://www.youtube.com/watch?v=XfRBr7PEawY ) at 1 hr 01 min for their conclusions. These include that the IPCC was wrong to conclude that recent climate changes were due to greenhouse gasses and that current climate model projections are worthless.
Model land must be willing to evaluate new findings and modify the models as necessary. The Connollys have done in-depth analysis of 20 million data sets spanning 70 years and state “the data show categorically there is no greenhouse effect”: the atmosphere is in thermodynamic equilibrium, and more greenhouse gasses do not change temperature but act according to Einstein’s postulate.

Alasdair Fairbairn
Reply to  DMA
October 30, 2019 9:45 pm

Thanks DMA. A good analysis of this video.

Thomas Homer
Reply to  DMA
October 31, 2019 6:26 am

DMA quotes The Connollys: “the data show categorically there is no greenhouse effect”

I’m not aware of anything being measured that directly refutes this claim.

DMA
Reply to  Thomas Homer
October 31, 2019 9:19 pm

The Connollys claim that the greenhouse effect is assumed because of the assumption that the atmosphere is not in thermodynamic equilibrium, but the radiosonde analysis they are the first to do shows that it is. All the models assume many layers in the atmosphere that are not correlated. They show good correlation of the troposphere and stratosphere to the tropopause. I think the video is well worth the time it takes to watch and study it.

Rob_Dawg
October 30, 2019 8:05 pm

Bad models do real harm in the real world. The game SimCity will always drive the player to provide public transit or face gridlock. Every urban planner under 50 played that game as a youngster, and as adults they are imposing those false truths on urbanites and taxpayers.

October 30, 2019 8:09 pm

Climate Model Land is precisely Lewis Carroll’s Alice in Wonderland — everyone is lying to one another and everyone there is mad as the Hatter. Even attempts at simple math in Model Land, as in Wonderland, devolve to 4×5=12 in confusing psychobabble.

“The ‘experts’ are currently stymied by the latest round of CMIP6 climate model simulations, where about half of them (so far) have equilibrium climate sensitivity values exceeding 4.7C – well outside the bounds of the long-established likely range of 1.5-4.5C.”
Does anyone need more evidence the CMIP community members are Mad as the Hatter?

The first mistake of course is going down that Rabbit Hole of Garbage Climate Models. After that it’s all insanity and junk science.

Reply to  Joel O'Bryan
October 31, 2019 9:14 am

Joel,

Good stuff as usual. Any idea why the model boys are behind schedule? I have an idea, pursuant to Pat Frank’s work on model uncertainty, that they’re trying to reduce the amount of cloud error in the GCM outputs, and that’s causing the ECS to pop. Just my opinion, but I think the modelers are stuck between maintaining political and scientific plausibility.

F. Ross
October 30, 2019 8:37 pm

“It’s tough to make predictions, especially about the future.”

― Yogi Berra

Robert of Texas
October 30, 2019 9:12 pm

One cannot predict the future, only prepare for it – whatever it brings.

The only certainty I know is that no one understands how climate really works. Like most natural systems, it is messy and complex.

Crispin in Waterloo
Reply to  Robert of Texas
November 1, 2019 2:32 am

I predict the cold cometh. Certainly for the rest of my life.

commieBob
October 30, 2019 9:38 pm

However, empirical accuracy, to a substantial degree, may be due to tuning rather than to the model structural form.

Another way of saying the same thing is:

With four parameters I can fit an elephant, and with five I can make him wiggle his trunk. (attributed to John von Neumann)

Another way of saying that is:

… one should not be impressed when a complex model fits a data set well. With enough parameters, you can fit any data set.

The only valid way to do a deterministic model is to write it based entirely on the physics, without tuning to make it match the record. If it works without tuning, it is likely to be valid. With tuning, it’s just an exercise in curve fitting and has zero predictive value.

Lorenz, a pioneer climate modeler, pointed out the above. We’ve known it for nearly as long as we have had electronic computers. How is it that so many people think it doesn’t apply to them?
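
A quick numerical illustration of the “four parameters” point (my own sketch with synthetic data): a many-parameter curve tuned to a short record fits it beautifully and then extrapolates nonsense.

```python
import numpy as np

# Sketch (mine, synthetic data): with enough free parameters you can fit any
# record, but the tuned fit has no predictive value outside the fitting window.

rng = np.random.default_rng(42)
x_fit = np.linspace(0.0, 1.0, 12)
y_fit = np.sin(2 * np.pi * x_fit) + 0.1 * rng.normal(size=x_fit.size)  # "the record"

coeffs = np.polyfit(x_fit, y_fit, deg=9)          # tune a 10-parameter polynomial

in_sample = np.abs(np.polyval(coeffs, x_fit) - y_fit).max()
x_out = 1.4                                       # just beyond the fitting window
print(f"max in-sample misfit      : {in_sample:.3f}")                  # small: looks skillful
print(f"truth at x = 1.4          : {np.sin(2 * np.pi * x_out):.2f}")
print(f"tuned-polynomial forecast : {np.polyval(coeffs, x_out):.2f}")  # typically wildly off
# Agreement with the record used for tuning says little about skill outside it.
```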

Reply to  commieBob
October 31, 2019 12:52 am

A deterministic model is not always required to know something about a system’s behavior and have predictive value.
In CE hydraulics we have the Hazen-Williams equation for determining water flow in pipes. It is an empirical equation (determined by trial-and-error experimentation).

The Hazen–Williams equation is an empirical relationship which relates the flow of water in a pipe with the physical properties of the pipe and the pressure drop caused by friction. It is used in the design of water pipe systems such as fire sprinkler systems, water supply networks, and irrigation systems.
It works well for water only. But it doesn’t account for temperature or the associated viscosity changes.
https://en.m.wikipedia.org/wiki/Hazen–Williams_equation

As long as you understand the limitations, H-W works well for designing piping systems and knowing what the head losses will be so you can properly size pipes and pumps. That’s predictive value.
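
For concreteness, a small sketch of that kind of calculation (my own example numbers, using the SI form of the Hazen-Williams formula with head loss in metres, flow in m³/s and diameter in metres):

```python
# Sketch (mine, example numbers) of a Hazen-Williams head-loss calculation.
# SI form: h_f = 10.67 * L * Q**1.852 / (C**1.852 * d**4.87)
#   h_f : friction head loss (m), L : pipe length (m),
#   Q   : flow (m^3/s), d : inside diameter (m),
#   C   : Hazen-Williams roughness coefficient (the "tuned" empirical part).

def hazen_williams_headloss(Q, L, d, C):
    return 10.67 * L * Q**1.852 / (C**1.852 * d**4.87)

# Example: 500 m of 150 mm pipe carrying 25 L/s.
Q, L, d = 0.025, 500.0, 0.150
for C in (100, 120, 140):   # typical tabulated values for different pipe materials
    hf = hazen_williams_headloss(Q, L, d, C)
    print(f"C = {C}: head loss = {hf:.1f} m over {L:.0f} m of pipe")
# Entirely empirical and valid only for water near ordinary temperatures, but
# within those limits dependable enough to size pipes and pumps.
```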

In effect, though, it’s a lot like Ptolemaic models of the solar system and epicycles. Depending on your need, and provided you understand the limitations, they work. Ptolemaic equations are used in mechanical planetarium projectors to depict the movements of the planets and stars, at least to the accuracy needed for the planetarium viewer. Predictive value, as long as you understand the limitations.

Hazen-Williams essentially uses “tuned” parameter look-up tables as coefficients, and even its structural form is not physical; it just works.

But H-W works because it has empirical experimentation and trial and error behind its development to fine-tune the coefficients.

Now the climate modellers may claim they are using first principles with radiative physics and fluid dynamics (Navier-Stokes, etc.). But once they use parameters for the convective-clouds-water physics parts, they have essentially devolved to an empirical calculation with no way to determine the critical tuning parameter values in nature, because there is no practical trial-and-error experimentation on the Earth climate system. So they guess; they see what works in silico. They have huge degeneracy in the parameters, as the parameter values are poorly constrained by observation.

Climate models have no predictive value because there is no physically testable way to know what the dozen or so parameter values are with any certainty. Thus their uncertainty is huge. Far, far larger uncertainties that propagate error through their calculations. An uncertainty that the modellers are unwilling to admit openly, because it would make projecting out 30 years to 2050 junk science, and 80 years to 2100 absurd junk. This is worse than zero predictive value. It’s negative value added.
GCMs used to project temps 30 or 80 years into the future have misleading predictive value, a false sense of knowledge that is certainly wrong. That’s negative learning. You’re better off not knowing it, because it’s wrong.
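
A toy demonstration of that parameter degeneracy (my own sketch, with a made-up two-parameter “model”): different parameter pairs hindcast the same short calibration record about equally well, yet diverge widely when projected forward.

```python
import numpy as np

# Toy sketch (mine, made-up model): parameter degeneracy. Two different
# parameter pairs hindcast a short "observed" record about equally well,
# but give very different projections once extrapolated.

t_obs = np.arange(0, 30)                       # 30 "years" of calibration data
obs = 0.5 + 0.012 * t_obs + np.random.default_rng(3).normal(0, 0.05, t_obs.size)

def model(t, trend, accel):
    """Hypothetical two-parameter model: linear trend plus acceleration."""
    return 0.5 + trend * t + accel * t**2

candidates = {"low trend / some accel": (0.006, 0.0002),
              "high trend / no accel ": (0.012, 0.0)}

t_future = 100                                 # project 100 "years" out
for name, (trend, accel) in candidates.items():
    rmse = np.sqrt(np.mean((model(t_obs, trend, accel) - obs) ** 2))
    print(f"{name}: calibration RMSE = {rmse:.3f}, "
          f"projection at t = {t_future}: {model(t_future, trend, accel):.2f}")
# Both parameter sets are roughly consistent with the calibration record, yet
# their projections differ greatly -- the record alone cannot choose between them.
```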

Reply to  Joel O’Bryan
October 31, 2019 8:15 am

Love your explanation. “This is worse than zero predictive value. It’s negative value added.” Ain’t that the case!

Steve Z
Reply to  Joel O’Bryan
October 31, 2019 9:47 am

The Hazen-Williams equation mentioned by Joel O’Bryan was developed in the early 20th century, and there have been many improvements on it since, which do take into account the effects of density and viscosity of liquids other than water.

For design of pumps and piping systems handling only liquids, engineers now use equations where frictional pressure drop is equal to a “friction factor” times pipe length/diameter * liquid density * velocity squared, where the friction factor is an empirical function of Reynolds number and pipe roughness.

It should be noted that analytical solutions to differential equations work well for laminar flow (low velocity and high viscosity), but in one-dimensional turbulent flow of liquids in pipes, the equations are much more complex, and deviations of calculated values from experimental results can be up to 10%. Engineers tend to circumvent these inaccuracies by designing pumps capable of delivering 20 – 40% more pressure than calculated, and allowing control valves to absorb the extra pressure.
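
As a concrete example of the empirical friction-factor route described above (my own sketch and example numbers, using the Swamee-Jain explicit approximation to the Colebrook correlation for turbulent flow, then Darcy-Weisbach for the head loss):

```python
import math

# Sketch (mine, example numbers) of the modern empirical friction-factor route:
# Swamee-Jain explicit approximation to the Colebrook correlation for the Darcy
# friction factor, then Darcy-Weisbach for the head loss.

def swamee_jain_f(Re, rel_roughness):
    """Darcy friction factor for turbulent pipe flow (empirical curve fit)."""
    return 0.25 / math.log10(rel_roughness / 3.7 + 5.74 / Re**0.9) ** 2

def darcy_weisbach_headloss(f, L, D, v, g=9.81):
    return f * (L / D) * v**2 / (2 * g)

# Example: water at ~20 C flowing at 1.4 m/s in 500 m of 150 mm steel pipe.
D, L, v = 0.150, 500.0, 1.4
nu = 1.0e-6     # kinematic viscosity of water, m^2/s
eps = 4.5e-5    # absolute roughness, m (typical tabulated value)

Re = v * D / nu
f = swamee_jain_f(Re, eps / D)
hf = darcy_weisbach_headloss(f, L, D, v)
print(f"Re = {Re:.2e}, friction factor = {f:.4f}, head loss = {hf:.1f} m")
# Still empirical (f is a fit to experiments), but density and viscosity enter
# through the Reynolds number, unlike in Hazen-Williams.
```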

For liquid flow in pipes, the liquid is assumed to be incompressible, meaning that the density does not vary with pressure. The atmosphere is a mixture of gases, where density (even under ideal conditions) is proportional to absolute pressure and inversely proportional to absolute temperature. If a volume of the atmosphere is transported from high pressure to low pressure, its temperature decreases (adiabatic expansion), and the variation of temperature with pressure depends on the Cp/Cv ratio of the gas, which is itself a function of temperature.

So far, we have gone from a turbulent one-dimensional system at constant density (which already has its own modeling errors) to a turbulent three-dimensional system whose density varies with temperature and pressure. Add into this a major component (H2O) which can exist as a vapor, liquid, or solid in the atmosphere, with a huge latent heat of vaporization, and the atmosphere is in contact with a practically limitless supply of liquid water in the oceans. Water vapor has a lower molecular weight (about 18) than the other gases in the atmosphere (about 29) so that humid air is lighter than dry air at the same temperature and pressure, and tends to rise.

Any computer model attempting to predict what will happen to this three-dimensional, turbulent, chaotic system in contact with oceans, mountains, vegetation, and cities with varying heat input from the sun has difficulty predicting what will happen to the water (vapor or liquid) over the next several days, given the accuracy of weather predictions over such a period.

Carbon dioxide is present in the atmosphere at about 0.04% by volume (much less than water vapor, which varies from about 0.20% to 1.5% by volume), and is non-condensable at ambient temperatures over most of the earth (except Antarctica). Given the inability of computer models to predict what happens to the water in the atmosphere over more than about 10 days, how is a computer model supposed to predict the effect of a much smaller concentration of CO2 on the atmosphere 50 years from now, a time period about 1800 times longer than the forecast period of weather models?

Crispin in Waterloo
Reply to  Steve Z
November 1, 2019 2:55 am

Excellent inputs, guys. I would like to point out that the method above is not really a model. It is a calculator that gives an answer “good enough for government work.”

One might argue that a calculator is a model but I disagree. A model is something constructed from first principles. For such a model everything must be considered because it is intended to be real, a simulation of a constructable reality that will be built if the water flow is adequate.

A calculator is like the water flow predictor in the Village Technology Handbook from VITA. It has four nonlinear scales on a page. With a ruler and two steps it will predict water flow in a pipe. Its output, the water flow read from the Flow scale, is calibrated against real pipes. That makes it a calibrated, validated calculator or predictor with an uncertainty. It is not a model of water flow in a pipe.

A useful climate model would replicate the 20th century without tuning. Even tuning the current models with the 20th century has not given them the ability to predict the 21st century’s initial two decades, let alone the last two.

The producers of the Old Farmer’s Almanac have a calculator that gives pretty good regional weather forecasts 18 months out. So far that’s the best we’ve got. It is much better than Environment Canada’s climate model predictions, which are wrong ~85% of the time.

Tom Abbott
Reply to  commieBob
October 31, 2019 5:13 am

“The only valid way to do a deterministic model is to write it based entirely on the physics, without tuning to make it match the record.”

Yeah, and “the Record” they are trying to match is a bastardized version of the true global temperature profile. That must *really* complicate things. 🙂

Alasdair Fairbairn
October 30, 2019 9:55 pm

All scientific equations have a cop-out clause – “ceteris paribus” (all other things being equal).
Engineers know that this is not true and that is why things tend to work and bridges rarely fall down.

Alasdair Fairbairn
October 30, 2019 10:12 pm

Never mind all the statistics, probabilities and mathematics; if the basic assumptions are wrong or something is omitted, the result will be in error.
The IPCC assumption that water provides a net positive feedback to the purported GHE is an error. This feedback is NEGATIVE, and the science supports this view.
This fact goes a long way to explaining why these models are running hot. The reason is that at the phase change of water the Planck sensitivity is zero, and this fact should be incorporated into the climate sensitivity calculation; otherwise too high a figure will result.

Loydo
Reply to  Alasdair Fairbairn
October 31, 2019 1:13 am

“This feedback is NEGATIVE and the science supports this view.”

Care to say how?

LdB
Reply to  Loydo
October 31, 2019 2:01 am

As if Loydo would actually discuss anything … where is your random graph of the day?

Reply to  LdB
October 31, 2019 7:27 am

It’s above, in his nonsensical response to one of my comments. He posted a paleo temperature reconstruction graph of many series for the past 2 kyr. An utterly off-topic link in a discussion of model projections from Loydo, and as if I couldn’t interpret it, I guess.

Richard S Courtney
October 31, 2019 1:22 am

Charles The moderator:

I MADE A LONG POST THAT COMMENTED ON YOUR ABOVE FINE ARTICLE. MY POST CONTAINS A FORMATTING ERROR FOR WHICH I SINCERELY APOLOGISE.

PLEASE REPLACE IT WITH THIS CORRECTED POST. THANKING YOU IN ANTICIPATION.

Your above article makes the important point about climate models:

”The nonlinear compound effects of any given small tweak to the model structure are so great that calibration becomes a very computationally-intensive task and the marginal performance benefits of additional subroutines or processes may be zero or even negative. In plainer terms, adding detail to the model can make it less accurate, less useful.”

JC note: This effect relates to the controversy surrounding the very high values of ECS in the latest CMIP6 global model simulations (see section 5 in What’s the worst case?), which is largely related to incorporation of more sophisticated parameterizations of cloud-aerosol interactions.”

YES! Absolutely!
For decades I have said this in several places including Review comments for the IPCC. For example, in this item
http://allaboutenergy.net/environment/item/2208-letter-to-senator-james-inhofe-about-relying-on-ipcc-richard-courtney-uk
I wrote,
“Ron Miller and Gavin Schmidt, both of NASA GISS, provide an evaluation of the leading US GCM. They are U.S. climate modelers who use the NASA GISS GCM and they strongly promote the AGW hypothesis. Their paper titled ‘Ocean & Climate Modeling: Evaluating the NASA GISS GCM’ was updated on 2005-01-10 and is available at
http://icp.giss.nasa.gov/research/ppa/2001/oceans/
Its abstract says:
This preliminary investigation evaluated the performance of three versions of the NASA Goddard Institute for Space Studies’ recently updated General Circulation Model E (GCM). This effort became necessary when certain Fortran code was rewritten to speed up processing and to better represent some of the interactions (feedbacks) of climate variables in the model. For example, the representation of clouds in the model was made to agree more with the satellite observational data thus affecting the albedo feedback mechanism. The versions of the GCM studied vary in their treatments of the ocean. In the first version, the Fixed-SST, the sea surface temperatures are prescribed from the observed seasonal cycle and the atmospheric response is calculated by the model. The second, the Q-Flux model, computes the SST and its response to atmospheric changes, but assumes the transport of heat by ocean currents is constant. The third treatment, called a coupled GCM (CGCM), is a version where an ocean model is used to simulate the entire ocean state including SST and ocean currents, and their interaction with the atmosphere. Various datasets were obtained from satellite, ground-based and sea observations. Observed and simulated climatologies of surface air temperature, sea level pressure (SLP), total cloud cover (TCC), precipitation (mm/day), and others were produced. These were analyzed for general global patterns and for regional discrepancies when compared to each other. In addition, difference maps of observed climatologies compared to simulated climatologies (model minus observed) and for different versions of the model (model version minus other model version) were prepared to better focus on discrepant areas and regions. T-tests were utilized to reveal significant differences found between the different treatments of the model. It was found that the model represented global patterns well (e.g. ITCZ, mid-latitude storm tracks, and seasonal monsoons). Divergence in the model from observations increased with the introduction of more feedbacks (fewer prescribed variables) progressing from the Fixed-SST to the coupled model. The model had problems representing variables in geographic areas of sea ice, thick vegetation, low clouds and high relief. It was hypothesized that these problems arose from the way the model calculates the effects of vegetation, sea ice and cloud cover. The problem with relief stems from the model’s coarse resolution. These results have implications for modeling climate change based on global warming scenarios. The model will lead to better understanding of climate change and the further development of predictive capability. As a direct result of this research, the representation of cloud cover in the model has been brought into agreement with the satellite observations by using radiance measured at a particular wavelength instead of saturation.
This abstract was written by strong proponents of AGW but admits that the NASA GISS GCM has “problems representing variables in geographic areas of sea ice, thick vegetation, low clouds and high relief.” These are severe problems. For example, clouds reflect solar heat and a mere 2% increase to cloud cover would more than compensate for the maximum possible predicted warming due to a doubling of carbon dioxide in the air. Good records of cloud cover are very short because cloud cover is measured by satellites that were not launched until the mid 1980s. But it appears that cloudiness decreased markedly between the mid 1980s and late 1990s. Over that period, the Earth’s reflectivity decreased to the extent that if there were a constant solar irradiance then the reduced cloudiness provided an extra surface warming of 5 to 10 Watts/sq metre. This is a lot of warming. It is between two and four times the entire warming estimated to have been caused by the build-up of human-caused greenhouse gases in the atmosphere since the industrial revolution. (The UN’s Intergovernmental Panel on Climate Change says that since the industrial revolution, the build-up of human-caused greenhouse gases in the atmosphere has had a warming effect of only 2.4 W/sq metre). So, the fact that the NASA GISS GCM has problems representing clouds must call into question the entire performance of the GCM.

The abstract says: “the representation of cloud cover in the model has been brought into agreement with the satellite observations by using radiance measured at a particular wavelength instead of saturation” but this adjustment is a ‘fiddle factor’ because both the radiance and the saturation must be correct if the effect of the clouds is to be correct. There is no reason to suppose that the adjustment will not induce the model to diverge from reality if other changes – e.g. alterations to GHG concentration in the atmosphere – are introduced into the model. Indeed, this problem of erroneous representation of low level clouds could be expected to induce the model to provide incorrect indication of effects of changes to atmospheric GHGs because changes to clouds have much greater effect on climate than changes to GHGs.

Richard
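
A quick arithmetic check of the ratio Courtney cites; a minimal Python sketch that simply restates his figures (the 5–10 W/sq metre cloud estimate and the 2.4 W/sq metre IPCC figure are the numbers quoted in the letter, not independently verified here):

```python
# The figures below are those stated in the letter above, not independently verified.
cloud_warming_low, cloud_warming_high = 5.0, 10.0   # W per sq metre, claimed effect of reduced cloudiness
ghg_forcing_ipcc = 2.4                              # W per sq metre, IPCC figure quoted in the letter

print(f"ratio: {cloud_warming_low / ghg_forcing_ipcc:.1f} to "
      f"{cloud_warming_high / ghg_forcing_ipcc:.1f} times the GHG forcing")
# prints roughly 2.1 to 4.2, consistent with "between two and four times"
```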

Reply to  Richard S Courtney
October 31, 2019 5:20 am

“Richard S Courtney October 31, 2019 at 1:22 am

This abstract was written by strong proponents of AGW but admits that the NASA GISS GCM has “problems representing variables in geographic areas of sea ice, thick vegetation, low clouds and high relief.
…”

The authors of the program also ignore that water interacts with radiation across a broad range of wavelengths in all three physical states: gas, liquid and solid.

“The abstract says; “the representation of cloud cover in the model has been brought into agreement with the satellite observations by using radiance measured at a particular wavelength instead of saturation” but this adjustment is a ‘fiddle factor’ because both the radiance and the saturation must be correct if the effect of the clouds is to be correct.”

Which is a complex way of stating that the climate program’s authors had to introduce sufficient parameters so they can wiggle trunks, dance on tiptoe and hack up elephant-sized hairballs.

N.B. “radiance measured at a particular wavelength instead of saturation … this adjustment is a ‘fiddle factor’”
No doubt a CO₂-interactive wavelength, which ignores H₂O’s broad-spectrum wavelength interactivity.
Focusing on a specific wavelength means the authors ignore or minimize the multiplicity of atmospheric and surface impacts via radiative wavelength interactions; e.g. albedo changes via high-altitude and low-altitude cloud cover, sea ice, snow cover, etc.

One would think that real scientists would first try to incorporate the largest atmospheric interactive mechanisms and molecules, before utilizing minor atmospheric components…

Richard: excellent comment!
PS I like the first format version of the comment better.

October 31, 2019 1:30 am

As mentioned by DMA October 30, 2019 at 7:58 pm, above, surely the research carried out by Drs M., I. and R. Connolly has shown that the climate models are nothing more than elaborate computer games? See:
https://globalwarmingsolved.com/

Their analysis of thousands of meteorological balloon records determined that the assumptions made by climate modellers were unwarranted, being unrelated to real-world atmospheric conditions. It would appear that the UN IPCC and climate modelers have never tried to test the validity of the assumptions built into their computer climate models. However, why would they?

The whole purpose of the models was to demonstrate that increasing CO2 concentration caused global warming, not to discover the ‘truth’.

In the process of their research, the Connollys just happened to show that there is no atmospheric Greenhouse Effect.

Tom Abbott
Reply to  Bevan Dockery
October 31, 2019 5:26 am

“In the process of their research, the Connollys just happened to show that there is no atmospheric Greenhouse Effect.”

This is going to be interesting. I want to see the alarmist arguments countering this. So far, they have been rather silent.

“The Science” is definitely not settled.

Richard M
Reply to  Bevan Dockery
October 31, 2019 6:48 am

Not so sure the claim of “no greenhouse effect” is correct. It is probably more along the lines of it being so small it has little overall effect. I think the reason may be related to this comment by our old friend RGB several years ago.

“The problem with pressure broadening in e.g. Modtran is this: Pressure (mostly collision) broadening is governed by the Fourier transform of a continuous wave train with delta-correlated phase shifts caused by phase-interrupting collisions. The result is the familiar Lorentzian line shape which in turn contributes to the integrated absorptivity when one sums over lines and integrates over the spectrum. van Vleck and Weisskopf wrote the seminal paper on computing this shape from a comparatively simple quantum description, where the governing parameter is the mean free time between collisions. Petty’s excellent book walks one through much of this.

That doesn’t mean that broadening doesn’t depend at all upon the species colliding, only that it is a less important factor, usually, than the MFT itself. As Modtran correctly notes, same-species e.g. CO_2 on CO_2 collisions can have a slightly different lineshape than CO_2 on N_2 or CO_2 on O_2. However, this isn’t really likely to be an order of magnitude effect, as the bulk of the lineshape depends on the properties of the line itself to first order, not second order effects in a short, impact approximation interruption of an effectively slowly-varying oscillation.

Still, Modtran has code to correct the overall absorptivity by separately counting e.g. CO_2-N_2 broadening at concentration (1-q) vs CO_2-CO_2 broadening at concentration q, where q < 0.001. It is reasonable to expect that broadening due to doubling from q = 0.0003 to q = 0.0006 would have no more than a 0.001 relative effect on the total atmospheric absorptivity computed — almost certainly completely negligible as far as the effects of line broadening are concerned! That is, Beer-Lambert might change from the direct reduction of the mean free path of IR photons, but the changes in the integrated spectral absorptivity would hardly change at all.

Even this seems like it would be an egregious overestimate. The collision time in van Vleck and Weisskopf that could lead to the same-species increase in spectral line width isn’t the general mean free time between any old collisions, it is the mean free time between same species collisions. This means that the lines are sharpened relative to what they might be from the ordinary MFT by a factor of q, or would be up to the limit of pure spontaneous emission (one cannot sharpen a line any farther than permitted by the spontaneous emission lifetime). When I read the Modtran documentation on the subject, it appears that this additional suppression of the same-species lineshape is neglected — although I have not looked at the source code to be sure. Either way, this is an additional factor of 0.001 (or less), more than enough to completely suppress any additional broadening of CO_2 lines other than what they already have not from the partial pressure of CO_2 but from the absolute pressure of the bulk atmosphere. Partial pressure induced variations in atmospheric absorptivity should be literally undetectably different from the general variability in absorptivity brought about by baseline atmospheric pressure that varies locally by several orders of magnitude more than total CO_2 partial pressure at any location on as little as an hourly basis, plus the overall large scale modulation due to water vapor.

This doesn’t mean that CO_2 is not a greenhouse gas — quite the opposite. It does mean that there is very, very little variation in its functioning as a greenhouse gas with partial pressure as long as the partial pressure is less than perhaps 1% of the total, at least as far as its base radiative properties (absorptivity of its bands) are concerned. Those bands are utterly dominated by CO_2 colliding with N_2, O_2, H_2O and Argon, in order and almost all of the lineshape is due to the time between impact-approximation delta-correlated collisions with any of these species, not any sort of “slow” species-species interaction.

As a consequence, I’ve simply never understood what people mean when they assert that there is some sort of pressure broadening contribution to the expected GHE due to increasing CO_2. No, there is not. There is an effect due to increased concentration and a reduced mean free path of IR photons, but this effect is known to be extremely weak as it is long since saturated. I’m curious as to just how much R&C predict that the pressure of the tropopause should change if integrated linewidths do not change but concentration does, as to me it seems likely that pressure broadening changes due to increasing CO_2 concentration is utterly negligible, and would still almost certainly be negligible as concentration approached 1% (as 0.01^2 = 0.0001 — a 1% effect from same species absorptivity as a fraction of all absorptivity, suppressed by a factor of 0.01 due to the fact that only one in a hundred collisions is between the same species).”

What this says is that the greenhouse effect is saturated in the CO2 bands and the claims that pressure broadening extends the effect are essentially false. Pressure broadening has only a minor effect. This means the base claim of climate science that CO2 has a 1 C warming effect from doubling is completely false. At current concentrations the base effect is probably less than 0.1 C.

Once this is accounted for, feedback becomes irrelevant. This is likely what led to the findings of the Connollys.
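
For readers who want the scaling argument in RGB’s quote as plain numbers, here is a minimal sketch that simply restates its arithmetic (the q values and the q-squared suppression are taken from the quote; this is back-of-envelope scaling, not a radiative transfer calculation):

```python
# Restating the numbers from the quoted comment; back-of-envelope scaling only.
q_before, q_after = 0.0003, 0.0006    # CO2 mole fractions used in the quote (a doubling)

# The fraction of CO2 collisions that are CO2-on-CO2 scales roughly with q itself.
print(f"same-species collision fraction: {q_before:.4f} -> {q_after:.4f}")

# With the extra suppression argued in the quote (the relevant time is between
# *same-species* collisions), the relative contribution scales roughly as q squared.
print(f"q^2 contribution: {q_before**2:.1e} -> {q_after**2:.1e}")

# The 1% thought experiment from the quote: 0.01 squared is 0.0001.
print(f"at a 1% mole fraction: q^2 = {0.01**2}")
```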

Reply to  Richard M
October 31, 2019 7:06 pm

Richard M,
the Greenhouse Effect is not just the radiative properties of a class of gaseous molecules; it is also the claim that radiation from these atmospheric gases causes heating of the Earth’s surface, meaning that a part of the Earth’s emitted infrared, reflected back to the Earth’s surface, causes an increase in its temperature. Surely this is nonsense. We all know that the temperature of an object can only be raised by receiving radiation from a source of higher temperature, and that does not apply to the back-radiation.
If radiation reflected back onto a source caused it to get hotter, then everything would be getting hotter due to the radiation from surrounding objects of the same temperature, and the Universe would have been continually heating. As far as I am aware, there has been no sign of such heating during the existence of the Solar System or longer.

October 31, 2019 2:55 am

Quote “The ‘experts’ are currently stymied by the latest round of CMIP6 climate model simulations, where about half of them (so far) have equilibrium climate sensitivity values exceeding 4.7C – well outside the bounds of long-established likely range of 1.5-4.5C. “

Where and when did this notion of climate sensitivity arise? Analysis of UAH satellite lower troposphere temperature with respect to station CO2 concentration gave the following results from the application of a First Order Autoregression Model:

Mauna Loa Observatory: correlation coefficient of 0.036 with 466 degrees of freedom and a t-statistic of 0.77 implying a probability of 44% that the correlation coefficient is equal to zero from the two-sided t-test.

Macquarie Island, Southern Ocean: correlation coefficient of -0.009, 308 deg. of free., t-statistic -0.15, probability of zero correlation 88%.

Mt Waliguan, Tibetan Plateau: correlation coefficient of -0.13, 302 deg. of free., t-statistic -2.30, probability of zero correlation 2.2%.

Point Barrow, Alaska: correlation coefficient of 0.06, 462 deg. of free., t-statistic 1.23, probability of zero correlation 22.1%.

South Pole Station: correlation coefficient of 0.007, 454 deg. of free., t-statistic 0.15, probability of zero correlation 88%.

Cape Grim, Tasmania: correlation coefficient of 0.018, 462 deg. of free., t-statistic 0.39, probability of zero correlation 70%.

My conclusion: the temperature is independent of the CO2 concentration so the term ‘climate sensitivity’ with respect to the two variables is meaningless and no such factor should be included in any climate model.
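
For reference, a minimal sketch of the significance test implied by these figures: given a correlation coefficient and its degrees of freedom, the t-statistic and two-sided p-value follow from the standard formula. This simple version omits whatever AR(1) adjustment was applied to the degrees of freedom, so it only approximately reproduces the quoted numbers.

```python
# Given a correlation coefficient r and its degrees of freedom, compute the
# t-statistic and the two-sided probability that the true correlation is zero.
from scipy import stats

def correlation_t_test(r, dof):
    t = r * (dof ** 0.5) / (1.0 - r * r) ** 0.5
    p_two_sided = 2.0 * stats.t.sf(abs(t), dof)
    return t, p_two_sided

# Mauna Loa figures from the comment above: r = 0.036 with 466 degrees of freedom
t_stat, p_value = correlation_t_test(0.036, 466)
print(f"t = {t_stat:.2f}, two-sided p = {p_value:.0%}")
```

Run on the Mauna Loa figures, this gives roughly t ≈ 0.78 and p ≈ 44%, close to the 0.77 and 44% quoted above.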

October 31, 2019 4:19 am

I was wondering why the push to petaflops:
“Exploring the furthest reaches of model-land in fact is a very productive career strategy, since it is limited only by the available computational resource.”

That covers Big Data, Climate, Deep Learning, Total Surveillance, the Big Bang.
The only question is who got that first?

Doug Huffman
October 31, 2019 4:24 am

The Hawkmoth needs to beware of the Black Swan lest it be gobbled up.

US Thanksgiving approaches, so it is time to retell N. N. Taleb’s Thanksgiving Turkey story of the Black Swan’s effect. A farmer had a prize turkey. The turkey thought this the best of all possible worlds: a cage against the foxes and daily feeding for 999 days, sure proof. Then Thanksgiving arrived and the farmer approached his prize turkey with his dispatch axe. Beware the Black Swan hiding in the fractal complexity of reality – just over the horizon of the next sunrise.

Sara
October 31, 2019 5:11 am

Model Land ignores Chaos Factor. There is no Chaos in Model Land because it is not programmable.

In the Real World, Chaos Factor runs things, and no matter how close the Weatherbirds are to real results with their forecasts, Chaos Factor still rules.

Not meaning to change the subject, but the weather forecast for yesterday was snow mixed with rain, but we got only rain. Snow fell where it was colder than in my Kingdom, by only a few degrees. Snow started early this AM, around 4AM, and continues, on the last day of October (Hallowe’en, Samhain, All Hallows Eve), exactly six months after the last snow fell on April 30. If this gap of time reoccurs next year and is perhaps shorter, it would be common sense to say “things are changing, we must be prepared for it”.

Weather forecasting seems to take the Chaos Factor into account – guess I’m not off topic, after all – but climate modelling does not. Without the Chaos Factor, which is not controllable (unless you’re babysitting 3 & 4-year-olds), Model Land is Totally Bogus!!!!

don rady
October 31, 2019 6:01 am

to dumb this down:

there are too many RANDOM effects within the climate system to get climate computer models correct.

For example, it is almost impossible to predict clouds a few days out, let alone years down the road. Add volcanoes, solar activity, ocean currents and lightning: all things man can’t predict with accuracy in the future.

Thus computer models for future world temperatures are most likely not accurate.

TheLastDemocrat
October 31, 2019 6:21 am

When we do science or scholarly writing, we need to be careful with “causal language.” If we follow a huge cohort of people across decades, and find a slight mathematical relation between coffee drinking and dementia, we have to be careful in how we present and explain this. This type of relation might be a clue to a genuine phenomenon, but it is only a clue. If we say, “coffee consumption leads to dementia,” we have made a logical, and philosophical, mistake. We are wrong. We can say “coffee consumption had an association with dementia.”

We have a similar challenge with mathematical models. I have heard this since college days, and have heard it a lot. It is the mistaken idea that these natural processes operate by programming or guidelines that inherently have these mathematical functions at their heart. This is wrong, as far as we can know, logically and philosophically.

We do NOT know that any phenomenon in Nature, such as cancer recurrence, is “driven” by a Poisson Distribution. We simply do not.

We may gather data on cancer recurrence, and find that a Poisson distribution can be used to explain the pattern of data fairly well. That does not mean that Nature is guided by this mathematical operation.

But, we get lured into thinking that we can discover the underlying mathematical function guiding phenomena in Nature. As far as we know, Nature does not operate by mathematical functions.
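
A minimal sketch of that point, using made-up count data (not anyone’s actual cancer-recurrence records): counts generated by a binomial process can be described quite well by a fitted Poisson, which shows that a good fit is a description of the data, not evidence about the mechanism behind it.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Counts generated by a binomial process (100 trials, small success probability),
# i.e. NOT drawn from a Poisson distribution.
counts = rng.binomial(n=100, p=0.1, size=5000)

lam = counts.mean()                          # the Poisson MLE is just the sample mean
ks = np.arange(0, 21)
observed = np.bincount(counts, minlength=21)[:21] / counts.size
fitted = stats.poisson.pmf(ks, lam)

for k in range(5, 16):
    print(f"k={k:2d}  observed={observed[k]:.3f}  fitted Poisson={fitted[k]:.3f}")
```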

If we play a video game that presents occasional phenomena, we are pretty sure that there IS a mathematical model directing when the phenomenon shows up. If I have space aliens showing up “randomly” and I have to shoot them, they are popping up by some mathematical algorithm. If I play a fishing game, the fish show up by a mathematical algorithm. If I play a “casino” game, then same.

But Nature is not operating this way.

Just as we have to be careful with letting ‘causal” language creep into our thinking and speaking when investigating possible cause-and-effect associations, we have to be careful and not allow our minds to start thinking these models are how Nature actually operates.

I hear modelers speak in these terms fairly regularly, though. Not good.

TheLastDemocrat
October 31, 2019 6:37 am

“Due to the Hawkmoth Effect, it is possible that even a good approximation to the equations of the climate system may not give output which accurately reflects the future climate.”

The climate does not operate according to equations. That is an utterly wrong way to look at things.

All models are wrong, but some are useful.

The map is not the terrain.

October 31, 2019 7:10 am

My particular experience writing ‘models’ was in the financial world.

Even single-purpose models projecting workload volumes had an extremely short shelf life. Literally days.
Which meant I ran some models daily, using actual data through yesterday.

Chaotic variables, in my case human decisions and experiences, drastically affect workloads. Changes due to human factors can be roughly estimated from historical data, but are never simulated with any accuracy.

Workhour and workhour cost estimates were trash as they came off the printer. Workhour costs depend upon workloads and upon managers/supervisors applying labor to properly process the workload. A few bad decisions quickly magnify workhour costs.
All that a daily workhour cost model displayed of value was how bad workhour usage had been through yesterday. Even then, payroll adjustments took at least three days until a day’s workhour costs were roughly accurate.
Bad messages to deliver to the bosses, especially as they didn’t want to hear the caveats.

Simple models, using excellent highly detailed historical data.

Modeling climate is not simple. Available data is not highly detailed and often is of questionable accuracy.
Modifying (adjusting) historical data to feed claimed simulations of climate is a travesty.

Being proud of forcing an immensely complex model, simulating an extremely large and nearly infinitely complex situation, to meet the modeler’s assumptions is sheer hubris.

October 31, 2019 7:15 am

I am amused that the graphic’s central black spot resembles a Pac-Man.
Very apropos!

The Thompson and Smith article here is well done, and Dr. Curry’s posting and description are masterful.

Carlo, Monte
October 31, 2019 7:53 am

From my point-of-view, the climate models are just grandiose exercises in extrapolation (and there is good evidence they are nothing but first-order regression, see P. Frank) — scores of variables are empirically adjusted so that the output resembles available data, then they are run into the future to see what happens. As anyone familiar with basic statistical linear regression can tell you (dependent variable Y on independent variable X), a linear regression is pretty much useless except inside the interval of the X data. And extrapolations from higher-order polynomial regressions, such as 3rd, 4th, 5th, etc., are typically wildly wrong.

The notion of looking at the standard deviation of an “ensemble” of different models and calculating a confidence interval of future results is absurd. Again, basic statistics tells us that the standard deviation of a single linear regression blows up outside of the X data interval. The various models are certainly not random samplings as they are completely independent of each other. Thus, no probability distribution can be determined (or even defined), which is required for transforming a standard deviation into a confidence interval, such as a 95% two-sigma interval.
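
A minimal sketch of the extrapolation point, with synthetic data: polynomial fits of different order agree inside the range of the X data and then diverge, often wildly, once evaluated outside it.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 50)
y = 0.5 * x + rng.normal(scale=1.0, size=x.size)   # true process: a mild linear trend plus noise

for degree in (1, 3, 5):
    coeffs = np.polyfit(x, y, degree)
    inside = np.polyval(coeffs, 5.0)     # interpolation, inside the data interval
    outside = np.polyval(coeffs, 20.0)   # extrapolation, well outside it
    print(f"degree {degree}: value at x=5 is {inside:7.2f}, "
          f"extrapolated value at x=20 is {outside:12.2f}")
```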

Reply to  Carlo, Monte
October 31, 2019 9:25 am

Carlo –> “The various models are certainly not random samplings as they are completely independent of each other. Thus, no probability distribution can be determined (or even defined), which is required for transforming a standard deviation into a confidence interval, such as a 95% two-sigma interval.”

You have just hit upon something significant that I am working on with the actual temperature data. Each temperature reading is basically a stand-alone population of 1. It has no probability distribution associated with it, so you cannot use it as a “sample” to create an uncertainty-of-the-mean calculation. At best, any averaging simply carries with it the uncertainty of each individual temperature reading taken. You can “create” a population from the individual readings, but calculating a standard deviation from this doesn’t remove any uncertainty either.

Carlo, Monte
Reply to  Jim Gorman
October 31, 2019 1:08 pm

“Each temperature reading is basically a stand-alone population of 1.”

I agree completely; in terms of the Guide to the Expression of Uncertainty in Measurement (the BIPM GUM), this kind of situation is handled by assuming a population distribution (the Type B) based on “other than standard deviations”, using experience and engineering judgement. Many times these come down to a rectangular distribution between upper and lower limits, within which the result is estimated to lie anywhere with equal probability. For temperature, a Type B uncertainty could then be expressed as +/- 3C, for example (the GUM tells how to convert an interval like this into an uncertainty).

And the sigma/root(n) expression for an uncertainty is only valid if the n different measurements are all made under identical conditions. If the temperature being measured is constantly changing with time, n can only be equal to 1.
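
A minimal sketch of the GUM conversion mentioned above, assuming the ±3 C example: for a rectangular (Type B) distribution of half-width a, the standard uncertainty is a divided by the square root of 3.

```python
import math

def rectangular_standard_uncertainty(half_width):
    """GUM Type B: a rectangular distribution of half-width a has u = a / sqrt(3)."""
    return half_width / math.sqrt(3)

u = rectangular_standard_uncertainty(3.0)    # the +/- 3 C example above
print(f"standard uncertainty = {u:.2f} C")   # about 1.73 C

# Note: the sigma/sqrt(n) reduction only applies to n repeated measurements of the
# same quantity under the same conditions; for a single reading, n stays 1.
```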

Reply to  Carlo, Monte
October 31, 2019 4:48 pm

You’re repeating some of the things I am working on in a multipart post.

Carlo, Monte
Reply to  Jim Gorman
October 31, 2019 6:08 pm

I shall be looking forward to reading them.

icisil
October 31, 2019 9:24 am

“It is comfortable for researchers to remain in model-land as far as possible …”

Of course it is. Just like a lot of video gamers prefer the comfort of their virtual world to working a real job. The unknowns of life are uncomfortable; virtual reality is fun.

Michael Carter
October 31, 2019 9:56 am

It has just occurred to me that there is potential to expose the weakness of the AGW paradigm through an interesting exercise, i.e. “proving” that increasing atmospheric CO2 levels result in global cooling.

Suppose for the moment that cooling of 1 C had occurred over the last century. Would the alarmists have followed the same path to substantiate their agenda? I believe they would have, and would have found as much evidence (complete with models) as that supporting their existing theory on warming.

Here is an interesting exercise for the physicists out there. I expect that you would work with water vapour, cloud, latent heat, plus vegetation, photosynthetic organisms and the impact of marine temperature.

I am picking that, with some work, someone can come up with a theory just as robust and with just as much evidence as the AGW one, exposing the bunk for what it is. There is plenty of substantiating literature out there to cherry-pick. One would follow exactly the IPCC method but with different key search terms.

It requires a mind-set reversal. The climate is cooling and will continue to do so catastrophically due to increasing CO2. Save the planet!

Sometime in the future there may well be exactly this situation, after negative feedbacks really kick in and over-compensate. Patterns in natural systems indicate they will, at least for a period of time.

Consider the exercise a computer game 🙂

M

Beta Blocker
October 31, 2019 10:14 am

My comment here on WUWT is reposted from the remarks I made over on Judith Curry’s blog.
—————————————————-

Another year has gone by and it’s now the Fall of 2019. But winter will be here in another month. Having escaped the mountainous snow country of my youth for the dry boring flatlands of the US Northwest, I can’t say I miss it.

However, it’s time once again to put up ‘Beta Blocker’s Parallel Offset Universe Climate Model’, a graphical GMT prediction tool first posted on Climate Etc. and on WUWT in the summer of 2015.

Judith Curry’s blog post ‘Escape from model land’ seems like an appropriate place for my annual repost of this graph here on WUWT. So here it is:

Beta Blocker’s Parallel Offset Universe Climate Model

Referring to the illustration, three alternative GMT prediction scenarios for the year 2100 are presented on the same graphic.

Scenario #1 predicts a +3C rise in GMT by the year 2100 from the year 2015, roughly equivalent to a +4C rise from the year 1860, which should be considered the pre-industrial baseline year for this graphical analysis.

Scenario #2 predicts a +2C rise from 2015, roughly equivalent to a +3C rise from 1860.

Scenario #3 predicts a +1C rise from 2015, roughly equivalent to a +2C rise from 1860.

The above illustration is completely self-contained. Nothing is present which can’t be inferred or deduced from something else also contained in the illustration.

For example, for Beta Blocker’s Scenario #1, the rise in GMT of +0.35 C per decade is nothing more than a line which starts at 2016 and is drawn graphically parallel to the rate of increase in CO2 in the post-2016 timeframe. Scenario #1’s basic assumption is that “GMT follows CO2 from Year 2016 forward.”

Beta Blocker’s Scenario #2 parallels Scenario #1 but delays the start of the strong upward rise in GMT through use of an intermediate slower rate of warming between 2025 and 2060 that is also common to Scenario #3. Scenario #2’s basic assumption is that “GMT follows CO2 but with occasional pauses.”

Beta Blocker’s Scenario #3 is simply the repeated pattern of the upward rise in GMT which occurred between 1860 and 2015. That pattern is reflected into the 2016–2100 timeframe, but with adjustments to account for an apparent small increase in the historical upward rise in GMT which occurred between 1970 and 2000.

Scenario #3’s basic assumption is that “Past patterns in the rise of GMT occurring prior to 2015 will repeat themselves from 2016 on through 2100, but with a slight upward turn as the 21st Century progresses.”

That’s it. That’s all there is to it. What could be more simple, eh?
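
As a cross-check, the three scenarios can be reduced to simple straight-line averages from 2016 to 2100; a minimal sketch, deliberately over-simplified since Scenarios #2 and #3 include pauses and repeated historical patterns:

```python
# Treating each scenario's year-2100 target as a straight-line average from 2016.
decades = (2100 - 2016) / 10.0            # 8.4 decades

for label, rise_from_2015 in (("Scenario #1", 3.0),
                              ("Scenario #2", 2.0),
                              ("Scenario #3", 1.0)):
    print(f"{label}: +{rise_from_2015:.0f} C by 2100 is an average of "
          f"{rise_from_2015 / decades:.2f} C per decade")
# Scenario #1 works out to about 0.36 C per decade, consistent with the
# +0.35 C per decade line described above.
```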

All three Beta Blocker scenarios for Year 2100 lie within the IPCC AR5 model boundary range — which, it should also be noted, allows the trend in GMT in the 2000–2030 timeframe to stay essentially flat while still remaining within the error margins of the IPCC AR5 projections. (For all practical purposes, anyway.)

Scenario #3 should be considered as the bottom floor of the three scenarios, which is approximately a two degree C rise from pre-industrial CO2 concentration levels. It is also the scenario I suspect is most likely to occur.

The earth has been warming for more than 150 years. IMHO, the earth won’t stop warming just because some people think we are at or near the top of a long-term natural fluctuation cycle. The thirty-year running average of GMT must decline steadily for a period of thirty years or more before we can be reasonably certain that a long-term reversal of current global warming has actually occurred.

How did Beta Blocker’s Parallel Offset Universe Climate Model come about?

Back in 2015, I had been criticizing the IPCC’s climate models as being a messy hodge-podge of conflicting scientific assumptions and largely assumed physical parameterizations. Someone at work said to me, “If you don’t like the IPCC’s models, why don’t you write your own climate model?”

So I did. However, not having access to millions of dollars of government funding and a well-paid staff of climate scientists and computer programmers to write the modeling code, I decided to do the whole thing graphically. Back in 2015, the illustration you see above took about thirty hours to produce. In October, 2019, I updated its labeling to directly include the 1860 pre-industrial baseline datum.

If I’m still around in the year 2031, I will take some time to update the illustration to reflect the very latest HadCRUT numbers published through 2030, including whatever adjusted numbers the Hadley Centre might publish for the period of 1860 through 2015.

In the meantime, I’ll see you all next year in the fall of 2020 when the topic of ‘Are the IPCC’s models running too hot’ comes around once again.

And, given that the topic of climate change will be an important issue in the 2020 elections — unless it isn’t — then nothing in this world is more certain than that in another year’s time the topic will in fact come around once again.
———————————–

Reply to  Beta Blocker
October 31, 2019 11:28 am

Brilliant piece of work! The obvious question that arises when you see the projections is: when will warming accelerate to twice the current rate to match the predictions? It hasn’t happened yet. The longer we go at the current benign rate of warming, the higher the future acceleration in warming must be to match the predictions. The obvious conclusion: the models are wrong.

Beta Blocker
Reply to  stinkerp
October 31, 2019 2:06 pm

Stinkerp, you note correctly that the longer we go at the current rate of warming, the higher the future acceleration in warming must be to match the IPCC’s predictions.

It seems to me that this characteristic of the IPCC’s models is a factor which ought to be addressed in evaluating the uncertainties of those models, and hence their value and credibility for use in public policy decision making.

Please note as well that the trend lines for each projected temperature rise beyond 2016 are based upon assumed trends of peak hottest years, and are either partially or wholly linearized across the 2016 – 2100 time span.

Where have we heard a lot of discussions recently about how trend linearization affects the level of uncertainty associated with a climate model?

It’s been said that those who control the assumptions control the world.

Beta Blocker’s Parallel Offset Universe climate model is based 100% on assumptions. Change a few of its assumptions and the model changes accordingly, which is how each of the three alternative scenarios is produced.

And yet, the Parallel Offset Universe projections for the year 2100 lie within the boundaries of the IPCC model projections. Does this characteristic of the Beta Blocker model add to its credibility? I suppose that depends on who is looking at the model, and for what reasons.

It’s been my view for some time now that as long as the thirty year running average trend in GMT is above + 0.1 C / decade, then mainstream climate scientists will continue to claim that real-world temperature observations verify the IPCC models.
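
A minimal sketch of that thirty-year running-trend criterion, using synthetic annual anomalies (an actual series such as annual HadCRUT values would be substituted in practice):

```python
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1950, 2020)
# Synthetic annual GMT anomalies: a 0.15 C/decade underlying trend plus noise.
gmt = 0.015 * (years - 1950) + rng.normal(scale=0.1, size=years.size)

window = 30
yr, anom = years[-window:], gmt[-window:]
slope_per_year = np.polyfit(yr, anom, 1)[0]       # least-squares trend over the last 30 years
trend_per_decade = slope_per_year * 10.0

print(f"{yr[0]}-{yr[-1]} trend: {trend_per_decade:+.2f} C per decade")
print("Above the +0.1 C per decade threshold?", trend_per_decade > 0.1)
```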

October 31, 2019 10:39 am

From IPCC AR5 (2013), these pearls:

“…because the climate system is inherently nonlinear and chaotic, predictability of the climate system is inherently limited. Even with arbitrarily accurate models and observations, there may still be limits to the predictability of such a nonlinear system.” (Annex III, p. 1460)

The IPCC AR5 Technical Summary, Box TS.3, p.64, displayed this graphic comparing model outputs to measured temperatures showing how poorly the models perform, validating the statement above:

[Figure from IPCC AR5 Technical Summary, Box TS.3, p. 64: model outputs compared with measured temperatures]

The foundation of all the apocalyptic claims of climate alarmists…er…”scientists” rests on the climate models. Measurements of temperature, sea level rise, ocean “acidity”, extreme weather, etc. contradict the model projections, contradict the findings of related research based on the CMIP models, and contradict all the claims of the alarmists. The climate models are the modern equivalent of haruspicy, though one could reasonably argue that a haruspex may be more accurate. At least the haruspex gets some tasty mutton out of the bargain. All the modelers get is existential angst.

October 31, 2019 10:45 am

“Using expert judgment, informed by the realism of simulations of the past, to define the expected relationship of model with reality and critically, to be very clear on the known limitations of today’s models and the likelihood of solving them in the near term, for the questions of interest.”

Does not parse. Perhaps the last phrase should be: “…form the questions of interest.”

Stevek
October 31, 2019 2:43 pm

The models are not completely useless; they make very good random number generators.

October 31, 2019 3:58 pm

The failure of most models has been picking the wrong molecule. Since both water vapor and CO2 have been accurately measured worldwide, the increase in water vapor molecules has been about 37 times more effective at global warming than the increase in CO2 molecules.

Richard
November 1, 2019 7:19 am

Judith Curry is wonderful, and I greatly appreciate her insight. However, she is missing the real point. The world’s politicians do not want better-informed decision-making. They see an opportunity to gain absolute control over the unwashed masses, and they’re seizing it with zeal.