By Andy May
There are three types of scientific models, as shown in figure 1. In this series of seven posts on climate model bias we are only concerned with two of them. The first are mathematical models that utilize well-established physical and chemical processes and principles to model some part of our reality, especially the climate and the economy. The second are conceptual models that utilize scientific hypotheses and assumptions to propose an idea of how something, such as the climate, works. Conceptual models are generally tested, and hopefully validated, by creating a mathematical model. The output from the mathematical model is compared to observations, and if the output matches the observations closely, the model is validated. It isn’t proven, but it is shown to be useful, and the conceptual model gains credibility.

Models are useful for decomposing a complex natural system, such as Earth’s climate, or some portion of the system, into its underlying components and drivers. Models can be used to try to determine which of the system components and drivers are the most important under various model scenarios.
Besides being used to predict the future, or a possible future, good models should also tell us what should not happen in the future. If those events indeed do not occur, that adds support to the hypothesis. These are the tasks that the climate models created by the Coupled Model Intercomparison Project (CMIP)[1] are designed to do. The Intergovernmental Panel on Climate Change (IPCC)[2] analyzes the CMIP model results, along with other peer-reviewed research, and attempts to explain modern global warming in their reports. The most recent IPCC report is called AR6.[3]
In the context of climate change, especially regarding the AR6 IPCC[4] report, the term “model” is often used as an abbreviation for a general circulation climate model.[5] Modern computerized general circulation models have been around since the 1960s and are now huge computer programs that can run for days or longer on powerful computers. However, climate modeling has been around for more than a century, well before computers were invented. Later in this report I will briefly discuss a 19th century greenhouse gas climate model developed and published by Svante Arrhenius.
Besides modeling climate change, AR6 contains descriptions of socio-economic models that attempt to predict the impact of selected climate changes on society and the economy. In a sense, AR6, just like the previous assessment reports, is a presentation of the results of the latest iteration of their scientific models of future climate and their models of the impact of possible future climates on humanity.
Introduction
Modern atmospheric general circulation computerized climate models were first introduced in the 1960s by Syukuro Manabe and colleagues.[6] These models, and their descendants, can be useful, even though they are clearly oversimplifications of nature and are wrong[7] in many respects, like all models.[8] It is a shame that climate model results are so often conflated with observations by the media and the public, when they are anything but.
I began writing scientific models of rocks[9] and programming them for computers in the 1970s, and like all modelers of that era I was heavily influenced by George Box, the famous University of Wisconsin statistician. Box teaches us that all models are developed iteratively.[10] First we make assumptions and build a conceptual model of how some natural, economic, or other system works and what influences it, then we model some part of it, or the whole system. The model results are then compared to observations. There will typically be a difference between the model results and the observations. These differences are assumed to be due to model error, since we necessarily assume, at least initially, that our observations have no error. We examine the errors, adjust the model parameters or the model assumptions, or both, run the model again, and again examine the errors. This “learning” process is the main benefit of models. Box tells us that good scientists must have the flexibility and courage to seek out, recognize, and exploit such errors, especially any errors in the conceptual model assumptions. Modeling nature is how we learn how nature works.
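To make the loop concrete, here is a toy sketch of the iterate-compare-adjust cycle just described: a single-parameter model fitted to made-up observations. Nothing here comes from an actual climate or petrophysical model; it only illustrates the process.

```python
import numpy as np

# Hypothetical "observations" and a one-parameter conceptual model y = k * x.
x_obs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y_obs = np.array([2.1, 3.9, 6.2, 8.0, 9.8])

k = 1.0  # initial guess embodied in the conceptual model
for iteration in range(20):
    y_model = k * x_obs                  # run the model
    residuals = y_obs - y_model          # compare to observations
    # Examine the errors and adjust the parameter (a simple gradient step).
    k += 0.01 * np.sum(residuals * x_obs)
    print(f"iter {iteration:2d}  k = {k:.3f}  "
          f"rms error = {np.sqrt(np.mean(residuals**2)):.3f}")

# The learning is in watching how the residuals shrink (or refuse to),
# which is what tells you whether the conceptual model itself needs revision.
```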
Box next advises us that “we should not fall in love with our models,” and “since all models are wrong the scientists cannot obtain a ‘correct’ one by excessive elaboration.” I used to explain this principle to other modelers more crudely by pointing out that if you polish a turd, it is still a turd. One must recognize when a model has gone as far as it can go. At some point it is done; more data, more elaborate programming, and more complicated assumptions cannot save it. The benefit of the model is what you learned building it, not the model itself. When the inevitable endpoint is reached, you must trash the model and start over by building a new conceptual model. A new model will have a new set of assumptions based on the “learnings” from the old model, and on other new data and observations gathered in the meantime.
Each IPCC report, since the first one was published in 1990,[11] is a single iteration of the same overall conceptual model. In this case, the “conceptual model” is the idea or hypothesis that humans control the climate (or perhaps just the rate of global warming) with our greenhouse gas emissions.[12] Various and more detailed computerized models are built to attempt to measure the impact of human emissions on Earth’s climate.
Another key assumption in the IPCC model is that climate change is dangerous, and, as a result, we must mitigate (reduce) fossil fuel use to reduce or prevent damage to society from climate change. Finally, they assume a key metric of this global climate change or warming is the climate sensitivity to human-caused increases in CO2. This sensitivity can be computed with models or using measurements of changes in atmospheric CO2 and global average surface temperature. The IPCC equates changes in global average surface temperature to “climate change.”
This climate sensitivity metric is often called “ECS,” which stands for equilibrium climate sensitivity to a doubling of CO2, often abbreviated as “2xCO2.”[13] Modern climate models, ever since those used for the famous Charney report in 1979,[14] except for AR6, have generated a range of ECS values from 1.5 to 4.5°C per 2xCO2. AR6 uses a rather unusual and complex subjective model that results in a range of 2.5 to 4°C/2xCO2. More about this later in the report.
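For readers who want the arithmetic behind “°C per 2xCO2,” here is a minimal sketch that assumes the standard logarithmic scaling of warming with CO2 concentration; the baseline concentration and the ECS values plugged in are illustrative only.

```python
import math

def warming_from_co2(c_ppm, c0_ppm=280.0, ecs=3.0):
    """Equilibrium warming implied by an assumed ECS (degrees C per doubling of CO2).

    Uses the standard logarithmic scaling: each doubling of concentration is
    assumed to contribute the same warming, so the total is ECS * log2(C/C0).
    The 280 ppm baseline and the ECS values below are illustrative only.
    """
    return ecs * math.log2(c_ppm / c0_ppm)

# One doubling (280 -> 560 ppm) at the ends of the 1.5-4.5 degree range quoted above:
print(warming_from_co2(560.0, ecs=1.5))   # 1.5
print(warming_from_co2(560.0, ecs=4.5))   # 4.5
# A non-doubled example: warming at 420 ppm with an assumed ECS of 3.0
print(round(warming_from_co2(420.0), 2))  # about 1.75
```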
George Box warns modelers that:
“Just as the ability to devise simple but evocative models is the signature of the great scientist so overelaboration and overparameterization is often the mark of mediocrity.”[15]
Box, 1976
The Intergovernmental Panel on Climate Change or IPCC has published six major reports and numerous minor reports since 1990.[16] Here we will argue that they have spent more than thirty years polishing the turd to little effect. They have come up with more and more elaborate processes to try to save their hypothesis that human-generated greenhouse gases have caused recent climate changes and that the Sun and internal variations within Earth’s climate system have had little to no effect. As we will show, new climate science discoveries made since 1990 are not explained by the IPCC models and do not show up in the model output, and newly discovered climate processes, especially important ocean oscillations, are not incorporated into them.
Consider just one example. Eade, et al. report that the modern general circulation climate models used for the AR5 and AR6 reports[17] do not reproduce the important North Atlantic Oscillation (“NAO”). The NAO-like signal that the models produce in their simulation runs[18] is indistinguishable from random white noise. Eade, et al. report:
“This suggests that current climate models do not fully represent important aspects of the mechanism for low frequency variability of the NAO.”[19]
Eade, et al., 2022
All the models in AR6, both climate and socio-economic, have important model/observation mismatches. As time has gone on, the modelers and authors have continued to ignore new developments in climate science and climate change economics, as their “overelaboration and overparameterization” has become more extreme. As they make their models more elaborate, they progressively ignore more new data and discoveries to decrease their apparent “uncertainty” and increase their reported “confidence” that humans drive climate change. It is a false confidence that is due to the confirmation and reporting bias in both the models and the reports.
As I reviewed all six of the major IPCC reports, I became convinced that AR6 is the most biased of all of them.[20] In a major new book twelve colleagues and I, working under the Clintel[21] umbrella, examined AR6 and detailed considerable evidence of bias.
From the Epilog[22] of the Clintel book:
“AR6 states that “there has been negligible long-term influence from solar activity and volcanoes,”[23] and acknowledges no other natural influence on multidecadal climate change despite … recent discoveries, a true case of tunnel vision.”
“We were promised IPCC reports that would objectively report on the peer-reviewed scientific literature, yet we find numerous examples where important research was ignored. In Ross McKitrick’s chapter[24] on the “hot spot,” he lists many important papers that are not even mentioned in AR6. Marcel [Crok] gives examples where unreasonable emissions scenarios are used to frighten the public in his chapter on scenarios,[25] and examples of hiding good news in his chapter on extreme weather events.[26] Numerous other examples are documented in other chapters. These deliberate omissions and distortions of the truth do not speak well for the IPCC, reform of the institution is desperately needed.”
Crok and May, 2023
Confirmation[27] and reporting bias[28] are very common in AR6. We also find examples of the Dunning-Kruger effect,[29] in-group bias,[30] and anchoring bias.[31]
In 2010, the InterAcademy Council, at the request of the United Nations, reviewed the processes and procedures of the IPCC and found many problems.[32] In particular, they criticized the subjective way that uncertainty is handled. They also criticized the obvious confirmation bias in the IPCC reports.[33] They pointed out that the Lead Authors too often leave out dissenting views or references to papers they disagree with. The Council recommended that alternative views should be mentioned and cited in the report. Even though these criticisms were voiced in 2010, my colleagues and I found numerous examples of these problems in AR6, published eleven years later in 2021 and 2022.[34]
Although bias pervades AR6, this series will focus mainly on bias in the AR6 volume 1 (WGI) CMIP6[35] climate models that are used to predict future climate. However, we will also look at the models used to identify and quantify climate change impacts in volume 2 (WGII), and to compute the cost/benefit analysis of their recommended mitigation (fossil fuel reduction) measures in volume 3 (WGIII). As a former petrophysical modeler, I am aware of how bias can sneak into a computer model; sometimes the modeler is aware he is introducing bias into the results, and sometimes he is not. Bias exists in all models, since they are all built from assumptions and ideas (the “conceptual model”), but a good modeler will do his best to minimize it.
In the next six posts I will take you through some of the evidence of bias I found in the CMIP6 models and the AR6 report. A 30,000-foot look at the history of human-caused climate change modeling is given in part 2. Evidence that the IPCC has ignored possible solar influence on climate is presented in part 3. The IPCC ignores evidence that changes in convection and circulation patterns in the oceans and atmosphere affect climate change on multidecadal timescales, and this is examined in part 4.
Contrary to the common narrative, there is considerable evidence that storminess (extreme weather) was higher in the Little Ice Age, aka the “pre-industrial” (part 5). Next, we move on to examine bias in the IPCC AR6 WGII report[36] on the impact, adaptation, and vulnerability to climate change in part 6 and in their report[37] on how to mitigate climate change in part 7.
Download the bibliography here.
https://wcrp-cmip.org/ ↑
https://www.ipcc.ch/ ↑
(IPCC, 2021) ↑
IPCC is an abbreviation for the Intergovernmental Panel on Climate Change, a U.N. agency. AR6 is their sixth major report on climate change, “Assessment Report 6.” ↑
There are several names for climate models, including atmosphere-ocean general circulation model (AOGCM, used in AR5), or Earth system model (ESM, used in AR6). Besides these complicated computer climate models there are other models used in AR6, some model energy flows, the impact of climate change on society or the global economy, or the impact of various greenhouse gas mitigation efforts. We only discuss some of these models in this report. (IPCC, 2021, p. 2223) ↑
(Manabe & Bryan, Climate Calculations with a Combined Ocean-Atmosphere Model, 1969), (Manabe & Wetherald, The Effects of Doubling the CO2 Concentration on the Climate of a General Circulation Model, 1975) ↑
(McKitrick & Christy, A Test of the Tropical 200- to 300-hPa Warming Rate in Climate Models, Earth and Space Science, 2018) and (McKitrick & Christy, 2020) ↑
(Box, 1976) ↑
Called petrophysical models. ↑
(Box, 1976) ↑
(IPCC, 1990) ↑
“The Intergovernmental Panel on Climate Change (IPCC) assesses the scientific, technical and socioeconomic information relevant for the understanding of the risk of human-induced climate change.” (UNFCCC, 2020). ↑
Usually, ECS means equilibrium climate sensitivity, or the ultimate change in surface temperature due to a doubling of CO2, but in AR6 they sometimes refer to “Effective Climate Sensitivity,” or the “effective ECS,” which is defined as the warming after a specified number of years (IPCC, 2021, pp. 931-933). AR6, WGI, page 933 has a more complete definition. ↑
(Charney, et al., 1979) ↑
(Box, 1976) ↑
See https://www.ipcc.ch/reports/ ↑
CMIP5 and CMIP6 are the models used in AR5 and AR6 IPCC reports, respectively. ↑
(Eade, Stephenson, & Scaife, 2022) ↑
(Eade, Stephenson, & Scaife, 2022) ↑
(May, Is AR6 the worst and most biased IPCC Report?, 2023c; May, The IPCC AR6 Report Erases the Holocene, 2023d) ↑
https://clintel.org/ ↑
(Crok & May, 2023, pp. 170-172) ↑
AR6, page 67. ↑
(Crok & May, 2023, pp. 108-113) ↑
(Crok & May, 2023, pp. 118-126) ↑
(Crok & May, 2023, pp. 140-149) ↑
Confirmation bias: The tendency to look only for data that supports a previously held belief. It also means all new data is interpreted in a way that supports a prior belief. Wikipedia has a fairly good article on common cognitive biases. ↑
Reporting bias: In this context it means only reporting or publishing results that favor a previously held belief and censoring or ignoring results that show the belief is questionable. ↑
The Dunning-Kruger effect is the tendency to overestimate one’s abilities in a particular subject. In this context we see climate modelers, who call themselves “climate scientists,” overestimate their knowledge of paleoclimatology, atmospheric sciences, and atomic physics. ↑
In-group bias causes lead authors and editors to choose their authors and research papers from their associates and friends who share their beliefs. ↑
Anchoring bias occurs when an early result or calculation, for example Svante Arrhenius’ ECS (climate sensitivity to CO2) of 4°C, discussed below, gets fixed in a researcher’s mind and then he “adjusts” his thinking and data interpretation to always come close to that value, while ignoring contrary data. ↑
(InterAcademy Council, 2010) ↑
(InterAcademy Council, 2010, pp. 17-18) ↑
(Crok & May, 2023) ↑
https://wcrp-cmip.org/cmip-phase-6-cmip6/ ↑
(IPCC, 2022) ↑
(IPCC, 2022b) ↑
Andy,
In geology, one of my past companies started out with the Tennant Creek gold, copper, bismuth ore field. Ore deposits were within discrete bodies with much magnetite, enough to disrupt the earth’s magnetic field at the surface. Airborne magnetometers found about 200 of these bodies, so gridded ground work refined them. A model was started before computers, using hand-operated calculators. As the model improved, it was able to estimate the depth to the top of the body, its 3 axial ellipsoidal dimensions and major axes, and some of its magnetic strength properties and directions. Before computers, matching of measurement with model was via profiles drawn on transparent paper overlays. Disagreement to the width of a pencil line mattered. This was but one model that grew the tiny prospecting company into one of Australia’s largest and richest corporations in 40 years.
So, I accept implicitly that models can work in the earth sciences, do work and are required to understand effects too large for the mind to absorb.
We eventually found about 6 new mines from the 200 ironstones on offer at Tennant Creek.
Part of the geophysical skill was to help decide which ironstones could be eliminated as barren, as early and as cheaply as possible.
That is where this model seems to diverge from the methodology used in CMIP climate models, where there seems to be little determination to reject research results and concepts to allow concentration on the best performers. Maybe this is done in CMIP work, but is not emphasised in reporting, so I stand corrected if I have it wrong. (Though I remain unimpressed by CMIP comparisons of predicted versus observed).
Geoff S
This is something I have never understood. In the well known spaghetti charts we have a bunch of models of varying degrees of failure when compared to observations. Why do we not simply throw out the ones that fail and only use the ones that have the best fit to observations? Apparently there is one decent one, the Russian one.
Instead, if I understand it properly, what we do is average all of them, good and bad, and use the results for prediction. It makes no sense.
And, what are the criteria for getting your model included in the spaghetti graph in the first place? And why, if it fails, does it remain included?
You’re dealing with Communists. You’re bringing a letter opener to a gun fight. These people hate you, me, everyone, and freedom. They are in it for the money–science and logic be damned.
This month Intuitive Machines landed a probe on the moon, using a model based on Laws of Motion and Celestial Mechanics discovered centuries ago, as close to settled science as it gets. They didn’t write multiple models and “simply throw out the ones that fail and only use the ones that have the best fit”.
The ongoing destruction of our energy systems is possible due to the widespread ignorance of the true complexity of climate systems, but the average person knows rocket science is reliable, although they don’t understand it, and supposes the same of “climate science”.
The model that needs to be “thrown out” is this one, that has brainwashed the masses.
Actually, they did.
Routines were tested against observation. Any error in a routine versus reality meant returning to the routine’s algorithm and coding, then correcting the formulae’s errors.
As routines matured, they were included in the overall model design, with each routine fulfilling part of the mission or mission support.
These were repeatedly tested against all conditions, from miniature versions under lab conditions to full-size tests outside.
Whereas in climate, routines are not tested against observation.
Otherwise that internal CO₂ warming calculation would be removed until ready for production quality.
All this is enabled after the program designers have decided that actually modeling the climate as thirty years of sequential ever changing and surprising weather is much too difficult.
Meaning the programmers decided to focus on some partially understood weather interactions, ignore less understood reactions, e.g., clouds, and subdivide the globe into certain-sized tall columns of weather calculations.
The model is not a model of weather. It is a bunch of estimating guesses that fall into complete error within days after the model is run.
Out of all the alarmist confirmation bias models, the most accurate is the Russian model that lacks CO₂ confirmation bias.
Not a surprise, actually.
More like a confirmation of wrong ideals, erroneous assumptions, eyebrow-high funding, and significant publicity seekers practicing noble-cause corruption and duplicity, compared to the same agency before 1985 CE.
From about the mid 1960s, there were calculator games where one lands a rocket. Failure to mentally calculate retrorocket use meant a crash.
There were papers written to estimate statistic percentages for where in a series of calculations the fatal error was input.
When computers became more common in the 1980s, the retrorocket game moved to the bigger machines, now with visual results.
Advance that simple calculation, with corrective actions taken over 60 years, and you get the moon lander’s code quality of today.
Keep in mind that of the recent spate of moon landings, only one almost landed correctly and Japan’s landed upside down. The rest failed their landing.
Observation must be used to correct every model component.
This is exactly the kind of argument that leads non-scientists to trust that “climate science” is as reliable as the celestial mechanics that predicts solar eclipses centuries into the future, and can therefore predict weather a century into the future.
The problem is ignorance; the solution is to educate, not obfuscate.
Actually the landing was only partly successful; apparently they had problems with the automatic landing program and had to switch to instrument landing, causing the lander to come in at an angle and break one of its legs.
All well and good, but the lander tipped over.
Missed it by that much.
INM CM5 (Russia) is the only CMIP6 model that does not have a tropical troposphere hotspot—a result so important they published a paper on it. It also has an ECS of 1.8, closest to EBM estimates.
Interestingly, one of its parameters, ocean rainfall, was tuned using ARGO estimates that are about twice the CMIP6 average, which means all the rest run hot because of too much water vapor feedback.
Computer games can not scare people without a lot of water vapor feedback. They are designed to scare people. Those pesky Russians seem to have other goals. Someday their INM will be sanctioned and thrown out of CMIP. Not scary enough. It also took a long time for INM to get into CMIP in the first place.
It’s a tough read….but here is some info on the workings of the Canadian CMIP6 program, giving the highest ECS of the CMIP6 models at 5.6
https://climate-scenarios.canada.ca/?page=pred-cmip6-notes
Some interesting statements contained therein:
“The Max Planck Institute for Meteorology Earth System Model version 1.2 (MPI-ESM1.2) experiments were prepared as part of CMIP6.”
”The ECS of both versions of the MPI-ESM1.2 is the same as it was tuned explicitly to 3°K.”
“The Norwegian Earth System Model version 2 (NorESM2) experiments were prepared as part of CMIP6.”
”The atmosphere-land resolution of each aforementioned model is approximately 1° and 2°, respectively. The two resolutions of NorESM2 have very similar ECS values at 2.54°K for NorESM2-LM and 2.50°K for NorESM2-MM. Thus, the higher resolution NorESM2-MM was selected for inclusion into the ensemble of predictor variables”
So….many variables are input based on many “data sets”…CMIP6 models are very far from “first principles” models. If you take the “worst case” as the basis for your next model, you are increasing your bias. Who knows how many CMIP6 models use the Max Planck Institute ESM-1.2, which is based on Mauritsen et al. (2019) and Müller et al. (2018)?
A very big problem is that “experiments” are really computer simulations and “data” is often the output of a predictive computer program rather than a statistical analysis program. The result is a tendency towards “experiments in researchers’ cognitive bias” rather than actual scientific analysis and predictive ability.
And what’s with that 1 or 2 C accuracy? That’s pretty poor, since politicians are moving toward cutting advanced economies by 40% and putting the rest of the world into energy poverty…..using a 2 C criterion. One is led to the possibility of ulterior motives….
The computer games are really trying to predict the climate in a few hundred years. 50 years of observations are not enough to judge which ones are “accurate” … I say any computer game that appears somewhat accurate is just a lucky guess. They all predict whatever they are programmed to predict based on the assumptions input. That is just a computer game with no predictive ability.
The difference is you lost your job if the models were wrong, the climate modellers keep their job based on how scary their results are.
Keep in mind that not one of the models averaged is correct for any serious succession of days.
Everything from models with a little numerical error to models with egregious numerical error is averaged together, and then the alarmist researchers pretend that their numerical chicanery is of use.
Reminder: Model results likely will not become reality, by incredible odds.
Only in climate science can one assume that by averaging all the wrong answers, one will get the right answer.
My suspicion is that the more models and runs that are included, the larger the uncertainty range will grow. It is a variant of the old joke that a man who only owns one watch ‘knows’ what time it is; however, the man who owns many watches is never sure about the time.
+100
The thing never fully divulged is all the little tweaks and constraints they apply to stop the model leaving the rails.
Artificially removing, and not reporting, results which are “obviously wrong” doesn’t mean that the model didn’t produce them.
Excellent – concise and precise.
The CMIP process could not simply throw out the models that were invalidated by observations as this would have ended the modelling funding of dozens of high profile academics and institutions. It would be highly embarrassing both to the modelers and the politicians who threw boatloads of tax-payer funds at them. One might imagine that those whose models were tossed would immediately go about trying to discredit the ones that were kept. The solution – just average all the models – is a compromise that is morally and scientifically wrong.
I have used many engineering models over a 40+ year career that determine how materials and structures will react to loads and stresses in their working life. For example there are several models built into various CAD software packages that will calculate performance of roof and floor trusses. These models are extensively validated by construction and physical testing of modeled trusses. As commercial products these models have gotten better and more capable over time. Even so, there are many cases where a design that meets project requirement based on model outputs must still be physically constructed and tested. If the test results don’t closely match the model predictions the design will not be accepted. This process demands that models be continually reevaluated, errors corrected and uncertainty reduced.
By the way, there are several engineering models that include “finite element” or “finite difference” analysis to determine things like thermal conductivity and stress distribution. This is very similar to the workings of GCMs but without the need for many assumptions and parameterizations. They rely almost entirely on well-established material properties and precise design details. If you did an “intercomparison” of these models you would expect to get virtually identical results. I know, because I have done this myself when deciding on what software package to buy to replace older, less capable products.
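As a toy illustration of the finite-difference idea (a one-dimensional heat-conduction relaxation with made-up values, not any commercial package): two correct implementations of a calculation like this should agree almost exactly, which is why an intercomparison of such engineering models gives virtually identical results.

```python
import numpy as np

# Toy 1-D steady-state heat conduction in a bar with fixed end temperatures.
# Explicit finite-difference relaxation of dT/dt = alpha * d2T/dx2.
n = 21                      # number of nodes
T = np.zeros(n)
T[0], T[-1] = 100.0, 0.0    # boundary conditions (degrees C)
r = 0.25                    # alpha*dt/dx^2, kept below 0.5 for stability

for _ in range(5000):
    T[1:-1] += r * (T[2:] - 2.0 * T[1:-1] + T[:-2])

# Steady state should be a straight line between the two boundary values,
# independent of who wrote the code.
print(np.round(T, 2))
```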
New car designs are “crashed” hundreds of times in a computer before there is a real crash in the lab. As a result, cars are safer and cheaper (than they otherwise would have been).
Assuming by “spaghetti models” you mean the projected storm tracks for tropical cyclones, I don’t think that all of the models are necessarily “averaged” as a straight mathematical calculation, but the ones that are frequently displayed on the charts are certainly accounted for in coming up with a projected storm track and “cone” of probability, based upon past data.
The fact that different models come up with different solutions is not senseless. Even direct measurements of known variables have natural variability, as measured statistically by various statistical models. That’s where “probability” comes into play. Take a die with numbers printed on it (usually six), and the probability of any one number hitting is equal to one divided by the total count of numbered faces (or 1/6 for a typical six-sided die). But you aren’t going to get precisely that outcome when the die is thrown. You can still hit the same number two, three, or more times in a row and the probability has not changed.
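A quick, purely illustrative simulation of that point (numbers and seed are arbitrary):

```python
import random

random.seed(1)
rolls = [random.randint(1, 6) for _ in range(60000)]
for face in range(1, 7):
    # Each long-run frequency lands near 1/6 ~ 0.167
    print(face, round(rolls.count(face) / len(rolls), 3))

# Yet a short run can easily repeat the same face several times in a row
# without the underlying probability changing at all.
print(rolls[:10])
```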
Not all models are created equal, as Andy May is saying. It all depends upon what physical factors are taken into account, limiting factors, assumptions, and biases used that can make the model a better or lesser predictor of outcomes. Models that have little predictive value should be discounted or discarded over time.
All engineering is done by models. All physical laws and theories are models. There’s a tendency among climate skeptics to dismiss all models as BS, and that is no more correct than saying the models are perfect predictors.
To quote George Box (again): “All models are wrong, some are useful.” I think the jury is still out concerning the usefulness of GCMs.
IMHO, the jurors that aren’t in the tank for climate alarmism came in a while back with a finding that the models are unfit for the purpose of projecting future climate conditions. The evidence is manifold –
And finally there’s the fact that projected radiative ‘forcing’ is dwarfed by the radiative impact of cloud error in the models, hence the inescapable conclusion that climate models can not provide meaningful predictions of future climate states.
Well, I don’t disagree. I guess equivocation is the wrong tack to take here.
Once you ask the extra question “Useful to whom?”, things become somewhat clearer.
There’s also “clearer to whom?” Apparently obvious feedback terms are not clear to Mr. Stokes.
The reason climate skeptics dismiss “climate models” is that, unlike the other models you mention, their output is NOT *tested against reality* and modified so that it more closely conforms to it.
The fact that they continue to assume that atmospheric CO2 is the primary temperature driver when no empirical evidence says it is discredits them completely. If an engineering model consistently said something is adequately built, and structures built in accordance with “the model” kept falling down for 30 years while the model kept telling you they shouldn’t, that model would be dismissed too.
The latest climate models are running hotter than ever, while updated ECS numbers that STILL do not account for the NEGATIVE feedbacks of the climate system have been shrinking.
They therefore are deservedly dismissed.
When you refer to the “spaghetti charts,” I’m presuming that you are referring to the ensembles that represent different runs with different initial conditions, and usually different models. Unlike functions that you may have dealt with in calculus that converged to the correct answer, Global Circulation Models don’t do that. Typically, there is no way to determine what models, or runs of particular models, are acceptable until long after the model results have been published. That is, during the time that was being projected. Therefore, they have little utility for predicting the future.
What should be done is to look at the r^2 values of a least-squares regression. The r^2 value tells one what percentage of the variance is explained or predicted by the correlation. When was the last time you saw a stated r^2 value associated with the spaghetti?
Another way of assessing the predictive value of the ensembles is to compute the standard deviation over an approximately linear section (optionally de-trended). I suspect that the standard deviation of all the runs for a given emissions scenario would be so embarrassingly large that nobody would want to call attention to the fact that the average value would be of little utility in making any forecast, prediction, or projection. Take your pick of the synonym. The uncertainty envelope, as estimated by the standard deviation, will probably be two or three orders of magnitude larger than the precision of the temperatures that are measured.
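Both checks are easy to state in code. A minimal sketch with synthetic numbers standing in for real observations and model runs (nothing here is actual CMIP output):

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1990, 2020)
obs = 0.02 * (years - 1990) + rng.normal(0.0, 0.10, years.size)   # synthetic "observations"

# A synthetic "ensemble": each run has its own trend and its own noise.
runs = np.array([t * (years - 1990) + rng.normal(0.0, 0.10, years.size)
                 for t in rng.uniform(0.01, 0.05, 30)])

# 1) r^2 of the ensemble mean against the observations: the fraction of
#    observed variance the correlation actually explains.
mean_run = runs.mean(axis=0)
r = np.corrcoef(mean_run, obs)[0, 1]
print("r^2 =", round(r**2, 3))

# 2) Spread of the runs: standard deviation across the ensemble, year by year.
spread = runs.std(axis=0)
print("mean ensemble std dev =", round(spread.mean(), 3), "degrees")
```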
However, based on the track record over the last few decades, most of the model builders should probably be de-funded, and climatologists should rely on the Russian model(s).
You don’t understand climate science. All measurement uncertainty is random, Gaussian, and cancels.
And, if the spaghetti plots were of absolute temperature instead of “anomalies”, the spaghetti effect would be much, much worse.
“ Why do we not simply throw out the ones that fail and only use the ones that have the best fit to observations? Apparently there is one decent one, the Russian one.”
We don’t because no one knows why any given model does or does not “work”, i.e. produce results that match observations. The important word in that statement is “does.” Just because a model produces results that match existing observations doesn’t mean it will continue to do so in the future. And there’s no way to compare it to future observations until, well, the future.
You are right that averaging all of the models makes no sense. That it is done at all is probably the most damning indictment of “climate science” there is. Implicit in the practice of averaging “ensembles” of numbers produced by climate models is that the output of those models is random. That is a necessary, though not sufficient, condition for the validity of the practice.
So either “climate scientists” are too stupid to realize how thoroughly they demonstrate their incompetence by advertising the use of ensemble averages on the grand stage of an IPCC report – or they think we’re too stupid to catch their con job. I suspect there’s a mix of the two attitudes.
“Just because a model produces results that match existing observations doesn’t mean it will continue to do so in the future.”
That’s because the climate “models” are not actually models. They are data matching algorithms. When they fail to match the data then parameters are changed and more differential equations are added to correct the match.
Interestingly, the Russian model also has the lowest ECS.
Great story. I remember that before computers, when seismic interpretation was done by hand, the geologists who did the work were called “Geological Computers.” That was their title.
I agree, the biggest flaw in the CMIP effort is they lump all the models into one bundle and take the average as if they were all equal. They tried to weight them for CMIP6 but failed. They really should just choose one that matches observations best, but they don’t like the observations, they are not alarming enough.
Andy,
I put it down to accountability.
When a model demonstrates an outcome of value, use it and improve it.
When the model does not have a valuable output, discard it.
We reached the stage where modelling success alone more than paid our salaries. We were accountable for our incomes.
I cannot see that the vast monies for supercomputers and hordes of researchers have ever helped pay the incomes of those involved. Success should be rewarded, but first you have to define success and how to hold participants accountable for it.
Geoff S
“When the model does not have a valuable output, discard it.”
I suspect that a valuable output is a highly alarming output.
I would suggest that a test of utility would be that a model produce a prediction 30 years out whose nominal or average value is at least within the 1-sigma uncertainty of the actual meteorological measurements during that period. That should include temperature, precipitation, and heat index as a minimum. Number of storms and their intensity would be a plus.
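That test is straightforward to write down; a minimal sketch with invented numbers (the function name and the data are mine, not from any CMIP protocol):

```python
import numpy as np

def passes_utility_test(predicted_mean, observations):
    """Return True if the model's 30-year average prediction falls within
    one standard deviation of the observed values over the same period."""
    obs_mean = np.mean(observations)
    obs_sigma = np.std(observations, ddof=1)
    return abs(predicted_mean - obs_mean) <= obs_sigma

# Example with invented annual mean temperatures (degrees C) vs a model's 30-year mean.
obs = np.array([14.1, 14.3, 14.0, 14.2, 14.4, 14.1, 14.3, 14.5, 14.2, 14.3])
print(passes_utility_test(14.25, obs))  # True
print(passes_utility_test(15.50, obs))  # False
```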
“They really should just choose one that matches observations best, but they don’t like the observations, they are not alarming enough.”
That’s the truth.
Any normal person would use the best model (the Russian model) and throw out the rest. But, as you say, that’s not scary enough for people who are duty-bound to find a human fingerprint on the Earth’s climate, so they confuse the issue by averaging models that don’t represent reality.
It’s a political game and a money game. If they threw out the models that don’t represent reality and kept the one that is closest to reality, then they would throw a lot of people out of work, and bureaucracies just don’t do that, especially when the bogus models serve the purpose of the IPCC bureaucracy.
The IPCC’s reason for being is to find a human fingerprint on the Earth’s climate and they claim they have been successful, although they have no evidence to back up their claims so they confuse the subject by using dozens of bogus models and taking an average of them for political purposes.
It’s all a Scam. A tremendously destructive CO2 scam detrimentally affecting millions of people.
“Any normal person would use the best model (the Russian model) and throw out the rest.”
Any normal person would recognize that long term climate forecasting has been a failure and not use any computer games
THE CLIMATE IN 100 YEARS
will be warmer, unless it is colder. That’s all we really know.
We also know that few people in past centuries preferred colder than average temperatures, but most people liked warmer than average temperatures.
It just recently dawned on me the absurdity of the phrase, “the foreseeable future.” The foreseeable future is literally none of the future.
I think that the “foreseeable future” fits the definition of an oxymoron.
The average computer game represents a climate consensus
A specific computer game represents one opinion and perhaps one lucky guess.
You need over 100 years of observations to determine if a computer game is “accurate”.
A computer game that “looks good” after 50 years of observations may not look so good after 200 years.
Most of the alleged water vapor positive feedback takes a few hundred years, and is still allegedly causing warming after 400 years. Not much in the first 50 years.
By a computer game, do you mean the code, or one run?
One half of the planet starts warming up every morning and cools down every night, so nearly all of the water vapor feedback occurs every day. The only part that doesn’t is the warming of sea surface temperature over time, and that likely varies by a couple of degrees over each deep water turnover, 500 to 800 years, but we don’t have the thermometric, cloud cover (the main driver of planetary albedo), or rainfall stats to say for sure. And tree rings and lake bottom pollen seem to have only an inconsistent couple of degrees of accuracy.
We’re pretty sure about the last glaciation being several degrees colder…./s
“I agree, the biggest flaw in the CMIP effort is they lump all the models into one bundle and take the average as if they were all equal.”
Just not true. They don’t do that.
Citation?
“Citation?”
Did he make the claim? You either don’t seem to understand how exchanges work in superterranea, or are admitting that WUWT does not operate there. Maybe, after the umpteenth time, this will take.
“That which can be asserted without evidence, can be dismissed without evidence.” – Christopher Hitchens
Nick, the models that are “lumped” into the primary ensemble for computing ECS and TCR are the so-called “CMIP6-Endorsed” models. They also have prescribed initial conditions and are run from 1850 to 2019. See page 224 in AR6 WGI. They have other ensembles for different purposes.
p 224 says they select a subset with matching initial conditions, and some QC. It says nothing about averaging them.
It’s a good read Mr May and certainly got me thinking.
Einstein is credited with the suggestion that if you do not know something well enough to explain it to a very young child then you do not know that something at all.
In making ‘something’ accessible to a computer programming language, a programmer must do as Einstein suggests, with heaps of added simplicity to get over the many-to-many relationship hurdles you will encounter. Failing to do this successfully right at the beginning of your work with a problem will negate any or all future work, and you will soon learn that tinkering halfway through a project is the hallmark of failure. As in athletics, a perfect start is required if you want a perfect finish and a perfect race.
I have scrapped many projects in the early stages and have learned that I must understand not just the problems with data but also the many layers of relationships data has with not so obvious as well as obvious links. This is time consuming and often frustrating until my brain clicks into gear and I instinctively know I am almost over the brow of the hill you have to conquer to have any chance of getting to where you want to be. And sometimes I cannot get there and I look for distraction.
For something like climate science, where we may have little or no idea which pieces we understand well enough to tinker with, it is the start that we are most likely failing at, and that is because we have yet to conquer the problem at any level, let alone the whole. Weather is all and everything in climate science, to the point that we ought to re-christen it weather science and start over. Perhaps one day we will make a perfect start and have the belief in racing to a perfect finish. I just don’t think we know enough; have clear enough minds; have clear enough finishing lines or are open minded enough about the problem to have any chance of producing a worthwhile model except through pure chance. We should be breaking stuff down and proving each part works. The Imperial College models for COVID-19 showed us how far off ‘okay’ our understandings are. Climate is perhaps even worse than that and has a bias problem too – what could possibly go wrong?
Mr Box was absolutely right about that mark of mediocrity which in a program designer’s eyes must be the kiss of death leading to a completely fresh start. I just don’t see evidence that we truly understand the weather and without that just where do we think we are going?
In your comments, and Geoff’s, are embedded the fundamental rules for creating/writing a programme/model.
If a programme/model fails any one of these rules, then it can’t be written.
The creators of the models fail the first rule: they don’t use sun activity, clouds, volcanoes, etc. In fact, they don’t even know whether there are factors that they haven’t considered.
It would appear they don’t understand feedback mechanisms properly, so rule 2 fails.
There are so many facets to the last two rules, is it any wonder their models don’t work?
Another thought that strikes me about their models: they divide the atmosphere into cells, then compute the energy transferred from one cell to another via the cell interface (side). Where two adjacent cells share an interface, they can compute the transfer. But it’s a 3D model; what about adjacent cells which do not share an interface, such as the corners or edges? In reality, those cells will still interact with each other.
There is also the issue of how they run the model: is it a linear process, such as the transfer from one cell to another and so on, or do they calculate the transfers between all cells surrounding a cell, that is, the six neighbouring cells? And do they calculate the feedback between those cells dynamically?
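For what it is worth, here is a toy sketch of the face-neighbour bookkeeping in question (a simple diffusion update on a small periodic grid, not taken from any actual GCM). In a scheme like this each cell exchanges only with the six cells it shares a face with; corner and edge neighbours are reached indirectly, through shared faces, over successive time steps.

```python
import numpy as np

# Toy 3-D diffusion: each cell exchanges only with its six face neighbours
# per step; corner and edge neighbours are influenced indirectly over time.
T = np.zeros((8, 8, 8))
T[4, 4, 4] = 1.0          # put some "energy" in one central cell
k = 0.1                   # exchange coefficient per face, per step

for step in range(50):
    flux = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
            np.roll(T, 1, 1) + np.roll(T, -1, 1) +
            np.roll(T, 1, 2) + np.roll(T, -1, 2) - 6.0 * T)
    T = T + k * flux      # explicit update; face exchanges conserve the total

print(round(T.sum(), 6))       # still 1.0: nothing is lost or created
print(round(T[5, 5, 5], 6))    # the corner neighbour has been reached indirectly
```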
Also, what happens if they run their models from different starting locations? Do they get different results, some of which might not give the results they desire?
I’m not convinced that they will ever successfully model climate, even if they used quantum computers.
I see the question as: is the atmosphere as a whole acting the way they assume the cells act and interact on their own?
Models can be used to help you understand complex systems, even when your understanding is incomplete.
Build a model with your best current understanding of the system.
Run the model. Figure out where your model output differs from reality, then figure out why your model differs from reality.
In this instance, models can be useful in helping you to figure out what it is you don’t know and where more research in the real world is needed.
These types of models can’t be used for making predictions.
This is how most climate models began their lives, and this is the mode that all the climate models are currently in.
The big error is that the politicians who are pushing the climate scare are trying to use these models in a way they were never designed for.
I agree Mark. If I were involved in the model(s), my goal would be to discover where and why it did not match current observations, and try to make it emulate them. Alas, they’re only interested in getting more funding.
While alarmists will piously claim that the existing GCMs are just physics, they are wrong. The energy exchanges in clouds are not known perfectly, and even if they were, our computers are not powerful enough to calculate the energy exchanges at the spatial resolution necessary, so that requires parameterization. In other words, subjective opinions of the net or average behavior are substituted, while everything else is calculated. That is like saying Einstein’s famous equation is actually E = mc^2 +/- U, where U is an unknown quantity that may be very large or small.
It’s not just clouds. They parameterize lots of things through averaging. Humidity, CO2 density, land surfaces, etc. It gets averaged through their use of homogenization and “infilling” of temperatures if no other way.
Can a coupled, non-linear, chaotic system ever be parsed so finely that one eureka! value pops out at the end of infinite numbers of equations, and the whole system is declared “settled / solved”?
Simple answer, no. Only with the ability to control the parts could this be done. Think of making hydrocarbons from the constituent parts with nothing but fire!
If there is predictability in climate, then the problem may be better solved by starting at the surface stations and engineering your way up to the sky and back to the ocean layer, with a set of AI-derived connections and weights tied to the appropriate time series. And these connections will not be intuitive; that is one way of solving complex multi-dimensional and decoupled problems in modeling.
Do you try to model Agung, El-Chicon and Pinatubo or remove their effects, to get a better estimate of the decadal signal through the 60’s,80’s and 90’s?
There are three types of systems in physics: linear, nonlinear, and nonlinear-chaotic. There are no chaotic linear systems. If your linear system becomes chaotic, then it wasn’t linear to begin with.
The definition of a linear system is simple. If y1 is the response of system S to input x1, y2 is the response of system S to input x2, and a*y1 + b*y2 is the response of system S to input a*x1 + b*x2 (where a and b are constants), then system S is linear; otherwise it is nonlinear.
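That definition can be checked numerically. A minimal sketch, with example systems of my own choosing (a failed check proves nonlinearity; a passed check on one pair of inputs is only suggestive):

```python
import numpy as np

def is_linear(S, x1, x2, a=2.0, b=-3.0, tol=1e-9):
    """Numerical superposition check: does S(a*x1 + b*x2) equal a*S(x1) + b*S(x2)?"""
    return np.allclose(S(a * x1 + b * x2), a * S(x1) + b * S(x2), atol=tol)

def linear_system(x):
    return 3.0 * x + np.gradient(x)      # scaling and differencing obey superposition

def nonlinear_system(x):
    return x ** 2                        # squaring does not

x1 = np.array([1.0, 2.0, 3.0])
x2 = np.array([0.5, -1.0, 4.0])

print(is_linear(linear_system, x1, x2))      # True
print(is_linear(nonlinear_system, x1, x2))   # False
```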
We know that weather and by extension climate are nonlinear, chaotic systems.
“The butterfly effect does not imply that chaotic systems are unpredictable. They in fact are predictable in the short term because of their deterministic character. But they become unpredictable after a certain amount of time, called the horizon of predictability. It’s the time required for tiny errors to double in size. For a chaotic electrical circuit, the horizon is something like a thousandth of a second. For the double pendulum, it’s a few tenths of a second. For the weather, it’s unknown but seems to be roughly a week or two, and for the entire solar system, it’s about 5 million years (as determined by very careful computer simulations).”
–Professor Steven Strogatz–Cornell University
Even Mr. Stokes agrees that GCMs are weather programs. Running them longer than a few weeks is nonsense–because of the horizon of predictability limit.
We seem to know many of the equations that define weather phenomenon. I have yet to run across any equation that represents climate phenomenon.
An important feature of weather is cumulus convection. GCMs do not have the resolution to model cumulus convection, therefore it is parameterized. That alone should bring GCMs into doubt. But as any programmer knows–many, many things are parameterized in programs–especially programs modeling physics, weather, and climate.
you don’t understand ‘non linear’
Basically, a non-linear system contains one or more singularities, (only one is more than sufficient to define it)
i.e. Non-linear systems require a Division By Zero
All other systems are linear
Squiggly lines on a graph are NOT = non-linear.
By definition, they are = lines and are thus = linear,
Oddly enough, lines are linear.
Huh?
I quote from my EE textbook:
“In the case of nonlinear systems, it is not possible to write a general differential equation of finite order that can be used as a mathematical model for all systems, because there are so many different ways in which nonlinearities can arise and they cannot all be described mathematically in the same form. It is also important to remember that superposition does not apply in nonlinear systems.”
A nonlinear system is a system that is not linear. And I know what a linear system is. So your criticism is wrong.
Peta seems to be arguing from the position that anything that can be drawn by a pencil is a line and therefore linear.
Linear by definition is a straight line. That’s where she’s getting lost.
Even by your standards, that was bizarre. It is also completely incorrect.
Linear, by definition is a straight line.
Division by zero is not defined, and it generally results in what is called a singularity.
Division by zero is called a “pole” in a transfer function. As opposed to a “zero” where the transfer function becomes zero.
Climate models should probably be considered as “transfer functions”.
H(s) = N(s)/D(s).
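A minimal sketch of that idea using an arbitrary made-up transfer function: the roots of the denominator are the poles, where H(s) blows up, and the roots of the numerator are the zeros, where it vanishes.

```python
import numpy as np

# H(s) = N(s)/D(s) with made-up example polynomials:
#   N(s) = s + 2          -> one zero at s = -2 (H goes to zero there)
#   D(s) = s^2 + 2s + 5   -> poles at s = -1 +/- 2j (H blows up there)
num = [1.0, 2.0]          # coefficients of N(s), highest power first
den = [1.0, 2.0, 5.0]     # coefficients of D(s)

print("zeros:", np.roots(num))   # [-2.]
print("poles:", np.roots(den))   # [-1.+2.j  -1.-2.j]

def H(s):
    return np.polyval(num, s) / np.polyval(den, s)

print(abs(H(-2.0)))              # 0.0 at the zero
print(abs(H(-1 + 1.999j)))       # very large this close to a pole
```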
Thanks,
Very good comment. Interesting, informative, and well written.
Thanks.
A mathematically chaotic system has just two properties.
Weather, and by extension climate, are chaotic. IPCC AR3 WG1 even said so. Which means long term climate modeling is a fools errand.
Some things in climate are very predictable, but that science project has not been tested thoroughly, because of the quagmire created by the consensus.
When the largest amount of black carbon particulate matter ever observed appeared in the stratosphere, created by the Australian bush fires in 2019 – 2020, it was almost guaranteed to cause havoc for winter in the Northern Hemisphere 1 – 2 years later. Did any climate models issue any alerts in advance? No, they didn’t, and children and adults died from hypothermia in their sleep in Texas. The last power crisis from cold in Texas was almost certainly the effect of a tall volcanic eruption.
People are confident that their government is taking care of them by spending billions on climate research that is blatantly tainted with a corrupt and error-riddled infrastructure.
The one good thing that is happening now is private funding of climate models and research, because the current models are worthless from a capitalist’s point of view.
“A mathematically chaotic system has just two properties”
Not true. Lorenz butterfly is the exemplar of chaos, but has no feedback.
Fluid flow is chaotic (turbulent). CFD is a major engineering activity. It works by averaging velocity and applying conservation laws. It is not a fools errand; it works very well and is used by all the major industries.
“. . . has no feedback.”
The Lorenz system equations are:
dx/dt = σ*(y – x)
dy/dt = x*(ρ – z) – y
dz/dt = x*y – β*z
where:
σ = 10
ρ = 28
β = 8/3
Not all values of rho, sigma, and beta lead to chaos, but these do.
I see lots of feedback. I guess you missed one there.
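For anyone who wants to reproduce the behaviour being argued about, here is a minimal, self-contained sketch: the Lorenz system above integrated with fourth-order Runge-Kutta, with two trajectories that start almost identically and then diverge, illustrating the finite horizon of predictability mentioned earlier in the thread (the step size and run length are arbitrary choices).

```python
import numpy as np

SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0

def lorenz(v):
    x, y, z = v
    return np.array([SIGMA * (y - x),
                     x * (RHO - z) - y,
                     x * y - BETA * z])

def rk4_step(v, dt):
    # Classic fourth-order Runge-Kutta step for an autonomous system.
    k1 = lorenz(v)
    k2 = lorenz(v + 0.5 * dt * k1)
    k3 = lorenz(v + 0.5 * dt * k2)
    k4 = lorenz(v + dt * k3)
    return v + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

dt = 0.01
a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-8, 0.0, 0.0])   # a one-part-in-10^8 difference in x

for step in range(1, 4001):
    a, b = rk4_step(a, dt), rk4_step(b, dt)
    if step % 1000 == 0:
        print(f"t = {step*dt:5.1f}  separation = {np.linalg.norm(a - b):.3e}")

# The tiny initial difference grows by many orders of magnitude:
# deterministic, yet unpredictable beyond a finite horizon.
```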
Where? It is just a system of differential equations.
Seriously?
Yes. Can you identify what you are calling feedback? Quantify it?
You solve the three equations by time slice. So it’s dx = ()*dt; dy = ()*dt; and dz = ()*dt. Then you plug those values back into the equations for the next time slice. It’s an obvious feedback to me.
That relates to a time discretisation for numerically solving. Then it depends on how you solve. If you use simple Euler, there is no “feedback”; each new variable depends only on past values. Implicit methods do, and you could kinda call that feedback, but it is a property of the solution methods, not the differential equations.
Kinda? Heh!
I created a Javascript/webGL gadget here for solving and visualising the Lorenz equations. I used fourth order Runge-Kutta, as is common. Each step is explicit; there is no useful notion of feedback.
You are a real piece of work. A chaotic system is deterministic. That means that the current state determines the next state. That a deterministic system can be chaotic was one of the surprising things about chaos. There is no way you can solve those equations without feedback.
Well, I do. As I say, using in this case fourth order Runge-Kutta. I’ve worked in differential equations for fifty years. It is not normal to describe such equations as having feedback, and I don’t know what it would mean.
I analysed here the origins of the chaotic behaviour of the Lorenz equations.
“I’ve worked in differential equations for fifty years.”
Big deal. I started with differential equations in the late sixties and early seventies. I have my own website of the Lorenz equations, so big deal there too. You can even rotate the image with your mouse.
Well, on mine (first link) you can supply the parameters, choose the solution range, and rotate the image with your mouse.
I wrote the Lorenz version in Javascript. I was planning on rewriting it in Java (my favorite language), and using GWT to convert it to JS.
I too wrote in Javascript with WebGL. You can of course track everything through the html, but the javascript file is here.
“. . . you can supply the parameters . . . .”
That’s a great idea. I will add that feature to my rewrite of the code. Unfortunately, GWT doesn’t allow the creation of screens as easily as Java Swing.
My basic engineering inclination is to invoke feedback for this case. As an EE student, I learned that amplifiers oscillate and oscillators don’t–at least in student labs. Anything that oscillates is experiencing feedback–the squeal of public address systems, for example. Bistable multivibrators oscillate. Clearly, the Lorenz equations oscillate. You may deny it for millennia, but there is at least one feedback loop involved.
“Clearly, the Lorentz equations oscillate”
Well, they don’t really. What is the frequency? An oscillation returns to its initial state over a period. A Lorenz trajectory doesn’t.
But you can have feedback without oscillation, as in a proper amplifier.
The real question is, how does invoking feedback help you? And then the question – what is the feedback coefficient?
OMG! Seriously! Since when does an oscillation have to have a specific period? You are digging. You should stop.
Here is Merriam Webster on the electrical meaning:
“a flow of electricity changing periodically from a maximum to a minimum; especially : a flow periodically changing direction”
If it is periodic, it has a period.
Chaos is not oscillation – it would be much simpler if it were. The trajectories of the Lorenz system seem somewhat oscillatory because they go round and round in a confined space. But they have no tendency to repeat themselves.
You’re still digging.
Yep, he’s still digging!
Since when do oscillations have to be periodic? Seriously!
Two words: Anisotropic oscillators.
OK, here is what Wiki says about that:
“With anisotropic oscillators, different directions have different constants of restoring forces. The solution is similar to isotropic oscillators, but there is a different frequency in each direction. Varying the frequencies relative to each other can produce interesting results. For example, if the frequency in one direction is twice that of another, a figure eight pattern is produced. If the ratio of frequencies is irrational, the motion is quasiperiodic. This motion is periodic on each axis, but is not periodic with respect to r, and will never repeat.”
There are two different observable periodicities, even where their ratios are irrational. But chaotic motion like Lorenz has nothing like that.
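If it helps, here is a small, purely illustrative numerical check of that quasiperiodic behaviour: with an irrational frequency ratio, each coordinate is periodic on its own, but the combined trajectory never exactly returns to its starting point.

```python
# Illustrative sketch: a 2-D anisotropic oscillator with an irrational
# frequency ratio. Each coordinate is periodic, but the (x, y) trajectory
# never exactly repeats.
import math

w1, w2 = 1.0, math.sqrt(2.0)        # irrational frequency ratio

def position(t):
    return math.sin(w1 * t), math.sin(w2 * t)

x0, y0 = position(0.0)
# Sample the trajectory at whole periods of the first oscillator and record
# the closest approach to the starting point: it gets close, never exact.
closest = min(
    math.hypot(position(2.0 * math.pi * n)[0] - x0,
               position(2.0 * math.pi * n)[1] - y0)
    for n in range(1, 2000)
)
print(f"closest return distance over 2000 periods: {closest:.6f}")
```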
Wiki? You’re kidding!
OK, tell us your version
You ever hear the “Whoop!” of a police siren or ambulance? How is that generated? Does the period of the oscillation change over time?
And since when is going around and around not repeating previous states? Are you truly trying to say that temperatures never repeat? That seasons never repeat?
We are talking about the Lorenz equations.
Interesting that the Runge-Kutta is still in use, I used a version of it in my thesis calculations 50 years ago. That was modeling a nonlinear, chaotic system.
Yes, RK (120 years old) is so fast and accurate that there is really no need to try for anything more complicated.
“each new variable depends only on past values”
And any error in each time step is amplified in the next step….
ie… it is a feedback calculation.
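A quick illustrative sketch of that amplification (using scipy, not any climate code): two Lorenz runs started a billionth apart separate rapidly, so any per-step error grows rather than cancels.

```python
# Two Lorenz trajectories started a tiny distance apart diverge rapidly,
# illustrating how a small per-step error is amplified in later steps.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t_eval = np.linspace(0, 25, 2501)
a = solve_ivp(lorenz, (0, 25), [1.0, 1.0, 1.0], t_eval=t_eval, rtol=1e-9, atol=1e-12)
b = solve_ivp(lorenz, (0, 25), [1.0, 1.0, 1.0 + 1e-9], t_eval=t_eval, rtol=1e-9, atol=1e-12)

sep = np.linalg.norm(a.y - b.y, axis=0)   # distance between the two runs
for t in (0, 5, 10, 15, 20, 25):
    print(f"t = {t:4.1f}  separation ≈ {sep[t_eval.searchsorted(t)]:.3e}")
```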
WRONG… Fluid mechanics strenuously avoids areas of turbulence in its calculations.
Absolute nonsense. You can’t avoid turbulence; it is everywhere. FM pays special attention to boundary layers where turbulence originates. It has to.
Temperature is not fluid flow. It has no “average velocity”.
It ‘works’ for simple geometries and processes , e.g., a reactor vessel, and even then it has to be continually modified as the process is scaled up from the lab bench, to the pilot plant and, finally, to a commercial-scale facility, where it will continue to be tweaked. Modeling ‘climate’ on a global scale? You must be kidding.
Btw, speaking of ‘conservation laws’, have you heard that GCMs conserve neither mass nor energy?
“and even then it has to be continually modified as the process is scaled up from the lab bench, to the pilot plant and, finally, to a commercial-scale facility”
Physical modelling has that problem. One of the key advantages of CFD is that it can model any scale directly.
CFD and GCMs are based on conserving mass and energy. It is the only thing that keeps them going, as they do.
“CFD and GCMs are based on conserving mass and energy.”
Even KT97 conserves energy. It’s just that many of the energy flows are in error by 100% or more.
Climate Models are an appallingly bad joke being played out on naive, gullible, overly obedient people who have been told they are intelligent & well educated when little could be further from the truth.
The average climate model is no more than an exercise I might have set/given when I met my first ever ‘computer’ exactly 50 years ago.
It was the size of a fridge, cost £25,000 (at that time), had a one-line red-LED display and was programmed with postcards and a thick black pencil.
Made by Hewlett Packard, the room-lights blinked in the entire block when you switched it on.
Models are devices that add random numbers to a (straight-line) trend-line.
That trend-line is derived from claims that temps have risen by x-amount over y-years and, as CO₂ levels have also risen over that very select time-period, CO₂ must be the cause.
This is the poorest sort of kindergarten science ever, it’s right up there with belief in the Tooth Faerie
The models then assume CO₂ will continue rising and, to try to make themselves look clever (by supposedly noting that ‘weather is changeable‘), add random numbers to the trend-line they created from the contrived and adjusted historical data.
Hence the countless ‘spaghetti graphs‘ that are endlessly pushed into our faces, along with the firm admonishment that if we don’t understand them we are more stupid than a really stupid person/thing.
The claim that CO₂ controls temperature is again a childish misunderstanding of really basic thermodynamics – the objects somehow ‘share energy‘ and thus it is routine for heat energy to move up a thermal gradient. This MUST be true because ‘Energy can never be destroyed’
Energy does not move up thermal gradients. It never has done. It never will do.
Further compounded by the routine and deliberate confusion between: surface, atmosphere (and its component parts), land, ocean, globe and planet = that these places are always ‘sharing’ heat energy between themselves and each other.
No. The energy only travels one way.
There is grudging recognition that water has something to do with temperatures but that its role is subservient to CO₂, because CO₂ is ‘man-made’ and that This Entire World is merely our plaything – we were put here to dominate it.
This entirely disregards every observation that anyone might make that wherever water does anything on this planet, it has a cooling effect.
Dare to mention that at your own personal Flat Earth peril – you will be instantly cast out if not find yourself attached to a Ducking Stool.
Further wilful and childish disregard of thermodynamics happens in 2 notable claims:
1/ That a warmer atmosphere means a Heating Earth = the notion that when electromagnetic energy is absorbed by a ‘greenhouse gas’ it is interminably trapped – as in the childishly contrived ‘black sphere with a hole in it‘ that appears in every explanation of GHGE
e.g. Like if me or you are hauled up by a cop and thrown in jail = that we simply give up and sit there for all eternity.
No. Heat Energy always gets out of jail, and a warming atmosphere can ONLY mean that Earth is losing heat energy faster than it did at any time previous. Did even Star Trek use ‘energy trapping‘ technology?
Please demonstrate anywhere here in real life where what happens at ‘someplace in the sky‘ happens down here in (e.g.) our own kitchens
2/ The relentless confusion between Temperature and Energy
i.e. That ‘more temperature’ will cause more storms and wild weather.
Again = childish beyond belief; storms are powered by Energy. Temperature is involved but only as Temperature Difference (between some Point A and somewhere else, Point B).
A very large number of us have a device hanging on the wall of our house (usually in the hallway) which perfectly explains what is happening.
And what is happening is frightening beyond our wildest nightmare come true – yet we dreamily imagine it is a ‘nice’ and desirable thing – perfectly exemplified in The Mediterranean Climate
We are in for a shock
The Romans created the Mediterranean Climate in exact same way as we are creating our own version over continental sized regions of Earth – instead of just the lands surrounding the eponymous sea.
Where are The Romans now?
Oh, you say: The climate changed.
OK, Why did it change?
Climate Models are the most grotesque manifestation of Childish Hubris there ever has been in all of human history – they are doing incalculable damage by diverting attention from the real issue out there, the one that is going to wipe us all out as it did the Romans
“2/ The relentless confusion between Temperature and Energy”
There’s really no confusion. It’s bad scholarship. A few of my fellow students would state that temperature is equal to the average kinetic energy. That statement is nonsense. Temperature as in Kelvin is not equal to energy as in Joules. A more correct statement is that temperature is proportional to the average kinetic energy of a gas particle (the definition comes from the kinetic theory of gases). An even better statement is that temperature is proportional to the kinetic energy of the average velocity of a gas particle (some would say it’s a distinction without a difference).
From Wiki on the Boltzmann constant: “In particular, the SI unit kelvin becomes superfluous, being defined in terms of joules as 1 K = 1.380649×10⁻²³ J.”
As usual, you find nonsense everywhere. The definition of Boltzmann’s constant is k = 1.380649×10^-23 J/K. So it’s still a proportional relation: k*T is an energy value, but you have to multiply temperature by Boltzmann’s constant.
Find a better Wiki.
If they give bad info in something that is straight forward like this I think you are correct.
If you multiply 1 kelvin by Boltzmann’s constant you get that specific energy. The kelvin was historically defined by declaring that the triple point of water has a temperature of exactly 273.16 K; since 2019 it has been defined by fixing the value of the Boltzmann constant.
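For what it’s worth, a quick illustrative check of that proportionality: multiplying a temperature in kelvin by Boltzmann’s constant gives an energy in joules, and the mean translational kinetic energy of a gas particle is (3/2)·k·T.

```python
# Illustrative check of the proportionality above: kelvin times Boltzmann's
# constant gives joules.
k_B = 1.380649e-23          # J/K (exact value in the 2019 SI definition)

T = 300.0                   # K, roughly room temperature
print(f"k_B * T               = {k_B * T:.3e} J")         # ~4.14e-21 J
print(f"mean translational KE = {1.5 * k_B * T:.3e} J")    # (3/2) k_B T, ~6.21e-21 J
```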
“Models are devices that add random numbers to a (straight-line) trend-line.”
Peta, that may be the final result, but they actually try to follow what is happening in the atmosphere, in the oceans, and on dry land. The complexity of the task is enormous. And if you believe model results, your final resting place in the Heavens is assured.
Mine was a PDP8E
Same here.
Can you remember what code 7402 meant? 😊
Mine was an IBM 1170, followed by a DDP-24 while in the Army.
I remember our college having an 1170.
Mine was a KDF-9 in 1968, first mini was a PDP-8 in the 70s.
An ICT machine, so did you work for them?
Which PDP-8, the I or E?
Peta,
It was 1970 when I privately purchased our first take-home computer, a PDP-8e from Digital Equipment Corporation of Boston. Smaller than a refrigerator, more the size of a modern microwave cooker. 8 K of ferrite core memory, no storage memory, print by ASR-33 teletype onto punched paper tape. To boot it up, you set a line of 8 toggle switches to up or down, hit the enter switch, select the next 8 toggle switch settings, enter, repeat about 30 times until it started. We nearly got it to the required task of data logging some scientific instruments instead of having operators with pen and paper reading dials. This is trivially simple now, but computers were fairly unforgiving back then. Even a short BASIC program took up all but a few K of the memory.
The geophysical modelling exercise for discrete concealed magnetite bodies in my previous comments was only one of a number of models used in exploration in the era 1970-2000 when I was in that work. They were excellent for training people in the arts of modelling, because the research was directed. There was one, big, understood objective – to find more large and profitable ore bodies. The test of each diverse model type was to see if it found or helped to find more ore bodies. If it was unhelpful, dismiss it as soon as possible.
In the particular Tennant Creek context, the magnetic modelling was critical, but it alone did not forecast if the body contained ore. That required drilling and chemical assaying. Drilling is expensive. You can hit the bigger body but miss smaller contained ore pockets. The best supplementary work to distinguish barren from fertile ironstones had 3 research outcomes. 1. Presence of minerals and textures attributable to a colloidal state during orogenesis. 2. Patterns from decrepitometry, a spectrum of the popping noises made at successive temperatures when a rock sample was heated to (say) 400C. 3. Differences in the abundance of some isotopes of lead, Pb, discovered late in the day by joint research that we funded CSIRO to perform.
Exploration success benefited from a diversity of scientific skills under the standard classes of geophysics, geology and geochemistry plus commercial management skills to write beneficial agreements with governments, joint venturers and customers. It was never easy.
Geoff S
I used to carry a series of ‘toggle’ programmes in my head, so that I could interrogate the machine status, when things went wrong.
Did you ever look at the core store? It was a very fine mat of ferrite rings with three wires passing through each one. Two of the wires were the addressing wires, the third was the write/sense wire.
IPCC SPM Rule No. 1: All Summaries for Policymakers (SPMs) Are Approved Line by Line by Member Governments
IPCC Reports Rule No. 2: Government SPMs Override Any Inconsistent Conclusions Scientists Write for IPCC Reports
In some years, the SPM is written first, and then the chapter summaries are required to conform to it.
Absolutely 👍
I used to be so naive I thought the Summary for Policymakers was something written to help policymakers understand the report, not something commanded by policymakers as to what the report was ordered to say.
I know better now.
“climate modeling has been around for more than a century, well before computers were invented.”
Don’t think so …
c. 2700–2300 BC – The Abacus from Babylon,
100 BC. – The Antikythera mechanism.
1840s – Charles Babbage & Ada Lovelace
the Difference Engine and its successor the Analytical Engine.
1872 – William Thomson’s tide-predicting machine. Four years later, his older brother, James Thomson, came up with a concept for a computer that solved mathematical problems known as differential equations. He called his device an “integrating machine”.
But you are correct re the modern electronic computer!!
The first digital electronic computer was developed by Arthur Halsey Dickinson in 1936–1939, in the IBM Patent Department, Endicott, New York
1951, the Lyons Electronic Office (Leo) I computer, was the first electronic computer used for commercial business applications.
The rest is history.
I think Babbage’s “engine” is just an elaborate mechanical calculator, same with the abacus. The early IBM “tabulators” fall into the same category.
Unlike fixed mechanical calculators, Babbage’s “engine” was programmable.
The first digital computers were calculators with the program fed by punched tape. Then von Neumann created the stored program concept. All of our machines are von Neumann machines. With a stored program, loops became easy, and loops are what make computers the powerful calculators that they are.
The first practical programmable machines were the 1804 Jacquard looms (punched cards) developed from a 15th century Italian punched paper design.
Preprogramed pinned barrels were used from the 9th century to power musical instruments & later church bells.
c. 2700–2300 BC – The Abacus from Babylon,
We need a climate change abacus to replace the current Ouija Boards (climate confuser games)
Michael Mann is developing a new chart to replace the Hockey Stink Tree Ring Circus Chart. It is shaped like a cricket bat and based on a better proxy than tree rings: It is based on cave wall drawings. Mann is now claiming: “Those Neanderthals were smarter than they looked” and is predicting another Nobel Prize to add to his collection.
In modeling, the risk is that the mathematical model is no longer an expression of a conceptual model but a data matching algorithm, either to observations or to the outcome expected by the modeler.
Climate models are no longer mathematical models representing a conceptual model of the physical real world. They are a data matching algorithm to what the modelers believe will happen in the future. Pat Frank has shown this to be the case by developing a linear equation that very closely resembles the “ensemble” prediction of the climate models. The modelers are trying to tell the future by looking in a cloudy crystal ball and are trying to match what they think they see in the crystal ball.
Couple this with the inane assumption that all errors in boundary conditions and initialization values cancel as the iterative process proceeds, instead of growing in value with each iteration, and it becomes obvious that the models are nothing but expressions of religious faith and confirmation bias.
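As a purely illustrative toy (not a GCM, and not Pat Frank’s analysis), the arithmetic of uncorrelated per-step errors shows why they accumulate rather than cancel: the spread grows roughly as the square root of the number of steps.

```python
# Toy illustration: if each iteration of a stepwise calculation carries an
# independent error, the accumulated uncertainty grows with the number of
# steps (roughly as sqrt(N) for uncorrelated errors) rather than cancelling.
import random
import statistics

def run(n_steps, step_error=0.1):
    x = 0.0
    for _ in range(n_steps):
        x += 1.0 + random.gauss(0.0, step_error)   # "true" increment plus error
    return x - n_steps                              # deviation from the exact answer

for n in (10, 100, 1000, 10000):
    devs = [run(n) for _ in range(1000)]
    print(f"steps = {n:5d}  spread (std dev) ≈ {statistics.stdev(devs):.3f}")
```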
I like it! You’re an error fanatic, like we all should be.
In my crystal ball, I see Pat Frank’s work enduring the test of time. His emulator (also independently described by Willis Eschenbach, I should add) shows clearly what is happening in the tuned GCMs.
If you walk along the sea front at Blackpool, you’ll encounter a cubicle, emblazoned with the title Gipsy Lee. Pull back the curtains and enter. You will find a crusty old hag, gnarled hands caressing a crystal ball. Go past her into the back room, and you’ll end up in Ernie’s Palace 😊
The climate modelers also get to constantly update their failed predictions and hide just how wrong their previous ones were.
A lot of gamblers would love to be able to constantly change their bets for free during the course of a horse race.
I bet they would also like to “adjust” the outcome of what are supposed to be “games of chance” (as do “climate scientists”/activists who “adjust” the data to support their preconceived conclusions).
Excellent write-up!
“When the inevitable endpoint is reached, you must trash the model and start over by building a new conceptual model.”
In respect to diagnosing the climate response to incremental non-condensing GHGs, I would like to nominate the “forcing + feedback” conceptual model for disposal. The (mis)use of highly parameterized GCMs to study GHGs arose from this framing.
There was an alternate conceptual model already available. Energy conversion. Lorenz described it. The modern ERA5 reanalysis model computes it as the “vertical integral of energy conversion” in units of W/m^2.
Kinetic energy <-> [Internal energy + Potential energy]
The very minor radiative “warming” effect of 2XCO2, and any “feedback” to it, are lost in the general circulation. One cannot reasonably expect to ever isolate GHG “warming” for reliable attribution.
This very short video puts the daily max, min, and median ERA5 “viec” values into time-lapse motion for 2022 at 45N latitude. A more complete explanation with references is in the description of the video on Youtube.
https://youtu.be/hDurP-4gVrY
Climate models are great tools to let us know if what we think we understand is correct or not. If a climate model is given all the values from 20-30 years ago, and it cannot accurately predict the climate now, then we’ve obviously missed something in that model. Such failures by the models can help point us to what we’ve missed, but they should never be used to make political/financial decisions because we do know that they’re wrong!
Obviously you are not paid to create or run climate models 🙂
What values are going to be input from 20-30 years ago? Humidity and pressure values from around the globe as well as temperatures (both measured and guessed at)? Climate science doesn’t use those values; if they did, they could use enthalpies instead of using temperature as a proxy for enthalpy.
And of course, no “tuning” to achieve a known outcome should be allowed (which in reality is done to an extreme, of course).
All you need do is take the “tuned” model that matches a known outcome only because of all of the “tuning” (read: fudge factors), keep it “as is” inclusive of said “tuning,” move the next “run” start date back 50 years, and see how well the model DOESN’T come anywhere close to reality to know how useless the “climate models” are.
Absolutely one of the best articles I’ve read here. The comments are terrific as well. I’m looking forward to the rest of the series.
The main problem in climate models is their handling of water vapor. They use the basic climate science paradigm that relative humidity rises similarly at all altitudes as CO2 increases.
The historic data does not support this view. It shows RH decreases at higher altitudes which counters the warming effect from increased IR absorption.
https://bpb-us-w2.wpmucdn.com/sites.coecis.cornell.edu/dist/f/423/files/2023/12/simpson23pnas.pdf
https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2018EA000401
https://journals.ametsoc.org/view/journals/clim/36/18/JCLI-D-22-0708.1.xml
This is a great post. Andy starts out putting computer simulation itself in perspective, which is long overdue in the climate issue. He has characterized the simulation problem superbly in the first paragraph of the Introduction, which should be emailed to every news writer. He correctly points to the limitations of computer modeling itself.
Those of us who have spent any time trying to write code for physical simulation immediately sense the problem. After you have studied the numerical methods up and down and realize that in all but a few simple cases the truncation error will eventually swamp out the physics, you realize that the whole idea of simulation is itself misleading. You have to understand the physics before you can write the code, but if you already understand the physics you don’t need the code. The exceptions are the very simple problems in celestial mechanics and engineering. But those efforts should more properly be called design, not research.
Simulation is useful in engineering because the principles are already understood, there are only a few tens of state variables, and the strategy is to look for stability and steady states. They will take the form of boundary value statics, modal solutions, and in control systems the whole point is a sufficiently damped dynamic system, which means truncation error decays at least as fast as the solution.
The success and prevalence of computer simulation in engineering might naturally lead the layman and scientist alike to think it can be a valid research tool. But simulation in science is only useful to show us what we don’t know when it doesn’t work, which will be most of the time. A computer program doesn’t make discoveries or suggest new ideas. It just shows us what we already (don’t) know.
One of my first forays into System Dynamics models and feedback loops, I attempted to calculate the theoretical terminal velocity of an object falling in air. To my surprise, the output became unstable and oscillated wildly as the object approached what I expected the answer to be. I now realize that the single-precision calculations of my Atari 800 hit a barrier with truncation error.
“. . . hit a barrier with truncation error.”
It’s the downfall of many computer programs. People think that floating point in a computer is very precise. In fact, it isn’t. The area covered by a typical, single precision floating point representation turns out to be very sparse. Fixed point is more inclusive, but doesn’t cover as much territory.
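A quick illustration of how coarse single precision really is (using numpy for the float32 type):

```python
# A float32 carries only about 7 significant decimal digits, so adding a
# small number to a large one can be lost entirely.
import numpy as np

big = np.float32(1.0e7)
small = np.float32(0.25)
print(big + small == big)                 # True: 0.25 is below float32 resolution at 1e7
print(np.float64(1.0e7) + 0.25)           # 10000000.25 in double precision

# Above 2**24 the spacing between adjacent float32 values is 2, so adding 1
# does nothing at all:
print(np.float32(16_777_216.0) + np.float32(1.0))   # still 16777216.0
```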
Thanks, well said.
anything that references temperature changes at ground level or fails to measure from 0 K is not a model…
the chart the warmists show isn’t even a model, as it is JUST a composite linear projection
This is a very thorough, easy to read, and well documented report that treats certain computer programs with more respect than they deserve.
A real climate model requires a thorough understanding of every climate change variable, and we are far from having such knowledge. Even with that knowledge, prediction of the future climate might be impossible. You could play “what if” games with such a computer game.
What we have are Computer Games, not models of the climate on our planet.
They are used to create fear of CO2 among the general public
To do that, they require CO2 to be treated as a strong greenhouse gas above the current level by including a strong positive feedback, which appears to be imaginary, or at least grossly exaggerated.
The Computer Games predict whatever the owners want predicted. And they want their predictions to fit within the IPCC narrative (+2.5 to +4.0 degrees C.) and scare people. Above +4.0 is even better.
+2.5 to +4.0 degrees C. warming per CO2 x 2 is scarier than the old +1.5 to +4.5 range, and that’s why the range was changed.
There is no evidence that any Computer Game, except the Russian INM game, is even trying to make an accurate forecast. These games are intended for climate scaremongering.
That’s why I call them Climate Confuser Games.
The Climate Confuser Gamers have had 50 years to revise their assumptions to create the appearance of more accurate predictions. They have not done so. That is evidence accurate predictions are not an IPCC goal.
The USSR may have led the world in climate research in the 1960s. Their first attempts at computer games were in the 1970s. I assume the INM game was from the 1970s. The IPCC rejected INM in 1990, but later accepted it for CMIP. Two versions of INM are in CMIP now.
The CMIP6 multi-model ensembles model list (canada.ca)
INM makes more modest predictions of future warming that are no longer in the IPCC range. But INM seems more reasonable than any other computer game. INM gets little or no attention. You’d expect the least inaccurate computer game to get a lot of attention. It does not. More evidence the IPCC does not care about computer game accuracy.
An interesting study of the USSR climate research and climate computer games is at the following link:
Modelling the future: climate change research in Russia during the late Cold War and beyond, 1970s–2000 | Climatic Change (springer.com)
Andy May has a great climate blog and I always wonder why it is not mentioned when his articles get published here. I am calling for a Congressional investigation
Climate Blog – Andy May Petrophysicist
The Climate Confuser Games serve two purposes. One good and one bad. So far the bad one is winning
For leftists
(1) Climate confuser games scare people about CO2 more than writing a scary prediction on the back of an envelope would. Leftist media sources do not report the apparent accuracy failures of the computer games. They report them as accurate.
For conservatives
(2) Climate confuser games inform conservatives that scientists have no idea what the climate will be like in 100+ years
For neutral people
Change the assumptions and the climate confuser games will predict whatever you want predicted. With a low ECS for CO2 and an RCP 3.4 for CO2 growth, the 1970s computer game predictions would have appeared to have been accurate as of 2023, 50 years later.
…
The moment the tropical hot spot was predicted and not observed, the climate models should have been adjusted until they agreed with observations. But instead, they chose to scour the weather balloon datasets to find proof. And when that failed, I guess it generally goes something like: we are getting paid and this is a noble cause, so don’t raise a fuss about it; how could everyone be wrong about this; and it seems logical, so let us keep going.
[QUOTE FROM ARTICLE] “This climate sensitivity metric is often called “ECS,” which stands for equilibrium climate sensitivity to a doubling of CO2, often abbreviated as “2xCO2.”[13] Modern climate models, ever since those used for the famous Charney report in 1979,[14] except for AR6, have generated a range of ECS values from 1.5 to 4.5°C per 2xCO2. AR6 uses a rather unique and complex subjective model that results in a range of 2.5 to 4°C/2xCO2. More about this later in the report.” [END QUOTE]
I have developed a simple one-dimensional IR radiation absorption model for the effects of increasing CO2 concentration on surface temperature, based on the following assumptions:
1. Infrared re-radiation from earth toward space as a perfect blackbody (emissivity = 1.0) based on surface temperature, distributed among frequencies according to the Planck distribution.
2. Dry atmosphere, with pressure and temperature profiles according to the adiabatic lapse rate.
3. Comparison of IR absorption at higher CO2 concentrations with a baseline pre-industrial concentration of 280 ppm.
4. Infrared absorption obeys the Beer-Lambert Law according to experimental absorption spectra for CO2.
Such a model is only reasonable for clear skies over dry land, and cannot be applied for cloudy weather or over the oceans.
The model was run using initial assumed surface temperatures between 0 C and 35 C, and resulted in an ECS ranging from 1.40 to 1.55 C per doubling of CO2 concentration. The ECS tended to decrease slightly with successive doublings, and increased slightly with initial assumed surface temperature.
The Planck distribution for blackbody radiation intensity at temperatures between 273 and 308 K reaches a maximum near the main absorption peak for CO2 between 14 um and 16 um wavelength, but the intensity is much lower (less than 1% of the maximum) for the absorption peak around 4.3 um, and is even lower for the absorption peak around 2 um.
The model also showed that IR absorption only increased with CO2 concentration at very low altitudes (< 20 meters above the ground), and decreased at higher altitudes, so that the idea of a high-altitude “hot spot” is in conflict with the results of this model. This was due to “saturation” at the high-absorbing frequencies, so that most of the radiation is absorbed at low altitudes, with less radiation remaining to be absorbed at higher altitudes.
The altitude above which the increase in absorbed radiation becomes negative also decreases with increasing CO2 concentration, and is below 6 meters above the ground for a CO2 concentration of 1000 ppm.
The ECS calculated according to this dry-atmosphere model should be considered a maximum limiting value, because over the oceans, the latent heat of evaporation required to maintain constant relative humidity would be a negative feedback representing 50% to 70% of the temperature rise due to increased IR absorption by CO2.
Since 70% of the earth’s surface is covered by oceans, and considering a negative feedback of 50% over water, we could see an ECS of about 1.5 C/doubling over dry land, and 0.75 C/doubling over water. A weighted average of the ECS over the earth’s surface would result in about 0.30(1.5) + 0.70(0.75) = 0.98 C / doubling.
The ECS over forested land would likely be lower than over rocky ground or grassland, because the trees would remove CO2 from the atmosphere below the forest canopy, and transpiration of water vapor from leaves would also have a cooling effect.
This analysis shows that any assumed ECS by the IPCC above 1.5 C per doubling is likely exaggerated, and a more reasonable value would be between 1.0 and 1.5 C per doubling.
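As a rough, purely illustrative toy (not the model described above, and using an arbitrary absorption coefficient), Beer-Lambert attenuation over a fixed path shows the saturation effect mentioned earlier: each doubling of concentration adds less additional absorption than the one before.

```python
# Toy Beer-Lambert illustration: absorbed fraction rises with concentration,
# but the gain per doubling shrinks ("saturation"). The coefficient k is an
# arbitrary illustrative value per ppm of path, not a measured CO2 property.
import math

def absorbed_fraction(concentration_ppm, k=0.004):
    return 1.0 - math.exp(-k * concentration_ppm)

c = 280.0
prev = absorbed_fraction(c)
for _ in range(4):
    c *= 2.0
    cur = absorbed_fraction(c)
    print(f"{c:7.0f} ppm: absorbed {cur:.3f}  (gain over previous doubling: {cur - prev:.3f})")
    prev = cur
```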
Agreed, and with some negative feedback due to additional cloudiness, which is likely, values below 1 deg. are possible.
Hi Dr. May,
Thanks so much for writing this insightful post. I am very much looking forward to new developments in your series. Throughout my skeptical journey, I can indeed confirm that the most effective strategy as a skeptic is to bring to light the flawed conceptualized nature of modern climate science. Models could be a brilliant tool that can help us learn, provided the modelers use and learn from them the right way, as you rightly noted. It’s unfortunate they are being wasted to promote what’s becoming more obvious over time: propaganda. Time is an ally in our battle, and more and more people will start realizing this farce for what it is as logic and reason deviate more and more from what modern climate science is saying.
Once again, thanks; please keep doing what you are doing.
Thanks for the kind words. I do not have a PhD, just a BS, so “Mr.” or just “Andy” is appropriate.
Dear Andy,
The most important model, the one you have not specifically mentioned, is a statistical model of the process generating the data of interest. For example, that observed temperature (y) is consistent with a physical process (x). Another example is whether the output of a conceptual model (y) is consistent with observed data (x).
Statistical verification against a reference frame is the only control we have that data (either observed or modeled) truly represents the real world.
The physical reference frame for maximum temperature is the First Law of Thermodynamics. The reference frame for a conceptual/physical model is homogeneous observed data, not data that has been mashed-up by some other process but data that are verifiably sound and fit-for-purpose.
By-eye curve fitting lacks statistical control – it tells the audience nothing about whether the underlying process has been well explained or poorly explained.
Visually comparing a supposed linear increase in temperature with an apparently logarithmic increase in CO2 is a good example of what not to do. Examining data as timeseries without checking the assumption that data are homogeneous (not affected by a second factor aside from time) is probably the most frequent mistake made by people skeptical of the warming narrative.
How many people have analysed maximum temperature for Marble Bar, Western Australia, allegedly the warmest place in Australia? Too many to count.
How many of those understood that the site moved from the original post office (where, before they installed a Stevenson screen, thermometers were hung on the back wall “in the shade”), to somewhere else when the PO was wrecked, then to a new PO in Francis Street, then to the licensed PO after the official one closed, then to the Travelers Rest Motel, then out into the scrub behind the motel where the automatic weather station is now.
Why is it hotter now?
Because they moved the site to different exposures; because observations were made by untrained staff and were lackadaisical; because they reduced the screen size from 230 litres to 60 litres; and because they scraped all the topsoil and vegetation from around the current site, so these days it resembles a gravel pit.
Short answer: the data are not homogeneous and should never have been analysed as time-series. Excel produces easy but lousy stats that are useless for any kind of scientific study.
The lazy thing to do is draw a few graphs, add a trend line, not understand that Excel’s R2 value is the square of the Pearson correlation coefficient (not R2 adjusted for the number of terms and observations in the model), not check residuals, and then claim the ‘line’ means something.
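For example, a short illustrative sketch of the difference between the plain R2 a trend line reports and R2 adjusted for the number of terms and observations:

```python
# Plain R-squared of a straight-line fit versus R-squared adjusted for the
# number of predictors (p) and observations (n).
import numpy as np

rng = np.random.default_rng(0)
n, p = 30, 1
x = np.arange(n, dtype=float)
y = 0.02 * x + rng.normal(0.0, 1.0, n)       # weak trend buried in noise

slope, intercept = np.polyfit(x, y, 1)
resid = y - (slope * x + intercept)
r2 = 1.0 - resid.var() / y.var()             # what a spreadsheet trend line reports
r2_adj = 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)

print(f"R^2          = {r2:.3f}")
print(f"adjusted R^2 = {r2_adj:.3f}")        # always lower; can even go negative
```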
My question to them is: why? What is the underlying physical process that causes Tmax to increase over time? There isn’t one. There is no physical reason why “time” should explain Tmax, especially when all the trend in the Marble Bar data is verifiably due to site relocations, poor data, and poor site control.
The report on Marble Bar that explains all this, complete with pictures, is available here: https://www.bomwatch.com.au/wp-content/uploads/2022/12/Marble-Bar-back-story-with-line-Nos.pdf
The purpose of homogenisation is to coerce Tmax data so they agree with models. Visual comparisons between models and their version of homogenised data are therefore misleading.
Failure to find a residual trend in Tmax data for Marble Bar and other sites across the Pilbara and the Kimberley Region of NW Australia means models depicting warming in Australia are disconnected from the real world.
Yours sincerely,
Dr Bill Johnston
http://www.bomwatch.com.au
Thanks for producing this important analysis Andy!
I’ve stopped mincing words as often and now call the purposeful rejection of empirical data FRAUD. The way media treats hypothetical model output as if it is unquestionable empirical data is maybe the greatest scientific hoax of all time.
Dear Andy,
Thanks for the article.
Why is the IPCC trying to keep the Earth cold when about 9 times as many people die from cold-related causes as from heat-related causes? What is the purpose?
About 4.5 million people die from cold-related causes compared to about 500,000 people dying from heat-related causes each year. Cold or cool air causes our blood vessels to constrict causing blood pressure to rise and that causes more strokes and heart attacks during the cooler months of the year.
‘Global, regional and national burden of mortality associated with nonoptimal ambient temperatures from 2000 to 2019: a three-stage modelling study’
https://www.thelancet.com/journals/lanplh/article/PIIS2542-5196(21)00081-4/fulltext
The Earth is still in a 2-million-year-plus ice age named the Quaternary Glaciation, in a warmer but still cold interglacial period. Over 20 percent of the land is frozen, either as permafrost or covered by glaciers. By definition, the ice age the Earth is in won’t end until all the natural ice on Earth melts.
https://en.wikipedia.org/wiki/Quaternary_glaciation
“This suggests that current climate models do not fully represent important aspects of the mechanism for low frequency variability of the NAO.”
They should know that negative NAO conditions increase during low solar periods. They should know that negative NAO regimes drive a warmer AMO and a warmer Arctic.
Not unlike the advice that T. C. Chamberlin gives in his treatise, “The Method of Multiple Working Hypotheses.”