CMIP6 and AR6, a preview

By Andy May

The new IPCC report, abbreviated “AR6,” is due to come out between April 2021 (the Physical Science Basis) and June of 2022 (the Synthesis Report). I’ve purchased some very strong hip waders to prepare for the events. For those who don’t already know, sturdy hip waders are required when wading into sewage. I’ve also taken a quick look at the CMIP6 model output that has been posted to the KNMI Climate Explorer to date. I thought I’d share some of what I found.

Figure 1. The CMIP6 13-member model ensemble from the KNMI Climate Explorer. The global temperature anomaly from the 1981-2010 average is plotted for all 13 model runs. Notice that two runs are plotted for each of three of the models.

There are two model ensembles on the website currently, one contains 68 model runs and the other contains 13. Figure 1 shows all 13 runs for the smaller ensemble. There are two runs in the ensemble from the Canadian Centre for Climate Modeling (CanESM5 P1 and P2). There are also two for NCAR (CESM2-1 and CESM2-2) and two for the Jamstec model (MIROC). So, Figure 1 represents ten models total. The models use historical forcing before 2014 and projected forcing after.

All curves are global average temperature anomalies from the 1981-2010 average temperature. Notice the 19th century spread of results is over one degree C. The current spread of results is not much tighter and the spread at 2100 is over two degrees. All these model runs use the ssp245 emissions scenario, which is the CMIP6 version of RCP 4.5, as far as I can tell. Thus, it is the middle scenario.
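For readers who want the arithmetic, an "anomaly from the 1981-2010 average" is simply each value minus the mean over that baseline window. A minimal sketch, using invented numbers rather than actual KNMI data:

```python
# Temperature anomalies relative to a 1981-2010 baseline.
# All numbers here are invented for illustration; they are not KNMI data.

def anomalies(temps_by_year, base_start=1981, base_end=2010):
    """Subtract the mean over the baseline years from every value."""
    baseline = [t for y, t in temps_by_year.items() if base_start <= y <= base_end]
    base_mean = sum(baseline) / len(baseline)
    return {y: t - base_mean for y, t in temps_by_year.items()}

temps = {1980: 14.0, 1981: 14.1, 1990: 14.2, 2000: 14.3, 2010: 14.4, 2020: 14.6}
anoms = anomalies(temps)   # e.g. the 2020 anomaly is 14.6 - 14.25 = +0.35 C
```

Because every curve is re-centered on its own baseline mean, anomalies let models with very different absolute temperatures be compared on one plot.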

Two of the runs are pretty wild, the pale blue model that is very low in 1960 and very high in 2100 is the Met Office Hadley Centre model run, UKESM1.0-LL (2018). The Canadian “CanESM5 P 1” model run follows the same path but is hidden by UKESM1.0. The other runs are bunched up in a one-degree scrum.

In Figure 2 we show three of the model runs: both Canadian runs, in blue and orange, along with one of the NCAR runs in gray. The black curve is the 13-run “ensemble mean” as computed by the KNMI Climate Explorer. Both the ensemble and its mean were provided by the site; I did not construct either myself.

Figure 2. Three individual model runs are compared to the 13-member ensemble mean. The vertical axis is the anomaly from the 1981-2010 average in degrees C.

Historical forcings are used prior to 2014 and projected values after. The blue and orange curves are two runs from a single Canadian model. The two runs differ by over 0.2°C in 2010 and 2011; in some months they differ by over 0.5°C. There are multiple periods where the runs are clearly out-of-phase for several years, for example 2001-2003 and 2014-2017. The period from 2015 to 2019 is a mess.

Figure 3 compares the same ensemble mean shown in Figure 2 to three weather reanalysis datasets, also downloaded from the KNMI Climate Explorer. The weather reanalysis datasets are shown in the fainter lines.

Figure 3. Three weather reanalysis data sets are compared to the model ensemble mean from Figure 2.

Weather reanalysis is done after the weather data are recorded, but using a weather computer model. The reanalysis can incorporate many thousands of observations, so the output is generally quite reliable, at least in my opinion. Notice that all three weather reanalysis datasets, NOAA, NCEP and ERA5 (from the European Centre for Medium-Range Weather Forecasts), are in phase and track one another. Over periods of up to three years, the ensemble model mean is hopelessly out-of-phase with the reanalyses. This occurs both before 2014, when the models used historical data, and after.

Conclusions

I’m unimpressed with the CMIP6 models. The total warming since 1900 is less than one degree, but the spread of model results in Figure 1 is never less than one degree. It is often more than that, especially in the 1960s. The models are obviously not reproducing the natural climate cycles or oscillations, like the AMO, PDO and ENSO. As can be seen in Figure 2 they often are completely out-of-phase for years, even when they are just two runs from the same model. I used the Canadian model as an example, but the two NCAR model runs (CESM2) are no better. In fact, in the 2010-2011 period and the 2015-2019 period they are worse as you can see in Figure 4.

Figure 4. Comparing the two runs of the NCAR CESM2 model to the ensemble mean and a Canadian model run.

The AR5 report was an expensive redo of AR4. Both abandoned any hope of finding solid evidence of human influence on climate and tried to use climate models to show that humans somehow control the climate with our greenhouse gas emissions. They tried to use solid evidence in SAR, the second report, and TAR, the third report, but they were shown to be wrong in both attempts. You can read about their SAR humiliation here and the TAR “hockey stick” humiliation here.

Forty years of work and billions of dollars spent and still no evidence that humans control the climate. Models are all they have, and these plots do not inspire confidence in the models. As we discussed in our previous post, averaging models does not make the results better. Ensembles are no better than their member model runs and can be worse. If the individual runs are out-of-phase, as these certainly are, you destroy data, and the natural cycles, by averaging them. See the evidence presented by Wyatt and Curry and Javier, here and here. If they are going to convince this observer, they need to do much better than this. And a word to taxpayers, why are we paying these huge sums of money to simply do the same thing over and over? Bottom line, is AR6 going to be any different from AR4 or AR5? Are any of these documents worth the paper and ink?
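The averaging point is easy to demonstrate with synthetic data. In this sketch, which uses invented sine waves rather than actual model output, two "runs" carry the same trend and the same oscillation, but 180 degrees out of phase; their average keeps the trend and erases the cycle:

```python
import math

# Two synthetic "runs": the same warming trend plus the same oscillation,
# but with the oscillations 180 degrees out of phase (invented data).
years = range(100)
trend = [0.01 * y for y in years]                              # shared trend, deg C
osc   = [0.3 * math.sin(2 * math.pi * y / 20) for y in years]  # 20-year cycle
run_a = [t + o for t, o in zip(trend, osc)]
run_b = [t - o for t, o in zip(trend, osc)]

# The two-run "ensemble mean" keeps the trend but erases the oscillation.
mean = [(a + b) / 2 for a, b in zip(run_a, run_b)]
residual_osc = max(abs(m - t) for m, t in zip(mean, trend))    # essentially zero
```

Each run individually swings ±0.3°C around the trend, yet the mean of the two is a featureless straight line; this is the sense in which averaging out-of-phase runs destroys the natural cycles.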

Stephen Wilde
February 11, 2021 2:13 am

I’m dreading the Climate Conference planned for the UK later this year.
Can you get full body waders ?

Climate believer
Reply to  Stephen Wilde
February 11, 2021 2:33 am

Don’t take any chances…

Tritonia_Lusitania_1935.jpg
Reply to  Climate believer
February 11, 2021 3:51 am

I’ll order 10.

Steve Case
February 11, 2021 2:20 am

OK, what will the Global Warming Potential (GWP) be for methane in the AR6?

FAR 63
SAR 56
TAR 62
AR4 72
AR5 85
AR6 ???

Three question marks, because it’s sure to be over 100

Pillage Idiot
Reply to  Steve Case
February 11, 2021 6:16 am

We should probably oxidize the methane in a co-generation power plant – just to be on the safe side.

Mark Pawelek
Reply to  Steve Case
February 11, 2021 6:34 am

They all have an R in them, or AHRRR, as we pirates like to say. That explains their careerism, made-up “science”, bad faith arguments, hypocrisy, and general nastiness towards anyone who tries to invoke actual science. They were just pirates all along.

Reply to  Mark Pawelek
February 12, 2021 8:46 am

They’re hunting for taxpayer “traaaaasuuure”. Arrrrrrr, matey.

Phil Rae
February 11, 2021 2:37 am

A sad state of affairs, Andy………..but most posters here wouldn’t expect anything else.

Genuine science on these matters has long been subsumed into the mire of politics, environmental activism and “trough feeding” by those who depend on governments to fund their “work”.

Add on top of that the new enthusiasm of the energy companies to demonize their own products while scooping up subsidies, plus the financiers’ delight at the opportunity to make vast sums out of unnecessary changes to essential infrastructure, and we arrive at where we are today. Tragic!

Ron Long
February 11, 2021 2:39 am

Good posting, Eric. The amount of wasted taxpayer money producing these reports is not only disgusting, but, since it is wasted on sitting in an office torturing data into submission, it represents opportunity gone to do something of actual benefit to the taxpaying public. And China and India get a pass on everything because they are emerging economies? Makes one wonder how history will record this era?

AGW is Not Science
Reply to  Ron Long
February 12, 2021 7:57 am

As the West’s Lysenkoism, I suspect.

Reply to  Ron Long
February 12, 2021 8:52 am

Makes one wonder how history will record this era?

The Stupidocene.
The Corruptocene.

Mark Pawelek
Reply to  beng135
February 14, 2021 1:39 pm

The Obscene?

Ron Long
February 11, 2021 2:42 am

Sorry, Andy, I was reading Eric’s Mars report and chose wrong author, so, good posting, Andy.

cerescokid
February 11, 2021 3:57 am

“The models are obviously not reproducing the natural climate cycles or oscillations, like the AMO, PDO and ENSO.”

They can’t. If they do, light bulbs across the globe will go on and people will start asking questions. Questions are not wanted. Questions demand answers, and they are not prepared for the answers they don’t want to hear. People are not stupid, given all available information. The problem has always been that the people have not been given all available information.

Mark Pawelek
Reply to  cerescokid
February 14, 2021 1:41 pm

Best way to avoid hearing an answer you don’t want to hear is to stamp down hard on anyone asking questions.

Waza
February 11, 2021 3:58 am

Thank you Andy and others for providing a detailed analysis of the climate models.
I am just an ordinary engineer without the knowledge to dig deep into the physics and maths of the models.
However, it is now apparent (thanks to your effort) that I don’t have to.
I am now confident in saying the models are useless for the purpose of projecting the climate 100 years into the future.

In this climate war, I need a hard-hitting strike to challenge any alarmists.

CLIMATE MODELS ARE USELESS.
- Although the models have been continuously upgraded, they still fail to predict recent climate.
- They are inconsistent with each other in hindcasting the past and predicting the future.
- The uncertainty in each model is too great, making them useless for any valuable prediction.
- Their inconsistency and uncertainty force the IPCC to average them. This is fundamentally wrong.
- The models are flawed because they fail to address key known (but extremely complex) physical systems such as clouds and known ocean oscillations.
- The guessing of current and past parameters/boundary conditions, such as historic ocean temperatures and aerosols, casts doubt on any output from the models.
For any of the above reasons the IPCC models are wrong and not fit for purpose.

The above points are my bullets to attack alarmists. I do not need to prove any more details. They need to defend each bullet.

Additional discussion.
I am still disturbed that alarmists brush off chaos theory.
If ALL physical mechanisms are known and ALL initial conditions are known and NO butterflies arrive, it is still not guaranteed that a climate ( or any other complex) model will provide the correct output.
Although I am grateful for the effort of Andy and others to get to the details of the models, I have this doubt in the back of my mind that due to chaos theory climate models can NEVER calculate the temperature in a 100 years time.
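Waza’s chaos concern can be illustrated with the simplest chaotic system there is, the logistic map. This is a generic demonstration of sensitive dependence on initial conditions, not a claim about any particular climate model:

```python
# Logistic map x -> r*x*(1-x), fully chaotic at r = 4: two starting points
# that agree to nine decimal places soon diverge completely.

def trajectory(x, r=4.0, steps=60):
    xs = [x]
    for _ in range(steps):
        x = r * x * (1 - x)
        xs.append(x)
    return xs

ta = trajectory(0.400000000)
tb = trajectory(0.400000001)   # initial difference of one part in a billion
max_gap = max(abs(p - q) for p, q in zip(ta, tb))
```

The gap between the two trajectories roughly doubles at every step, so a one-part-in-a-billion difference in the starting state grows to order one within a few dozen iterations; this is the mechanism behind the worry that small errors in initial conditions can swamp a long-range projection.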

alastair gray
Reply to  Waza
February 11, 2021 4:12 am

My past experience of modelling complex systems is oilfield performance modelling. These models were proprietary commercial software run throughout the industry. They model past, present and future oilfield performance in terms of individual well flow rates of oil, gas and water, for all wells, as well as the pressure distribution throughout the reservoir. The input to the model at all times is:
1) Maps of the top and bottom of the reservoir with all geological faults
2) Details of the distribution of porosity (percentage of pore space available) and permeability (degree of connectedness of the pore space)
3) Pressure distribution within the reservoir
I supplied input under category 1) and others contributed the rest. Between us all we would go back and forward to construct the most plausible model. If it did not history-match from the beginning of production all the way up to yesterday, it would be considered a pretty useless model. Likewise, if future performance did not match the model, the model parameters were tweaked until performance matched up to the new present day. Generally we would get to a model that worked, or the field died and all the models became redundant, as when the patient dies.
With climate models there are some things I don’t understand.
1) I believe a model is constructed to match a particular training period and is then hindcast to match the past. We should then have a model that, if it is worth anything, matches from say 100 years ago up to the date of model creation. We then sit back and observe the model’s performance against the real world. When Christy shows his upper-troposphere models-versus-observations comparisons with all zeroed at 1979, Gavin Schmidt gets irate, but surely any model is always past-perfect, and they should all correctly indicate the real-world temperature at any date in the past.
2) How does carbon dioxide concentration enter the model? Is it a forcing in W/sq. m that is input, or simply a concentration with an algorithm to convert that to temperature?
3) The famous three-fold feedback for CO2 warming. Does that apply to any warming? It must, because the system cannot detect whether a calorie of warming comes from the sun or a CO2 molecule vibrating. Anyway, how is that specifically brought into the model?
4) The climate sensitivity to CO2 doubling. Is that an explicit input to the model or a parameter of the model output?
What I am asking for is some knowledgeable person who knows about models to give us a sensible rundown on how these models are constructed, with assumptions explicit and implicit detailed: the effective date of the model, the training period, how they handle the response to future greenhouse gases, metrics of past model performance, etc.
Hopefully such a paper exists and someone can give us a reference that is better than “Scientists say, with state-of-the-art models, blah blah blah” or, from the other side, “Models are shonky nonsense that don’t match anything.”

Clyde Spencer
Reply to  Andy May
February 11, 2021 8:55 am

Andy,
You said, “…try to model climate from first principles.” The problem is that the “first principles” are a Potemkin facade supported with parameterizations of clouds, and various poorly supported assumptions, reinforced with subjective limits on the iterations to prevent the predictions from becoming unphysical. That is to say, the physics becomes a rationalization for the results; however, it is the subjective ‘fudge factors’ that are driving the models!

alastair gray
Reply to  Clyde Spencer
February 12, 2021 4:46 am

See my post above, Clyde. You might be the person to write the idiot’s guide to “What GCM models actually do and what assumptions go into them”.

DMacKenzie
Reply to  Andy May
February 11, 2021 9:53 am

Consider an analogy: games of chance in Las Vegas. You can know the odds pretty much exactly; you can know lots of parameters, such as how much and how long people gamble. You can put all this info into a computer with all kinds of algorithms, economic principles, and seemingly sound logic. Still, your chances of predicting who wins and loses simply “do not compute”, and your prediction of how much any given casino makes in profit is going to have maybe a +/- 30% error. Because there are many casinos, the government estimate of its taxes on those casinos will likely have less than 5% error. Unfortunately, when it comes to climate, we only have one casino. Everyone thinks that more years of data will improve the 30% error, but all that really happens is more years confirming the 30%. Is your model useful? Is it worth developing further, beyond just keeping basic stats? Not really, although someone will always say yes to allow themselves to be paid with other people’s money for satisfying their curiosity.

Lrp
Reply to  DMacKenzie
February 11, 2021 11:24 am

Their error is bigger than 30%, and the tax much larger than that on casinos’ earnings.

KevinM
Reply to  DMacKenzie
February 11, 2021 1:28 pm

Poor understanding of the casino industry.

Reply to  KevinM
February 11, 2021 6:58 pm

Nothing much to understand. The ‘joke’ in the 70’s – a chap drives up from Los Angeles to Las Vegas in a $6,000 Buick, and comes home in a $40,000 Greyhound.

Reply to  KevinM
February 12, 2021 5:02 am

First, it’s an analogy. Secondly, too few people understand mutually exclusive, i.e. independent. If a coin comes up heads, you will lose your shirt if you think the next flip MUST BE a heads. Or if you roll 1, 2, 3, 4, 5 and think the next one MUST BE a 6. Any game where the next throw is not dependent on the previous one is fraught with danger. The casino’s know this. Probabilities can’t predict the next state of independent things.

That is where climate falls down.
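The independence point above can be checked with a quick simulation of the gambler’s fallacy. This is a generic illustration with simulated coin flips, nothing to do with climate data: for a fair coin, the frequency of heads after a run of three heads is still about one half.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Flip a fair coin many times; among flips that follow three heads in a row,
# the next flip is still heads about half the time.
flips = [random.random() < 0.5 for _ in range(200_000)]
after_three_heads = [flips[i] for i in range(3, len(flips))
                     if flips[i - 3] and flips[i - 2] and flips[i - 1]]
rate = sum(after_three_heads) / len(after_three_heads)   # close to 0.5
```

No matter how long the preceding streak, the conditional frequency stays near 50%; independent events carry no memory of what came before.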

AGW is Not Science
Reply to  DMacKenzie
February 12, 2021 9:53 am

The problem is they don’t even know the parameters, the “models” are largely a collection of “adjustable,” “parameterized” fudge-factors which are “tuned” to achieve a known result, and a lot of incorrect “assumptions,” the big one of course being that CO2 is a “driver” of climate.

A fine example is that they acknowledge a poor understanding of, and inadequate data concerning, cloud behavior, a small change in which can completely erase even the hypothetical effect of CO2 on temperature, which is predicated on “all else held equal.” So they have essentially admitted that CO2 may have nothing to do with climate change, which by the way is the reality supported by empirical observation, yet continue to include this assumption in their useless “models.”

alastair gray
Reply to  Andy May
February 12, 2021 4:44 am

Hi Andy
Not a modeller myself. The Petroleum engineer was the organ grinder. I was just one of his monkeys. As a petrophysicist I guess you were another monkey!
However, I did note that however badly I messed up the geophysics, the modeller could always bludgeon it into some sort of history match. Still, by iteration, and by having all the monkeys critically evaluate their input, I think we generally produced reliable oilfield simulation models.
The whole modelling thing is the Achilles heel of the AGW scare, and the more clarity we can cast on the modelling system, the better we can refute their nonsense.
I see the need for someone, maybe me, to write a clear exposition of how the models work, focusing on:
when is time zero for a particular model? (not much use if it is the day before yesterday)
what were the initial conditions of the model and how were they derived
how do you go from CO2 concentration to temperature change
I am currently looking at the AR5 reference that you sent (a real heavy plod) and Poyet’s ebook. Maybe, and hopefully, some real modelling expert will tackle the issue.

AGW is Not Science
Reply to  alastair gray
February 12, 2021 10:00 am

how do you go from CO2 concentration to temperature change

The fact that the “modelers” consider this to be a question is part of the problem; the question should be “how do you go from temperature change to CO2 concentration change,” on multiple time scales.

The actual climate driving forces are not even simulated or quantified in the models (to the extent they are known, which is part of the problem; they are not all known, we have insufficient data over an insufficient time period, and what we have is of too poor a quality), which underscores how completely pointless they are.

The whole reason for existence of the “models” is to speculate about how much of an effect a non-factor like atmospheric CO2 will have on climate – in other words, to push the propaganda.

Reply to  alastair gray
February 13, 2021 6:30 am

I doubt that you “messed up the geophysics”. You undoubtedly provided the Eclipser with point in space, ranged estimates, for many, many, relevant rock properties, and info about how those ranges correlated. Almost all of these estimates were from many different initial measurement/processing techniques, but were sufficiently comparable to be used together. Then, (s)he did the same for fluid properties, celled it all, and ran multiple realizations, under multiple operational (lift, sand face pressure v cum produced, etc.) scenarios. These outputs aided in making Trumpian YUGE investment/OPEX decisions.

You were part of teams that, even with model output uncertainty, enriched your employers. FYI, your fit for purpose work processes used much more sparse, uncertain, info to reservoir model, than what the climate model builders have available. That’s why, despite the deflections here, they are at least as fit for purpose…

AGW is Not Science
Reply to  Andy May
February 12, 2021 8:18 am

More realistically, I’d say it is the assumption that CO2 contributes in any measurable way to climate change that is the problem.

There is NO empirical support for this. And plenty of empirical support that it does NOT.

The paleoclimate record clearly shows atmospheric CO2 is driven by temperature, and not the other way around. And their “science” version of the “dog ate my homework” excuse of the invisible “contribution” supposedly made when both temperature and CO2 are moving in the same direction doesn’t hold water, either (no pun intended). Repeat episodes of REVERSE correlation at every temperature trend reversal, coupled with the fact that temperatures always begin to RISE with CO2 levels near their LOW point, and the fact that temperatures always begin to FALL with CO2 levels near their HIGH point, conclusively show that atmospheric CO2 does not “drive” OR “contribute” a damn thing.

And comparing “proxy” CO2 measurements from ice cores (which have significant issues which are of course ignored) with modern instrument readings as if they measure the same thing is scientific incompetence, overstating the amount by which atmospheric CO2 has changed.

Mark Pawelek
Reply to  Andy May
February 14, 2021 1:57 pm

Modern climate models are effectively a good way to reverse scientific progress. Models embody the modeler's bias. They entirely avoid the scientific method while claiming to be 'settled science', allowing elites to claim the status of great scientists while merely peddling their prejudices. Modelers and climate gate-keepers don't intend to reverse scientific progress; they just don't care. They are full of themselves, like the aristocracy of the past who used the divine right of kings to get away with murder. It's a scam which keeps their people in charge and the hoi polloi sleeping in the stables. They intend to rule by fear.

Pat Frank
Reply to  alastair gray
February 11, 2021 8:51 am

Alastair, all climate models merely project future air temperature by linear extrapolation of CO2 forcing.

That’s the whole ball of wax. Global warming is calculated by y = mx+b, hidden inside complicated code and with lots of PDEs to make it look sciencey but that average away to nothing.

See Propagation of Error and the Reliability of Global Air Temperature Projections. It’s all laid out in detail.
Climate models are a subjectivist narrative decorated with mathematics. The CO2 conclusion is built into the models from the start. And CMIP6 models are no better.
AGW is a pseudo-science crock from top to bottom.
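To make the claim above concrete, here is what a linear emulation of model output looks like: fit y = mx + b of temperature anomaly against forcing, then reproduce the projection from forcing alone. Every number below is an invented placeholder, not output from any GCM or from the paper cited above:

```python
# Ordinary least squares fit of temperature anomaly (y) against forcing (x),
# y = m*x + b. Every number below is an invented placeholder, not GCM output.

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

forcing = [1.0, 1.5, 2.0, 2.5, 3.0]        # W/m^2, hypothetical
anomaly = [0.31, 0.44, 0.62, 0.73, 0.90]   # deg C, hypothetical "model" output

m, b = fit_line(forcing, anomaly)
emulated = [m * f + b for f in forcing]    # the straight-line "emulation"
```

If a two-parameter straight line in forcing can closely reproduce a projection, the argument goes, then the elaborate machinery behind the projection adds little information beyond the slope and intercept.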

AGW is Not Science
Reply to  Pat Frank
February 12, 2021 10:01 am

Thanks for getting the point across so eloquently, Pat.

Reply to  AGW is Not Science
February 12, 2021 3:14 pm

Thanks for not letting it float away into the ether, AiNS. 🙂

fred250
Reply to  alastair gray
February 11, 2021 12:42 pm

“How does carbon dioxide concentration input itself into the model?”

.

Because of GISS adjustments to past temperatures, with the “adjustments” being in direct relationship to CO2 increase, the CO2 warming is “baked-in” to the hindcasting.

Climate models can NEVER be correct because they are fed bogus data from the start.

AGW is Not Science
Reply to  fred250
February 12, 2021 10:03 am

Yup, yet another problem. The so-called “data” hasn’t been “data” for years, and the “adjustments” are a textbook example of “confirmation bias,” one of the biggest sins of supposed “scientific” inquiry.

fred250
Reply to  alastair gray
February 11, 2021 12:46 pm

I’ve asked before, with no answer..

Show me a model that hindcasts Arctic temperatures of around 1940 as being similar to now.

If they can’t even do that, they really are in the “why bother” basket.

AGW is Not Science
Reply to  fred250
February 12, 2021 10:06 am

Show me a model that hindcasts Arctic temperatures of around 1940 as being similar to now.

….and if they could get them to do that, all you would need to show what a sham it is is change the “start” date for the model and re-run it, and it would no longer be able to do so. Elephant, trunk, wiggle. Change the start date for the model run and watch the dance change completely. LOL.

RelPerm
Reply to  Andy May
February 11, 2021 6:10 am

One model shown to be correct

That one lone little Russian model where predictions are close to observations.

Alasdair Fairbairn
Reply to  RelPerm
February 11, 2021 8:39 am

I couldn’t find the Russian model INMCM5 in figure 1.

AGW is Not Science
Reply to  Andy May
February 12, 2021 10:08 am

It is the best model and that may be why it is not in the “official” ensemble.

BINGO! Not “alarmist” enough, therefore not worthy. To hell with being close to reality.

Reply to  Andy May
February 12, 2021 5:17 am

I sincerely doubt we will ever get good forecasts using just temperatures as a proxy for heat. Too many other variables need to be closely measured and included in making "climate" forecasts. The other problem is chaotic cycles. It's like playing the piano and only getting near the right key. It won't make for good music!

leitmotif
February 11, 2021 4:07 am

Here is Donna Laframboise’s take on the IPCC process in September 2019. DL has written 2 books about the IPCC.

“The IPCC asks scientists to read published research on its behalf and to decide what it means. IPCC reports are themselves not science. They are a series of interpretations and judgment calls.

HOW THE IPCC WORKS

Scientists are invited to help write a certain chapter of a report. But they lack the power to even alter the title of their chapter. They’re given an outline and are expected to stick to it. They can’t ignore topics they consider unimportant, or discuss topics that haven’t been pre-approved.

Afterward, some of these scientists are tasked with writing a summary of the larger, overall report. If science ruled at the IPCC, that summary would be released directly to the public. Instead it gets rewritten by politicians, diplomats, and bureaucrats. During a multi-day meeting. The result is a politically negotiated summary that is then designated as the official truth. Everything bows down before it.

We teach children that a summary should accurately reflect a longer document. But things are topsy-turvy at the IPCC. The underlying report, the document written by scientists – the one that was supposedly being summarized – then gets modified. So that it conforms to the official, politically acceptable truth.

The IPCC calls these modifications trickle backs. After the summary of its recent report about climate change and land use was re-written at one of these meetings, 125 separate changes were made to the underlying report.


15 changes were made to Chapter 1. More than 30 changes were made to Chapter 5. Even definitions were tussled over. Political actors thought they knew better than scientists how to define terms such as CO2 fertilization, vegetation greening, and vegetation browning.
This isn’t a mistake. It isn’t a misunderstanding. And it’s not a secret. This is how the IPCC operates."

https://nofrakkingconsensus.com/2019/09/04/astonishing-media-misinformation-about-the-ipcc/

Reply to  leitmotif
February 11, 2021 4:27 am

Donna is brilliant.

Didn’t she have to stop comments on her website because of the abuse she was receiving?

Editor
Reply to  leitmotif
February 11, 2021 6:41 am

leitmotif, you quoted Donna Laframboise: “The IPCC asks scientists to read published research on its behalf and to decide what it means. IPCC reports are themselves not science. They are a series of interpretations and judgment calls.”

The UN admits on its IPCC-history webpage that the IPCC was founded to write reports that support the UNFCCC. In real terms, the IPCC is a propaganda-writing entity that was founded to support political agendas, which Margaret Thatcher identified as “anti-growth, anti-capitalism, and Anti-American” in her book Statecraft.

Regards,
Bob

Tom Abbott
Reply to  Bob Tisdale
February 11, 2021 7:13 am

“The UN admits on its IPCC-history webpage that the IPCC was founded to write reports that support the UNFCCC. In real terms, the IPCC is a propaganda-writing entity that was founded to support political agendas”

That’s right, and the political agenda of the UNFCCC was to find evidence of human influence on the Earth’s atmosphere with the burning of fossil fuels.

The UNFCCC wanted to set itself up as the regulator of CO2 for the world, and the IPCC propaganda supports this aim to the point of lying and distorting the facts. The IPCC claims to see Human-caused Climate Change when there is no evidence for it, even in their own reports. It’s an unsubstantiated assertion, made for political purposes.

Editor
Reply to  Tom Abbott
February 11, 2021 12:50 pm

Tom, I disagree with this portion of your reply: “That’s right, and the political agenda of the UNFCCC was to find evidence of human influence on the Earth’s atmosphere with the burning of fossil fuels.”

The UNFCCC is actually a series of international treaties, not research.

Regards,
Bob

leitmotif
Reply to  Bob Tisdale
February 11, 2021 7:26 am

Also, Bob, the IPCC does not conduct any research itself and no peer review is carried out on the reports the IPCC produces.

How is it the IPCC is considered the gold standard on climate change if it performs neither of these tasks?

Gerald Machnee
Reply to  leitmotif
February 11, 2021 12:57 pm

Scientists analyze papers, then prepare comments to be summarized. However, the top tier of fewer than 100 “lead writers” then decides which comments they like and ignores the rest. Steve McIntyre was an expert reviewer on one of the reports. He asked for more information on a paper and was told not to ask any questions or he would be kicked off the panel. This is noted in his weblog.

Gerald Machnee
Reply to  leitmotif
February 11, 2021 1:03 pm

Donna said: “Afterward, some of these scientists are tasked with writing a summary of the larger, overall report.”
In actuality, the lead writers are chosen beforehand. That is how Michael Mann inserted his “hockey stick” into the report. The small group of lead writers essentially decides the outcome of the report.
In addition, the Summary for Policymakers is written before the scientific reports are finalized, so the scientific reports are then changed to agree with the Summary. That is what Donna was talking about when she noted all the changes: science had to agree with policy.
The same thing happened in October 2018, when the IPCC put out its 12-year fear-mongering warning. Those statements were not the result of science, since no science can predict how temperatures will rise with CO2.

Redge
February 11, 2021 4:13 am

If the climate models have any kind of prediction power, why isn’t there just one?

And why, according to Gavin, does the average of the models give “the right answer”?

Reply to  Andy May
February 11, 2021 4:28 am

It was meant to be a rhetorical question, but thanks, Andy.

AGW is Not Science
Reply to  Andy May
February 12, 2021 10:24 am

If they chose one model, they would have to model the oscillations and then everyone would easily see that CO2 is a minor influence on climate.

If the models didn’t build in an assumption of CO2 “influence,” there wouldn’t be any.

Reply to  Redge
February 11, 2021 4:36 am

The IPCC is, at its heart, a collaborative political process, and so is its output. Thus the politics of consensus building replaces the critical scalpel of the scientific method on that output. Consensus building is a collaborative process to keep all the "players" (countries) on board. Thus everyone's climate model gets thrown into one big ensemble so no one feels their scientists are being ignored. It becomes one big happy consensus that only looks like science to the duped public.

February 11, 2021 4:19 am

The climate dowsers are simply rent-seeking in their divinations. The models they create, even though they use the best physics for radiative transfer and many other processes, resort to parameterizations for all of the important physics of water and convective heat transfer, such as precipitation rates and cloud condensation physics. These are processes whose first-principle physics is unknown and/or which occur on size scales far too small for the models' grid size. So they fake them in the models with poorly constrained, hand-tuned parameters. And then they call the output "science."

A good analogy to what they are attempting is what I refer to as Dog Poop Cake. This is where a decent baker wants to make a chocolate cake but lacks chocolate. The baker has no deep understanding of how to make chocolate or acquire it. The baker uses the best flour, eggs, sugar and other ingredients, but that elusive chocolate is a real problem. So what is the baker to do? Fake it.
Fake it with some piles of fresh dog poop mixed with some confectioners' sugar to get the color and texture of the elusive chocolate, and proceed to make the "chocolate cake" with the dog poop substitute. What results may look like a chocolate cake, but I'd advise against taking a bite. It really isn't what the baker claims it to be.

The climate models are like the dog poop chocolate cake. Not really the science they claim to be.

Reply to  Joel O'Bryan
February 11, 2021 4:25 am

IIRC, in the 80s some mad professor baked biscuits made from human poo and presented them on a BBC programme for the host, Noel Edmonds, to try

No wonder the guy is as batty as bat poo

Reply to  Joel O'Bryan
February 11, 2021 6:41 am

To further your analogy, there’s an old joke about selling toothbrushes for $1000 each after people have sampled the poop cake for free. We are sold a ruinous solution (carbon taxes) if we buy into the bad models.
The point is to use scare stories to panic people into bad decisions.

AGW is Not Science
Reply to  Mumbles McGuirck
February 12, 2021 10:27 am

LMFAO. And here I thought the “poop cake” analogy would be hard to top!

John Tillman
Reply to  Joel O'Bryan
February 11, 2021 3:57 pm

Not to have to parameterize clouds would require on the order of ten billion times more computing power than now available. But they’d still get it wrong.

AGW is Not Science
Reply to  John Tillman
February 12, 2021 10:34 am

As long as they assume atmospheric CO2 drives the effing climate, they will always get it wrong.

Kind of like a mechanic who thinks the exhaust coming out of the muffler determines which way the car steers: he will get it wrong. In terms of pure physics, he could argue that the exhaust flow "contributes" to the steering of the car, all else held equal. And technically, this may be true in some small and trivial way. In reality, however, it is completely meaningless – just like atmospheric CO2 is to "climate change."

AGW is Not Science
Reply to  Joel O'Bryan
February 12, 2021 10:26 am

LMFAO. That may be the best “climate model” analogy I’ve ever read!

Robert of Ottawa
February 11, 2021 4:27 am

Who knew there was a Canadian Centre for Climate Modeling?

That must be right up there with their epidemiology modelling department: a politicized institution to provide excuses for government behavior.

MarkW
Reply to  Robert of Ottawa
February 11, 2021 7:19 am

Shouldn’t that be the Canadian Centre for Climate Predictions?

Reply to  MarkW
February 12, 2021 9:24 am

Shouldn't that be the Canadian CenteR for ClimAte Predictions?

Or, the above works too. 🙂

February 11, 2021 4:37 am

All these model runs use the ssp245 emissions scenario, which is the CMIP6 version of RCP 4.5, as far as I can tell. Thus, it is the middle scenario.

Not to be pedantic (perish the thought …), but between SSP1-1.9 and SSP5-8.5 the closest to “the middle scenario” would probably be SSP4-6.0 (rather than SSP2-4.5), especially if you set the end-date on your graph to 2100.

The University of Melbourne has an excellent website for downloading the “projected” levels of atmospheric GHG concentrations, especially CO2 and CH4 :
https://greenhousegases.science.unimelb.edu.au/#!/view

This includes “historical” levels from year 1 (AD) to 2014, and per-scenario levels to 2500.

Climarco
February 11, 2021 4:44 am

Hello,

There is no warming until nearly 1975 in Figure 1 (or did I miss something?). So, I have two questions:

  • How is the IPCC going to explain the sea level rise since at least 1890? (See the Brest mean sea level evolution for instance, or New York.)
  • How is the IPCC going to explain melting ice since the 19th century?

Thanks in advance.

Tom Abbott
Reply to  Climarco
February 11, 2021 7:26 am

The IPCC can’t explain sea level rise using computer-generated temperature charts. Their computer-generated temperature charts ignore the warmth that took place after 1850. These warming periods are ignored so that the alarmists can claim that we are currently living in the warmest period in human history. If they acknowledged that it was just as warm in the recent past as it is today, then they couldn’t make that claim, and they couldn’t claim that CO2 was causing unprecedented warming because there is a precedent in the recent past.

So the alarmists ignore the past because it interferes with their Human-caused Climate Change narrative. One cannot get to the facts of the matter about the Earth’s climate by ignoring the instrument temperature records, and that’s what the alarmists do. The alarmists are the real deniers.

Bubba Cow
Reply to  Tom Abbott
February 11, 2021 9:06 am

Never forget the 1st Law of Progressivism (Tucker Carlson):
"Always accuse your opponent of doing exactly what you have already been doing," i.e. denier, racist . . .

Reply to  Climarco
February 11, 2021 5:24 pm

The models provide surface temperature. Ocean thermal expansion is the result of the entire ocean, not just the top 100 m or so. It takes millennia for oceans to achieve thermal balance, so they never will, given the variation in Earth's orbit and solar output.

The current increase in temperature at depth is likely the emergence from the last period of glaciation.

Something I learnt a long time ago is that it is difficult to heat water from the surface. The natural forces oppose it. My local lake can be 20C on the surface and maybe 5C just 6m down. It is not exposed to any natural mixing so has a significant temperature gradient with depth.

Although the ocean surface is nominally temperature controlled, the tropical Atlantic sometimes fails to reach the controlled 30C. During glaciation it gets to a maximum of 26C and that eventually has some impact on the surface temperature in the tropical Pacific. There is very little impact of glaciation on the surface of the Indian Ocean. Glaciation causes a lot of the ocean water to cool. It takes a long time to recover the heat it loses.

rhoda klapp
February 11, 2021 5:21 am

The Canadians have always run hot. Wishful thinking? What has Canada to fear from a couple of degrees warming anyway?

Reply to  rhoda klapp
February 11, 2021 6:47 am

Within the hurricane forecast community the Canadian forecast models are usually dismissed for their propensity to over develop disturbances. Almost every tropical wave became a raging Cat 5 by 5 days out.

AGW is Not Science
Reply to  Mumbles McGuirck
February 12, 2021 10:48 am

LOL. Sounds typical of the anti-logic of CO2-induced armageddon.

The irony is they are 180 degrees wrong. As Richard Lindzen so eloquently summarized it, (paraphrasing) “A basic book on meteorology will tell you that in a warmer climate, extra-tropical storminess will decline.”

Clyde Spencer
Reply to  rhoda klapp
February 11, 2021 9:04 am

The fear should be that Canada would become a destination for illegal immigrants who are climate-change refugees.

AGW is Not Science
Reply to  Clyde Spencer
February 12, 2021 10:56 am

The reality is closer to the opposite: Mexico being overrun with climate refugees as the next glaciation approaches.

Or if enough “climate policy” is enacted such that “temperate” climates become uninhabitable under current conditions anyway, which is what will happen if the Eco-Nazis manage to sufficiently restrict access to and use of fossil fuels.

Clyde Spencer
Reply to  AGW is Not Science
February 17, 2021 12:16 pm

I’m reminded of a remark by Mark Twain that there is not a square foot of land on Earth that is still in possession of its original inhabitants.

February 11, 2021 5:54 am

Andy,

Another good post.

I am continually dismayed at what passes for physical science today. From absolutely no understanding of uncertainty and how to handle it, to a total misunderstanding of what averaging does to data, to an inability to assess what causes a mid-range value to change, it just saddens me to read about CAGW and GCMs today.

Rhys Read
February 11, 2021 6:01 am

They have totally wiped out the global cooling from the 40s through the 70s. They disappeared my childhood.

Tom Abbott
Reply to  Rhys Read
February 11, 2021 7:29 am

And they wiped out the warming of the 1930’s, which was just as warm as it is today, no CO2 required.

George Daddis
Reply to  Rhys Read
February 11, 2021 1:22 pm

How DARE they!

Mickey Reno
February 11, 2021 6:15 am

Hmmm, I smell a rat. None of the models even hints at the extreme warm decade of the 1930s. I think I’ll ignore them all. Again. AR1-6, proving that bureaucracy trumps science.

Tom Abbott
Reply to  Mickey Reno
February 11, 2021 7:36 am

The global surface temperature chart does not match reality, it is science fiction, and the climate models are backcasting to it, so it’s no wonder the climate models are getting it wrong going forward. They started off wrong, and are just getting more wrong as they go along.

And I'll bet every one of these modellers has seen the regional temperature charts from around the world that show the warming in the Early Twentieth Century is the equivalent of the warming today, yet they then go and pretend a computer-generated global temperature profile that erases the previous warming period is the temperature profile they should be using.

Garbage in, Garbage out. That’s what we have here with the climate models.

fred250
Reply to  Tom Abbott
February 11, 2021 12:57 pm

“The global surface temperature chart does not match reality, it is science fiction, and the climate models are backcasting to it”

.

Thanks Tom, I have said this MANY times.

If you hindcast to GISS et al., you are introducing a CO2-agenda-based warming trend from the very start.

Everything from there is just JUNK and would only be useful if printed on toilet paper.

It is truly the peak instance of GI-GO

AGW is Not Science
Reply to  Tom Abbott
February 12, 2021 11:00 am

The other part of the “garbage in” being (1) the assumption that atmospheric CO2 does anything measurable to the Earth’s temperature and (2) all of the “fudge factor” parameterizing of various inputs they lack sufficient resolution and/or understanding of.

Sounds like “Poop Cake.” (See above) Or maybe “Poop Stew.”

DHR
February 11, 2021 6:15 am

Several years ago RGB@duke posted a good article about the errancy of averaging climate models from a statistics viewpoint. It should have been cast in gold. Can’t find it anymore.

Reply to  Andy May
February 11, 2021 8:09 am

I guess that to the climate modelers natural cycles are “unforced noise”. So they think getting rid of the natural cycles is getting rid of unforced noise. Just more idiocy.

Dave Fair
Reply to  Andy May
February 11, 2021 12:49 pm

Please note the “hide the pea” language in the quote. They go from asserting a single-model GCM ensemble will give the forced signal to using multi-model mean across numerous models to get the forced GMT signal. Scientific fraud.

Bubba Cow
Reply to  DHR
February 11, 2021 9:14 am

Reply to  Bubba Cow
February 11, 2021 11:39 am

Please note that rgb didn’t even get into the uncertainty aspects of the models and their ensemble!

alastair gray
Reply to  DHR
February 11, 2021 9:18 am

A few years ago there was a drive to root maths and physics in the real world, so some idiot posed an exam question that gave a lot of facts about the solar system, like orbital radii and masses of planets, as input data and then asked students to, among other things, calculate the average distance of a planet from the sun. There is an answer to this in strictly mathematical terms, but it has absolutely no significance or even physical meaning whatsoever. Similarly, I think, with the average of different models. Possibly like writing the average book in the Library of Congress: the first letter is the average of all first letters of all books in the collection, regardless of language, and then on to the second letter. I can't wait to read it.
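For what it's worth, that exam question's average is trivial to compute, and the result illustrates the point: a perfectly well-defined number that corresponds to nothing physical (the semi-major axes below are rounded textbook values in AU):

```python
# The "average planet distance" from the exam question: computable,
# but physically meaningless -- no planet orbits there.
# Semi-major axes in astronomical units (rounded textbook values).
distances_au = {
    "Mercury": 0.39, "Venus": 0.72, "Earth": 1.00, "Mars": 1.52,
    "Jupiter": 5.20, "Saturn": 9.58, "Uranus": 19.19, "Neptune": 30.07,
}
mean_au = sum(distances_au.values()) / len(distances_au)
print(f"Mean planetary distance: {mean_au:.2f} AU")  # ~8.46 AU, in the empty gap between Saturn and Uranus
```

The same objection applies to an ensemble mean over models built on different physics: the arithmetic is well defined; the physical meaning is not.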

eyesonu
Reply to  DHR
February 11, 2021 1:32 pm

DHR,

I'm not certain, but I believe you may be referring to a rather long comment RGB made in a post by someone else. I have no idea how to locate it.

Tom Abbott
February 11, 2021 6:49 am

From the article: “All these model runs use the ssp245 emissions scenario, which is the CMIP6 version of RCP 4.5, as far as I can tell. Thus, it is the middle scenario.”

That’s interesting. The modelers have finally decided that RCP 8.5 is too much.

Even so, their models look to be way off.

Reply to  Tom Abbott
February 11, 2021 10:10 am

The modelers have finally decided that RCP 8.5 is too much.

Not at all.

The SSPs (Shared Socioeconomic Pathways) that will be used in AR6 were developed to replace the RCPs (Representative Concentration Pathways) used in AR5 (see Riahi et al, 2016), but the nomenclature of "SSPx-y.z" keeps the "-y.z" suffix that indicates the (additional) radiative forcing in the year 2100, for comparison purposes between the two.

SSP5-8.5, like RCP8.5, is a "pathway" through "+8.5W/m² in 2100", but whereas RCP8.5 resulted in a smooth "S-curve" of CO2 levels to a constant ~1960ppm from 2250 (via ~935ppm in 2100), the emissions profile of SSP5-8.5 is :

  • ~1135ppm in 2100
  • a peak of almost 2210ppm in 2243/3
  • a slow “decline” to ~2010ppm in 2500

NB : This is at least presented as a “worst-case, just for illustrative purposes …” scenario, but it remains to be seen how the media will present model outputs using that particular “pathway” as an input.

.

PS : The 7 main SSP “baseline scenarios” retained for AR6 are :

  • SSP1-1.9
  • SSP1-2.6 (cf RCP2.6)
  • SSP4-3.4
  • SSP2-4.5 (cf RCP4.5)
  • SSP4-6.0 (cf RCP6.0)
  • SSP3-7.0
  • SSP5-8.5 (cf RCP8.5)

Note that there is also a "SSP5-3.4 (Overshoot)" scenario that goes through "+3.4W/m² in 2100" via a larger "peak and decline" pathway (peak at ~570ppm from 2060-2065) than SSP4-3.4's "flatter" route (peak at ~490ppm from 2075-2085).

Carlo, Monte
Reply to  Mark BLR
February 11, 2021 11:21 am

What in tarnation do they mean by a "pathway"? An alien term to me.

Captain Climate
February 11, 2021 7:03 am

Can someone take the CMIP6 models and compare them to CMIP5 side by side to show how the previous batch sucked and how CMIP6 is doubling down on stupid fear?

Reply to  Captain Climate
February 11, 2021 9:00 am

Captain Climate — here you go.

Reply to  Captain Climate
February 11, 2021 10:21 am

Only done for atmospheric CO2 levels, but …

SSP_CO2-levels_1.png

Reply to  Mark BLR
February 11, 2021 10:21 am

… and limited to the year 2100 …

SSP_CO2-levels_to-2100-only_1.png

Weekly_rise
Reply to  Mark BLR
February 11, 2021 12:27 pm

These are not model outputs, but emissions scenarios, yes?

Reply to  Weekly_rise
February 12, 2021 4:14 am

These are not model outputs, but emissions scenarios, yes?

Not quite.

They are the calculated atmospheric CO2 levels from the various emission scenarios output by AOGCMs, which are then used as inputs by climate models of “intermediate” complexity.

My understanding (which is most likely wrong …) is that the hierarchy of climate models is as follows :
1) 3-D AOGCMs : Inputs = GHG emissions; Outputs = CO2 levels, radiative forcing (RF) numbers, GMST, SLR, …
2) Intermediate level : Inputs = CO2 levels; Outputs = GMST, SLR, …
3) “Simple” models : Inputs = RF numbers; Outputs = GMST, SLR, …

NB (1) : AOGCMs include modules like “(Land) Biology” and “(Ocean and Atmospheric) Chemistry” that the other types don’t bother with, as well as having finer grids.

NB (2) : It is possible that some “models of intermediate complexity” output a set of RF numbers, given CO2 (and other GHG) concentrations as inputs, but this point is even more TBC than the rest of this post !

The (AOGCM output) CO2 and RF numbers tend to have (very !) narrow ranges across the various climate models for a given (input) scenario / pathway, which is why the University of Melbourne “atmospheric GHG abundances” downloads for the SSPs (or the PIK equivalents for the RCPs) can be taken as “correct / accurate”.

GMST, SLR, etc. tend to be rather more “noisy / fuzzy”, with significantly wider ranges. They do tend to be just “a delayed copy of the inputs with some ‘natural variability’ added” though.
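The "simple" end of that hierarchy can be sketched in a few lines. The logarithmic CO2-forcing formula (5.35·ln(C/C0) W/m²) is the standard approximation; the sensitivity parameter is purely my own illustrative assumption, not taken from any actual model:

```python
# Toy illustration of the "simple model" tier: temperature anomaly as a
# linear response to radiative forcing. The 5.35*ln(C/C0) forcing term is
# the standard logarithmic approximation; the sensitivity value
# (K per W/m^2) is an illustrative assumption only.
import math

def co2_forcing(c_ppm, c0_ppm=278.0):
    """Radiative forcing (W/m^2) relative to a pre-industrial baseline."""
    return 5.35 * math.log(c_ppm / c0_ppm)

def simple_model_gmst(c_ppm, sensitivity=0.8):
    """Equilibrium GMST anomaly (K) for a given CO2 concentration."""
    return sensitivity * co2_forcing(c_ppm)

# Doubling CO2 from 278 ppm gives ~3.7 W/m^2 of forcing and, with the
# assumed sensitivity, roughly 3 K of equilibrium warming.
print(round(simple_model_gmst(556.0), 2))
```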

Weekly_rise
Reply to  Mark BLR
February 12, 2021 5:50 am

Thanks, but that isn’t quite the question I’m getting at. Each of the projections in these graphs is showing a unique emissions pathway – i.e. whether they agree or not does not tell us anything about model skill.

Reply to  Weekly_rise
February 12, 2021 9:58 am

We simply do not have enough actual data to determine “model skill” yet.

Shifting the available CO2 emissions estimates to “line up with” the “Historical Data” for CMIP5 (RCPs from 1990 to 2005) and CMIP6 (SSPs from 1990 to 2015), and adding the 7% reduction in CO2 emissions estimated by the Global Carbon Project (GCP) to the data from BP, OWID and EDGAR (all only available to 2019) you get the attached image.

Which scenario / pathway comes “closest to reality” ?

Which one should we use in order to compare “modeled” GMST numbers against thermometer measurements, and hence determine “model skill” ?

Real-RCP-SSP_CO2-emissions_1.png

Weekly_rise
Reply to  Mark BLR
February 12, 2021 12:10 pm

Apologies for my sloppy wording. The RCPs are not emissions pathways, but concentration pathways. I don’t think it’s meaningful to plot emissions against them.

That said, I think using a “forcing adjusted” model output is typically done when making model/data comparisons (see, e.g. Hausfather et al., 2019). So we can determine model skill in simulating observed surface trends whether or not we have actually decided which pathway is the “best” one. And, of course, my original point, which is that differences in the concentration pathways tells us exactly nothing about model skill to begin with, which seemed to be your initial implication.

Reply to  Weekly_rise
February 13, 2021 3:10 am

The RCPs are not emissions pathways, but concentration pathways. I don't think it's meaningful to plot emissions against them.

From van Vuuren et al (2011), “The representative concentration pathways: an overview” :

The community subsequently designed a process of three phases (Moss et al. 2010):

1) Development of a scenario set containing emission, concentration and land-use trajectories—referred to as "representative concentration pathways" (RCPs).

When a graph line has the label “RCPy.z” (or “SSPx-y.z”) attached, it is for the reader to determine whether it refers to the “emission”, “concentration” or “land-use” sub-element of the RCP (/ SSP).

This is usually easy to do based on the context of the discussion, although care must be taken as the debate evolves over time.

. . .

That said, I think using a "forcing adjusted" model output is typically done when making model/data comparisons (see, e.g. Hausfather et al., 2019).

As Yogi Berra famously said : “It’s tough to make predictions, especially about the future.”

The Hausfather paper highlights the fundamental difference between “hind-casting” and “fore-casting”.

As they put it : “When mismatches between projected and observed forcings are taken into account, a better performance is seen.”

For another example see this Real Climate page, where it was judged necessary to provide a “Forcing-adjusted” version of the (more recent, therefore theoretically “better” …) CMIP5 “projections” while leaving the (older, “worse” …) CMIP3 ones as they were when comparing them against the subsequent real-world measurements.

. . .

To this observer the value to be attached to the announced “model skill” in most cases is (approximately) inversely proportional to the number of “fiddle factors” used post hoc.

Reply to  Mark BLR
February 12, 2021 3:11 pm

Mark BLR, "We simply do not have enough actual data to determine 'model skill' yet."

Yes we do. They have no skill whatever.

So we can determine model skill...”

W_r, you abandoned our conversation elsewhere, and now comment here as though it never took place.

Weekly_rise
Reply to  Pat Frank
February 15, 2021 9:36 am

I don’t think this use of “skill” is correct. There is a difference between how uncertain we are about future projections and how close to reality those projections turned out to be. The models turn out to be quite close to reality, so whatever the uncertainty surrounding projections is (and I disagree it is anything close to as large as you allege), the models are actually skillful.

Reply to  Weekly_rise
February 15, 2021 12:37 pm

Wr,

The problem is that the models are WAY OFF from reality.

The only one close to reality is the Russian model. All the rest are garbage.

If the modelers would actually do an uncertainty analysis of their inputs and project the uncertainty through into their output they would quickly find out why their models are so far off from reality.

But they would rather live in their own alternate reality.

Weekly_rise
Reply to  Tim Gorman
February 15, 2021 1:33 pm

Tim, the models don’t appear to be too far off from reality:

http://www.realclimate.org/images//cmp_cmip3_sat_ann-3.png

We are comparing model output against observed reality, there is no need to do an uncertainty analysis of inputs to see whether the models are skillful in reproducing observed trends.

Reply to  Weekly_rise
February 15, 2021 3:19 pm

Models are tuned so that they match past observations. That makes any match with reality put in by hand.

The correspondences are no better than tendentious. All the physical uncertainty is hidden by the tuning.

They have no idea whether the underlying physical theory is correct. They’ve merely adjusted the parameters in an ad hoc way to get correspondence.

Eqn. 1 in “Propagation …” will do just as good a job reproducing 1980-2000 air temperatures and projecting 2001-2020 as CMIP3, CMIP5, and CMIP6 models. I know that because I’ve done it.

You could do it, too, if you wanted to take the trouble to test your own sureties.

See Figures S9-1 and S9-2 in the SI.

What information would that plot convey about fidelity to climate?

If nothing, then why not by your lights? It would be just as good as the plot you tout.

I don’t “allege” the uncertainty, W-r. I’ve demonstrated it. Your argument is merely one from personal incredulity. It does you no credit.

Weekly_rise
Reply to  Pat Frank
February 15, 2021 7:12 pm

“Models are tuned so that they match past observations. That makes any match with reality put in by hand.”

This is not accurate. First, the models being tuned to historic trends would not guarantee consistency with future trends, which is what we generally see. Second, many (I’m hesitant to say “most”) modeling groups are not tuning to 20th century temperatures at all, yet are still reproducing the observed warming (but “most” are certainly tuning to radiative balance at TOA). Finally, it’s simply incorrect to characterize model tuning as being an ad-hoc process with no reasoning or justification behind it. And, of course, one of the reasons model intercomparison projects exist at all is precisely to understand model differences resulting from things like parameter tuning.

“They have no idea whether the underlying physical theory is correct.”

The basic underlying physics are quite well vetted. The uncertainty arises from parameterizations. It’s possible that some parameter values are wrong, but this wouldn’t undermine the basic theory.

“Eqn. 1 in ā€œPropagation ā€¦ā€ will do just as good a job reproducing 1980-2000 air temperatures and projecting 2001-2020 as CMIP3, CMIP5, and CMIP6 models. I know that because Iā€™ve done it.”

It’s not surprising that a regression model can be constructed that fits the data well, but such an exercise is completely ignoring all the other stuff that the models are simulating. Nor does such a model actually have any explanatory power (it says nothing about the processes driving the observed trend, which is the whole thing we want to use climate models to understand to begin with).

Reply to  Weekly_rise
February 15, 2021 10:04 pm

W_r, "This is not accurate."

On the contrary:

"Climate models are usually tuned to match observations." [1]

"During a development stage global climate models have their properties adjusted or tuned in various ways to best match the known state of the Earth's climate system. [2]

various ways to best match” = ad hoc by hand

“These desired properties are observables, such as the radiation balance at the top of the atmosphere, the global mean temperature, sea ice, clouds and wind fields. The tuning is typically performed by adjusting uncertain, or even non-observable, parameters related to processes not explicitly represented at the model grid resolution.

“The practice of climate model tuning has seen an increasing level of attention because key model properties, such as climate sensitivity, have been shown to depend on frequently used tuning parameters.ā€ [2]

Climate sensitivity of models: a product of ad hoc tuning. There’s the way to get an accurate prediction, al right.

ā€œThe process of parameter estimation targeting a chosen set of observations is an essential aspect of numerical modeling. This process is usually named tuning in the climate modeling community. ā€¦ The goals of tuning are fairly uniform. ā€¦ the decisive (most important) metrics [are] global net top-of-atmosphere flux and then global-mean surface temperature. Based on these goals of tuning, there are a number of different parameterizations adjusted to achieve them. ā€ [3]

W_r: "The basic underlying physics are quite well vetted."

Cloud formation and fraction are not. Convection is not. Precipitation is not.

And the physical theory is nowhere near able to resolve the effect of a 0.035 W/m^2 annual average perturbation.

[1] Stocker, T.F., Climate change: Models change their tune. Nature, 2004. 430(7001), 737-738

[2] Mauritsen, T., et al., Tuning the climate of a global model. Journal of Advances in Modeling Earth Systems, 2012. 4(3) M00A01

[3] Hourdin, F., et al., The Art and Science of Climate Model Tuning. Bulletin of the American Meteorological Society, 2017. 98(3): p. 589-602

Do you need more evidence?

Weekly_rise
Reply to  Pat Frank
February 16, 2021 8:59 am

Thanks, these papers were the basis for my belief that you are oversimplifying tuning procedures, and your comment seems to reinforce that. The point is that not all models are tuned to the surface trends (there would not be such a wide spread of modeled surface trends otherwise), yet almost all models reproduce the observed trends. And, of course, the fact that a model was tuned to historic data does not guarantee that it will match future data, even in cases where the models were tuned to 20th century surface temps.

Reply to  Weekly_rise
February 16, 2021 10:37 am

Your reply is mere dismissal, W_r.

The papers confirm my comment and contradict your stance.

You wrote, “not all models are tuned to the surface trends…

Yes they are. Every single one of them.

“… (there would not be such a wide spread of modeled surface trends otherwise),…”

Model forecasts spread after tuning because they all end up with different climate sensitivities. Jeffrey Kiehl published on that conundrum in 2007.

From Kiehl: “it is also well known that the same models that agree in simulating the anomaly in surface air temperature differ significantly in their predicted climate sensitivity. … most global climate models used for climate change studies vary by at least a factor of two in equilibrium sensitivity.

The question is: if climate models differ by a factor of 2 to 3 in their climate sensitivity, how can they all simulate the global temperature record with a reasonable degree of accuracy.

Kiehl goes on to show that modelers adjust aerosol forcing to compensate for excess climate sensitivity to CO2.

He then offers a couple of hand-waving arguments about why it’s OK to project future temperatures anyway. His paper probably would not have been publishable without them.

Offsetting errors. That’s the whole story of climate modeling.

Modelers adjust their parameters so that errors offset in their tuning region, and they manage to reproduce known observables such as TOA balance or air temperature.

Then off they go projecting their physically meaningless climate futures.
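The compensation Kiehl describes can be put in toy form: two hypothetical models with sensitivities differing by a factor of two reproduce the same historical warming because the higher sensitivity is paired with a stronger assumed aerosol offset (all numbers below are illustrative assumptions, not from any actual model):

```python
# Toy version of the Kiehl (2007) compensation: two hypothetical models
# with different climate sensitivities hindcast the same warming because
# higher sensitivity is paired with a stronger (more negative) assumed
# aerosol forcing. All numbers are illustrative only.
def warming(sensitivity, ghg_forcing, aerosol_forcing):
    """Equilibrium anomaly (K) = sensitivity (K per W/m^2) * net forcing (W/m^2)."""
    return sensitivity * (ghg_forcing + aerosol_forcing)

GHG = 2.5  # assumed greenhouse-gas forcing, W/m^2

model_a = warming(sensitivity=0.5, ghg_forcing=GHG, aerosol_forcing=-0.5)  # low sensitivity, weak aerosols
model_b = warming(sensitivity=1.0, ghg_forcing=GHG, aerosol_forcing=-1.5)  # double sensitivity, strong aerosols

# Both hindcast the same 1.0 K of warming, yet they diverge sharply once
# future GHG forcing grows and the fixed aerosol offset matters less.
print(model_a, model_b)
```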

The field is a complete FUBAR, W_r. There’s been virtually no improvement in climate physics since modelers took over climatology and forced out all the real scientists.

Weekly_rise
Reply to  Pat Frank
February 16, 2021 12:34 pm

“Yes they are. Every single one of them.”

This is a pretty significant claim that needs substantial evidence. The papers you cite do not support it. I do not deny that some modeling groups are tuning to the 20 century surface temps, but it is not something that is ubiquitous.

The takeaway seems to be that we could narrow our range of climate sensitivity estimates if we could reduce the amount of tuning that needs to be done to model parameters, which seems pretty reasonable to me.

Reply to  Weekly_rise
February 16, 2021 3:30 pm

Where do you see a "this model but not that model" qualifier in any of those papers, W_r?

Here, maybe this will do it for you. A paper by climate demigod and your personal go-to authority Gavin Schmidt, et al., (2017) Practice and philosophy of climate model tuning across six US modeling centers, Geosci. Model Dev. 10, 3207–3223

https://doi.org/10.5194/gmd-10-3207-2017

Abstract line 1: "Model calibration (or 'tuning') is a necessary part of developing and testing coupled ocean–atmosphere climate models regardless of their main scientific purpose."

Followed by, "While details differ among groups in terms of scientific missions, tuning targets, and tunable parameters, there is a core commonality of approaches."

Section 2 title: “2 Why is climate model tuning necessary?

Need we belabor this topic any further?

No one estimates climate sensitivity. It emerges from the model. And varies from model to model.

Parameters, notably aerosol forcing, are adjusted after the fact to bring the model into conformance with known observables.

These are the models relied upon to predict future climate. Fat bleedin’ chance.

Weekly_rise
Reply to  Pat Frank
February 16, 2021 5:10 pm

I think you misunderstand – I am not contesting the idea that most modeling groups perform model tuning, I am contesting the notion specifically that all modeling groups use 20th century surface temperature trends as a tuning target. Gavin Schmidt denies this in this RC post:

"My experience is that most groups do not 'precisely' tune their models to 20th Century trends or climate sensitivity, but given this example and the Hourdin results, more clarity on exactly what is done (whether explicitly or implicitly) is needed."

Hourdin, 2017, indicates that the most common tuning target is TOA flux.

Reply to  Weekly_rise
February 19, 2021 1:35 pm

Hourdin, 2017 noted the "decisive (most important) [tuning] metrics: global net top-of-atmosphere flux (70%) and then global-mean surface temperature (26%)."

Hourdin, 2017 also calls tuning a "dirty part of climate modeling, more engineering than science, an act of tinkering that does not merit recording in the scientific literature."

Tinkering: ad hoc adjustment to match known observables. As I noted and as you denied.

Weekly_rise
Reply to  Pat Frank
February 21, 2021 7:50 am

You are taking the passage out of context. Hourdin is saying that tuning is often “seen as” those things, not that the author himself views it as such a thing. Hourdin says of tuning:

“Tuning is an intrinsic and fundamental part of climate modeling that should be better documented and discussed as such in the scientific literature. Tuning can be described as an optimization step and follows a scientific approach.”

Reply to  Weekly_rise
February 21, 2021 8:22 am

Did you read the link Pat gave you about the global temperature?

The models are trying to predict something that is meaningless to begin with. It is a useless metric.

Reply to  Weekly_rise
February 21, 2021 10:22 am

Here’s the entire context: “In fact, the tuning strategy was not even part of the required documentation of the CMIP phase 5 (CMIP5) simulations. In the best cases, the description of the tuning strategy was available in the reference publications of the modeling groups (Mauritsen et al. 2012; Golaz et al. 2013; Hourdin et al. 2013a,b; Schmidt et al. 2014).

“Why such a lack of transparency? This may be because tuning is often seen as an unavoidable but dirty part of climate modeling, more engineering than science, an act of tinkering that does not merit recording in the scientific literature.

"There may also be some concern that explaining that models are tuned may strengthen the arguments of those claiming to question the validity of climate change projections. Tuning may be seen indeed as an unspeakable way to compensate for model errors."

How does my extracted quote misrepresent the context?

Tuning "may be described as [following] a scientific approach," but that description would be wrong. Tuning is an engineering approach.

There’s nothing wrong with tuning an engineering model so that it reproduces observables within a bounded calibration region.

Engineers interpolate their models to predict behavior between data points. Nothing wrong with that, either.

But climate modelers extrapolate their engineering models very far beyond their calibration bounds. That is not a scientific approach. It’s a BS approach.

Reply to  Pat Frank
February 21, 2021 10:23 am

Also, tuning parameters to compensate for model errors, and for parameter errors (unsaid), does nothing to remove predictive uncertainty.

Reply to  Weekly_rise
February 19, 2021 2:58 pm

Hourdin, 2017 noted the "decisive (most important) [tuning] metrics: global net top-of-atmosphere flux (70%) and then global-mean surface temperature (26%)." As I had already quoted.

Hourdin, 2017 also called tuning a “dirty part of climate modeling, more engineering than science, an act of tinkering that does not merit recording in the scientific literature.”

Tinkering: ad hoc adjustment to match known observables. As I noted and as you denied.

You wrote, "I am not contesting the idea that most modeling groups perform model tuning."

That’s exactly what you contested in your February 15, 2021 7:12 pm, as linked above. You’re shifting your argument.

In any case, the TOA budget isn't known to better than σ = ±2 W/m^2, see here and here.

That uncertainty propagates into every simulation step of any climate model that is TOA tuned because the net tropospheric thermal flux is not known better than that.

Reply to  Weekly_rise
February 15, 2021 10:08 pm

W_r, "The uncertainty arises from parameterizations. It's possible that some parameter values are wrong, but this wouldn't undermine the basic theory."

What happens when the parameter uncertainties are propagated through the sequential and step-wise calculations of a climate simulation?

Wrong parameter values means the simulated climate wanders away from the physically correct prediction in the climate phase space. But no one knows the resulting physical error in a futures prediction.

Hence the need for an uncertainty estimate.

Your position is defeated by your own argument, W_r.

Weekly_rise
Reply to  Pat Frank
February 16, 2021 9:04 am

The spread of model projections using models with differing parameterizations gives a strong indication of actual model uncertainty in projected trends. Reality is constrained by the laws of physics (TOA flux has to balance at equilibrium, e.g.), so uncertainty simply cannot approach infinite, as your uncertainty estimates indicate.

Worth noting also that the parameters vary together, so you cannot treat the uncertainties independently, as you do in your paper by focusing only on LW cloud forcing.

Reply to  Weekly_rise
February 16, 2021 10:46 am

W_r, "Reality is constrained by the laws of physics (TOA flux has to balance at equilibrium, e.g.), so uncertainty simply cannot approach infinite, as your uncertainty estimates indicate."

Uncertainty is not a physical magnitude. It is not at all constrained by physics.

Your comment completely misses the mark. How hard is it to realize that a statistic is not a physical quantity?

The uncertainty bars say nothing about possible projected temperatures. They say everything about whether to believe the projected temperatures.

It's not hard. Every physics and chemistry undergraduate grasps the idea of uncertainty by the sophomore year. And yet, the concept invariably escapes the ken of Ph.D. climate modelers.

And you, W_r. Is the notion of uncertainty beyond you, too? Say it isn’t so.

LWCF is not a parameter. It emerges from cloud fraction. And that’s not a parameter, either.

Parameters are varied by hand. Their uncertainties do not vary and are independent.

Weekly_rise
Reply to  Pat Frank
February 16, 2021 12:46 pm

Uncertainty is a measure of how far from the true quantity our estimate of the quantity lies. The true quantity can’t take on values precluded by the laws of physics, so finding a range of uncertainty that allows the true quantity to take on impossible values means that the uncertainty estimate is not a valid one. You keep trying to steer the discussion to the topic of whether or not the uncertainty represents uncertainty in the projections, but that has never been what I’m arguing.

Happy to accept that I’ve misused the term “parameter” in this case, but my argument is conceptually unchanged – the LWCF is not independent of other components of the system being modeled, and that dependence can’t be ignored when trying to propagate LWCF error through to uncertainty estimates for projected surface temps.

Reply to  Weekly_rise
February 16, 2021 2:43 pm

“Uncertainty is a measure of how far from the true quantity our estimate of the quantity lies.”

You *still* don’t understand uncertainty. Uncertainty doesn’t tell you how far the true value *IS* from the estimate, it tells you how far it MIGHT BE from the estimate.

Uncertainty is *NOT* a physical value so it is not constrained by the physics of the measurement.

Reply to  Weekly_rise
February 16, 2021 3:41 pm

W_r, "Uncertainty is a measure of how far from the true quantity our estimate of the quantity lies."

Not when the uncertainty results from calibration error propagated through a long series of calculations.

We've been over that already. Measurement with your 1 foot ruler of ±1/8th inch accuracy has an uncertainty of ±0.4 inches after 10 sequential measurements.

After 9216 measurements, the uncertainty is ±12 inches — the entire length of the measurement ruler. And on it goes over a measurement series, with no upper limit of uncertainty.
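The ruler arithmetic can be checked with a short sketch (the function name and layout are mine, for illustration only): independent per-measurement uncertainties combine as their root-sum-square, so the total grows as the square root of the number of measurements, without bound.

```python
import math

# Root-sum-square growth of uncertainty over a series of independent,
# equal-uncertainty measurements: total = sqrt(N) * u, unbounded as N grows.
def summed_uncertainty(u_per_step, n_steps):
    """RSS of n_steps independent uncertainties of u_per_step each."""
    return math.sqrt(n_steps) * u_per_step

u = 1.0 / 8.0  # +/- 1/8 inch per measurement, as in the ruler example

print(round(summed_uncertainty(u, 10), 2))  # -> 0.4 (inches after 10 measurements)
print(summed_uncertainty(u, 9216))          # -> 12.0 (the full length of the ruler)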

You’ve shown no evidence of knowledge of climate models or their methodology, or of error propagation, or even of the meaning of an error statistic (it’s not a physical magnitude).

±15 C uncertainty neither means ±15 C physical air temperatures, nor ±15 C projection values. It means the physically correct temperature is beyond present knowing.

And yet, evidencing no knowledge, you argue the case.

Weekly_rise
Reply to  Pat Frank
February 16, 2021 5:52 pm

"±15 C uncertainty neither means ±15 C physical air temperatures, nor ±15 C projection values. It means the physically correct temperature is beyond present knowing."

Except the physically correct air temperature is not beyond knowing. It is constrained by the laws of physics. The earth won’t be -20 degrees in 50 years or 20 degrees. We know more about the system than your uncertainty estimate would allow for.

And, again, you are avoiding this point, but it is quite an important one:

The LWCF is not independent of other components of the system being modeled, and that dependence can't be ignored when trying to propagate LWCF error through to uncertainty estimates for projected surface temps.

Weekly_rise
Reply to  Weekly_rise
February 16, 2021 6:37 pm

*20 degrees warmer

Reply to  Weekly_rise
February 17, 2021 6:31 am

You are still demonstrating that you simply don't understand what uncertainty *is*. It doesn't tell you *what* the temperature will be. It gives you the interval in which it might lie. As the interval gets larger and larger it becomes harder and harder to determine what the true value might be. When you are trying to discern temperature changes of 2deg but the uncertainty interval is more than +/- 2deg then exactly how do you discern a 2deg change? The fact is that you can't. The problem with the models and the modelers is that they ignore the uncertainty associated with their inputs and ASSUME their model outputs are 100% accurate. They aren't. They will never be.

Weekly_rise
Reply to  Tim Gorman
February 17, 2021 8:51 am

“As the interval gets larger and larger it becomes harder and harder to determine what the true value might be.”

This is because you’re estimating that the true value could possibly lie further and further from your estimate. As part of an abstract mathematical exercise, this is all fine and well, the range of possible true values can blow up to infinity, but in a governed physical system, the actual range of possible true values is constrained. Therefore our uncertainty about how far our estimate might lie from the true value is also constrained.

To effect yet another analogy: if I weigh an apple and someone tells me the error range of my scale is between -1,000,000,000 and 1,000,000,000 pounds, so we have no idea what the weight of my apple could possibly be, I would say that the weight of my apple is not unbounded, but is constrained by the fact that I can lift it, so we can actually eliminate 99.99999% of the calculated uncertainty range (I would say your uncertainty estimate is incomplete).

I believe the reason Pat's uncertainty estimate blows up in the way it does is partly because it is only considering a single component of the system treated independently, but that component is not actually independent of all the others. If you change LWCF, the rest of the system will also change in some way (ultimately all being constrained by TOA flux). So at the upper bounds of the LWCF error there are compensating effects in other components that must be considered when propagating the error through to uncertainty in surface temperature.

Reply to  Weekly_rise
February 17, 2021 12:57 pm

"This is because you're estimating that the true value could possibly lie further and further from your estimate."

That *is* what the uncertainty interval tells you! The true value can be anywhere in the uncertainty interval. The wider the interval the further the true value can be from the stated value. Did you go back and reread Taylor’s tome on uncertainty? I’m sure you didn’t. If you had you would understand this.

"If I weigh an apple and someone tells me the error range of my scale is between -1,000,000,000 and 1,000,000,000 pounds, so we have no idea what the weight of my apple could possibly be, I would say that the weight of my apple is not unbounded, but is constrained by the fact that I can lift it, so we can actually eliminate 99.99999% of the calculated uncertainty range (I would say your uncertainty estimate is incomplete)."

You simply don't understand physical science apparently. There *will* be an uncertainty interval associated with the measurement device you use to weigh the apple. The worse the device is the wider the uncertainty interval will be. It would appear that you are trying to weigh the apple with a device that has a precision that is far wider than what you are trying to weigh. What are you using? A scale used to weigh dump trucks at a copper mine?

*I* would say that your resorting to something so unreasonable shows you know you have lost the argument about uncertainty!

As for Pat’s analysis you believe that the uncertainty of other components will cancel out the uncertainty for a single component? Uncertainties DO NOT CANCEL – they only grow.

What makes you think that if you change LWCF that anything else will change? What components will change and how will they change? If you can’t specify that then you are just making empty assertions. Tell us *EXACTLY* what other compensating effects will occur?

You are basically using the argument of all the climate modelers: “All the errors cancel out so that the model gives an accurate output”. They can never specify what the errors are or how they cancel. And I’ll bet you can’t either!

Weekly_rise
Reply to  Tim Gorman
February 17, 2021 2:09 pm

“That *is* what the uncertainty interval tells you!”

I’m glad that we agree on this point.

"You simply don't understand physical science apparently. There *will* be an uncertainty interval associated with the measurement device you use to weigh the apple. The worse the device is the wider the uncertainty interval will be. It would appear that you are trying to weigh the apple with a device that has a precision that is far wider than what you are trying to weigh. What are you using? A scale used to weigh dump trucks at a copper mine?"

Again, my point is that the uncertainty Pat has calculated for the climate models is far greater than the actual uncertainty about the system the models are modeling, so the uncertainty estimate does not reflect the true uncertainty. Pat’s estimate is missing something critical.

“What makes you think that if you change LWCF that anything else will change? What components will change and how will they change? If you canā€™t specify that then you are just making empty assertions. Tell us *EXACTLY* what other compensating effects will occur?”

It absolutely isn’t necessary to specify particular components to accept this point. The majority of the possible values for LWCF allowed by Pat’s error calculation would violate conservation of energy if there were no other compensating effects.

Reply to  Weekly_rise
February 17, 2021 2:55 pm

The model fails immediately when the uncertainty interval becomes wider than the temperature change trying to be identified. That happens *long* before all of the iterations occur. If you *do* carry out all the iterations, however, Pat correctly calculated the uncertainty interval.

The modelers should *stop* their runs at the point the output uncertainty exceeds the anomaly trying to be identified. If that happens at 10 years out then that is where the model should be ended – and it should be noted that it is impossible to extrapolate any further. Extending past that point is nothing more than useless mathematical masturbation by mathematicians and computer programmers that have absolutely no knowledge of uncertainty!

"the uncertainty estimate does not reflect the true uncertainty."

Sure it does. Just because it is greater than you can accept in your worldview is *your* problem, not Pat’s and not the mathematics of uncertainty propagation.

"It absolutely isn't necessary to specify particular components to accept this point."

Of course it is. You *need* to prove your assertion. You are not an authority on this subject no matter how much your inflated ego thinks you are.

It isn't Pat's problem with the LWCF – it is the modelers' problem!

Reply to  Weekly_rise
February 19, 2021 3:09 pm

W_r, “Again, my point is that the uncertainty Pat has calculated for the climate models is far greater than the actual uncertainty about the system the models are modeling,…”

Indicating that the climate model output has no predictive content.

"… so the uncertainty estimate does not reflect the true uncertainty. Pat's estimate is missing something critical."

Rather, you’re missing something critical, W_r. Namely, an understanding of uncertainty.

You’re not grasping Tim’s point. Instead you just re-word the same mistake over and yet over again.

One can only admire Tim’s patience.

Reply to  Pat Frank
February 20, 2021 5:27 am

WR is an AGW fanatic, a religious fanatic unwilling to listen to any falsification of his dogma. He pretends to know all about uncertainty but, in fact, knows nothing but the word itself.

Weekly_rise
Reply to  Pat Frank
February 21, 2021 7:55 am

“Indicating that the climate model output has no predictive content.”

To me it indicates that your uncertainty estimate is not a useful one, since it doesn’t actually reflect the uncertainty. The models indeed have predictive content, since they predict things like surface trends.

Reply to  Weekly_rise
February 21, 2021 8:35 am

The uncertainty estimate is a calculated value using standard engineering practice.

It is certainly useful. The fact that you don’t like what it says does not make it wrong in any way.

The models have zero predictive content because their uncertainty is wider than the differences they are trying to predict. It truly is just that simple.

If you think the uncertainty is wrong then SHOW where it is wrong.

BTW, when are you going to tell me how to calculate the uncertainty of f(x) = Asin(x)? The fact that you can’t do that tells me you have no actual knowledge of how to propagate uncertainty. You are just throwing crap against the wall hoping something will stick. So far you’ve just smelled up the thread.

Weekly_rise
Reply to  Tim Gorman
February 21, 2021 8:58 am

I've pointed out where I think Pat's uncertainty is wrong extensively in this thread. If we have three doors with a prize behind just one, Pat would say, "if I open a door there is just a 33% chance that I'll get the prize!" And if I were to tell Pat, "but two of the doors are made of transparent glass and there is no prize behind them. We know more about this situation than your uncertainty estimate suggests is possible," Pat would insist, "but I've calculated the uncertainty! It must be right!"

Reply to  Weekly_rise
February 21, 2021 10:08 am

You have *NOT* pointed anything out. All you’ve done is make declarative sentences saying he is wrong.

Your analogy about the doors shows that you know *nothing* about the subject of uncertainty.

When you can show, mathematically where Pat’s uncertainty calculations are wrong you might get someone to listen to you.

When you can show, mathematically, that combining two single, independent measurements of different things DO NOT CAUSE UNCERTAINTY TO GROW then you might get someone to listen to you.

It’s never going to happen because you can’t even tell me how to calculate the uncertainty of f(x) = Asin(x), the most simple example possible!

You can declare people wrong all you want, it is meaningless to anyone but yourself.

Reply to  Weekly_rise
February 21, 2021 9:07 pm

A truly vacuous comment, W_r.

Here’s a more appropriate analogy.

There are 10 doors, behind one of which is a prize.

We ask W_r, which of the doors is hiding the prize.

W_r says, What doors? I see no doors.

We say, the doors behind your back, W_r.

W_r says, there are only walls in front of me. Walls are all. Everyone I agree with says walls are all. There's no point turning around.

That little story pretty much captures your entire argument.

By your own testimony, you're an applied mathematician, W_r. Have you ever addressed physical error, or propagated calibration error through a calculation?

All you’ve done is insistently misconstrue uncertainty as error.

Reply to  Weekly_rise
February 21, 2021 10:04 am

Even the IPCC recognizes you CAN NOT predict future climate.

They say, “The climate system is a coupled non-linear chaotic system, and therefore the long term prediction of future exact climate states is not possible.”

I can project that the world is inevitably headed into another glaciation because we are still in an ice age and interglacials never have lasted, ever. If you want to invoke the Precautionary Principle because of AGW, I suggest you find a reason to mitigate the next glaciation because of the same principle.

Weekly_rise
Reply to  Jim Gorman
February 21, 2021 10:33 am

You’ve provided an incomplete quotation of the passage. What the IPCC TAR says is:

"In sum, a strategy must recognise what is possible. In climate research and modelling, we should recognise that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible. The most we can expect to achieve is the prediction of the probability distribution of the system's future possible states by the generation of ensembles of model solutions. This reduces climate change to the discernment of significant differences in the statistics of such ensembles."

This statement is both true and perfectly consistent with my position in this comment thread. We can’t predict exact future climate states, but we can use models to determine the range of likely future states.

“When you can show, mathematically where Patā€™s uncertainty calculations are wrong you might get someone to listen to you.”

I believe others more qualified than myself have exhaustively discussed issues with Pat’s calculations. If you were not persuaded by them you are unlikely to be persuaded by me. My intent is to point out conceptual shortcomings I perceive with Pat’s approach and to see if Pat or anyone else can address them. To date no one has, despite days of back and forth and a discussion thread that has splintered across dozens of comment chains.

If someone presents an uncertainty estimate that doesn't actually capture real, actual uncertainty, then their estimate is not useful. What is the point in estimating uncertainty if it doesn't tell us how uncertain we are? My three door analogy is a nice illustration of the concept I'm attempting to relay, so your just insisting that it means I don't understand uncertainty will never be persuasive. Elaborate on why you think so.

The principal mechanical argument I’ve presented against Pat’s calculations is that he is estimating uncertainty for surface temperatures by propagating error in a single quantity (LWCF) and that this cannot possibly result in a true accounting of the uncertainty, especially when the quantity is not independent of other processes in the system.

Reply to  Weekly_rise
February 21, 2021 2:47 pm

Ensembles of different models tell you nothing. Did you not read the other posts on this subject? An ensemble from one model using different inputs might tell you something, but not a conglomeration of outputs from different models!

"This statement is both true and perfectly consistent with my position in this comment thread. We can't predict exact future climate states, but we can use models to determine the range of likely future states."

This is only true if you believe each of the models has no uncertainty! When you calculate the uncertainty for each model the interval overlaps the outputs of all the other models. So how then do you tell *anything* for sure? If all the models fail to take into account all of the physics, clouds for instance, then all of the models fail. An ensemble of failed models tells you nothing!

"I believe others more qualified than myself have exhaustively discussed issues with Pat's calculations."

Then why can’t you quote from one showing that what Pat did was wrong. Pat has answered EVERY SINGLE CRITICISM made of his paper and proved his assertion to be true!

Once again you are making unfounded claims hoping we will take them as truth because they come from you! You gotta do a lot better than that!

"If someone presents an uncertainty estimate that doesn't actually capture real, actual uncertainty, then their estimate is not useful."

Pat's uncertainty estimate has *never* been shown to be wrong. Not by you, not by anyone. Again, claims to the contrary with no backup are just so much hot air!

Your three door analogy is garbage. It has nothing to do with uncertainty! It has to do with GAMBLING. It's like comparing three-card monte to uncertainty. And you can't even tell the difference!

"… propagating error in a single quantity (LWCF) and that this cannot possibly result in a true accounting of the uncertainty, especially when the quantity is not independent of other processes in the system."

Again, malarky! It gives a *MINIMUM* level of uncertainty. Other uncertain process and inputs will only *increase* the uncertainty of the model, it can’t lessen it.

Going back to the model of a tractor. If you have an uncertainty interval associated with the fuel flow through the diesel pump then that defines the minimum uncertainty for the drawbar horsepower and fuel consumption, regardless of the other factors in the model. If there is uncertainty of the losses in the gear train then that does *not* cancel out the uncertainty of the fuel flow. It only adds to the total uncertainty of the tractor model.

Why this is so hard to understand is just beyond me. That’s why I categorize your assertions as religious beliefs. They have nothing to do with reality – just with what you want the outcome to be – kind of like the so-called climate scientists.

Reply to  Weekly_rise
February 21, 2021 9:38 pm

W_r quoting the IPCC: "The most we can expect to achieve is the prediction of the probability distribution of the system's future possible states by the generation of ensembles of model solutions. This reduces climate change to the discernment of significant differences in the statistics of such ensembles."

And commenting, "This statement is both true and perfectly consistent with my position in this comment thread."

That statement is wrong and is not consistent with your position.

The IPCC statement is wrong because climate models are not known to deploy the correct physical theory of the terrestrial climate.

Probability distributions of parametrized extrapolations using engineering models that are not known to be physically correct (nor even known how physically wrong they are) have no known relation to a physically true solution.

Their extrapolations are properly described as physically meaningless.

Your follow-up is wrong because your position all along has been that uncertainty is physical error.

You also wrote, "[propagating LWCF error] … cannot possibly result in a true accounting of the uncertainty, …"

That, at least, is correct. A true full accounting of projection uncertainty due to climate model error would easily produce 10 times the uncertainty width that LWCF error alone generates.

"… especially when the quantity is not independent of other processes in the system."

It is independent, actually.

The LWCF calibration error statistic I used is the annual average of 27 independently parametrized CMIP5 climate models. Look at Figure 4 in my paper, W_r, and the section following A Lower Limit of Uncertainty … on pp. 8-9.

All the models produce their own cloud fraction error. The average LWCF calibration error statistic is independent of any given model and any given parametrization scheme.

But is applicable to the air temperature projection of any one model, as representative.

You do realize, don’t you, that offsetting errors within calibration bounds do not ensure or increase the accuracy of predictions extending beyond those bounds.

All calibration errors combine into predictive uncertainty as their root-sum-square.
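A minimal sketch of that combination rule (the component values below are invented for illustration, not taken from any model): independent calibration error terms combine as their root-sum-square, so the combined uncertainty can only meet or exceed any single contributing term, which is consistent with treating the LWCF term alone as a lower limit.

```python
import math

# Root-sum-square (RSS) combination of independent calibration error terms.
# The numbers are hypothetical; the point is that adding terms never
# shrinks the combined uncertainty below any single term.
def rss(terms):
    return math.sqrt(sum(t * t for t in terms))

lwcf_only = rss([4.0])           # a single error source
combined = rss([4.0, 2.5, 1.5])  # the same source plus two hypothetical others

print(lwcf_only <= combined)  # -> True: the RSS total can only grow
```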

Weekly_rise
Reply to  Pat Frank
February 22, 2021 11:43 am

“That, at least, is correct. A true full accounting of projection uncertainty due to climate model error would easily produce 10 times the uncertainty width that LWCF error alone generates.”

Shall I look forward to an imminent publication from you? This seems like a vital step in your assessment.

“All the models produce their own cloud fraction error. The average LWCF calibration error statistic is independent of any given model and any given parametrization scheme.”

The error might be independent of any particular model, but the LWCF is not independent of other components of the system. The upper limits of your LWCF error would require either violating the laws of physics or inducing some substantial and offsetting change elsewhere in the system to balance energy fluxes. You can’t simply assume that an upper LWCF error results in maximal surface temperature uncertainty. You have to consider the uncertainty arising from all components of the system working together.

Reply to  Weekly_rise
February 22, 2021 2:29 pm

Again, LWCF is an INDEPENDENT factor. Other components would only matter if they impacted the uncertainty in LWCF.

How do you know the calculated uncertainty violates the laws of physics? That’s just a declarative sentence with no proof associated with it! What law of physics does it violate? What is the limiting value of that specific law of physics that invalidates the uncertainty estimate?

My guess is that you won’t answer. You can’t. Just like you can’t tell us what the uncertainty in f(x) = Asin(x) is.

What offsetting change would LWCF make? Don’t be vague. Tell us specifically.

And, YES, you have to consider the uncertainty arising from all components of the system. But that does *NOT* mean the uncertainties cancel!

Have you been talking with Nick Stokes? Because that is what he claims. All errors and uncertainties in the climate models cancel, so there is no error and no uncertainty in the output!

Religious dogma at its finest!

For instance, you are claiming that for F = PxA that the uncertainties in P will cancel out the uncertainties in A so that F is 100% accurate. Upon what logic and math is that a valid assumption?

Weekly_rise
Reply to  Tim Gorman
February 23, 2021 8:30 am

“Again, LWCF is an INDEPENDENT factor. Other components would only matter if they impacted the uncertainty in LWCF.”

I’m not talking about other components impacting the uncertainty of the LWCF, I’m talking about other components responding to changes in LWCF. This impacts the uncertainty in surface temperature projections.

Pat's surface temperature projection uncertainty only includes LWCF error; it does not consider any other factors or how different components of the system interact, yet many of these components respond to changes in the others. I don't have specifics on what these responses will be because they are enormously complex (probably why Pat doesn't even attempt to open that can of worms in his paper). Conceptually, just imagine what impacts on the system there might be from LWCF at the highest end of Pat's estimated error range. There would be changes in reflected sunlight, humidity, etc.

The uncertainty in surface temp will arise from the aggregate uncertainty of all the components that influence surface temperature, of which LWCF is just one, and they won’t all be additive (though some certainly might be!).

I have not spoken to Nick Stokes about this or any other topic, but the comments I’ve seen from him on this seem to be quite well reasoned, so I’m happy to be placed in that camp.

Reply to  Weekly_rise
February 23, 2021 9:56 am

“I’m not talking about other components impacting the uncertainty of the LWCF, I’m talking about other components responding to changes in LWCF. This impacts the uncertainty in surface temperature projections.”

OMG! If other factors respond to LWCF then that response will include the uncertainty associated with the LWCF! Once again you are trying to equate uncertainty and error. Will you never stop?

If you take A +/- u and the process B is dependent on A then the output of B will be K(A +/- u). There is no cancellation, only a propagation of uncertainty from A into B.

Take the formula: x = V/t. If both V and t have uncertainty then will they cancel? Of course not. The uncertainty in x will be the root sum square of the uncertainties in each component. (actually the relative uncertainties add by root sum square)
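The quadrature rule for a quotient described above can be sketched in a few lines of Python (the numerical values are hypothetical, for illustration only):

```python
import math

# For x = V / t, the *relative* uncertainties add in quadrature
# (root sum square): (u_x / x)^2 = (u_V / V)^2 + (u_t / t)^2
# Hypothetical measured values and absolute uncertainties:
V, u_V = 100.0, 2.0
t, u_t = 10.0, 0.5

x = V / t
relative_u = math.sqrt((u_V / V) ** 2 + (u_t / t) ** 2)
u_x = abs(x) * relative_u
# Neither component's uncertainty cancels the other; both contribute to u_x.
```

Note that u_x is strictly larger than the contribution of either component alone, which is the point being argued: the uncertainties combine, they do not offset.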

“The uncertainty in surface temp will arise from the aggregate uncertainty of all the components that influence surface temperature, of which LWCF is just one, and they won’t all be additive”

YES, ALL THE UNCERTAINTIES WILL BE ADDITIVE! You are *still* trying to equate uncertainty and error! What you are claiming is that error1 will cancel out error2. But error is not uncertainty! The uncertainty in A can’t be cancelled out by the uncertainty in B! So all the uncertainties will still add by root sum square!

PLEASE, PLEASE, PLEASE, PLEASE take a piece of paper and write 100 times: Uncertainty is not error!

You will get nowhere with all this until you internalize this truism.

Of course you are happy to be in the same company as Stokes. He refuses to admit that uncertainty is not error, just like you!

Weekly_rise
Reply to  Tim Gorman
February 23, 2021 7:43 pm

“PLEASE, PLEASE, PLEASE, PLEASE take a piece of paper and write 100 times: Uncertainty is not error!”

From An Introduction to Error Analysis, 2nd edition, by John Taylor (bolding mine):

“In science, the word error does not carry the usual connotations of the terms mistake or blunder. Error in a scientific measurement means the inevitable uncertainty that attends all measurements. As such, errors are not mistakes; you cannot eliminate them by being very careful. The best you can hope to do is to ensure that errors are as small as reasonably possible and to have a reliable estimate of how large they are. Most textbooks introduce additional definitions of error, and these are discussed later. For now, error is used exclusively in the sense of uncertainty, and the two words are used interchangeably.”

Take it up with Taylor.

Reply to  Weekly_rise
February 24, 2021 7:17 am

Write instead, “Uncertainty is not random error.”

Reply to  Weekly_rise
February 24, 2021 9:55 am

You missed bolding the most important words in the last sentence! FOR NOW!

Now go read Chapter 4!

“For this reason, uncertainties are classified into two groups: the random uncertainties, which can be treated statistically, and the systematic uncertainties, which cannot.”

You keep wanting to classify *everything* as random so you can use statistics. Not everything is a random uncertainty (call it error).

Random uncertainty is associated with multiple measurements of the same thing using the same device. Systematic uncertainty is associated with single measurements of multiple measurands by different devices.

That isn’t to say that multiple measurements of the same thing by the same device can’t also have systematic uncertainty, or that single measurements of different things can’t have random uncertainty, but the systematic uncertainty does *NOT* cancel in either case.

Multiple measurements of the same measurand by the same device create a probability distribution associated with the data set created from the multiple measurements.

Single measurements of different measurands by different devices create multiple data sets of size one. There is no statistical analysis that can be done on a data set with one member.

Why is this so hard for you to understand?

Weekly_rise
Reply to  Tim Gorman
February 24, 2021 12:29 pm

This is a position I’ve maintained throughout the entire discussion, and it is not a position consistent with your insistence that error and uncertainty are different things. Uncertainty arising from random error is reduced by taking multiple measurements, uncertainty arising from systematic errors is not. One way to identify systematic uncertainty is to compare estimated values with an observed value, once identified, the systematic error can be explored, understood, and addressed.

Reply to  Weekly_rise
February 24, 2021 3:06 pm

“One way to identify systematic uncertainty is to compare estimated values with an observed value, once identified, the systematic error can be explored, understood, and addressed.”

You simply will not listen. How do you estimate what a mud dauber nest in the air intake of a measurement station does to a temperature reading? Where do the estimated value and the observed value come from for you to compare?

I want you to lay out the specifics of how you get the estimated value and the observed value for a measurement station with a mud dauber nest in the air intake. Don’t wave your hands. Lay it out mathematically and physically just how you get those two values!

I also want you to explain how you can measure a temperature multiple times so you can have “multiple VALUES”. I can’t do it with my thermometer. By the time I take a reading and get ready for a second one, the wind has changed, the sun has changed, the humidity has changed, the entire environment has changed. I get one chance and that is all. It is immediately gone into the past.

I also want to know how *YOU* ESTIMATE temperature values that you can compare with an observed value? Are you psychic? Can you tell the future? If so I want you to go with me to a casino and predict where the roulette wheel will land.

Reply to  Weekly_rise
February 24, 2021 9:43 pm

W_r, “once identified, the systematic error can be explored, understood, and addressed.

Not when the error arises from incorrect or inadequate physical theory.

That’s the source of LWCF error. Wrong and/or incomplete physical theory. How does that get fixed?

Not by climate modelers. Nearly all of them are applied mathematicians, and know nothing of science.

It gets fixed by climate physicists and physical meteorologists doing actual science. Something that modelers do not do, do not understand, and toward which they are hostile.

As they were when Lindzen and Choi proposed the Iris theory in 2001. The howls of outrage that went up from the modelers was an object lesson in their discomfort with science.

Reply to  Weekly_rise
February 23, 2021 1:05 pm

W_r, “Patā€™s surface temperature projection uncertainty only includes LWCF error, it does not consider any other factors or how different components of the system interact, yet many of these components respond to changes in the others.

The LWCF error is the annual average calibration error of 27 models from a 20-year hindcast.

It’s the net error after all the various parts of the model have done their dynamical interacting thing.

Offsetting errors cannot be assumed to produce a reliably predicted physical state. Supposing otherwise betrays a thorough scientific naivete.

You’ve misunderstood the source of the error, W_r, you do not understand the meaning of uncertainty, and you show no cognizance that model calibration error propagates through the sequential calculations of that model.

In short, your objections betray only ignorance.

W_r, “Conceptually just imagine what impacts on the system there might be from LWCF at the highest end of Pat’s estimated error range. There would be changes in reflected sunlight, humidity, etc.”

Incredibly mindless. A statistic as a physical temperature. Really incredible. And you pretend to argue science, W_r.

Weekly_rise
Reply to  Pat Frank
February 23, 2021 7:14 pm

It is the net error in the LWCF, not the net error in the total forcing, and it certainly cannot be taken to be the only component of the uncertainty in the surface temperature projection. You have agreed to this exact point yourself in your previous comment, so why you are challenging it now is mystifying, except that you are choosing to be a contrarian for the sake of it.

Reply to  Weekly_rise
February 24, 2021 8:22 am

Uncertainties ADD. So what if LWCF is just one component Pat has considered? If other components have uncertainties then those uncertainties will *add* to the one Pat calculated.

Again, UNCERTAINTY IS NOT ERROR. Uncertainties do not cancel! Have you done your writing exercise yet?

BTW, what is the uncertainty for f(x) = Asin(x)?

Weekly_rise
Reply to  Tim Gorman
February 24, 2021 1:47 pm

I think you misunderstand. The uncertainty in the LWCF is determined by taking the difference between modeled and observed flux, but the surface temperature is not determined solely by the LWCF, it is determined by the net flux at the TOA. It is the error in the net flux, determined by the difference between modeled and observed net flux, that Pat should be using in his calculation.

I refer you to section 3.7 of Taylor’s book for propagation of uncertainty for arbitrary functions of one variable.

Reply to  Weekly_rise
February 24, 2021 3:21 pm

I don’t misunderstand *anything*. Did you *read* what Pat posted about the LWCF? It’s not obvious that you did.

ROFL. I’ve given you the arbitrary function of one variable, f(x) = Asin(x) and asked you over and over and over and over to calculate the uncertainty of that function of one variable.

*I* know how to do it. You do it using the EXACT method shown in Taylor’s Section 3.7. But apparently you don’t understand enough about uncertainty to be able to use the section you reference!
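For the record, Taylor’s single-variable rule is u_f = |df/dx| · u_x; applied to f(x) = A·sin(x) it gives u_f = |A·cos(x)| · u_x. A minimal sketch with hypothetical numbers:

```python
import math

# Taylor's one-variable propagation rule: u_f = |df/dx| * u_x.
# For f(x) = A*sin(x), the derivative is df/dx = A*cos(x).
A = 2.0
x, u_x = math.pi / 6, 0.01   # hypothetical measured angle (rad) and uncertainty

f = A * math.sin(x)               # 2 * sin(30 deg) = 1.0
u_f = abs(A * math.cos(x)) * u_x  # |2 * cos(30 deg)| * 0.01 ~ 0.0173
```

The uncertainty in f depends on where on the curve you are: near x = 0 it is largest (cos x near 1), near x = π/2 it shrinks toward zero.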

You dig yourself into a bigger and bigger hole every time you post.

BTW, the outputs of a GCM *do* follow a linear function mx+b, where x is actually a point in time. Each of those points in time is built up iteratively, one upon another. The uncertainty from iteration 0 increases in iteration 1. The uncertainty from iteration 1 increases in iteration 2. And so on. So while the output of a GCM *looks* like mx+b, it is not calculated that way.
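The iterative growth described above can be sketched, assuming each step contributes the same uncertainty and the contributions combine in quadrature:

```python
import math

# If every iteration contributes an uncertainty u_step, and successive
# contributions combine in quadrature, the running total after n steps is
# u_step * sqrt(n): it never shrinks, it only grows.
u_step = 1.0
u_total = 0.0
for _ in range(100):
    u_total = math.sqrt(u_total**2 + u_step**2)
# After 100 steps: u_total == u_step * sqrt(100) == 10.0
```

The output of such a process can still plot as a straight line, while the uncertainty envelope around it widens with every step.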

So, tell us what the uncertainty of the function of one variable is for f(x) = mx+b? Can you use 3.7 to calculate it? Or are you just going to remain a religious fanatic?

Reply to  Weekly_rise
February 24, 2021 9:52 pm

W_r, “The uncertainty in the LWCF is determined by taking the difference between modeled and observed flux, …

No, it’s not. LWCF error is determined by first determining the error in simulated cloud fraction. Simulated cloud fraction error is used to calculate the error in LWCF.

… but the surface temperature is not determined solely by the LWCF,…

Yes, it is.

W_r, “… it is determined by the net flux at the TOA.”

No, it’s not. The simulated TOA flux can be in balance, even while simulated LWCF is in error. TOA flux includes short wave energy flux, for example.

One of my reviewers in an early submission claimed that short wave forcing errors offset the long wave forcing errors, allowing for accurately predictive projections.

Utterly wrong.

W_r, “It is the error in the net flux, determined by the difference between modeled and observed net flux, that Pat should be using in his calculation.

No, it’s not. Tropospheric air temperature is determined by tropospheric thermal energy flux. Thermal energy is long wave.

Reply to  Weekly_rise
February 24, 2021 9:21 pm

I challenged nothing except your mistaken concepts, W_r.

W_r, “It is the net error in the LWCF, not the net error in the total forcing, and it certainly cannot be taken to be the only component of the uncertainty in the surface temperature projection.

I did not write that LWCF error is the error in total forcing. I wrote it’s the net error in the context of my February 21, 2021 9:38 pm comment, where I described the net LWCF error as a lower limit.

I also contrasted LWCF error with total error. The distinction should have been clear to you. But I regret all those specifics were not transparent for you.

LWCF calibration error is the lower limit of uncertainty.

Adding in all the other calibration errors from other parts of the model will only make the uncertainty in projected air temperature larger. Also pointed out in my February 21, 2021 9:38 pm comment.

More than that, however, if you’d read my paper you’d know that LWCF error represents uncertainty in simulated tropospheric thermal energy flux, which is the determinant of air temperature.

Uncertainty in simulated LWCF directly introduces uncertainty in projected air temperature.

Note that all the while the simulated LWCF includes a ±4 W/m^2 average annual uncertainty, the simulated climate can remain in energy balance. The simulated air temperature can be physically reasonable, all the while it carries a large uncertainty.

Science, W_r. It’s not mathematics.

Reply to  Weekly_rise
February 23, 2021 12:54 pm

W_r, “Shall I look forward to an imminent publication from you?

What would be the point? Climate models are already demonstrated to be predictively useless. In any case, a full assessment has already been done. Given the lack of progress in climate physics since then, the Soon, et al., critique is still valid.

W_r, “The upper limits of your LWCF error would require either violating the laws of physics …

Given all we’ve discussed already, your supposition that the uncertainty statistic is a physical excursion is incredibly mindless. At best.

W_r , “… or inducing some substantial and offsetting change elsewhere in the system to balance energy fluxes.

Off-setting errors do not reduce uncertainty. The underlying physics remains wrong. There is no reason to think the projected evolution in time of the physical system is correct.

Basic physical reasoning, W_r.

W_r, “You can’t simply assume …

It’s not an assumption. It’s a demonstration.

W_r, “You have to consider the uncertainty arising from all components of the system working together.

Wrong.

Did you know that ocean models don’t converge? Guess how that plays into the physical reliability of a GCM climate simulation.

Reply to  Pat Frank
February 23, 2021 1:18 pm

There is a very good book on Kindle by Prof. Mototaka Nakamura titled: Confessions of a Climate Scientist, The global warming hypothesis is an unproven hypothesis.

It gives very good information about the problems.

Reply to  Jim Gorman
February 23, 2021 1:47 pm

I can’t find an English edition, only Japanese. Do you have a link?

Reply to  Tim Gorman
February 24, 2021 4:49 am

The version on Kindle has a translation into English by the author contained inside the book. I’ll send you the # of the kindle.

Reply to  Jim Gorman
February 24, 2021 9:56 pm

Thanks, Jim. I know of the book and am very grateful to Prof. Nakamura for writing it. I haven’t read it, though, and don’t have a kindle.

Honestly, I think AGW doesn’t even rise to a hypothesis. Hypotheses make specific testable predictions that can falsify the hypothesis. AGW doesn’t achieve that.

Reply to  Weekly_rise
February 21, 2021 12:34 pm

W_r, “…since they predict things like surface trends.

No they don’t.

Reply to  Weekly_rise
February 19, 2021 3:05 pm

W_r, “The earth won’t be -20 degrees in 50 years or 20 degrees.

Fine. We agree that the physically correct air temperature will be somewhere in the range of +20 C to -20 C = ±20 C after 50 years.

Thank-you. You’ve both made my argument, and agreed with the demonstration in my paper.

W_r, “We know more about the system than your uncertainty estimate would allow for.

No you don’t. Nor does anyone else.

W_r, “The LWCF is not independent of other components …” which implies that accuracy is possible because of offsetting errors.

Wrong.

Reply to  Weekly_rise
February 20, 2021 12:05 pm

W_r, “a range of uncertainty that allows the true quantity to take on impossible values means that the uncertainty estimate is not a valid one.

The uncertainty does not indicate a range of true quantities or values. Your objection is wrong.

W_r, “You keep trying to steer the discussion to the topic of whether or not the uncertainty represents uncertainty in the projections, but that has never been what I’m arguing.

That’s exactly what you’re arguing in the first italicized quote from you above.

You also wrote, …

The spread of model projections using models with differing parameterizations gives a strong indication of actual model uncertainty in projected trends.” …

… which is also wrong because your model is not known to deploy the correct physics nor to give the physically correct result.

There is no way to know that the range of projections includes the physically correct value.

Projection range on varying parameters yields only a measure of model precision. As one has no idea of the correct value, the accuracy of the projections is unknown.

But accuracy is the metric of interest.

Accuracy is the distance of a calculation or measurement from the physically true value.

Precision is the replicability of measurements or model expectation values, which is what you described.

Accuracy and precision, and their distinction, is extensively discussed in my paper. For exactly the reason it is discussed here. It is central, the modelers invariably miss it, and your argument misses it, too.

I have yet to encounter a climate modeler who understands the distinction between accuracy and precision. Or one that does not angrily reject the distinction when a glimmer of meaning imposes itself.

And why wouldn’t they? Acknowledge the distinction and their entire career goes down in flames.

Climate modeling battens on false precision. As does the rest of consensus climatology.

Weekly_rise
Reply to  Pat Frank
February 21, 2021 8:13 am

“The uncertainty does not indicate a range of true quantities or values. Your objection is wrong.”

Pat, the uncertainty does indeed indicate the possible range in which we expect the true value might lie given the error in our estimate. Both you and Tim (Jim?) keep insisting that this isn’t true, but you never substantiate your position, you just keep insisting on it. Tim claims the Taylor error analysis book proves it, but never cites chapter or passage. You just baldly assert it. I don’t find your arguments the least bit compelling on this point.

“But accuracy is the metric of interest.”

I disagree. Both accuracy and precision are metrics of interest. We cannot evaluate the accuracy without reasonable precision. The models provide reasonable precision, and we can therefore evaluate model projections against observed data to identify where the models and observations differ, and use that to guide the analysis of why they differ.

Reply to  Weekly_rise
February 21, 2021 9:00 am

You can’t even understand what we are telling you.

The uncertainty interval certainly indicates the possible range the true value might lie in. Neither Pat, Jim, nor I have told you anything different than that.

The uncertainty interval, however, is *NOT* a probability distribution like you keep claiming.

And I *did* cite the chapter in the Taylor book for you to study. Chapter 3. Stop lying.

And now you are just showing that you don’t understand accuracy and precision at all either.

Precision should *never* be stated as more than the significant figures in the uncertainty estimate. Any precision past that is just opinion and not fact. If the model uncertainty is +/- 50C then the accuracy of the model should never be stated as more than the tens digit! That’s why stating that the models can predict down to the hundredth digit in temperature is simply trying to fool the people.

When you are comparing models with a +/- 50C uncertainty against a global average temperature with a +/- 50C uncertainty of its own, exactly what do you think you are validating?

Reply to  Weekly_rise
February 21, 2021 11:35 am

Taylor, 2nd Edition, Chapter 2, page 18ff: “The meaning of the uncertainty dx is that the correct value of x probably lies between x_best – dx and x_best+dx; it is certainly possible that the correct value lies slightly outside this range. (emphasis in the original)

“On the other hand, if the accepted value is well outside the margins of error (the discrepancy is appreciably more than twice the uncertainty, say) there is reason to think something has gone wrong. …

“Finally, and perhaps most likely, [such] a discrepancy may indicate some undetected source of systematic error.”

Model error in simulated cloud cover is systematic. The model LWCF calibration error statistic enters into, and propagates through, every single step of a climate simulation.

Taylor Chapter 3 covers propagation of error. No escape, W_r.

Reply to  Weekly_rise
February 21, 2021 12:25 pm

W_r, “The models provide reasonable precision, …

Here’s your “reasonable precision,” W_r.

One model, one GHG forcing scenario, multiple runs, parameters varied, and an ensemble spread of 5 C after 100 years. And that’s mere precision. The spread says nothing about accuracy.

Each of those runs has the lower limit of physical uncertainty demonstrated in “Propagation of Error…,” showing no predictive value.

The lower limit physical uncertainty of the ensemble average (never, ever published) is the root-mean-square uncertainty of the individual runs.
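That root-mean-square combination can be sketched as follows (the per-run uncertainty values here are hypothetical, for illustration only):

```python
import math

# RMS of the individual run uncertainties: the claimed lower-limit
# uncertainty of the ensemble average. Values are hypothetical.
run_uncertainties = [4.0, 3.5, 5.0, 4.2]
rms = math.sqrt(sum(u**2 for u in run_uncertainties) / len(run_uncertainties))
# The RMS is weighted toward the larger uncertainties in the list;
# averaging the runs does not average away their uncertainty.
```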

W_r, “… and we can therefore evaluate model projections against observed data to identify where the models and observations differ…

Which is exactly what Lauer and Hamilton did, and the resulting calibration error of which I propagated into air temperature projections in my paper, showing the resulting lower limit of uncertainty in air temperature projections.

W_r, “… and use that to guide the analysis of why they differ.”

See Figure 4. Observations and simulations differ in part because the models can’t do clouds.

They can’t do oceans, either. Or convection. Or precipitation. Or the atmosphere.

Rowlands 2012.jpg
Weekly_rise
Reply to  Pat Frank
February 22, 2021 11:52 am

Your own paper uses model vs. observation comparison to assess LWCF error. You must assume that model precision is adequate to allow for this. Painting yourself into a corner a bit, here.

Reply to  Weekly_rise
February 22, 2021 2:31 pm

You are still showing you don’t understand the difference between precision and accuracy. In other words a mathematician that believes a repeating decimal is infinitely precise.

Reply to  Weekly_rise
February 23, 2021 1:13 pm

Not precision, W_r. Accuracy.

LWCF error is derived by comparison with observations, not by comparison with other models or other runs.

The average of the cloud fraction error of 27 CMIP5 models over 20 hindcast years = 540 simulation years, is sufficient to estimate the LWCF error.

Which is what Lauer and Hamilton did, and from whose paper I took the rms LWCF calibration error statistic.

Do you assert that Lauer and Hamilton didn’t know what they were doing in deriving that error, and painted themselves into a corner?

There’s no painting in a corner, W_r. Just more evidence that you argue without knowing what you’re arguing about.

So does Nick Stokes.

Weekly_rise
Reply to  Pat Frank
February 23, 2021 7:23 pm

Yes, by a comparison of multiple models with observations. We’re trying to identify systematic error in the models, which means we need the models to exhibit relatively small random error – if they were all over the place (low precision), this would be much harder to identify.

I believe that Lauer and Hamilton knew very well what they were doing.

Reply to  Weekly_rise
February 24, 2021 8:26 am

Error is not uncertainty! Uncertainty is not error!

When the uncertainty is so high then how do you know if they are exhibiting relatively small random error? And exactly how do you measure that random error?

Remember, if your model gives you 10 separate runs with an output of 11 then it is exhibiting small random error. But if the true value is 10 then exactly what good is the output of 11 having small random error?

Reply to  Weekly_rise
February 24, 2021 9:24 pm

Good.

Look at Figure 4, Figure 5, and Table 1 in my paper. Notice that the cloud fraction error is anything but random.

Notice the inter-model correlation of cloud fraction error. Guess what that means.

Reply to  Weekly_rise
February 15, 2021 10:16 pm

W_r, “regression model can be constructed that fits the data well,

Paper eqn. 1 is not important because it ‘fits the data well.’ That was just a side show.

Eqn. 1 is important because it successfully and invariably emulates the air temperature projections of advanced climate models.

It shows beyond any doubt that climate models project air temperature merely as a linear extrapolation of forcing.

The uncertainty analysis follows directly.

Doesn’t it give you pause, W_r, that the CMIP5 cloud forcing calibration error alone is ±114 times larger than the 0.035 W/m^2 perturbation the model is required to resolve?

The level of resolution needed is obviously not available. That this is a fatal problem should be obvious to any even modestly science-minded individual.
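The arithmetic behind the ±114 figure is easy to check, using the ±4 W/m^2 calibration error and the 0.035 W/m^2 perturbation quoted above:

```python
# Ratio of the CMIP5 LWCF calibration error to the annual forcing
# increment the model must resolve (both values quoted in the thread).
lwcf_calibration_error = 4.0   # W/m^2
annual_perturbation = 0.035    # W/m^2
ratio = lwcf_calibration_error / annual_perturbation  # ~114.3
```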

Weekly_rise
Reply to  Pat Frank
February 16, 2021 9:07 am

“It shows beyond any doubt that climate models project air temperature merely as a linear extrapolation of forcing.”

There is a vast difference between saying “climate model projections can be approximated with a linear extrapolation of forcing” and saying “all climate models are doing to project surface temps is using a linear extrapolation of forcing.” The first is correct, but the second is not, and the second appears to be what you’re claiming.

Reply to  Weekly_rise
February 16, 2021 10:59 am

The second is what my paper demonstrates. CMIP6 models are likewise linear propagators.

I present 65+ discrete examples, W_r.

Let’s see you produce a climate model whose air temperature projection cannot be emulated by a linear extrapolation of forcing.

I show my work. Let’s see yours. So far, you’ve presented incredulity, mistaken notions, and bald dismissals.

Weekly_rise
Reply to  Pat Frank
February 16, 2021 12:52 pm

From my reading, your paper never actually examines the structure of any models, so it is patently impossible for it to prove the second case I describe. Repeatedly showing that the first case is true does not provide proof of the second case.

Reply to  Weekly_rise
February 16, 2021 3:43 pm

My paper examines the output of arbitrary models. Linear output is all that is necessary to show. Model structure is irrelevant.

Reply to  Weekly_rise
February 16, 2021 5:27 am

W_r, why use the “obsolete” CMIP3 set of models when the “new and improved” CMIP5 ones are available (as we eagerly await the use of CMIP6 models in the SPM of the WG1 report for AR6 in April).

http://www.realclimate.org/images//cmp_cmip5_sat_ann-4.png

It’s not like they found it necessary to “adjust” the HIND-cast part of the graph in addition to the forecast section in order to “bend the curve to better fit reality” … is it ?

Weekly_rise
Reply to  Mark BLR
February 16, 2021 6:41 am

Mismatch between projected and observed forcing doesn’t invalidate model skill – we don’t know precisely what events will take place in the future that will impact forcing (e.g. volcanoes or random fluctuations in decadal oscillations of ocean temperatures), so it makes perfect sense to adjust the models to match the actually observed forcing when evaluating performance. The CMIP3 experiments just happened to get forcing better than CMIP5 did, but in both cases the models were able to reproduce observed surface warming. I believe, in fact, the mismatch was due to a mismatch between projected and observed volcanic forcing, per Schmidt, 2014.

It’s all a bit moot, of course, since the observations fall well within the envelope of the non-forcing adjusted ensemble.

Reply to  Weekly_rise
February 16, 2021 11:11 am

W_r, “...adjust the models to match the actually observed forcing when evaluating performance.

Can you truly believe that adjusting models to match observations constitutes an evaluation?

Climate models are engineering models. They are parameterized within calibration bounds. They reproduce the observables within those bounds. They are useless for prediction of observables outside those bounds.

Ask any engineer whether their models can predict behavior or response outside the calibration bounds used to define performance specifications. They get very cautious, and properly so. Their reputations depend on getting it right. Lives and economic survival ride on the accuracy of their judgments.

Climate modelers have no such constraints. They live on that waiver.

Or, maybe, following from your acute mode of evaluation, climate models can be adjusted to match future observables in order to evaluate predictive performance. Do I need a sarc-off here?

Weekly_rise
Reply to  Pat Frank
February 16, 2021 1:46 pm

“Can you truly believe that adjusting models to match observations constitutes an evaluation?”

Of course. When model simulations are run they use a set of forcings that may or may not match what actually happens in the real world, so assessing differences in, say, surface temperatures between models and observations while knowing that those differences exist would not be a useful evaluation of model skill in simulating surface trends. It’s better to adjust the models based on the observed forcings and then assess differences between observed and projected surface trends.

There’s a conditional “if” statement on the model projections. e.g., “if no volcanoes erupt then…” but we obviously can’t predict volcanic eruptions.

And, yet again, this whole digression is moot since the observations fall well within the envelope of the non-forcing adjusted ensemble.

Reply to  Weekly_rise
February 17, 2021 10:35 am

PF, “Can you truly believe that adjusting models to match observations constitutes an evaluation?”

W_r: “Of course.”

That pretty much says it all.

W_r, “When model simulations are run they use a set of forcings that may or may not match what actually happens in the real world,…

No, they don’t.

Weekly_rise
Reply to  Pat Frank
February 17, 2021 12:09 pm

“No, they donā€™t.”

Model simulations do not take forcings as inputs? Is that what you’re trying to say? This article would seem to disagree with you:

“The main inputs into models are the external factors that change the amount of the sun’s energy that is absorbed by the Earth, or how much is trapped by the atmosphere. These external factors are called “forcings”. They include changes in the sun’s output, long-lived greenhouse gases – such as CO2, methane (CH4), nitrous oxides (N2O) and halocarbons – as well as tiny particles called aerosols that are emitted when burning fossil fuels, and from forest fires and volcanic eruptions. Aerosols reflect incoming sunlight and influence cloud formation.”

And so does the NOAA:

“To “run” a model, scientists specify the climate forcing (for instance, setting variables to represent the amount of greenhouse gases in the atmosphere) and have powerful computers solve the equations in each cell.”

Perhaps instead of taking a patronizing tone and insisting that nobody else understands the models as well as you, it would benefit you to engage in some self reflection.

Reply to  Weekly_rise
February 17, 2021 2:26 pm

What’s the difference between “forcing” and “forcings”? The only forcing the models appear to use as the control knob is CO2 – a single forcing.

Weekly_rise
Reply to  Tim Gorman
February 18, 2021 6:45 am

Please read the quoted passages in my comment above. CO2 concentration is certainly not the only forcing input into climate model simulations.

Reply to  Weekly_rise
February 18, 2021 8:26 am

Nope. The CO2 forcing linearly increases over time according to the IPCC guidelines (see RCP 4.5 and RCP 8.5). The models convert this into a linear increase in temperature over time. The only control knob for the linear output is CO2.

Weekly_rise
Reply to  Tim Gorman
February 18, 2021 10:23 am

Not sure what else I can offer on this topic. I’ve provided two sources unequivocally showing that you’re wrong; the models include multiple different forcings in addition to GHG concentrations. Your steadfast denial is not compelling.

Reply to  Weekly_rise
February 19, 2021 4:19 pm

Tim Gorman, “Nope. The CO2 forcing linearly increases over time according to the IPCC guidelines (see 4.5 and 8.5). The models convert this into a linear increase in temperature over time. The only control knob for the linear output is CO2

W_r, “I’ve provided two sources unequivocally showing that you’re wrong, …”

Guess what this means:

2010 Nat vs Anth Forrcings.jpg
Reply to  Weekly_rise
February 19, 2021 3:28 pm

“No, they don’t.”

W_r, “Model simulations do not take forcings as inputs? Is that what you’re trying to say?”

No. You just shifted your argument. Again.

I’m disagreeing with what you wrote, which was, “When model simulations are run they use a set of forcings that may or may not match what actually happens in the real world…”

Your own article refutes you: “[forcings are] either as a best estimate of past conditions or as part of future ‘emission scenarios’”

Your NOAA quote likewise refutes you: scientists specify the climate forcing.

Which of those estimates of actuality, “may not match what actually happens”?

Simulations use scenarios of purportedly real past, or speculatively real future, forcings.

Hindcasts using ad hoc tuned parameters are not simulations. They are curve-fitting exercises.

Weekly_rise
Reply to  Pat Frank
February 21, 2021 7:57 am

Using a scenario of speculatively real future forcings is, quite literally, what I’m referring to. We cannot know if the speculative future forcings were correct until the future is upon us. We don’t know when a volcano will erupt, or whether, and by how much, humanity will decide to reduce CO2 emissions. We can only devise speculative future scenarios.

Reply to  Weekly_rise
February 21, 2021 8:38 am

The uncertainty of the model outputs using those assumed forcings is so wide that they are simply not useful in predicting anything! The uncertainty is so wide that they can’t even be falsified by future observations! Almost any future fits inside their uncertainty interval. It’s the same thing as going to a fortune teller!

Weekly_rise
Reply to  Tim Gorman
February 21, 2021 9:01 am

Yet 99% of the futures Pat’s uncertainty allows for are physically impossible, so Pat’s uncertainty is certainly not useful.

Reply to  Weekly_rise
February 21, 2021 10:10 am

How do *you* know they are physically impossible? The Greenies like you tell us the Earth is going to turn into a cinder from CO2 warming. What’s the temperature of a cinder at our distance from the sun?

Reply to  Weekly_rise
February 21, 2021 9:54 pm

Uncertainty has nothing to do with physical states, W_r. Uncertainty is about predictive reliability.

Wide uncertainty bars means the projection provides no information.

From Section 10.2 of the Supporting Information (my bold):

A useful description of uncertainty is provided by Roy and Oberkampf, “[predictive] uncertainty [is] due to lack of knowledge by the modelers, analysts conducting the analysis, or experimentalists involved in validation. The lack of knowledge can pertain to, for example, modeling of the system of interest or its surroundings, simulation aspects such as numerical solution error and computer roundoff error, and lack of experimental data.” (Roy and Oberkampf, 2011) Roy and Oberkampf term such systematic errors epistemic, as contrasted with aleatory (random) error.

“Likewise, Helton et al., describe epistemic uncertainty as one that, “… derives from a lack of knowledge about a quantity that is assumed to have a fixed, but poorly known, value in the context of a particular analysis. For example, the appropriate value to use for a spatially averaged permeability in an analysis involving groundwater flow has, by definition, a single value but this single “effective” value can never be known with certainty. Uncertainty of this type is usually referred to as epistemic uncertainty; alternative designators include state of knowledge, subjective, reducible, and Type B. … [T]he mathematical structures used to represent aleatory and epistemic uncertainty must be propagated through the analysis in a manner that maintains an appropriate separation of these uncertainties in the final results of interest.” (Helton et al., 2010)

The significance of calculated uncertainty is presented in terms of measurement in the JCGM Guide to the Expression of Uncertainty in Measurement (JCGM GUM). (JCGM, 100:2008) Section 3.3.1: “The uncertainty of the result of a measurement reflects the lack of exact knowledge of the value of the measurand…. The result of a measurement (after correction) can unknowably be very close to the value of the measurand (and hence have a negligible error) even though it may have a large uncertainty. Thus the uncertainty of the result of a measurement should not be confused with the remaining unknown error.”
 

Reply to  Weekly_rise
February 21, 2021 12:43 pm

And the reliability of those speculative scenario projections depends upon the predictive reliability of the models.

The models have no predictive reliability. We know that because they cannot simulate terrestrial cloud cover (among other observables). The LWCF error that results is 100x larger than the perturbation they are attempting to simulate.

No model can resolve an effect 100x smaller than the lower limit of model resolution — a simple concept of physical science that never fails to be beyond the grasp of climate modelers.

If that simple concept is beyond one's grasp, there's no comprehension of science.

Weekly_rise
Reply to  Pat Frank
February 22, 2021 12:00 pm

“And the reliablity of those speculative scenario projections depends upon the predictive reliability of the models.”

This is patently untrue. The speculative scenario projections are not based on model projections, they are assumptions underlying model projections. We assume some volcanic forcing, we assume some concentration pathway.

Reply to  Weekly_rise
February 22, 2021 2:33 pm

But you don't do clouds, precipitation (latent heat), etc. So how can the speculative scenario projections be anything other than opinion?

Weekly_rise
Reply to  Tim Gorman
February 23, 2021 8:35 am

They are merely assumptions about future forcing. That’s why it’s important when evaluating model performance to look back and say, “how much of the difference between model and observation is due to error in the model and how much is due to the incorrect assumptions we made about future forcings?” If our model doesn’t include a volcanic eruption one year and a volcanic eruption occurs, that doesn’t mean our model response to volcanic eruptions is wrong, it means we never put in a volcanic eruption for the model to respond to.

Reply to  Weekly_rise
February 23, 2021 10:03 am

No, the GCMs don't account for clouds. They “parameterize” them and then claim that there is no uncertainty introduced by their guess at a magnitude!

Error in the model *is* the same thing as “incorrect assumptions”!

Models are *built* upon assumptions. Assumptions about physical processes, physical values, etc.! Anything incorrect about the assumptions results in model error! Unless you are a climate modeler that says "all the errors cancel". When it is pointed out that means that the models can be built using *anything*, they just wave their hands and repeat: "errors cancel".

“If our model doesn’t include a volcanic eruption one year and a volcanic eruption occurs, that doesn’t mean our model response to volcanic eruptions is wrong, it means we never put in a volcanic eruption for the model to respond to.”

In other words we should believe a model that doesn’t integrate volcanoes and their results on the atmosphere is somehow correct because “all errors cancel”? That the model is right regardless?

That *is* what you are saying!

Weekly_rise
Reply to  Tim Gorman
February 23, 2021 7:30 pm

“Error in the model *is* the same thing as ‘incorrect assumptions’!”

These are not incorrect assumptions in the underlying model physics, but in the forcing regimes the models run the physics under. We can eliminate this source of error by applying a forcing adjustment.

“In other words we should believe a model that doesn’t integrate volcanoes and their results on the atmosphere is somehow correct because ‘all errors cancel’? That the model is right regardless?”

That is literally the opposite of what I’m saying – the models make a guess about what volcanic forcing in the future might be, but since we’re not time travelers we don’t know what the actual volcanic forcing will be. We can look back with our 20/20 hindsight and see exactly how many eruptions took place and what kind of forcing they produced, and apply that forcing correction to the model.

Again, the models are running with conditional logic – if humans emit this much CO2, or if a volcano of this size erupts in this year, we expect to see this change.

If I make a physics model that says, “if someone comes and pushes this ball, it will roll down the hill with an acceleration due to gravity,” and after two hours the ball hasn’t moved, the model isn’t invalidated if the reason the ball hasn’t moved is because no one has come along and pushed the ball yet. Is this clearer for you?

Reply to  Weekly_rise
February 24, 2021 8:57 am

“These are not incorrect assumptions in the underlying model physics, but in the forcing regimes the models run the physics under. We can eliminate this source of error by applying a forcing adjustment”

If you have to adjust your model with “forcings” or parameters then the assumptions describing the physics are wrong.

It’s like the tractor model. If the tractor keeps coming out with a drawbar horsepower 30hp lower than the model predicts, then you can add a *forcing* of -30hp to make the model output come out correct!

Does that mean the model is wrong? Damn right it does! What really should be done is to go back inside the model and figure out what physics are being calculated incorrectly. But the so-called climate scientists don’t do that, they just add in “forcings” to make the output be what they want it to be.

“That is literally the opposite of what I’m saying – the models make a guess about what volcanic forcing in the future might be, but since we’re not time travelers we don’t know what the actual volcanic forcing will be.”

OMG! Then why should we believe what the models say? That’s just like rolling dice or reading chicken entrails to figure out what the future will be!

If you can’t predict volcanoes then how do you correctly apply the forcings?

“Again, the models are running with conditional logic – if humans emit this much CO2, or if a volcano of this size erupts in this year, we expect to see this change.”

And exactly how are these conditionals actualized in the model during each iteration? Do you use the unary “probably” operator, P(a) ≥ t, or the binary probability operator P(a) ≥ P(b)? How do you determine t or P(b) in each iteration, a random number generator?

My guess is that it isn’t done either way. It’s just all added in as an overall “forcing”, i.e. adjustment, that is a constant in the model.

“If I make a physics model that says, ‘if someone comes and pushes this ball, it will roll down the hill with an acceleration due to gravity,’ and after two hours the ball hasn’t moved, the model isn’t invalidated if the reason the ball hasn’t moved is because no one has come along and pushed the ball yet. Is this clearer for you?”

And how do you actualize the ball push? A random number generator? Do you use a unary or binary probability operator?

Not once, not ever, in college did I ever create a physical model that used a start/don’t-start input to determine the output. If the model never starts then the output is useless. The output of the model will always be zero. Models are only useful if they *do* something.

Weekly_rise
Reply to  Tim Gorman
February 24, 2021 2:01 pm

“OMG! Then why should we believe what the models say? That’s just like rolling dice or reading chicken entrails to figure out what the future will be!”

One point of modeling is to help us guide decision making. If we can say, “under RCP 4.5 the climate warms by x degrees, and under RCP 2.6, the climate warms by y degrees,” we can use that to guide policy. We can’t control whether a volcano erupts in the future, but humans as a society can influence how much carbon we emit in the coming decades.

“My guess is that it isn’t done either way. It’s just all added in as an overall ‘forcing’, i.e. adjustment, that is a constant in the model.”

Models are run using multiple forcing scenarios, and the system is observed to see which of the forcing scenarios winds up being close to reality.

Some of what I’m explaining to you strikes me as being quite elementary and it is troubling that you’ve taken such a hard line position against models without being aware of or understanding it.

Reply to  Weekly_rise
February 24, 2021 3:30 pm

You didn’t answer my question. Why? How do you actuate the probability of a volcano eruption? Do you use the unary or binary operator? You pretend to be an expert on this but you can *never* actually answer any questions using math or physics.

Again, if you can’t emulate volcano eruptions then you can’t emulate human effects either. What happens under 4.5 or 2.6 depends upon *all* atmospheric physics. You can’t use just one and say this is what will happen in the future!

The GCMs today keep on moving away from reality, not closer. So what does that tell you about the forcings?

Don’t lecture me about things being elementary. You have absolutely no expertise in the subject and no way to determine what is elementary and what isn’t. You can’t even tell the difference between an iterative model and a direct calculation model.

You’ve now reduced yourself to more and more handwaving to defend your religious beliefs on climate change. You won’t admit that the GCMs don’t handle clouds, they don’t handle latent heat, and they don’t handle ocean movements. In other words the forcings they use don’t reflect reality and therefore the models can’t reflect reality either.

Reply to  Weekly_rise
February 24, 2021 6:27 am

Volcanoes are forecastable on a century basis. All one needs is a reasonable trend of eruptions and their results from the past. Oh, we don’t have that! Future predictions of climates can’t possibly be correct then.

This is one reason why the IPCC refuses to call model outputs predictions. They are instead called projections based on possible occurrences. In other words, they are not forecasts that we should be relying on to spend trillions of dollars on.

Weekly_rise
Reply to  Jim Gorman
February 24, 2021 2:04 pm

We can make probabilistic assessments of the likelihood of volcanic activity over long timescales and use these to inform forcing scenarios, but we cannot make deterministic predictions about specific volcanic events. This becomes especially important for assessing projections on sub-centennial timescales (that is to say, we can’t wait centuries to assess climate model projections; we need to do that before it’s too late for the projections to inform action).
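A toy sketch of the probabilistic treatment described above, in Python. The eruption probability, forcing magnitude, and function name are invented illustrative assumptions, not values or code from any actual model or from this thread; each year is an independent Bernoulli draw, a crude stand-in for a Poisson process:

```python
import random

def volcanic_forcing_scenario(years, annual_prob=0.1,
                              eruption_forcing=-2.0, seed=0):
    """Toy stochastic forcing scenario: each year independently has
    probability annual_prob of an eruption contributing
    eruption_forcing. All numbers are illustrative only."""
    rng = random.Random(seed)
    return [eruption_forcing if rng.random() < annual_prob else 0.0
            for _ in range(years)]

# Two seeds give two different "speculative futures":
scenario_a = volcanic_forcing_scenario(100, seed=1)
scenario_b = volcanic_forcing_scenario(100, seed=2)
print(scenario_a.count(-2.0), scenario_b.count(-2.0))
```

In this framing, a hindcast "forcing adjustment" would mean swapping the drawn scenario for the eruptions that actually occurred, leaving the model physics untouched.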

Reply to  Weekly_rise
February 24, 2021 3:35 pm

I asked you how you actuate those probabilistic assessments and you can’t even answer such a simple question. You don’t have to make deterministic predictions. In each iteration of the model you can use unary or binary probability operators. But apparently even such a simple subject is beyond your understanding.

You can’t calculate the uncertainty associated with the simple, one-variable function f(x) = A sin(x). And you don’t know how to actuate a probability operator in a model.

And you expect us to buy into *anything* you assert?

Reply to  Weekly_rise
February 23, 2021 1:16 pm

W_r, “This is patently untrue.”

No, it’s not.

Reliability means consistent with physical reality. It doesn’t mean consistent with adventitious circumstance.

Reply to  Weekly_rise
February 16, 2021 2:37 pm

“Mismatch between projected and observed forcing doesn’t invalidate model skill”

OF COURSE IT DOES!!!! I grew up in a family of International Harvester ag dealers. When IH was bringing out a new tractor they would take test models all around the country to make sure that their models of drawbar horsepower and fuel usage matched what was observed in the field. If the models didn’t match the field results they would go back and rework their models – from everything to the diesel pumps to compression ratios to gearbox power transmission to drive gear losses to tire slippage and everything else you could think of. Something was *always* wrong with the models if they didn’t match field observations — meaning the model skill was invalidated.

Nor is this a matter of not knowing what will happen in the future. They can’t get the model outputs to even match the observed temperatures over the past two decades of real time observations!

“so it makes perfect sense to adjust the models to match the actually observed forcing when evaluating performance.”

BUT THEY HAVEN’T ADJUSTED THE MODELS TO MATCH REALITY!!! They have adjusted the models so they run hotter and hotter and hotter and hotter – increasing the differential between the model outputs and reality!!!! And you call that “skill”?

“in both cases the models were able to reproduce observed surface warming.”

No, they did *NOT*. Look at the graph I posted in an earlier reply!!! The disparity between the model outputs and reality has grown not decreased!!!!

“It’s all a bit moot, of course, since the observations fall well within the envelope of the non-forcing adjusted ensemble.”

You are as blind as a bat apparently. Either that or willfully ignorant. The attached graph shows how wrong you are whether you look at it or not!


cgm_model_outputs.jpg
Weekly_rise
Reply to  Tim Gorman
February 16, 2021 6:01 pm

The graphic you’re presenting appears to be comparing measurements from 3 balloons against surface air estimates from climate models, and then only for the tropics. I’m not convinced at all that this is an apples to apples comparison. I’ve provided a comparison of observed global mean surface temp and modeled global mean surface temp:

comment image

RC also has a comparison of modeled vs observed global mid troposphere temp using satellites:

comment image

Reply to  Weekly_rise
February 17, 2021 6:37 am

You are comparing manipulated data sets with the models. What did you expect to find? Did you even bother to look at the detail on the graph I provided. There *is* one model that matches observations quite closely. Why don’t the other models?

Weekly_rise
Reply to  Tim Gorman
February 17, 2021 9:15 am

Again, it appears that your graph is comparing model projected tropical surface temperatures with estimates of tropical mid-troposphere temperatures from just three weather balloons. I suspect that is why the models show a poor match.

RC also has a similar comparison of satellite estimates of tropical mid-troposphere temperatures against modeled tropical mid-troposphere temperatures (an apples to apples comparison), and there is a much closer match, although it’s clear that the models are doing better simulating global mean surface trends and global mid-troposphere temps than they are tropical mid-troposphere trends:

comment image

Reply to  Weekly_rise
February 17, 2021 11:58 am

In fact, no tropical “hotspot” has ever been found as the models predict. Makes one doubt the efficacy of the models when they can’t even properly predict the increased gradient that is supposed to warm the poles.

If they can’t do that, they just spread the warmth around, right? Everywhere must be going to warm instead.

I’ll give you a big clue. Start looking at regions instead of putting your hopes on a hokey “global” model. You will find a number of regions with no warming and some with actual cooling. You WILL NOT find offsetting regions with hot enough temps to make the average what your models say it is.

Weekly_rise
Reply to  Jim Gorman
February 17, 2021 1:02 pm

There is at least one paper I am aware of that presents evidence for the predicted tropical tropospheric hot spot.

But, of course, even if the models are actually getting something wrong, that does not mean the models are not still useful. In fact, it has never been in question whether the models are wrong, since their being wrong is guaranteed. They are models. Being wrong doesn’t mean they aren’t useful.

Reply to  Weekly_rise
February 17, 2021 1:07 pm

You mean to say that we should be spending trillions of dollars and ruining the reliability of the electric grid for models that ARE WRONG, but useful?

You are obviously a woke Democrat that has no problem spending other peoples money by throwing it down a rat hole.

Gerald Browning
Reply to  Weekly_rise
February 18, 2021 7:06 pm

Weekly_rise,

What a pathetic statement. It is easy to show that if one is allowed to choose the forcing (parameterizations), one can obtain any result one wants even if the time dependent partial differential equations are wrong. This is exactly what the climate modelers have done. They are using the wrong system of equations (Browning, DAO, Sept 2020), have violated the Kreiss mathematical Boundary Derivative Theory for hyperbolic systems with multiple time scales and the numerical analysis theory that requires that the continuum solution be differentiable (expandable into a Taylor series). You might try updating your supposed applied math knowledge.

Jerry

Reply to  Gerald Browning
February 20, 2021 8:15 am

Good to see you posting here, Jerry.

For those who don’t know, Jerry is a Ph.D. applied mathematician who has spent an entire career on the mathematics of atmospheric physics.

His paper, “The unique, well posed reduced system for atmospheric flows: Robustness in the presence of small scale surface irregularities” solves the problem of enstrophic cascades in an evolving atmosphere, provides a physical formalism that is insensitive to small perturbations and is continuous down to the surface boundary.

In other words his paper shows that the mathematical physics in current climate models is ill-posed (a well-known but ignored problem hidden by use of a hyperviscous atmosphere), and provides the solution.

Reply to  Weekly_rise
February 17, 2021 2:01 pm

The surface temperature record is a joke. It starts out with using mid-range values for each station – meaning the uncertainty grows from +/- 0.5C per measurement to +/- 0.8C. Then when you add all those mid-range data points together the uncertainty grows by +/- 0.8C multiplied by the square root of the number of stations you average. If you use 100 stations then the square root is 10. So your uncertainty becomes +/- 8C — a far wider interval than you are trying to determine!

Then you have the joke of using a mid-range value. What does the mid-range value tell you? It is *NOT* an average, it is nowhere near an average. Since the daily temperature profile approaches a sine wave, the average value is not the mid-range. Do you *know* how to integrate a sine wave to get the average? If the temperature profile approaches a sine wave of Asin(x), then the average over the half cycle is 2A/π ≈ 0.64A, nowhere near the mid-range value. So what does the mid-range value tell you about the actual climate?

The models start off calibrating against an idiotic, meaningless value with a huge uncertainty and wind up projecting a linear function of one component – CO2. That’s why they can’t predict surface temperatures at all accurately.

All of these models should be converted to use degree-day values, an actual integral of the temperature profile at each location. It would be a far better indication of the actual climate at each location.
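The sine-wave point is easy to check numerically: the mean of A·sin(x) over the half cycle [0, π] is 2A/π ≈ 0.637A, which differs from a mid-range value. A minimal midpoint-rule sketch (the half-cycle sine is the idealization used in the comment above, not real station data):

```python
import math

def mean_half_sine(A=1.0, n=100_000):
    """Midpoint-rule average of A*sin(x) over [0, pi].
    The exact value is 2*A/pi, about 0.6366*A."""
    return sum(A * math.sin(math.pi * (i + 0.5) / n)
               for i in range(n)) / n

print(round(mean_half_sine(), 4), round(2 / math.pi, 4))
```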

Weekly_rise
Reply to  Tim Gorman
February 17, 2021 2:21 pm

“Then when you add all those mid-range data points together the uncertainty grows by +/- 0.8C multiplied by the square root of the number of stations you average.”

By the inverse square root. Standard error of the mean is σ/sqrt(n).
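Assuming the measurement errors really are independent and random, which is precisely the assumption the rest of this thread disputes, the σ/sqrt(n) behavior can be checked by simulation; σ = 0.8 and n = 100 are the figures from the comment being replied to, and the function name is ours:

```python
import random
import statistics

def spread_of_mean(n=100, sigma=0.8, trials=20_000, seed=42):
    """Standard deviation of the mean of n independent Gaussian
    errors, estimated over many trials; theory gives sigma/sqrt(n)."""
    rng = random.Random(seed)
    means = [statistics.fmean(rng.gauss(0.0, sigma) for _ in range(n))
             for _ in range(trials)]
    return statistics.stdev(means)

print(round(spread_of_mean(), 3), round(0.8 / 100 ** 0.5, 3))
```

The simulated spread lands near 0.8/√100 = 0.08, not 8; whether the independence assumption holds for real temperature measurements is exactly what is argued below.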

Reply to  Weekly_rise
February 17, 2021 2:44 pm

WR,

You are *STILL* trying to look at uncertainty as ERROR, specifically error associated with multiple measurements of the same thing that can be assumed to generate a probability distribution.

ERROR IS NOT UNCERTAINTY. Have you written that out 100 times yet?

There *IS* no standard deviation with single, independent measurements of different things. There is only an uncertainty interval that has *NO* probability distribution and therefore no standard deviation and no “n”.

Pat is correct. You are unwilling to learn anything about physical science and just want to do the same hand waving magic the climate modelers do so they can say their outputs are 100% accurate because all errors cancel.

You’ve been given two tomes by two celebrated experts on uncertainty, one of which you admit you own, and you just dismiss everything they teach about uncertainty by saying you know better than they do. If you can point me to a textbook you have written on the subject I’d be happy to get a copy and see how *you* handle uncertainty of single, independent measurements of different things. I’m guessing it will be quite a hoot to read through.

Weekly_rise
Reply to  Tim Gorman
February 17, 2021 3:00 pm

“There *IS* no standard deviation with single, independent measurements of different things.”

The measurements are measurements of the same thing; namely, surface air temperature anomaly.

Reply to  Weekly_rise
February 17, 2021 3:39 pm

One more time, there are two types of measurements.

  1. Multiple measurements of the same thing by the same device.
  2. Single measurement of different things.

Once a temperature measurement is made IT IS GONE INTO THE PAST. There is no second chance of measuring it. Any subsequent measurement of temperature is of a DIFFERENT THING.

The measurement of Tmax at a station is a totally different measurement than that of Tmin, they are not the measurement of the same thing. The two do *NOT* make up any kind of a probability distribution. They are independent data sets of size one. They are not correlated in any way. Therefore when adding them their uncertainties combine by root sum square.
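Root-sum-square combination for independent quantities, the quadrature rule Taylor gives for sums, can be stated in a few lines; the ±0.5 C per-reading figure is the one used upthread, and whether the final division by 2 legitimately shrinks the mid-range uncertainty this way is part of what is being argued here:

```python
import math

def rss(*uncertainties):
    """Quadrature (root-sum-square) combination for a sum of
    independent measurements: u(a + b) = sqrt(u_a**2 + u_b**2)."""
    return math.sqrt(sum(u * u for u in uncertainties))

u_sum = rss(0.5, 0.5)   # uncertainty of Tmax + Tmin, each +/-0.5 C
u_mid = u_sum / 2       # dividing the sum by the exact constant 2
print(round(u_sum, 2), round(u_mid, 2))
```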

If you would quit being so stubborn and actually *study* the Taylor textbook you say you have, then this would be made clear in the first three chapters. Instead you just use the argumentative fallacy of Argument by Dismissal and say you know better than Taylor how uncertainty works.

And yet every message you post shows that you are *NOT* even a novice when it comes to uncertainty.

Weekly_rise
Reply to  Tim Gorman
February 18, 2021 6:50 am

The temperature anomaly is measured at multiple points around the globe at the same time. Those measurements are averaged together to get the global anomaly. The average is made up of multiple measurements of the same thing.

By your logic it is impossible to take the average height of people in a room with any degree of certainty because each person’s height is an independent measurement.

Reply to  Weekly_rise
February 18, 2021 8:36 am

Here is a question that you need to answer about anomalies. Where does the % of absolute temps go when you simply average anomalies as if they were an intrinsic value?

The % of absolute temp is a measure of variance and for an average to have any meaning, variance must be treated along with the mean.

Enlighten yourself and look up calculating variance when combining populations.

Weekly_rise
Reply to  Jim Gorman
February 18, 2021 11:15 am

I’m not sure that I follow your question – a pooled standard error still varies with the inverse of the sample sizes. Perhaps you can elaborate on what you perceive to be the relevance here.

Reply to  Weekly_rise
February 18, 2021 8:51 am

Have you studied this at all?? Why are you offering up such garbage?????

The temperature anomaly is *NOT* measured at multiple points around the globe at the same time. Each station determines its own Tmax and Tmin in order to calculate a Tmid-range! Tmax and Tmin happen at different times all around the globe! Heck, there is no guarantee they will occur at the same time from day to day at the same location!

It *is* possible to take the average height of people in a room. But each measurement will have an uncertainty associated with it. And each measurement *is* an independent one. So when you add all your measurements together, the uncertainty of your average will be the root sum square of the individual uncertainties.

If you took one of those people and measured his height 100 times with the same tape measure, then you would have 100 measurements that you could assume fit a random probability distribution, most likely the normal distribution. Then that average would be the most likely true value for their height. But there would *still* be an uncertainty associated with that average. You might decrease it but you can’t eliminate it. Thus if you did that for 100 people you would *still* wind up with an average whose uncertainty is the root sum square of the individual uncertainties. Uncertainty of independent measurements always increases, it never decreases.

This simply doesn’t apply with temperatures. It would be like having 100 people in 100 different rooms being measured ONCE with 100 different tape measures. You simply can’t build a population from this that follows the central limit theorem.

I can’t emphasize enough that you keep trying to come up with arguments to justify your assertion when there simply are none! Why do you keep beating your head against the wall? You have the definitive book on your shelf (so you say) – go study it!

Weekly_rise
Reply to  Tim Gorman
February 18, 2021 11:49 am

My academic background is in applied mathematics and computational probability and statistics in earth science, so these are not unknown subjects to me. I’m of course always learning new things about these subjects, particularly how they apply or are used in different fields, and make no pretense otherwise. You adopting a patronizing tone does not set a pathway for productive dialogue.

The central limit theorem applies whether the distribution of the variable in a population is normal or not. If we assume the variable has a “true” mean, and we take many measurements of that mean, then the distribution of those measurements will be normal about the mean, and the more measurements we take the narrower the distribution will become (the uncertainty of the mean will decrease).

Reply to  Weekly_rise
February 18, 2021 2:43 pm

My academic background is in applied mathematics and computational probability and statistics in earth science, so these are not unknown subjects to me”

Really? Then tell me how you calculate the uncertainty of the average of an integral of f(x) = Asin(x).

If I am being patronizing it is because you refuse to learn anything about the subject. When you diss Taylor’s treatise it’s an indication that you think you know it all and don’t need to learn anything.

I didn’t say the central limit theorem doesn’t apply if it’s not a normal distribution. Quit putting words in my mouth in order to distract from the issue at hand.

You don’t have *any* distribution when you are discussing the uncertainty associated with single, independent measurements of different things using different measurement devices. Therefore the central limit theorem doesn’t apply. You don’t have anything to sample!

You keep circling around and around, trying to always bring the subject back to multiple measurements of the same thing so you won’t have to admit that an average of single measurements of different things has an uncertainty of root sum square. All so you can validate your belief that the GCM predictions have no uncertainty – a religious belief if I’ve ever heard one.

Weekly_rise
Reply to  Tim Gorman
February 19, 2021 9:47 am

“Really? Then tell me how you calculate the uncertainty of the average of an integral of f(x) = Asin(x).”

Are you asking for the uncertainty in the average of the function over a continuous and bounded interval? Your wording is ambiguous.

“When you diss Taylor’s treatise it’s an indication that you think you know it all and don’t need to learn anything.”

I have not dissed Taylor’s or anyone else’s book. Nothing that I’ve said in this discussion has contradicted anything in the book. Nor have I once indicated that I know it all or don’t need to learn anything.

Reply to  Weekly_rise
February 19, 2021 11:01 am

f(x) = Asin(x) is continuous, bounded (sunrise to sunrise, sunset to sunset, whatever – you pick), and is integrable and differentiable.

I didn’t expect you to actually answer and you didn’t.

Sure you dissed Taylor’s book. You called it a rag. You don’t get to run away and hide now.

Everything you have said contradicts the first three chapters of the book. From implying that uncertainty is error to denying that single, independent measurements of different things have the total uncertainty grow by root sum square when added together.

Again, you don’t get to run away and hide now.

Reply to  Weekly_rise
February 20, 2021 5:39 am

Where is your explanation of how to determine the uncertainty of the average of an integral of f(x) = Asin(x)?

I thought you knew all about uncertainty and its propagation?
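For reference, here is one reading of the exercise posed above (the assumptions are mine: the average of f(x) = A·sin(x) is taken over [0, π], and only the amplitude A carries an uncertainty u_A; the numbers are hypothetical). Since the interval average is linear in A, linear propagation applies directly.

```python
import math

# Assumed interpretation: average f(x) = A*sin(x) over [0, pi],
# with only A uncertain. Hypothetical amplitude and uncertainty:
A, u_A = 10.0, 0.5

# Mean value of A*sin(x) on [0, pi]: (1/pi) * integral of A*sin(x) = 2A/pi
f_bar = 2 * A / math.pi

# The mean is linear in A, so standard propagation gives
# u(f_bar) = |d(f_bar)/dA| * u_A = (2/pi) * u_A
u_fbar = (2 / math.pi) * u_A

print(round(f_bar, 4), round(u_fbar, 4))  # 6.3662 0.3183
```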

Reply to  Weekly_rise
February 20, 2021 8:30 am

W_r, “The central limit applies whether the distribution of the variable in a population is normal or not.”

The CLT allows you to find the mean of the population. If the population is not iid, then knowing the mean merely removes an offset. Removing the mean of error does not reduce the uncertainty about the measurement (or expectation value) mean.

Reply to  Weekly_rise
February 20, 2021 8:24 am

W_r, “The average is made up of multiple measurements of the same thing.”

Different instruments, of different accuracy, different calibration schedules (if scheduled at all), measuring different quantities in different locations and subject to different time-varying sets of uncontrolled environmental variables.

That’s your oh-so-reliable “same thing” air temperature record, W_r.

Lower limit of uncertainty due to systematic measurement error is ±0.5 C. Uncertainty that never, ever averages away.

See also, Essex, McKitrick, and Andresen, Does a Global Temperature Exist? for a chastening brake on a facile concept.

Consensus climate science lives on ignorance, aggressively defended.

Reply to  Pat Frank
February 20, 2021 12:59 pm

Thanks for the reference, Pat. I had totally forgotten about it!

Weekly_rise
Reply to  Pat Frank
February 21, 2021 8:47 am

Your paper is paywalled, so I can’t comment on its contents specifically, but I’m sure there are significant systematic errors in the surface station records that cannot be done away with by merely averaging, as the random error can. Scientists spend enormous amounts of effort dealing with these kinds of errors.

“See also, Essex, McKitrick, and Andresen, Does a Global Temperature Exist? for a chastening brake on a facile concept.”

There has been enough discussion (also here) of this paper for me to conclude that the answer to the question is, “yes, a global temperature exists and it’s a useful thing to estimate.”

Reply to  Weekly_rise
February 21, 2021 10:03 am

go to ab0wr.net:8080 and look for Global Temp. I downloaded it and put it on my web site.

You *can* find the article for free. You just have to look like I did and download it.

Did you not read the excerpt from the GUM that I sent you? Systematic error is *STILL* error. It is a bias that can be corrected for. ERROR IS NOT UNCERTAINTY!

Uncertainty is a lot of things. Did a mud dauber wasp build a nest in the air input to a measurement station? That’s not a systematic error that can be dealt with. It is uncertainty. Did ants get in the station housing and leave grime on the sensors? That is not a systematic error that can be dealt with. It is uncertainty. Sensor drift from aging is *not* a systematic error. It is uncertainty. It can’t be corrected for by lab testing.

The global temp is a joke. Your references totally ignore physics. When you average the temp in Denver at 5000 ft altitude with the temp in Kansas City at 1000 ft altitude, what does the average tell you? The enthalpy is totally different because of the pressure differences at the two locations. The temperature simply won’t tell you that. The humidities at the two locations are vastly different, which affects the enthalpy at each location, and the enthalpy is the *real* heat content, not the temperature. Averaging temperatures from two different locations without considering all the other factors affecting enthalpy is useless. It is the heat content of the Earth system that is the most important, not the temperature. The temperature is a piss-poor proxy for enthalpy.
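The enthalpy point can be made concrete with a common psychrometric approximation for moist-air specific enthalpy, h ≈ 1.006·T + w·(2501 + 1.86·T) kJ/kg dry air (T in deg C, w the humidity mixing ratio in kg/kg); the station values below are invented for illustration.

```python
# Sketch: two sites at the same air temperature can hold very different
# heat content. Uses a common psychrometric approximation for moist-air
# specific enthalpy (kJ per kg of dry air):
#   h ~= 1.006*T + w*(2501 + 1.86*T),  T in deg C, w in kg/kg.
# The station numbers below are made up for illustration.

def moist_enthalpy(t_c, w):
    """Approximate specific enthalpy of moist air, kJ/kg dry air."""
    return 1.006 * t_c + w * (2501.0 + 1.86 * t_c)

# Same 25 C temperature, different moisture content:
h_humid = moist_enthalpy(25.0, 0.015)  # humid, low-altitude site
h_dry   = moist_enthalpy(25.0, 0.004)  # dry, high-altitude site

print(round(h_humid, 2), round(h_dry, 2))  # 63.36 35.34
```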

And neither of your links discusses this at all. For instance, one of the links says “Temperature itself can be inferred directly from several physical laws, such as the ideal gas law and the first law of thermodynamics.”

The problem is that you have to know all the other factors, such as pressure, volume, and absolute humidity, in order to calculate temperature. But the inverse doesn’t apply: you can’t infer enthalpy from temperature alone. And the link doesn’t address this at all. It’s a meaningless assertion.

It also states: “Even though the final temperature of two bodies in contact may not be the arithmetic mean, it will still be a weighted arithmetic mean of the temperatures”

But the average global temperature is *NOT* a weighted mean. It is a direct mean. It doesn’t weight the actual temperature at all because it uses anomalies instead of absolute temps.

That link is useless in showing Essex and McKitrick to be wrong.

Your other link says: “Energy appears to fit that bill, remember that for something homogeneous, E = Cv T”. But the author doesn’t even seem to understand what he is quoting. He couldn’t even quote the equation correctly! It’s E = (Cv)(n)(T), where n is the mass, and the mass depends directly on the specific humidity of the gas at issue.

It’s obvious that you are not trained well enough to even judge which arguments are good and which ones aren’t. You pick the ones that match your preconceptions and refuse to even listen to those that can actually *show*, physically and mathematically, why the assertions are wrong.

Weekly_rise
Reply to  Tim Gorman
February 21, 2021 10:49 am

“go to ab0wr.net:8080 and look for Global Temp. I downloaded it and put it on my web site.
You *can* find the article for free. You just have to look like I did and download it.”

I can find the “does a global mean temperature exist?” paper for free; I can’t find Pat Frank’s 2010 paper for free. If you can find a copy somewhere that would be terrific. Maybe Pat himself can provide one, since he’s the author.

“The global temp is a joke. Your references totally ignore physics. When you average the temp in Denver at 5000ft altitude with the temp in Kansas City at 1000ft altitude what does the average tell you?”

The temperatures are not being averaged, the anomalies are. I agree that it is much more difficult to construct a record from the station network using absolute values, which is why most organizations don’t attempt to do it.

Reply to  Weekly_rise
February 21, 2021 2:32 pm

The anomalies have the same uncertainty as the absolutes. And they are just as meaningless as averages of absolute temperatures from different altitudes and humidities.

Why is it harder to construct a record using absolute temps than it is to construct one from anomalies?

Calculate the absolute average for your base period and for the current period and subtract the two. You still have an anomaly, but you don’t throw away the absolute temperature information. You can still see what the absolute temperature is doing!

You keep trying to pose religious dogma as assertions disproving factual data and mathematics.

Pat was right initially. You are wrong, you know you are wrong, but you are never going to give up your religious belief.

Reply to  Tim Gorman
February 21, 2021 10:01 pm

Tim, it’s even worse than that.

The anomalies have a total uncertainty that is the root-sum-square of the uncertainty of the measured temperature and the rms uncertainty of the normal mean.

So, the uncertainty in an anomaly is always larger than the uncertainty in the absolute air temperature measurement.

Total uncertainty is never, ever, propagated into anomalies in any published work on the global record.

Once again, the consensus world lives on false precision.

Reply to  Pat Frank
February 22, 2021 6:48 am

I’ve got to think about this one. I’m not sure there is a useful normal mean that can be used to determine the anomaly. I don’t know how you get away from the uncertainty associated with the data used to establish the normal mean. When you have a subtraction of two values the uncertainties of each value adds, as you say. But is the uncertainty of the baseline a root mean square or a root sum square just like the measured value?

Reply to  Tim Gorman
February 22, 2021 8:49 am

There is another issue that is probably just as important, and that is time series analysis. I’ve been doing research into this and man, does it get complicated quickly. You cannot just look at a trend and decipher what causes it. There are properties like stationarity that require multi-year data to have constant parameters like variance and standard deviation. To deal with this one must “detrend” and investigate the reasons. Coastal vs. inland stations is one factor, as is seasonality. Lots of things to consider in determining whether a trend is truly growth or perhaps just an increase in variance.

I think GAT is a step too far in trying to use simple regression along with simple averages to determine a global temp.

I like cooling/heating degree days as these take more into account than just absolute temps. The bases are set more realistically for the actual local environment. They do not require weighting or determination of a baseline. They are directly additive and easily trended.

Other than temps right around the cutoff points, the uncertainty of absolute temps is unimportant. Much more satisfactory from a scientific standpoint.
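As a sketch of the degree-day approach described here (the 65 F base is the common convention, not a law, and the week of temperatures is made up): each day's heating and cooling degree days come from the day's max/min, and they sum directly across days.

```python
# Daily heating/cooling degree days from Tmax/Tmin, using the common
# 65 F base. The sample week of (Tmax, Tmin) pairs is invented.

BASE = 65.0  # deg F, conventional base temperature

def degree_days(tmax, tmin, base=BASE):
    """Return (heating_dd, cooling_dd) for one day from its max/min."""
    tmean = (tmax + tmin) / 2.0
    return max(base - tmean, 0.0), max(tmean - base, 0.0)

week = [(72, 50), (80, 60), (55, 40), (90, 70), (65, 45), (60, 52), (85, 66)]

# Degree days are directly additive across days:
hdd = sum(degree_days(tx, tn)[0] for tx, tn in week)
cdd = sum(degree_days(tx, tn)[1] for tx, tn in week)
print(hdd, cdd)  # 40.5 30.5
```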

Reply to  Tim Gorman
February 22, 2021 11:47 am

Hi Tim — determination of the normal mean either has its own rms uncertainty if it’s a mean of 30 years of temperatures during the normal period (e.g., GISS’ 1951-1980 normal period), or an estimated standard deviation if it is determined from a linear fit over those 30 years.

The absolute temperatures also have their measurement error.

The uncertainty of the normal (or its e.s.d.) and the uncertainty in the measurement get combined in quadrature when doing the subtraction to get the anomaly.

So, the uncertainty in the anomaly is always greater than the uncertainty in the original measurement.
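The quadrature combination described above is easy to check numerically (the uncertainty values below are assumed for illustration): subtracting a baseline from a measurement combines their uncertainties in quadrature, so the anomaly is always more uncertain than the raw measurement.

```python
import math

# Assumed values for illustration:
u_meas = 0.5   # deg C, measurement uncertainty
u_norm = 0.2   # deg C, uncertainty of the 30-year normal

# anomaly = T - T_normal  =>  u_anom = sqrt(u_meas^2 + u_norm^2)
u_anom = math.sqrt(u_meas**2 + u_norm**2)
print(round(u_anom, 3))  # 0.539, larger than u_meas alone

# Special case of equal uncertainties: the combination is sqrt(2)*u
print(round(math.sqrt(0.5**2 + 0.5**2), 3))  # 0.707
```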

Reply to  Weekly_rise
February 21, 2021 10:28 am

W_r, “yes, a global temperature exists and it’s a useful thing to estimate.”

Maybe you can email that claim to Chris Essex and have a fine conversation about it.

Here is a pdf reprint of the paper on the global temperature record.

Weekly_rise
Reply to  Pat Frank
February 22, 2021 1:10 pm

Thanks for linking, currently reading through the paper. I don’t understand why you have applied the uncertainty estimate for individual station anomalies to the global trend uncertainty estimate, which should surely be smaller than the uncertainty for individual stations.

Reply to  Weekly_rise
February 22, 2021 2:40 pm

Ask yourself how the trend is calculated!

*WHY* would the global trend uncertainty estimate be smaller than the uncertainty for individual stations?

Because you still want to go back to treating uncertainty as error?

Did you ever do the writing exercise I suggested?

Weekly_rise
Reply to  Tim Gorman
February 23, 2021 9:00 am

Because uncertainty arises from the measurement error. The error for a station is expressed as ±e, and some stations will be + and some will be −.

Reply to  Weekly_rise
February 23, 2021 10:05 am

OMG! That is why root sum square is used instead of a straight addition of the uncertainties!

You don’t have Taylor’s book like you claimed! He covers this in detail!

Get a book like Taylor’s or Bevington’s and actually LEARN SOMETHING!

Weekly_rise
Reply to  Tim Gorman
February 23, 2021 7:35 pm

We aren’t trying to determine the uncertainty of the sum of the measurements, but the uncertainty in the mean. I recommend you give Taylor’s book a read to understand this concept better, Section 4.4 would be a good starting point for you.

Reply to  Weekly_rise
February 24, 2021 6:34 am

You are arguing the wrong thing. Models use an iterative process when calculating an output. Iterative processes are a sum of individual measurements. That is, one measurement stacked on top of a previous measurement.

RSS is the appropriate way to determine combined uncertainty in this situation.

Weekly_rise
Reply to  Jim Gorman
February 24, 2021 2:06 pm

This thread/branch is not discussing model output, but the calculation of the global mean temperature anomaly from surface station measurements. Start back at the February 21, 2021 10:28 am comment from Pat.

Reply to  Weekly_rise
February 24, 2021 3:38 pm

And the mean you calculate from single, independent measurements of different things with different devices *still* has an uncertainty of root sum square of each individual component.

The uncertainty associated with such a mean will be far wider than any standard error you calculate, much, MUCH wider, making your mean unusable for any purpose.

Reply to  Weekly_rise
February 24, 2021 7:06 am

BTW, the first sentence says: “If X1, …, Xn are the results of N measurements of the same quantity, then as we have seen, our best estimate for the quantity X is their mean Xbar.”

First, same quantity is paramount. This is nothing more than saying that the mean will provide THE BEST ESTIMATE of the true value. This also assumes the same device.

If you read further, at no place does this eliminate measurement uncertainty. If the measuring device is only capable of integer measurements then the mean can only be of an integer value. Anomalies out to the 100ths place prior to about 1980 are impossible because the uncertainty in measurement far outweighs the values obtained.

Dr. Taylor says, “Furthermore, we are for the moment neglecting systematic errors, and these are NOT reduced by increasing the number of measurements.” (Caps by me)

Reply to  Weekly_rise
February 24, 2021 9:26 am

Exactly how do *you* calculate a mean? For most people you add up all the data and divide by the number of data points. WHEN YOU ADD THE DATA then you also add the uncertainty root sum square! So the mean has the uncertainty of the root sum square of all the components making up the mean!

And now we are back to you trying to equate random uncertainty, which can be treated statistically, with systematic uncertainty, which cannot.

Taylor’s Chapter 4 starts out: “We have seen that one of the best ways to assess the reliability of a measurement is to repeat it several times and examine the different values determined.”

………………

“As noted before, not all types of experimental uncertainty can be assessed by statistical analysis based on repeated measurements. For this reason, uncertainties are classified into two groups, the random uncertainties, which can be treated statistically, and the systematic uncertainties which cannot.”

————————————

Taylor goes on in 4.1 to describe the difference. 4.4 is about RANDOM UNCERTAINTY associated with random error in multiple measurements.

YOU STILL HAVEN’T COME TO TERMS WITH THE DIFFERENCE BETWEEN UNCERTAINTY AND ERROR.

When are you going to complete your writing assignment? When are you actually going to read Taylor FOR MEANING?

Weekly_rise
Reply to  Tim Gorman
February 24, 2021 2:16 pm

“WHEN YOU ADD THE DATA then you also add the uncertainty root sum square! So the mean has the uncertainty of the root sum square of all the components making up the mean!”

This comment is fascinating to me. You are so vehement in your stance on error estimation, yet you do not seem to understand how the error of the mean is estimated. Please see Taylor, 4.4, The Standard Deviation of the Mean. The standard error, or uncertainty in the mean, is given as:

σ_mean = σ_x/sqrt(N)

Where N is the number of observations and σ_x is the standard deviation. This uncertainty is inversely proportional to the square root of the number of observations.

Physician, heal thyself, read Taylor.

Reply to  Weekly_rise
February 24, 2021 4:10 pm

NO! NO! NO!

You are *still* assuming that single, independent, uncorrelated, temperature readings somehow describe a probability distribution you can analyze statistically.

Take two data sets: data set 1 = (20), data set 2 = (8). Each of these sets is independent and uncorrelated, with an uncertainty of +/- 0.5.

What is the mean of each data set? What is the standard deviation of each data set?

Calculate the combined uncertainty from adding the data points in each of the data sets together.

f = a + b

Use the standard uncertainty formula (from the GUM):

u_f^2 = (∂f/∂a)^2·u_a^2 + (∂f/∂b)^2·u_b^2 + 2·(∂f/∂a)·(∂f/∂b)·u_a·u_b·r(a, b)

where ∂f/∂a and ∂f/∂b are the partial derivatives of f with respect to a and b (both equal to 1 for f = a + b), u_a and u_b are the uncertainties of a and b, and r is the correlation coefficient between a and b. For single, independent measurements r is usually 0. If you wish to use something greater than zero, then justify the correlation using the covariance.

My guess is that you won’t even be able to accomplish this simple task. If you can’t, then you need to admit that you simply don’t know how to propagate uncertainty.

Reply to  Weekly_rise
February 24, 2021 10:02 pm

σ_mean = σ_x/sqrt(N)

Only when the error is iid random.

When the error is systematic,
σ_mean = ±sqrt{[Σ(σ_x_i)^2]/(N−1)}
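The random-versus-systematic distinction drawn above can be illustrated with a short simulation (all numbers invented): the random component of the error averages away as the number of readings grows, but a shared systematic offset survives in the mean untouched.

```python
import random
import statistics

random.seed(0)  # reproducibility

# Invented values for illustration:
TRUE = 20.0   # the "true" temperature, deg C
BIAS = 0.3    # systematic offset shared by every reading (assumption)
SIGMA = 0.5   # SD of the random error component (assumption)

# 10,000 readings, each carrying the same bias plus fresh random noise:
readings = [TRUE + BIAS + random.gauss(0, SIGMA) for _ in range(10000)]
mean = statistics.fmean(readings)

# The random noise has averaged down toward zero; the bias has not.
print(round(mean - TRUE, 2))  # close to 0.3, the systematic offset
```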

Reply to  Weekly_rise
February 23, 2021 10:40 am

Measuring different things with different devices does not cancel uncertainty. Dr. Taylor’s book will explain why if you read it.

Example: I hire someone to read and record 3 different pressure gauges at a nuclear plant that is subject to catastrophic explosion. Would you allow this person to simply average the readings from the three gauges and claim that averaging eliminates uncertainty? And, I might add, you are proposing that averaging would eliminate all error too!

Would you want your DUI blood test run through 3 different measuring devices of unknown accuracy and with unknown measurement uncertainty and the 3 results simply averaged?

That’s uncertainty. What you don’t know and can never know. Think about it. Why don’t we know the speed of light down to the micron/sec? Why don’t we know the exact measurement of a second? Uncertainty! These are all measured multiple times with similar devices, so why don’t the errors cancel and give us an exact answer? Uncertainty!

Reply to  Weekly_rise
February 23, 2021 1:55 pm

W_r, “The error for a station is expressed as ±e, and some stations will be + and some will be −.”

But you don’t know which error offset they will have because you never know the true temperature. In the face of that ignorance, one can only report the measured temperature with the known calibration accuracy bounds.

Field sensor calibration experiments show the measurement error is sometimes positive and sometimes negative, for any one field sensor, depending upon environmental variables.

Reply to  Weekly_rise
February 23, 2021 1:25 pm

W_r, “the uncertainty estimate for individual station anomalies”

It’s a calibration experiment, W_r. Various instruments are maintained under ideal conditions. Their measurements are compared with the measurements of a high-accuracy standard.

The measurement errors produced under ideal conditions calibrate the instrument as a type.

The calibration errors are characteristic of the instrument under field conditions. They are applicable as representative to other homologous field-deployed instruments in other places and times.

Example: you calibrate a mercury thermometer. You learn how accurate it is. Other people use the same brand of mercury thermometer. Your calibration gives you a good idea about the accuracy of their measurements.

Weekly_rise
Reply to  Pat Frank
February 23, 2021 7:36 pm

It gives you a good idea of the uncertainty of the measurement for a single thermometer, but that doesn’t give you the uncertainty in the estimate of the mean of several measurements.

Reply to  Weekly_rise
February 24, 2021 7:14 am

Let me finish your sentence: “… the mean of several measurements …” OF THE SAME THING BY THE SAME DEVICE. Not different measurements at different stations with different devices.

For most of recorded temperature history you are dealing with single measurements of a single thing with a single device.

Reply to  Weekly_rise
February 24, 2021 9:34 pm

Each calibration experiment included thousands of temperature measurements, for each sensor.

The specific error was determined for each of those calibration measurements by direct comparison with the temperature standard measured at the same moment using a high-accuracy sensor.

The total calibration uncertainty is the root-mean-square of all the thousands of individual measurement errors determined for each of the sensors.

Such calibrations indeed provide the uncertainty for any measurement subsequent to the calibration.

When a field calibrated sensor is left in the field for station use, every single one of its measurements is conditioned with the ±u calibration uncertainty.

And when two t±u measurements are subtracted, the uncertainty in the anomaly is sqrt(2·u^2).

Reply to  Weekly_rise
February 21, 2021 10:59 am

You’re just arguing from authority. Again.

The point of Essex, McKitrick and Andresen is that the average temperature of multiply variable sites is physically meaningless. That’s true.

The only physically meaningful global temperature is the terrestrial radiant temperature as seen from space.

Reply to  Weekly_rise
February 17, 2021 6:32 pm

Standard error of the mean IS NOT UNCERTAINTY. It is not an indication of precision. It is a statistical parameter that gives you the interval within which the mean of the sample means may lie.

The “standard error of the mean” is also only used when you are sampling. Temperature data from a station IS NOT a sample; it is an entire population. Take the simple mean of the population; you can never do better. I have shown this. Take a population of temperatures from a station. Sample the hell out of it. You know what the mean of the sample means will be? The simple mean of the population.

Here is a definition of SEM from:

https://byjus.com/maths/standard-error-of-the-mean/#:~:text=Standard%20Error%20of%20the%20Mean%20The%20standard%20error,is%20the%20sample%20mean%2C%20in%20a%20sample%20space.

It sounds like you need some in-depth study of basic statistics. You won’t get it by reading climate scientists’ usage of statistics.

Weekly_rise
Reply to  Jim Gorman
February 18, 2021 7:16 am

“Standard error of the mean IS NOT UNCERTAINTY. It is not an indication of precision. It is a statistical parameter that gives you the interval of where the mean of a sample mean may lay.”

I’m quite bewildered by your stance on this. Uncertainty in an estimate arises from error in the measurements underlying the estimate. If there were no error in the measurements our estimate would be exactly equal to the true value.

The standard error of the mean is used when you are trying to understand the uncertainty in a calculated average. The surface station data are indeed a sample from the full population of points on earth’s surface. We are taking the stations to be representative of the entire planet.

Reply to  Weekly_rise
February 18, 2021 8:14 am

I am not here to be your statistics teacher. Suffice it to say that “The surface station data are indeed a sample from the full population of points on earth’s surface.” is not a valid assumption. I will give you a hint: a sample must have a similar distribution as the population. A single station does not fit that rule, so it can’t be a stand-alone sample of the population.

Take some pertinent classes if you wish to learn what some of the errors are in computing a global temperature via anomalies.

The fact that you are stuck on statistical parameters should be an indication of something wrong. You are like a lot of climate scientists, stuck in a world of probabilities. Go learn something about trending time series. Stock analysts, economists, and others will run laps around you.

Reply to  Weekly_rise
February 18, 2021 8:42 am

As above, each station is a stand-alone population. You can’t simply combine populations into an average without also calculating the combined variance. You will soon find the variance grows to such an extent that your true value is meaningless. You’ll get means and standard deviations like 1 +/- 5.0.
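The pooling point can be checked numerically (the two temperature lists are invented for illustration): lumping together two populations with well-separated means inflates the combined variance far beyond the spread within either one.

```python
import statistics

# Two invented station "populations" with the same within-site spread
# but very different means:
site_a = [10.0, 11.0, 9.0, 10.5, 9.5]    # cool site
site_b = [30.0, 31.0, 29.0, 30.5, 29.5]  # warm site

within = statistics.pvariance(site_a)          # spread within one site
total = statistics.pvariance(site_a + site_b)  # spread of the pooled data

# The pooled variance is the within-site variance plus the variance
# contributed by the separation of the two means, which dominates here.
print(within, total)  # 0.5 100.5
```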

Weekly_rise
Reply to  Jim Gorman
February 18, 2021 12:01 pm

A station is a device that takes a measurement of the temperature at a single point on the earth’s surface. The collection of all points on the earth’s surface is the population, and the sample is the collection of points where we have measurements of the temperature from stations. The stations themselves are not populations.

Reply to  Weekly_rise
February 18, 2021 2:51 pm

The collection of all points on the earth’s surface is *NOT* a single population! It is a multiplicity of independent measurements.

Think of it this way. I go to Africa and measure the heights of 100 individual pygmies. Then I measure the heights of 100 individual Watusis.

Can I conclude these measurements represent a total population? When I average all of the combined data points exactly what does that average tell me? I can’t order pants sized to the average and expect them to fit *anyone*! And it doesn’t matter how accurately you calculate the mean!

Consider this: The growing zone for Wisconsin is *vastly* different than the growing zone for Oklahoma. What does it tell you to average the daily temp readings for a week in Wisconsin and for the same week in Oklahoma? Is the average of all the readings going to tell you *anything* about the climate in either location? They are both locations on the earth! Why wouldn’t their temperatures be part of the same population?

Weekly_rise
Reply to  Tim Gorman
February 19, 2021 9:56 am

It is certainly a population. The population of all points on the earth’s surface. That is how we’ve defined it. We want to estimate the average temperature anomaly of every point on the surface, so we will draw a sample from the population and calculate the anomaly for that subset.

In your example, you have never defined what the population is you are trying to understand. Is it the population of pygmies and Watusis? Is it the population of people in Africa? On the earth? Your average will be representative of some quantity – whether that quantity has any relevance to you depends on what it is you’re trying to understand.

We use the anomaly precisely because different regions of the earth have different climates, and the thing we are trying to understand is whether those climates are changing (is Wisconsin warmer now than it was 20 years ago? Is Oklahoma warmer now than 20 years ago? Is everywhere on earth warmer now than 20 years ago? By how much? These are the questions we want to answer).

Reply to  Weekly_rise
February 19, 2021 11:04 am

Anomalies have so many problems it isn’t funny.

1) percentage weighting is not done. 1 degree at an absolute temp of 0 F is a lot different than 1 degree at 90 F.

2) much of the Southern Hemisphere has no data prior to the satellite era. This makes the distribution totally out of whack.

3) anomalies calculated to one or two decimal points on integer recorded temperatures.

This is from Washington University in St. Louis. Read it, heed it. Then work up a response about how anomalies don’t violate the rules practiced by real, physical scientists.

“Significant Figures: The number of digits used to express a measured or calculated quantity.
By using significant figures, we can show how precise a number is. If we express a number beyond the place to which we have actually measured (and are therefore certain of), we compromise the integrity of what this number is representing. It is important after learning and understanding significant figures to use them properly throughout your scientific career.”

4) it is impossible to corroborate the Global Average Temperature anomalies with any local/regional temperatures. In other words, the sum of the parts DON’T add up to the whole. Which do you suppose is wrong?

Reply to  Weekly_rise
February 19, 2021 11:19 am

Combining the population of pygmies and Watusis is no different than combining the temperature measurements of Alaska with Texas, or Niger with Thailand.

If the quantity has no relevance then that is a perfect indicator that taking the average of the combined population is meaningless.

What does combining the temperatures in Alaska with those in South Africa tell you when you find the average? Exactly what do you expect that quantity to tell you? What are you trying to understand when you take that average?

The temperatures are thousands of individual populations with a size of one. They are *NOT* correlated members of a population such as “humans”. How do you sample from a population of size one?

First off, anomalies tell you almost nothing. An anomaly of 0.1C in Florida is quite a bit different than an anomaly of 0.1C in Alaska. A 0.1C anomaly where the temp is 10C is of significantly more importance than where the temp is 20C. Yet when you do nothing but add the anomalies together, you totally lose the capability of differentiating between them!

Second, that anomaly carries with it the very same uncertainty that the absolute temperature does. You don’t eliminate the uncertainty by finding an anomaly. In fact it increases, because the baseline used for finding the anomaly has its own uncertainty, and when you subtract the temperature from the baseline the uncertainties add by root sum square – in other words, the anomaly uncertainty GOES UP.

So, if the anomaly in Florida is 0.1C and in Alaska is 0.1C does that tell you anything about what the global climate is doing?

Remember, the climate models don’t tell you what is happening in Florida as opposed to Alaska; they tell you what they think is happening globally!

Weekly_rise
Reply to  Tim Gorman
February 21, 2021 8:04 am

“What does combining the temperatures in Alaska with those in South Africa tell you when you find the average? Exactly what do you expect that quantity to tell you? What are you trying to understand when you take that average?”

This is exactly the point of anomalies, is it not? The thing we want to know is not what the average is between the two places, but how much the temperature is changing at all places over time.

“So, if the anomaly in Florida is 0.1C and in Alaska is 0.1C does that tell you anything about what the global climate is doing?”

It does not tell you a lot, but it does tell you both places are warmer than average. If you determine that a multitude of places all over the globe are getting warmer than their averages over time, that is a strong indicator of a large-scale climate shift.

Reply to  Weekly_rise
February 21, 2021 8:43 am

The point is to tell you something about the CLIMATE! Anomalies, especially averaged anomalies, tell you NOTHING about the climate, not globally and not locally.

Again, a 0.1C anomaly in Alaska is a far different thing than a 0.1C anomaly in Miami. So what does the 0.1C anomaly tell you?

The fact is that it tells you NOTHING. And a 0.1C anomaly calculated using a mid-range temperature TELLS YOU NOTHING – it does *not* tell you if max temps are going up or if min temps are going up.

If I tell you that the mid-range temp went from 20C to 20.1C, can *YOU* tell me if max temps went up or if min temps went up?

Weekly_rise
Reply to  Tim Gorman
February 21, 2021 9:03 am

"The fact is that it tells you NOTHING. And a 0.1C anomaly calculated using a mid-range temperature TELLS YOU NOTHING – it does *not* tell you if max temps are going up or if min temps are going up."

If what you're interested in is whether the max or min is increasing through time, then just calculate the max temp anomaly instead of the daily mid-point. Both are useful pieces of information.

Reply to  Weekly_rise
February 21, 2021 10:12 am

Why don't the modelers do this? It would tell you *far* more about the climate than the mid-range does!

And, btw, I *have* looked at the integral of Tmax at all kinds of places around the world. And in most places it's been on a downtrend over the past decade!

What's that tell you? Why do the modelers ignore this?

Think *money*!

Weekly_rise
Reply to  Tim Gorman
February 22, 2021 1:17 pm

Scientists do look at min/max ranges. That is why we know that night time temps are warming faster than day time temps, for instance.

Reply to  Weekly_rise
February 22, 2021 2:43 pm

Tmax is *not* increasing globally. Only Tmin. There is no “faster”.

You are still spouting religious dogma from the so-called climate scientists.

Reply to  Weekly_rise
February 21, 2021 9:46 am

Actually, it does not tell you anything without also stating the variance. If you want to deal in statistical probability, you MUST also deal with variance; the two go hand in hand. Simply looking at a number like a 0.1 anomaly doesn't tell you if the numbers included are both 0.1, or if one is 1 and the other is -0.8. Only knowing the variance will tell you.

Trending anomalies with simple regression is incorrect also. These are TIME SERIES and need to be treated as such. For example, are the individual station trends stationary so they can be combined directly? Are some only partial records? On and on.
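
A toy example of that point: the same 0.1 anomaly average can come from wildly different underlying numbers, and only the variance distinguishes them (all values assumed for illustration).

```python
from statistics import mean, pvariance

# Two hypothetical pairs of station anomalies with the same average, 0.1 C
pair_a = [0.1, 0.1]
pair_b = [1.0, -0.8]

print(round(mean(pair_a), 6), round(mean(pair_b), 6))            # 0.1 0.1 -- identical means
print(round(pvariance(pair_a), 6), round(pvariance(pair_b), 6))  # 0.0 0.81 -- very different spreads
```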

Reply to  Weekly_rise
February 18, 2021 6:04 pm

Your use of the term "the sample is the collection of points" tells me you have no idea what you are talking about. Sampling is taking groups of samples from the collection of measurements, finding the mean of each group, then finding the mean of the sample means.

Give up. You are a troll.

Reply to  Weekly_rise
February 18, 2021 8:59 am

"We are taking the stations to be representative of the entire planet."

And this is your entire problem. The representation has such an inherent uncertainty that you simply don’t know what it is telling you!

Reply to  Tim Gorman
February 17, 2021 10:47 am

Tim, W_r is going to believe what he likes, no matter what.

His comments show no understanding of what modelers actually do, or how the models are used, or about calibration error, or about propagation of error or what it means.

He waves his hands and all the problems disappear. It’s hopeless. He’s a believer.

Marcus
February 11, 2021 7:10 am

“The models are obviously not reproducing the natural climate cycles or oscillations, like the AMO, PDO and ENSO. As can be seen in Figure 2 they often are completely out-of-phase for years, even when they are just two runs from the same model.”

But climate model runs shouldn’t be expected to be in phase with the observed cycles/oscillations or with each other! If we were to build a time machine and go back to 1900, kill a butterfly, and then watch the climate evolve again, Earth2 and Earth1 would drift out of phase with each other too! This is basic Lorenz chaos theory. Climate models are an attempt to simulate Earth2s, 3s, 4s, etc., not Earth1s (except for a few exceptions that use observed data for adjustments, like reanalysis models, but reanalysis can’t be used for projections because we don’t have observations of the future!). This is also exactly why ensemble means are useful – they tease out the forced changes from the internal variability. And then they can be compared against an observed temperature record that is adjusted for the most obvious sources of variability like using MEI regressions to adjust for ENSO variability.

FYI: there's a lot of active research on how models do in terms of ENSO, AMO, PDO, and so forth – not on whether they are in phase (because they aren't expected to be!), but whether the frequency, amplitude, and other phase-independent characteristics of these cycles look realistic. And while models have improved in terms of their ENSO-type patterns, they still have a long way to go, so if you are going to critique them, critique them for that, not for being out-of-phase. Using Google Scholar you can easily find papers that look at ENSO behavior in climate models – see https://www.nature.com/articles/s41467-020-17983-y for an example, with a conclusion that "While climate models reproduce observed ENSO amplitude relatively well, the majority still simulates its asymmetry between warm (El Niño) and cold (La Niña) phases very poorly."
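
One common phase-independent diagnostic of that warm/cold asymmetry is the skewness of an SST-anomaly index such as Nino3.4: observed ENSO is positively skewed, with El Niño extremes larger than La Niña extremes. A minimal sketch, using made-up index values chosen only to illustrate the calculation:

```python
import statistics

# Synthetic, made-up Nino3.4-style anomalies: warm spikes larger than cold dips
anoms = [0.3, -0.2, 1.8, -0.9, 0.1, 2.4, -1.1, 0.0, -0.4, 2.1, -0.8, 0.2]

def skewness(xs):
    """Sample skewness: third central moment over the cube of the population std dev."""
    m = statistics.mean(xs)
    s = statistics.pstdev(xs)
    return sum((x - m) ** 3 for x in xs) / (len(xs) * s ** 3)

# Positive skewness -> warm-phase (El Nino) asymmetry, the feature the
# cited paper says most models simulate poorly
print(skewness(anoms) > 0)  # True
```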

Weekly_rise
Reply to  Andy May
February 11, 2021 12:33 pm

I don't think you've demonstrated that >50 year oscillations are out of phase in CMIP6. As far as I can see, the >15 year oscillations appear to be pretty well in phase. See, e.g., your figures 2-4.

Weekly_rise
Reply to  Andy May
February 11, 2021 2:03 pm

Andy, my choice of 15 years was arbitrary, but as I understood your comment to Marcus you were arguing that the models fail to show coherency for ~60 year + oscillations, which does not seem to be true.

Weekly_rise
Reply to  Andy May
February 12, 2021 9:46 am

I fail to see where the models lack coherency on multidecadal oscillations. That isn’t evident from Curry’s paper and it isn’t evident in your post. I am also entirely unconvinced that showing coherency over interannual timescales is actually a test of model skill, since my understanding is that the models aren’t intended to show the same internal variability (weather), they should only show similar forced changes.

Weekly_rise
Reply to  Andy May
February 12, 2021 12:39 pm

Curry's stadium wave hypothesis predicted that the mid-2000s global warming hiatus should extend into the 2030s. In fact this seems to have been a central prediction of the hypothesis.


So color me unimpressed with Curry's stadium wave hypothesis.

And, again, your figure 3 does not show incoherency among the model results over multi-decadal oscillations (not least perhaps because it only covers two decades). This “yeah-huh” “nuh-uh” merry go round doesn’t seem especially productive. You just keep insisting that something is contained in the figure that simply isn’t.

Weekly_rise
Reply to  Andy May
February 15, 2021 9:39 am

The satellites do not show a continuation of the pause either:


There’s simply no way to look at what has happened since 2013 and think Curry’s hypothesis has any predictive capability whatsoever.

Reply to  Marcus
February 11, 2021 8:03 am

Marcus,

"But climate model runs shouldn't be expected to be in phase with the observed cycles/oscillations or with each other!"

Did you *actually* read what you just wrote? If they can’t reproduce OBSERVED CYCLES/OSCILLATIONS then what good are they? Why even have models? Just draw a line on a piece of paper and call it good!

Lrp
Reply to  Marcus
February 11, 2021 2:00 pm

Yes, the models work very well based on a preconceived conclusion. Just like any work of fiction.

February 11, 2021 7:47 am

OT, but interesting

Glacial episodes of a freshwater Arctic Ocean covered by a thick ice shelf
Following early hypotheses about the possible existence of Arctic ice shelves in the past1,2,3, the observation of specific erosional features as deep as 1,000 metres below the current sea level confirmed the presence of a thick layer of ice on the Lomonosov Ridge in the central Arctic Ocean and elsewhere4,5,6. Recent modelling studies have addressed how an ice shelf may have built up in glacial periods, covering most of the Arctic Ocean7,8.
So far, however, there is no irrefutable marine-sediment characterization of such an extensive ice shelf in the Arctic, raising doubt about the impact of glacial conditions on the Arctic Ocean. Here we provide evidence for at least two episodes during which the Arctic Ocean and the adjacent Nordic seas were not only covered by an extensive ice shelf, but also filled entirely with fresh water, causing a widespread absence of thorium-230 in marine sediments.
We propose that these Arctic freshwater intervals occurred 70,000–62,000 years before present and approximately 150,000–131,000 years before present, corresponding to portions of marine isotope stages 4 and 6. Alternative interpretations of the first occurrence of the calcareous nannofossil Emiliania huxleyi in Arctic sedimentary records would suggest younger ages for the older interval.
Our approach explains the unexpected minima in Arctic thorium-230 records9 that have led to divergent interpretations of sedimentation rates10,11 and hampered their use for dating purposes. About nine million cubic kilometres of fresh water is required to explain our isotopic interpretation, a calculation that we support with estimates of hydrological fluxes and altered boundary conditions. A freshwater mass of this size, stored in oceans rather than on land, suggests that a revision of sea-level reconstructions based on freshwater-sensitive stable oxygen isotopes may be required, and that large masses of fresh water could be delivered to the north Atlantic Ocean on very short timescales.

February 11, 2021 7:52 am

<a href="https://www.nature.com/articles/s41586-021-03186-y.epdf?sharing_token=8j6ZFdrdRGwsJ6xqIslP_9RgN0jAjWel9jnR3ZoTv0MgVZIE1CyIetfa5DtHHVe66_x9u0V2IBfbLmqeFQQw9ksVxZ0NbFNdFL396W29VYsEDBxZWcqnHp3Nwl7m2LjD7Y9IAQssa92km5lf9rmgbQV9VdqEFx9DBHYFVJA6OBE%3D">Seems to be the full paper</a>

Reply to  Krishna Gans
February 11, 2021 7:54 am

Seems to be the full paper
Sorry, formatting error!

Kevin
February 11, 2021 8:17 am

If historical forcings are used until 2014, does this mean that the models have been making independent predictions for only six years? Prior to 2014, if they know the CO2 levels, they presumably know the global avg. temperature. Then what is to stop tuning the models to the answer key?

Kevin
Reply to  Andy May
February 11, 2021 9:17 am

By "know" I mean the temperature target to which they are tuning, which would be the averages they have been calculating from their datasets.

Yes, the process is an example of circular reasoning: CO2 causes temperature rise -> take average of global temperatures skewed to show increases -> tune climate models to show temperatures increasing -> conclude that models demonstrate CO2 causes temperature to rise.*

* except where it doesn't, which is attributed to reductions in air particulates.

Reply to  Andy May
February 11, 2021 10:51 am

I’m a little confused. If the models are tuned to the observations prior to 2014, doesn’t that mean the models will ‘predict’ observations of known cycles like La Nina and El Nino prior to 2014? How could they possibly be tuned to predict these things, yet also assume natural forcings amount to zero.

Reply to  Chris Nisbet
February 11, 2021 12:00 pm

The models are not good enough to predict future La Niñas and El Niños. They aren't even good enough to show the ones in the past!

February 11, 2021 8:36 am

Just to remind everyone, CMIP6 climate models have no predictive value.

The AR6 is based on false science. Pseudo-science is the operative term.

The whole of consensus climatology lives — battens — on false precision.

Reply to  Pat Frank
February 11, 2021 4:58 pm

My predictive model down thread, covers all CMIP6 SSP-RCPs. I just need to get it accepted by the UN-IPCC.

Reply to  RickWill
February 12, 2021 3:12 pm

I took a look, Rick. I'd say you nailed the best likelihood outcome. 🙂

Alasdair Fairbairn
February 11, 2021 8:46 am

The IPCC is fighting on the back foot now, and we can expect a raft of cognitive-dissonance papers to come out of the coming COP 26 boondoggle.
It is all just politics now. The science has gone with the wind.

Alasdair Fairbairn
February 11, 2021 9:00 am

The UN and its acolytes have morphed into a very dangerous entity in that there is no global constitutional mechanism to challenge its activities. It thus has agendas above its station, with eyes on getting control of the levers of power.

Reply to  Alasdair Fairbairn
February 11, 2021 4:52 pm

Yes, the fact that Taiwan is not recognised as a sovereign state by the UN shows its true colours. The would-be autocrats at the UN are beholden to sovereign states for their funding. They have been continually hatching plans to establish a guaranteed source of revenue. Back in the late 90s they proposed an email tax. If that had got past the starter's gun they would now have a UN tax on every electronic transaction. It would only need to be a fraction of a cent to provide the independent income they crave.

Imagine the audacity of Trump to cut the funding to the UN-WHO. No place in leadership for a sovereign head who does not bow to the UN; unless they are backed by the CCP of course.

Climate “ambition” is the golden egg that the UN is trying to hatch right now. They only need to cream a 5% administration fee to get a healthy permanent income stream if they get the “ambition” they seek.

Bill Rocks
February 11, 2021 9:34 am

Fig. 1 looks like a hockey stick to me.

Rob_Dawg
February 11, 2021 10:46 am

"All these model runs use the ssp245 emissions scenario, which is the CMIP6 version of RCP 4.5, as far as I can tell. Thus, it is the middle scenario."

I would suggest that since RCP 8.5 has been completely debunked, that leaves RCP 4.5 as the high scenario, not the middle.

Lrp
February 11, 2021 11:08 am

What's the point of the models?

February 11, 2021 11:13 am

Bottom line, is AR6 going to be any different from AR4 or AR5? Are any of these documents worth the paper and ink?

Answer: No. GIGO.

Fran
February 11, 2021 1:41 pm

The conclusion that the ignorant will draw from figures such as these is that the climate (weather) was predictable before 1990. It will help sell the ‘extreme weather’ meme.

February 11, 2021 1:50 pm

No anomalies for me. My model comes straight out with the forecast Global Average Surface Temperature for the next 80 years. It also excludes the natural cycles. I consider the ocean cycles just noise in the context of climate. The orbital changes matter but there is nothing of consequence there yet.

In the longer term, the tropical Atlantic is the location to monitor. If it fails to achieve the 30C controlled maximum temperature in an annual cycle then that will be a hint for the start of the next glaciation.

I know I will not be around to see it, but I will make certain my grandchildren are aware of my forecast. Maybe I can convince a few of the new generation of true scientists to take a fresh look at reality rather than the highly manipulated view presently on offer.

[Attachment: Temp_Forecast.png]

February 11, 2021 2:10 pm

Andy,
A little exercise since you have the data. Rather than comparing anomalies, compare a few of the extreme examples showing the actual temperature forecast from say 2020 to 2030.

That will give more significance to the actual variation between models.

Then do the same thing over the Nino4 region, where the temperature cannot exceed 30C. That makes it very clear if the models are unphysical.

Reducing models to anomalies avoids any debate on what the models are using as the current surface temperature. Looking at the next decade is a time frame of interest.

Reply to  RickWill
February 11, 2021 4:25 pm

Just to sample the nonsense purveyed as science. The attached chart is from BCC-CSM2.

By inspection the average for 2020 is 289K.

[Attachment: Screen Shot 2021-02-12 at 11.23.22 am.png]

Reply to  RickWill
February 11, 2021 4:33 pm

Then take a look at AWI-CM-1 showing an average of 288K for 2020.

So a “whopping” 1 degree difference between the two models for last year.

Isn’t that the entire warming that is going to send us all to hell in 12 years?

That is only the output from two of these computational turds. I bet it would get worse if I sampled them all.

The CMIP6 data on KNMI no longer has prw (column water vapour), which enabled the trend in water vapour to be integrated. They now offer only pr (precipitation); just one part of the story.

[Attachment: Screen Shot 2021-02-12 at 11.27.27 am.png]

Reply to  RickWill
February 11, 2021 7:23 pm

The average of FGOALS-g3 is not far from my prediction of 287K (14C).

This model is already 2K cooler than the BCC model – I pick this one.

[Attachment: Screen Shot 2021-02-12 at 2.19.30 pm.png]

Keith Harrison
February 11, 2021 2:42 pm

Those interested in watching a 45-minute Jan 21 presentation on climate, go to this site and find the password provided for the video, which equally destroys the CMIP model runs, sea level, and forest fires. Good charts and graphs. https://clintel.org/new-presentation-by-john-christy-models-for-ar6-still-fail-to-reproduce-trends-in-tropical-troposphere/?mc_cid=1f85683f49&mc_eid=8edf2b0091

Frank Hansen
February 11, 2021 3:25 pm

The method of averaging model outputs may look like a plausible approach to people who don't really understand statistics. It is based on a false analogy with continued sampling from a distribution with unknown parameters. We know that averaging polls tends to give a more accurate estimate, but that is thanks to the central limit theorem, which depends on a number of assumptions. One of these is that you continue sampling from the same population. This setting has no meaningful counterpart when it comes to modeling. There is no central limit theorem for models. At the very least, modelers should try to prove such a theorem before applying it.
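
The contrast can be illustrated with a toy simulation: repeated samples from one population obey the central limit theorem, while an "ensemble" of models with different systematic biases converges on the average bias, not the truth. All numbers here are invented for illustration.

```python
import random
import statistics

random.seed(42)

true_value = 14.0  # hypothetical "true" global temperature, deg C

# Case 1: repeated polls of ONE population. The CLT applies: the spread
# of the poll means shrinks as the sample size grows.
polls_25 = [statistics.mean(random.gauss(true_value, 1.0) for _ in range(25))
            for _ in range(200)]
polls_400 = [statistics.mean(random.gauss(true_value, 1.0) for _ in range(400))
             for _ in range(200)]
print(statistics.pstdev(polls_25) > statistics.pstdev(polls_400))  # True

# Case 2: "models" with fixed systematic biases. Averaging more of them
# converges on the mean bias, not on the true value.
model_biases = [0.8, -0.3, 1.2, 0.5, -0.1]  # hypothetical offsets, deg C
ensemble_mean = statistics.mean(true_value + b for b in model_biases)
print(round(ensemble_mean - true_value, 2))  # 0.42 -- the average bias
```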

February 11, 2021 7:50 pm

Basic failure to communicate?

The statement "Actual vapor pressure is a measurement of the amount of water vapor in a volume of air" is from the University of Illinois meteorology web site at http://ww2010.atmos.uiuc.edu/(Gh)/guides/mtr/cld/dvlp/rh.rxml. It is incorrect. What they are referring to is the 'partial pressure' of the WV in the total pressure of the atmosphere. Vapor pressure is a property of (in this case) liquid water that depends only on its temperature. The correct description of vapor pressure, as commonly used, is given here: https://tinyurl.com/yjqy7r5x

I wonder how widespread this mistake is and whether it is contributing to the failure of the GCMs.

[Attachment: Aintsm 1850 2020.jpg]

Geoff Sherrington
Reply to  Dan Pangburn
February 12, 2021 2:49 am

DP,
There are more errors.
For example, in general climate research, you often see pH defined as the negative logarithm of the hydrogen ion concentration.
The proper definition uses “activity”, not “concentration”.
Activity is related to concentration by factors dominated by other species in the solution. For example, the Na+ and Cl- ions in salt water influence the connection, expressed in part by the Debye-Huckel equations. Sadly, the presence of suspended solids also affects some methods for the determination of pH, in ways that were unsolved last time I looked at the topic.
Yet, using the wrong definition, they (some get it right, to be fair) go on to express pH in tiny increments, like quoting sea water to 2 or 3 significant figures when you are lucky to do better than 1, as in (say, for sea water) pH 8.1, which is moderately alkaline and not at all acidic. Geoff S, Chemist
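
The activity-versus-concentration distinction can be made concrete with the Debye-Huckel limiting law. Note this law only holds for dilute solutions; seawater's ionic strength (roughly 0.7 mol/kg) is far outside its range, so the numbers below are purely illustrative.

```python
import math

def log10_gamma(z, ionic_strength, a_const=0.509):
    """Debye-Huckel limiting law: log10(gamma) = -A * z^2 * sqrt(I).
    A ~ 0.509 for water at 25 C; valid only for dilute solutions."""
    return -a_const * z**2 * math.sqrt(ionic_strength)

# Hypothetical dilute solution: I = 0.01 mol/kg, [H+] = 1e-8 mol/kg
conc_h = 1e-8
gamma = 10 ** log10_gamma(1, 0.01)   # activity coefficient, < 1
activity_h = gamma * conc_h

ph_from_conc = -math.log10(conc_h)          # 8.0  (the concentration-based definition)
ph_from_activity = -math.log10(activity_h)  # ~8.05 (the activity-based definition)
print(round(ph_from_conc, 2), round(ph_from_activity, 2))  # 8.0 8.05
```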

February 12, 2021 8:40 am

I've purchased some very strong hip waders to prepare for the events. For those who don't already know, sturdy hip waders are required when wading into sewage.

Glad you show a sense-of-humor (and an accurate one), as this “event” isn’t funny or of any real scientific importance other than to the warmarxists. Bottom line — do they narrow the ECS or just maintain the same nonsense-range? Rhetorical question of course.

Neville
February 12, 2021 3:05 pm

Andy, what do you think about Dr John Christy's lecture at the GWPF in London in 2019?
He put nearly all their claims about so-called CAGW to the test and found only the Russian model was close. Your thoughts please?
He also found very little evidence for their HOT SPOT when compared to data or observations since 1979. BTW, what will happen when the AMO moves to the cool phase, perhaps by 2030? Just asking.
Here's the link to Dr Christy's lecture, where he puts so many of their claims (or wishful thinking) to the test. Any thoughts?

https://www.thegwpf.com/putting-climate-change-claims-to-the-test/

sylvesterdeal61
February 14, 2021 7:37 am

William Haas
February 15, 2021 12:55 am

If they really knew what they were doing, they would by now be running only one model: the one that best fits what has actually happened. Parameterization should not be allowed, because all it does is cover up modeling errors. The average of errant results from errant models is also an errant result, and is really nonsense.