Guest Post by Willis Eschenbach
There’s a new open access paper in Nature Communications, entitled “A tighter constraint on Earth-system sensitivity from long-term temperature and carbon-cycle observations”, by Wong et al., hereinafter Wong2021. Gavin Schmidt, GISS programmer to the stars, lauds it on Twitter. The Abstract says:
The long-term temperature response to a given change in CO2 forcing, or Earth-system sensitivity (ESS), is a key parameter quantifying our understanding about the relationship between changes in Earth’s radiative forcing and the resulting long-term Earth-system response. Current ESS estimates are subject to sizable uncertainties. Long-term carbon cycle models can provide a useful avenue to constrain ESS, but previous efforts either use rather informal statistical approaches or focus on discrete paleoevents. Here, we improve on previous ESS estimates by using a Bayesian approach to fuse deep-time CO2 and temperature data over the last 420 Myrs with a long-term carbon cycle model. Our median ESS estimate of 3.4 °C (2.6-4.7 °C; 5-95% range) shows a narrower range than previous assessments. We show that weaker chemical weathering relative to the a priori model configuration via reduced weatherable land area yields better agreement with temperature records during the Cretaceous. Research into improving the understanding about these weathering mechanisms hence provides potentially powerful avenues to further constrain this fundamental Earth-system property.
So I got to thinking about their paper. The first thing that made my urban legend detector start ringing was a statement in the Abstract above that you might have gone right past, viz:
We show that weaker chemical weathering relative to the a priori model configuration via reduced weatherable land area yields better agreement with temperature records during the Cretaceous.
Translated from Scientese into English, one possible meaning of this is:
We adjusted the climate model’s tunable parameters so the output agrees better with our theory that CO2 controls the climate.
Not an auspicious start …
All of this is based around a computer model called GEOCARBSULF, which is a long-term (millions of years) carbon and sulfur cycle model used to estimate past CO2 levels. So I got to wondering … just how many tunable parameters are there in the GEOCARBSULF model?
But before I discuss the number of GEOCARBSULF tunable parameters, why is the number of tunable parameters important? There’s a famous story about Freeman Dyson and Enrico Fermi that explains this issue well. Here it is in Dyson’s own words:
We began by calculating meson–proton scattering, using a theory of the strong forces known as pseudoscalar meson theory. By the spring of 1953, after heroic efforts, we had plotted theoretical graphs of meson–proton scattering. We joyfully observed that our calculated numbers agreed pretty well with Fermi’s measured numbers. So I made an appointment to meet with Fermi and show him our results. Proudly, I rode the Greyhound bus from Ithaca to Chicago with a package of our theoretical graphs to show to Fermi.
When I arrived in Fermi’s office, I handed the graphs to Fermi, but he hardly glanced at them. He invited me to sit down, and asked me in a friendly way about the health of my wife and our newborn baby son, now fifty years old.
Then he delivered his verdict in a quiet, even voice. “There are two ways of doing calculations in theoretical physics”, he said. “One way, and this is the way I prefer, is to have a clear physical picture of the process that you are calculating. The other way is to have a precise and self-consistent mathematical formalism. You have neither.”
I was slightly stunned, but ventured to ask him why he did not consider the pseudoscalar meson theory to be a self-consistent mathematical formalism. He replied, “Quantum electrodynamics is a good theory because the forces are weak, and when the formalism is ambiguous we have a clear physical picture to guide us. With the pseudoscalar meson theory there is no physical picture, and the forces are so strong that nothing converges. To reach your calculated results, you had to introduce arbitrary cut-off procedures that are not based either on solid physics or on solid mathematics.”
In desperation I asked Fermi whether he was not impressed by the agreement between our calculated numbers and his measured numbers. He replied, “How many arbitrary parameters did you use for your calculations?” I thought for a moment about our cut-off procedures and said, “Four.” He said, “I remember my friend Johnny von Neumann used to say, with four parameters I can fit an elephant, and with five I can make him wiggle his trunk.” With that, the conversation was over. I thanked Fermi for his time and trouble, and sadly took the next bus back to Ithaca to tell the bad news to the students.
So … how many tunable parameters does the GEOCARBSULF model have? From the Wong2021 paper …
There are 68 GEOCARB model parameters, of which 56 are constants and 12 are time series parameters. The constant parameters have well-defined prior distributions from previous work, and the time series parameters have central estimates and independent uncertainties defined for each time point (ref. 15).
Hmmm, sez I … 68 parameters … not a good sign.
So to see if “the constant parameters have well-defined prior distributions from previous work” as claimed above, I went to look at reference 15 listed in the above quote. It’s called “ERROR ANALYSIS OF CO2 AND O2 ESTIMATES FROM THE LONG-TERM GEOCHEMICAL MODEL GEOCARBSULF“. There, the Abstract concludes by saying:
The model-proxy mismatch for the late Mesozoic can be eliminated with a change in GYM within its plausible range, but no change within plausible ranges can resolve the early Cenozoic mismatch. Either the true value for one or more input parameters during this interval is outside our sampled range, or the model is missing one or more key processes.
Hmmm, sez I … doesn’t sound like that backs up the Wong2021 claim that “the constant parameters have well-defined prior distributions from previous work, and the time series parameters have central estimates and independent uncertainties defined for each time point (ref. 15).”
So, setting aside the fact that the model has enough tunable parameters to make an elephant put on a tutu and do the Swan Lake ballet, I looked at their results. First, here is their graph of their results.

Figure 1. This is Figure 4 in Wong2021. ORIGINAL CAPTION: “Model hindcast, using both CO2 and temperature data, for precalibration and a %outbound threshold of 30% (shaded regions). The gray-shaded regions show the data compilations for CO2 (ref. 26) and temperature (ref. 12). The lightest colored shaded regions denote the 95% probability range from the precalibrated ensemble, the medium shading denotes the 90% probability range, the darkest shading denotes the 50% probability range, and the solid-colored lines show the ensemble medians. To depict the marginal value of each data set, the dashed lines depict the 95% probability range from the precalibrated ensemble, when only temperature data is used (a) and when only CO2 data is used (b).”
(A short digression. Looking at Figure 1, I considered the fact that dinosaurs lived on the planet from about 245 million years ago to 66 million years ago. Mammals first appeared 178 million years ago. During that time, according to Figure 1, temperatures were between 6°C and 12°C warmer than at present. And folks hyperventilate about a further half a degree Celsius of warming being an “emergency” that will ruin our lives and drive extinctions through the roof? … but I digress.)
Now, their claim is that their results gave tighter constraints on the sensitivity of the planetary temperature (lower panel) to atmospheric CO2 levels (upper panel). Squinting at that graphic, I said “Hmmm …”. Didn’t look too likely.
So I did what I usually do when the authors are not conscientious enough to archive their results. I digitized the Wong 2021 temperature and CO2 data, and I graphed it up. Figure 2 shows that result.

Figure 2. Scatterplot, paleo temperatures versus the log (base 2) of paleo CO2 levels from Wong2021
Now, if CO2 levels actually were the control knob regulating the global temperature, we’d see all of the points falling on a nice straight line … but we don’t, far from it. There’s no statistically significant relationship between the temperature and the CO2 levels reported by Wong et al.
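In case anyone wants to run the same kind of check themselves, here is a minimal sketch of the significance test I’m describing. The CSV file name and column names are placeholders for whatever digitized (CO2, temperature) pairs you extract from Figure 1; they are not from any Wong2021 archive.

```python
# Minimal sketch: regress paleo temperature on log2(CO2) and test significance.
# "wong2021_digitized.csv" and its column names are placeholders for your own
# digitized data; nothing here comes from the paper itself.
import numpy as np
from scipy import stats

data = np.genfromtxt("wong2021_digitized.csv", delimiter=",", names=True)
log2_co2 = np.log2(data["co2_ppm"])   # forcing is roughly linear in log2(CO2)
temp = data["temp_anomaly"]           # °C relative to present

res = stats.linregress(log2_co2, temp)
print(f"slope     = {res.slope:.2f} °C per doubling of CO2")
print(f"r-squared = {res.rvalue**2:.3f}, p-value = {res.pvalue:.3f}")
# If CO2 were the control knob, the slope would be the sensitivity and the
# p-value would be tiny. Note that ordinary p-values overstate significance
# for autocorrelated time series, so a large p-value here is doubly damning.
```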
So I gotta say, the data reported in the Wong2021 paper is a long, long way from establishing the claims made in their Abstract. In fact, even after they’ve carefully adjusted the tunable parameters of the GEOCARBSULF model in their favor, their results support the null hypothesis, which is that CO2 is not the global temperature control knob.

My best to everyone, dinosaurs and mammals alike,
w.
PLEASE: When you comment, quote the exact words you are discussing. I can and am happy to defend what I wrote. But I can’t defend your interpretation of what I wrote.
All of the geological observations, including the paradoxes, need a physical explanation. The period 541 to 490 million years ago is the start of massive worldwide tectonic plate movement and reconstruction/alteration of the surface of the earth (terraforming).
There must be a physical reason, a cause, to explain the start of modern tectonic plate movement 541 million years ago and why negative C13/C12 ratio carbonates appear at that time.
The paradigm that CO2 is the driver of the climate is an urban legend. Geology has become a cottage industry to push that fake theory and to hide observations that killed it.
Why did modern tectonic plate movement start 541 million years ago? What happened to the earth at that time to start tectonic plate movement and to create the deep oceans on the planet? Why did advanced life suddenly appear on the earth during this period?
What happened in the geological record during this period? Observations, not theories. Strip away the dead theories.
Geology hides/is hiding research findings which show there was a series of massive injections of primordial CH4 into the biosphere during that period. These injections started roughly 541 million years ago.
This period coincides with the appearance of advanced life on the planet and the appearance of the first deep oceans on the earth. See Figure 1 in the attached paper, a graph that shows the C13/C12 changes over this period of time.
The C13/C12 ratio in the carbonate record is close to modern levels except during the period 541 to 490 million years ago. In the last 15 years it has been determined that the C13/C12 ratio in the carbonate rocks deposited in the period 541 to 490 million years ago dropped to negative values more than a dozen times and then went positive again.
The 541 to 490 million year ago geological record shows there was a series of massive injections of low-C13/C12-ratio carbon (primordial CH4 that had been sitting in the liquid core of the planet until the core started to crystallize, which drives/starts tectonic plate movement at that time).
This CH4 explains the massive hydrocarbon deposits, which all contain heavy metals, and explains the fact that there are deep oceans that cover 70% of the surface of the planet, and that advanced life on the planet did not appear until roughly 490 million years ago, when there were deep oceans on the earth.
The CH4 is extruded from the liquid core of the planet when it crystallizes. Metals at high pressure and temperature bond with CH4 and carry it down into the core of the planet. The liquid core is saturated with CH4, so when it crystallizes, the CH4 is extruded at high pressure from the liquid core.
The metals in the mantle form a sheath around the extruded CH4, which creates a tube. The tube carries the CH4 to the surface of the planet. This is the force that moves the tectonic plates.
Because the earth was struck by a Mars-sized object about 100 million years after it was formed, most of the CH4 that was in the mantle was lost to space, along with the early earth’s Venus-like atmosphere.
This tube-extruded core CH4 is what created the earth’s deep oceans and is what is pushing the tectonic plates around now.
https://www.sciencedirect.com/science/article/pii/S1631071303000117
A methane fuse for the Cambrian explosion: carbon cycles and true polar wander
Early Cambrian time, punctuated by a unique and dramatic series of geological and biological events, has fascinated and puzzled geologists and paleontologists for more than a century [109,110]. Widespread preservation of a detailed animal fossil record was facilitated by the almost simultaneous evolution of the ability to precipitate biominerals such as calcium phosphates and carbonates in nearly forty phyletic-level groups of animals [6,58,71].
1.3. Cambrian carbon cycles
As compiled here in Fig. 1, one of the most puzzling and as yet unexplained features of the Cambrian Explosion is the sequence of over a dozen accompanying oscillations in inorganic δ13C values preserved in carbonate, with typical negative-shift magnitudes of –4‰ δ13C and sometimes larger [9–11,20,61,65,75,76,91,107].
The composite record shows a large negative drop starting at the base of the first Cambrian biozone, coincident with the appearance of T. pedum [19,42], followed by a general rise to positive values of about +6‰ δ13C near the base of the Tommotian (carbon cycle I′ [65]).
https://www.sciencedaily.com/releases/2018/12/181212134354.htm
Why deep oceans gave life to the first big, complex organisms
Why did the first big, complex organisms spring to life in deep, dark oceans where food was scarce? A new study finds great depths provided a stable, life-sustaining refuge from wild temperature swings in the shallows.
Willis, I always enjoy reading your posts and accept that you are good at interrogating datasets to extract what they are telling us. I have a challenge for you, but to be honest, it is more of a request.
The HITRAN dataset contains IR transmittance data for various greenhouse gases under a range of conditions, concentrations and even mixtures. It can be used to determine the atmospheric absorbance of longwave radiation. Happer and Wijngaarden, Smirnov, Schildknecht and others have demonstrated that the IR absorption bands are saturated and that further CO2, or indeed methane or N2O, emissions will cause very little additional warming. This is an incredibly important result because global warming is literally finished, but amazingly, the idea gets very little traction on this site.
It is an important finding in other ways too. If the greenhouse effect is limited by the normal spectroscopic parameters such as concentration and pathlength, then it means that the warming is beneficial and limited with no danger of overheating. This explains how our planet created the ideal conditions for the evolution of life but very high CO2 concentrations did not pose any threat to life. It is believed that CO2 reached 8,000 ppm while life was developing.
If we step back and think of the often bitter divide between alarmist climate scientists and those who disagree, absorption band saturation actually presents an honorable solution for all. It accepts that greenhouse gases are powerful enough to absorb outgoing IR to the extent that atmospheric opacity is achieved. When the concentration reaches a level when the atmosphere is opaque to IR then it does not matter how much more opacifying gas is added, transmission of IR has already ceased. This is band saturation and additional greenhouse gas addition has no more effect. This conclusion provides the unique opportunity to declare that alarmists and sceptics are both correct.
So I am asking you, Willis, to have a look at the HITRAN database and at least think about doing a study to check out the points I am making. It could be the most important study of your life.
Schrodinger’s Cat May 30, 2021 1:50 pm
Thanks, SC. If you have a link to the HITRAN dataset I’m happy to look at it. Also, links to the various papers you mention would be appreciated.
However, the idea that the “IR absorption bands are saturated” is a vast oversimplification. It’s not like a coat of paint that blocks out all transmission. What happens is that upwelling photons are absorbed and radiated more than once on their way through the atmosphere. And although the bands may be saturated at ground level, as you go up in the atmosphere the molecules of the atmosphere have more and more distance between them … and at those higher levels the bands are not saturated. So when more CO2 is added to the air, more upwelling radiation is indeed absorbed.
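To put some rough numbers on “more CO2 still gets absorbed, just less per added molecule”, here is a minimal sketch using the widely quoted logarithmic approximation for CO2 forcing from Myhre et al. (1998). It is not a HITRAN or MODTRAN calculation, just the shape of the curve:

```python
# Illustration only: the simplified logarithmic CO2 forcing expression of
# Myhre et al. (1998), dF = 5.35 * ln(C/C0) W/m^2, relative to 280 ppm.
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Approximate radiative forcing (W/m^2) relative to a 280 ppm baseline."""
    return 5.35 * math.log(c_ppm / c0_ppm)

for c in (280, 400, 560, 800, 1120):
    print(f"{c:5d} ppm: forcing = {co2_forcing(c):5.2f} W/m^2")
# Each doubling adds roughly the same ~3.7 W/m^2. The effect never drops to
# zero ("saturates"); it just grows logarithmically instead of linearly.
```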
There’s a simplified but still accurate version of HITRAN, called MODTRAN, available online. You should go and mess around with it some. You can see how absorption occurs at different atmospheric levels with different CO2 concentrations.
Best regards,
w.
There is a ‘saturation’ flaw in both (SC) HITRAN and MODTRAN arguments. It was visually and qualitatively described in essay Sensitive Uncertainty in ebook Blowing Smoke for those unable to do the math. CO2 can never saturate, since it is a non-condensing gas whose ‘saturation window’ widens as overlapping water vapor absorption is removed by the lapse rate effect (colder means less WV, duhoh).
So, with increasing CO2 concentration its effective radiative level (ERL) just rises higher in altitude, so colder with less overlapping WV. Colder is less efficient at energetic radiative heat removal, which fully explains Callendar’s 1938 logarithmic CO2 concentration curve.
YES PLEASE WILLIS, what Schrodinger’s cat said above about the HITRAN data.
PLEASE.
I have spent some time with HITRAN and both MODTRAN and MODTRAN6. They are used to help explain why/how CO2 has no significant effect on climate and what actually does at http://globalclimatedrivers2.blogspot.com . Included in that analysis are links to those codes. The applications there provide some examples of what parameters to select to get useful output.
Climate change is driven by model mania.
Yet more GIGO.
So what ? That model matches observations …
I love this line in the Wong-2021 Methods section on “Parameter Precalibration”:
“Then, we rule out any combinations of parameters that yield simulations that do not agree well with the CO2 proxy or temperature data, given their uncertainties.”
=========
Also it is worth reading the provided reviewer comments; Reviewer #1’s comments start at page 6.
Rev #1:
“I have some fundamental concerns with this study listed below, from which I have to conclude that I can not recommend to publish the paper.”
Then he writes immediately as the first comment: “ESS in their approach is not an output of the model, but one of the parameters(input).”
This reviewer immediately recognizes the circular logic built into the Wong2021 methodology: ESS is an input that they then present as output. All of Reviewer #1’s take-down of the Wong2021 paper is worth reading.
Reviewer #2 scolds them for not including the actual parameter values and uncertainty ranges for reproducibility.
“It is mentioned that values and uncertainty ranges were used, but the reader is nowhere shown what these values and ranges are in the experiments. That makes the work essentially unreproducible, and that’s not good enough in my view. Parameters are mentioned in the text using their acronyms, but these are not always introduced/spelled out. This ought to be added, and/or reference needs to be made in appropriate places to the tables of parameter explanations.”
==================
The final point I think worth noting is in Supp Figure 7. They show likelihood time slices at 240 Myr and 50 Myr. The 50 Myr time slice is comic gold. CO2 from their model could be 450 ppm, or, only slightly less likely, ~750 ppm or ~1,000 ppm.
This study is junk science in action. Amazing that this is what now passes as science in Nature.
Joel O’Bryan,
A quite-excellent comment posting all around! Thanks 1E6.
Thanks for an excellent overview, Joel. I was going to mention that “climate sensitivity” is a tunable parameter in their model, but I operate on the “KISS principle” for my posts. Your comment is an excellent deeper dive into the lunacy.
w.
The Big Final point to make is their GEOG(t) input parameter. The GEOG parameter is a time series and they graphically show what it looks like in Supp Figure 1 J. They describe the GEOG parameter as “GEOG, the change in global surface temperature relative to present-day, assuming present-day CO2 and solar luminosity”.
In other words, they assume on input, what they want the output Global Temperature to be. This is what reviewer #1 meant when he/she wrote, “ESS in their approach is not an output of the model, but one of the parameters(input).”
They show how the GEOG(t) input parameter is inserted into the model in Equation 1 of the manuscript (see attached image):
I have to laugh at the worry-warts who obsess over a little global warming. If you want to obsess, consider global cooling. Tell me how many crops you can grow when the growing season temperature is below 68 F.
Neil Lock posted: “The Standard Model of particle physics has either 18, or 19 parameters . . . The model is quite accurate.”
The heck, you say . . . “accurate” by whose standard of measurement?
Per https://www.symmetrymagazine.org/article/five-mysteries-the-standard-model-cant-explain , “Despite its great predictive power, however, the Standard Model fails to answer five crucial questions, which is why particle physicists know their work is far from done.”
The biggest problems facing the Standard Model are (my underlining emphasis added in the following quotes taken from the above-linked website):
1) “Three of the Standard Model’s particles are different types of neutrinos. The Standard Model predicts that, like photons, neutrinos should have no mass. However, scientists have found that the three neutrinos oscillate, or transform into one another, as they move. This feat is only possible because neutrinos are not massless after all.”
2) “Scientists realized they were missing something when they noticed that galaxies were spinning much faster than they should be based on the gravitational pull of their visible matter . . . Something we can’t see, which scientists have dubbed “dark matter,” must be giving additional mass—and hence gravitational pull—to these galaxies. Dark matter is thought to make up 27 percent of the contents of the universe. But it is not included in the Standard Model.“
3) “Scientists suppose that when the universe was formed in the Big Bang, matter and antimatter should have been produced in equal parts. However, some mechanism kept the matter and antimatter from their usual pattern of total destruction, and the universe around us is dominated by matter. The Standard Model cannot explain the imbalance.“
4) “The latest measurements by the Hubble Space Telescope and the European Space Agency observatory Gaia indicate that galaxies are moving away from us at 45 miles per second. That speed multiplies for each additional megaparsec, a distance of 3.2 million light years, relative to our position. This rate is believed to come from an unexplained property of space-time called dark energy, which is pushing the universe apart. It is thought to make up around 68 percent of the energy in the universe. ‘That is something very fundamental that nobody could have anticipated just by looking at the Standard Model‘ ”
5) “The Standard Model was not designed to explain gravity. This fourth and weakest force of nature does not seem to have any impact on the subatomic interactions the Standard Model explains.”
But perhaps adding another 3 or 4 tunable parameters can fix all of the above. 🙂
Neil, I fear you are talking at cross-purposes with Gordon. He’s listed the known problems with the Standard Model. He did NOT say that those make it useless. He did NOT say it should not be used in situations where it is known to give valid results.
To disagree with him, you need to quote which of the known problems he claims exist and show us why his claim is wrong. Because saying the Standard Model doesn’t have practical applications is a straw man—Gordon neither said nor implied that.
w.
How accurate is the standard model at quantifying the amount of matter vs. antimatter?
How accurate is it at quantifying the mass of the neutrino?
I’d say “Not very”, but of course YMMV. I’d say it’s very accurate about some things, but not about others.
w.
The fact that a model explains, say, 50% of observations and claiming it’s accurate because nothing else does better, does not, in fact, make it “accurate”. It is only useful over a restricted range of circumstances. When it explains the 5 elements mentioned, then we can talk about accurate.
Neil, thanks for the perfect segue to close this loop and return to the main thrust of Willis’ article above . . . it invites the question: does the “best” climate model win if it comes closest to (a) matching AGW/CAGW predictions (estimates?) of ECS, or (b) matching observations (aka, field data) concerning “global-average” atmospheric temperature?
Once more, you’ve built a straw man and successfully demolished it … I never said there was a better model. That’s all you.
I do note however that curiously, in the process you’ve also successfully avoided answering either of the questions I asked.
Final score: straw person obliteration, 1; questions answered, 0.
w.
Neil, I am not on a journey to develop a Theory of Everything, so don’t wait for a related announcement from me in this regard.
As to your other comments directed to me, I hope you consider what Willis posted before me in this matter (Thanks, Willis!).
As a historical (even current) example: Isaac Newton developed a terrific set of physical laws that “accurately” described the kinetics of forces influencing the positions and motions of objects. (Newton’s laws have such predictive accuracy that they are commonly used today across a range of scientific and engineering disciplines.) Then along comes Albert Einstein with physical laws that “more accurately” describe forces influencing body positions and motions (and even their masses and the “curvature” of space itself) via his theories of special and general relativity.
Did Einstein improve predictive accuracy beyond the limits of Newton’s laws? Most definitely. Did doing such invalidate the useful accuracy of Newton’s laws for situations in which they are applicable? No.
I did, and there is no trace of axions, nor dark matter. Even worse, when they found the Higgs, a plethora of other particles predicted by a myriad of papers simply refused to show up! Lots of noise about the Higgs covered that up!
There is a huge problem, very well addressed by Prof. Lee Smolin :
THE TROUBLE WITH PHYSICS – YouTube
In a related lecture, he is confronted on accuracy, and mentions how good Ptolemy’s model results were for 1000 years. Physics has gone back to Ptolemy, not just Climate.
Thanks, Neil. I find the following:
w.
Neil Lock May 30, 2021 3:20 pm
Neil, since I said NOTHING about whether we can trust the Standard Model nor about naval officers and nuclear reactors, I have no clue why this is directed at me. But let me comment on it.
In engineering, people use all kinds of “heuristic” equations. These often have a variety of tunable parameters, but they are known to be accurate because they have been tested against reality in various circumstances. They also often have known operating ranges, outside of which a cautious engineer wouldn’t use them.
Finally, I’ve always found your quote to be simplistic. My version is.
My best regards to you,
w.
I say again:
w.
Depends. A thorough risk assessment (which indeed frequently assesses “dangerous” and “lethal” possibilities), as commonly performed, involves quite a bit of scientific rigor when done properly. It almost always avoids ethical judgements.
A risk assessment is a form of modeling possible outcomes from certain actions.
A model that is used to drive dangerous “scientific” hysteria about a purported impending “Thermageddon™”? That’s a scientific judgment.
w.
Aren’t we talking apples and oranges here? As Mr. Lock points out, the standard model can be tested experimentally. The climate models, not so much.
Ebor,
Nature and the progress of time are testing the climate models for us.
So far, as concerns CO2 driving global lower atmosphere temperature (GLAT), there is a null result.
Simply put, Earth has experienced two significant periods of nearly-constant GLAT (approximately, from 1940-1975 and from 1997-present) based on accurate scientific measurements; this DESPITE global atmospheric CO2 levels having a smooth, continuously-increasing exponential trend, also accurately measured, over these same time periods . . . and there being a very significant 35% total ppm increase from 1940 to present.
Note that both NASA and NOAA define “climate” as being weather averaged over a specified geographic area for a period of 30 years or more. The span of 1940-1975 meets this criterion; the timespan from 1997 to present is just 6 years short of it.
But also note much more CO2 (on a volume basis) has been put into the atmosphere in this latter time span than was put into the atmosphere during the span of 1940-1975.
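As a rough check on that 35% figure, here is the arithmetic; the 1940 and present-day CO2 values are my own round-number assumptions (about 310 ppm and 415 ppm), not numbers taken from this thread.

```python
# Back-of-the-envelope check on the "~35% increase since 1940" figure.
# The two concentrations below are assumed round numbers, not cited data.
co2_1940 = 310.0   # ppm, roughly, from ice-core estimates
co2_now = 415.0    # ppm, roughly, from recent Mauna Loa measurements
increase_pct = 100.0 * (co2_now - co2_1940) / co2_1940
print(f"CO2 increase since 1940: about {increase_pct:.0f}%")   # ~34%
```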
As Nobel prize-winning physicist Richard Feynman is famously quoted as saying, “It doesn’t matter how beautiful your theory is, it doesn’t matter how smart you are. If it doesn’t agree with experiment, if it doesn’t agree with observation, it’s wrong. That’s all there is to it.”
Again, I have to ask why you’re directing this at me. I’ve said nothing about nuclear energy, aircraft carriers, or Chernobyl. Perhaps you’re posting in the wrong thread?
w.
As an engineer in power generation, with a master’s degree in nuclear science and technology, I can tell you for sure that the Standard Model (SM) is not used to model a nuclear reactor.
I doubt the SM can compute reactions between neutrons and Uranium-235 nuclei, since the number of particles involved is 236 per reaction. CERN uses a state-of-the-art cluster just to compute the collision of two helium-4 nuclei (8 particles in total).
In any case, even if it could be done, it would require a computing cluster that would use most of the available space in the ship.
Nuclear reactors are modeled with the neutron transport equation, using experimental cross-sections for the reaction rates, coupled with thermo-hydraulics and nuclear species radioactive decay equations.
These equations have many parameters, from the half-lives of the species to the viscosity of the fluid, and the aforementioned cross-sections, which depend on the energy of the incoming neutron, but they are not free parameters; they have physical meaning and can be, and are, measured independently. I am not aware of any free parameter in these models.
PS: the same thing holds for at least most of the parameters of the Standard Model; they are the masses of the particles, which are measured, so they are not free parameters, and their values are very tightly constrained by experiment. Not like the climate model described in WE’s article.
Thanks for that most lucid explanation, Sebastian. One of the best features of this site is that very often, we have comments from folks like yourself who have, not just book learning, but a lifetime of practical experience in the subject under discussion.
w.
@Sebastian – this is exactly the point I was making: parameters such as collision cross-sections can be experimentally determined, and then, ideally, the predictions of any model that uses those parameters can be experimentally verified. This, to me, is a fundamental concern with climate modeling. That, and the “breadth” of the models, i.e., what natural mechanisms do they not account for, and how significant are they? For example, some have theorized that the apparent correlation between sunspots (as a proxy for the sun’s magnetic field strength) and climate (see e.g. the Maunder Minimum) derives from cosmic rays playing a significant role in cloud formation. But verifying that in a laboratory setting would require lots of funding that’s not made available (I’m sure that simulating an atmospheric environment and probing high-energy particles interacting with water vapor would be pretty darn expensive). So this possible mechanism (could be important, but who knows?) is not incorporated in the models – sensibly, I might add, because how do you model an unknown? – but where does that leave the models?
“So I gotta say, the data reported in the Wong2021 paper is a long, long way from establishing the claims made in their Abstract.”
But Wong, et al., passed editorial and peer review at Nature Communications, the world’s ever so eminent journal of science.
Well, the paper has only four authors, so that’s a plus. 🙂
Mr. Eschenbach, you do another simplified take-down of an equally brief report that is just as far off as you are far right (not politically). Extracting the temperatures used shows pretty dramatically that the published paper couldn’t be either precise or correct.
Thanks.
P.S. You must be self-taught. I NEVER heard any analysis as sharp. It reminds me of an upper-level prof going through a derivation of the Schrödinger equation.
Disney’s Fantasia.
I guess I’m just simple, but don’t their graphs show that as CO2 concentrations have steadily decreased, the temps increased and then slightly decreased?
If CO2 controls temp, doesn’t this graph show that more CO2 decreases temperature?
This is what is passing as ‘science’ in publications once trusted. There is a crucial difference between ‘expert opinion’ and ‘science.’ I had thought that the journals had a passing familiarity with that distinction. It would seem I trusted too much. Whoever was in charge of scientific rigor has been purged. It is consensus, all the way.
Mr. Eschenbach, if your “urban legend detector start ringing”, I’m curious if your m.b.e. (male bovine excrement) pie detector is olfacting?
Another bloviated abstract beating up on poor little CO2.
I produced the attached comparison table for the Cretaceous period in response to a question Bob Wentworth raised.
It is quite feasible for ocean warm pools to regulate to 3 degrees warmer than present if the atmospheric pressure was 1100 mb, given that the oxygen partial pressure was 30 kPa, as observed by proxy, and nitrogen still 80 kPa.
It may be “feasible”, but it doesn’t seem to have happened. Never over 30°C.
Stable sea surface temperatures in the western Pacific warm pool over the past 1.75 million years
w.
I do not know what relevance your link has. The Cretaceous period was more than 60M years ago.
This gives proxies for oxygen level:
https://geology.com/usgs/amber/
As the table shows, the tropical warm pool temperature would limit to an annual average of 33C and could be as high as 35C on a monthly basis; both 3C higher than present.
Willis, of you somehow simultaneously reduced evaporation and lowered the albedo of a large area of ocean then I suspect you’d exceed the usual limiting temperature.
If only Benjamin Franklin were around, he Wight have a suggestion
JF
if, might… sigh…
JF
Julian, if you hover over the text of your comment with your cursor, you’ll see the gear icon at the lower right … click on that and you can edit your text.
w.
Um…So I tried connecting all those dots in figure 2 in various ways and indeed I finally got an elephant in a tutu, but for the life of me I cannot make him perform Swan Lake… I think I need more control knobs.
Well done Robert, I’m still trying to get the elephant…
Maybe the paper should be referenced as Wrong2021?
Willis,
You state that: “if CO2 levels actually were the control knob regulating the global temperature, we’d see all of the points falling on a nice straight line … but we don’t, far from it. There’s no statistically significant relationship between the temperature and the CO2 levels reported by Wong et al.”
However Wong et al. have an explicit equation (Eq. 1) that gives the relationship in their model between CO2 concentrations and temperature. Importantly, it also explicitly includes the effects of solar irradiation change and geophysical changes. So they don’t claim that CO2 is the control knob, but only one of a number of effects. Hence nobody expects CO2 levels, when plotted against temperature, to fall on a “nice straight line”.
Correct – as prior to industrialisation, the “control knob” was (largely) orbital eccentricity, with the carbon cycle shifting concentrations between land, sea and air. No new carbon. Since circa 1850 man has added ~50% more to the atmosphere and an equal amount to the oceans. Now it’s the “control knob”.
passingby, none of that is visible in their plot.
w.
In their results, when log2(CO2) < 10 (i.e., CO2 below about 1,000 ppm), CO2 is positively correlated with temperature.
But when log2(CO2) > 10, CO2 is negatively correlated with temperature.
I’m sorry, but that makes no sense under any theory.
w.
Willis,
It makes sense in regards to Wong et al.’s Eq. 1. That equation says that temperature depends on three different functions of time, i.e.
Temp(t) = f1(t) + f2(t) + f3(t)
where only the first function describes CO2 as a function of time. What you did was plot f1(t) against Temp(t), and then you are surprised by the lack of correlation. When in fact nobody should expect to see any simple correlation between Temp(t) and f1(t).
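A synthetic toy example (my own illustration, not Wong et al.’s actual functions) shows the point: if the three terms have comparable variance, the scatter of Temp against f1 alone is weak even though f1 contributes fully to Temp.

```python
# Toy illustration with made-up, independent terms of equal variance;
# this is NOT Wong et al.'s model, just a statistical point about Eq. 1.
import numpy as np

rng = np.random.default_rng(0)
n = 200
f1 = rng.normal(0.0, 2.0, n)   # stand-in for the CO2 term
f2 = rng.normal(0.0, 2.0, n)   # stand-in for the solar term
f3 = rng.normal(0.0, 2.0, n)   # stand-in for the GEOG term
temp = f1 + f2 + f3

r = np.corrcoef(f1, temp)[0, 1]
print(f"correlation of Temp with f1 alone: r = {r:.2f}, r^2 = {r*r:.2f}")
# With three independent, equal-variance terms, r^2 is only about 1/3.
```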
Izaak, for 5 decades we’ve been told that “climate sensitivity”, AKA “lambda”, is a constant which is defined as
∆T = λ ∆F
where T is temperature and F is forcing.
But now, your claim is that “nobody should expect to see any simple correlation” between temperature and forcing … it would have been nice if you or anyone had mentioned this at some time prior to the publication of this paper …
w.
Willis,
Nobody has ever claimed that climate sensitivity is as simple as you state. For starters, the climate is clearly bistable, with oscillations between an ice age state and a non-ice-age state. So “lambda” depends on temperature.
Secondly even if it was as simple as you suggest then you still would not expect a linear relationship between CO2 and temperature in Wong et al.’s paper since it is only one of three forcing terms in their Eq. 1.
Say what? They absolutely have claimed that. This discusses a widely-cited paper making that exact claim.
And while you are correct about it being only one of three terms, I just re-ran the analysis. The solar luminosity term is linear so it only changes the slope, not the linearity. Even with the GEOG factored in, it’s still very far from linear, and the trend is not statistically significant.
There’s another interesting point I found thanks to your question. Their formula for the temperature change from the change in solar luminosity is
Ws * t/570
where Ws is a constant and t is time, million years before present. They say Ws is a priori 7.4, and a posteriori 7.1, so I used 7.25 as a midrange value. Using those values, the increasing luminosity of the sun has increased the global temperature by 5.34°C since 420 million years ago.
And how much has the sun’s luminosity changed? Well, according to the formula here, luminosity has increased by 12.1 W/m2 (24/7 global average) over the last 420 million years.
And this, in turn, makes their climate sensitivity 0.44 °C per W/m2 … or, using the most accurate estimate of the forcing from the doubling of CO2 (3.15 W/m2 per doubling, Myhre et al.), that’s 1.4°C per doubling.
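For anyone who wants to check that chain of arithmetic, here it is as a short script; the inputs are just the numbers quoted above:

```python
# Reproduces the arithmetic in this comment; the inputs (Ws, the 12.1 W/m^2
# luminosity change, and 3.15 W/m^2 per doubling) are the values quoted above.
ws_mid = (7.4 + 7.1) / 2           # midrange of a priori / a posteriori Ws
dT_solar = ws_mid * 420 / 570      # °C of warming from the sun, per their formula
dF_solar = 12.1                    # W/m^2 increase in 24/7 average solar forcing

sensitivity = dT_solar / dF_solar  # °C per W/m^2
per_doubling = sensitivity * 3.15  # °C per doubling of CO2

print(f"warming from the sun: {dT_solar:.2f} °C")                # ~5.3 °C
print(f"implied sensitivity:  {sensitivity:.2f} °C per W/m^2")   # ~0.44
print(f"per doubling of CO2:  {per_doubling:.1f} °C")            # ~1.4
```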
Assuming business as usual (CO2 w/a slight exponential increase), by 2050 we’d see about 0.4°C from increased forcing.
And for that undetectable rise, people want to totally throw out our entire energy supply system and replace it with intermittent, expensive renewables …
… pass.
w.
Willis,
Yes, the implied “sensitivity to solar forcing” is very low. I think the assumed high climate sensitivity to CO2 as one parameter of the model, and a completely different (lower) parameter for sensitivity to solar forcing, should have suggested to the authors and reviewers that there are problems with the whole paper; energy should be pretty much fungible. Seems to me there is no reason for such a complicated approach as in Wong. Simple energy balance models put the sensitivity near 1.8C to 2C per doubling of CO2. That is probably good enough that it is reasonable to discount most of the GCMs as wildly wrong on sensitivity.
It also makes no sense in that, as Reviewer #1 says, it’s not an output of their model, it’s an input in the form of a tuned parameter.
w.
It makes sense considering that about 50 times as much carbon, in the form of carbonate and bicarbonate ions, is in the ocean as is in the atmosphere. http://www.whoi.edu/oceanus/viewArticle.do?id=17726 . When the CO2 level finally declined enough, i.e. to about 900 ppmv, the level in the atmosphere followed the temperature of the oceans.
Willis,
Regarding your proposed thunderstorm thermostat paper, this is the problem you will get when you submit it to any journal (see the video). So you have to cover all bases.
I had to argue for weeks that parts of my paper had no references because it was A NEW THEORY. They did not understand the concept of new theories.
Short video by Tony Heller (aka Steven Goddard).
https://youtu.be/qkXZ3_ZmKzw
Ralph
Thanks, Ralph, good to hear from you.
w.
Thanks Willis for this important article rebuttal.
The timescale back to 420 Mya nicely sidesteps the most problematic period for the notion of CO2 driven temperature: the end Ordovician (Saharan-Andean) glaciation during 460-440 Mya. CO2 levels increased during the inception of this glaciation and remained high throughout it.
https://ptolemy2.wordpress.com/2020/07/05/the-ordovician-glaciation-glaciers-spread-while-co2-increased-in-the-atmosphere-a-problem-for-carbon-alarmism/
The common-sense observation from the Andean/Saharan glaciation is that the temperature not only plunged into an ice age but also recovered from it. This rules out the usual excuse that the sun was weaker back then.
The reason for dinosaurs during the Jurassic is high CO2 concentrations. Because CO2 is plant food.
More CO2 = larger plants = larger herbivores = larger carnivores.
QED.
And they are thin at one end, much thicker in the middle, and thin at the other end.
QED.
Ralph